MARKET SIMULATION
INSIGHTS ON A MAJOR MARKET FROM
THE SCIENCE OF COMPLEX ADAPTIVE SYSTEMS
Complex Systems and Interdisciplinary Science
(ISSN: 1793-4540)
Associate Editors:
Brian Arthur Philip Maini
Santa Fe Institute, Spain University of Oxford, UK
Published:
Vol. 1 A Nasdaq Market Simulation: Insights on a Major Market from the
Science of Complex Adaptive Systems
by Vincent Darley & Alexander V. Outkin
Forthcoming:
Large Scale Structure and Dynamics of Complex Networks
Alessandro Vespignani & Guido Caldarelli
A NASDAQ
MARKET SIMULATION
INSIGHTS ON A MAJOR MARKET FROM
THE SCIENCE OF COMPLEX ADAPTIVE SYSTEMS
Vincent Darley
President & CEO of Eurobios, UK
Alexander V Outkin
Los Alamos National Laboratory, USA
World Scientific
NEW JERSEY LONDON SINGAPORE BEIJING SHANGHAI HONG KONG TAIPEI CHENNAI
Published by
World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
For photocopying of material in this volume, please pay a copying fee through the Copyright
Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to
photocopy is not required from the publisher.
Printed in Singapore.
January 26, 2007 10:50 WSPC/Book Trim Size for 9in x 6in NasdaqBook2004ws
Preface
This research started in 1998 when Bios Group and Nasdaq entered into a
collaboration to explore new ways to better understand Nasdaq's operating
world. One set of goals was to explore the effects, including unintended
consequences, of the market microstructure, market rules, and changes to
them, on the behavior of participants such as market makers and traders
in the Nasdaq market, and thereby on the dynamics and behavior of the
market as a whole. For reasons that will become clear, the means chosen to
pursue these goals was to create agent-based models of the Nasdaq market.
Stock markets in general, and Nasdaq in particular, have undergone
a period of rapid change in the past few years, which produced sweeping
changes in the market's behavior. The nature and direction of those changes
are still far from settled; they are, however, certainly dramatic. Therefore,
it is of the utmost importance to understand what can happen so that
markets and their regulators have sufficient information to prepare for new
changes in the future, potentially by making well-informed adjustments to
some of their rules and infrastructure.
One of the most important issues in 1998 was the forthcoming decimal-
ization. Decimalization means a change to expressing prices in a decimal
system, rather than in dollars and fractions of dollars, and the related ques-
tions of what the minimum tick size (or price change) in the markets should
be, and what effects a particular minimum tick size might have on the dy-
namics of the market, and its fairness, volatility, price discovery, etc. For
much of their recent history, Nasdaq and NYSE prices were quoted in $1/8ths
and $1/16ths, but in 1998 decimalization was being planned; hence this
was a very important issue at the time. The goal of this book is two-
Research group, Simon Spencer, Jon Tarnow, Douglas Patterson, and oth-
ers for their valuable comments and suggestions. We are indebted to Rob
Axtell for his continued support and encouragement. We thank Paula Lozar
for editing help. We thank the editorial staff at World Scientific for their
help and patience. Any errors which may remain are of course the sole
responsibility of the authors.
Contents
Preface vii
4. Spread Clustering 71
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.2 Spread Clustering . . . . . . . . . . . . . . . . . . . . . . . . 72
4.2.1 Spread Clustering with Learning and Basic Inventory
Market Makers . . . . . . . . . . . . . . . . . . . . . 72
6. Calibration 89
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
6.3 Calibration Methodology . . . . . . . . . . . . . . . . . . . . 92
6.3.1 Calibrating to the Trading Volume Distribution . . . 92
6.3.2 Calibration on the Individual Level . . . . . . . . . . 92
6.4 Quantitative Behaviors: Calibrating Simulated Strategies
Against Real-World Strategies . . . . . . . . . . . . . . . . . 93
6.4.1 Parameter Discovery for Basic Dealers . . . . . . . . 95
6.4.2 Parameter Discovery for Volume Dealers . . . . . . . 97
6.4.2.1 Time series comparisons . . . . . . . . . . . 97
6.4.2.2 Statistical analysis . . . . . . . . . . . . . . 99
6.4.3 Identification of Analytic Parasites . . . . . . . . . . 100
6.4.4 Results: Basic Dealers . . . . . . . . . . . . . . . . 101
6.4.5 Results: Volume Dealers and Parasitic Dealers . . . 103
6.5 Wealth Effects Analysis . . . . . . . . . . . . . . . . . . . . 108
6.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Bibliography 145
Index 151
Chapter 1
1.1 Introduction
tion: The core of my argument is a claim that informational technologies and processes
have critical properties which are imperfectly captured by models in the mechanistic
analytical tradition but which are well defined when building blocks of the economy are
represented as abstract automata.
We also found, by analyzing the real-world data, that the strategies
employed by the Nasdaq market makers underwent a phase transition as
a result of the previous tick size change. Namely, when the tick size was
changed from $1/8 to $1/16 in 1997, the strategies employed by the market
makers underwent both a quantitative and a qualitative change.
We observed this change by representing the strategies of all the market
makers as functions of the strategies of two representative market makers.
That means that the strategy of every market maker in the market can be
expressed as a linear function of two parameters, each expressing depen-
dency on one of the representative market-making strategies. Investigating
how those parameters change with tick size allows us to understand whether
a phase transition in market makers' behavior has happened concurrently
with the tick size change.
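The decomposition just described can be sketched numerically. The following is our own illustrative reconstruction, not the book's actual procedure: one market maker's quote series is projected onto two representative series by ordinary least squares, solving the 2x2 normal equations directly.

```python
# Hypothetical sketch: express a market maker's strategy (a quote series)
# as a linear combination of two representative strategies, via least squares.

def decompose(strategy, rep_a, rep_b):
    """Return (alpha, beta) minimising sum((s - alpha*a - beta*b)**2)."""
    aa = sum(a * a for a in rep_a)
    bb = sum(b * b for b in rep_b)
    ab = sum(a * b for a, b in zip(rep_a, rep_b))
    sa = sum(s * a for s, a in zip(strategy, rep_a))
    sb = sum(s * b for s, b in zip(strategy, rep_b))
    det = aa * bb - ab * ab          # assumes the two series are independent
    alpha = (sa * bb - sb * ab) / det
    beta = (sb * aa - sa * ab) / det
    return alpha, beta

# A strategy built as exactly 0.3*A + 0.7*B recovers those weights.
rep_a = [1.0, 2.0, 3.0, 4.0]
rep_b = [4.0, 1.0, 2.0, 8.0]
mm = [0.3 * a + 0.7 * b for a, b in zip(rep_a, rep_b)]
alpha, beta = decompose(mm, rep_a, rep_b)
```

Tracking how the recovered (alpha, beta) pairs shift when the tick size changes is the spirit of the phase-transition test described above.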
There are two ways for this parameter change to occur:
we believe our approach allows investigating what could happen given the
strategies of the market participants employed at present, given the ability
to create a new population of strategies using learning and evolution, and
given the rules and interactions in the marketplace (and changes to them)
as they are understood by the researchers and implemented in a model.
While such an approach may not provide a definitive prediction, this is not
a problem, for two reasons: (1) as outlined above, a definitive prediction
may not be possible, for example in the case of phase transitions; (2) this
approach allows investigating a range of "what-if" scenarios, thus providing
a capability to plan and adapt.
These considerations do not, however, invalidate prediction in general;
rather, they impose limits on what can be predicted. Even if the details of
the evolution of a population of strategies may not be predictable, certain
system-level properties may be. We believe the majority of the
predictions we've made using our model fall into the latter category (see
more below, and in Chapter 8).
Perhaps as important, predicting social systems in general may not be
a self-consistent exercise: once a prediction is made and known, it is likely
to affect the system's behavior. Such changes may invalidate the prediction,
make it self-fulfilling, or generally change the interactions or perceptions of
system participants, so that the original model may become invalid.
Additionally, a certain degree of unpredictability is necessary for the
market to operate: after all, the buyer and the seller in a stock transaction
are unlikely to have the same views of the stock being traded. The need for
uncertainty as a prerequisite for a viable market, or a society in general, is
expressed in Peters (1999). Additionally, there is an interesting interplay
between uncertainty, as connected to entropy, and information generation.
For example, the ant algorithms implemented by real-world ants work in
part because the information left in a particular location in the form of a
pheromone trail dissipates to other locations, and thus can be discovered
more easily (see Parunak and Brueckner (2001) for example).
Interestingly, extra randomness may improve the behaviors of individual
trading strategies in our simulation. For example, the Volume dealer,
described later, earns a higher profit when a degree of randomness is added
to its decisions. Perhaps this is analogous to simulated annealing algorithms,
which require a degree of randomness to avoid getting stuck on a suboptimal
solution.
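The annealing analogy can be illustrated with a toy example, entirely our own (the Volume dealer itself is described later): on a small loss landscape, purely greedy descent gets trapped in a local minimum, while adding random exploration finds the global one.

```python
import random

# Toy landscape (our invention): a local minimum (value 3) shields the
# global minimum (value 1) from a greedy search started at index 1.
f = [5, 4, 3, 4, 5, 2, 1, 2, 5]

def greedy_descent(start):
    """Move to the lower neighbour until no neighbour improves."""
    i = start
    while True:
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(f)]
        best = min(neighbours, key=lambda j: f[j])
        if f[best] >= f[i]:
            return f[i]
        i = best

def randomized_search(start, steps=300, seed=0):
    """Greedy result plus random exploration; keep the best value seen."""
    rng = random.Random(seed)
    best = greedy_descent(start)
    for _ in range(steps):
        best = min(best, f[rng.randrange(len(f))])
    return best

greedy = greedy_descent(1)          # stuck at the local minimum: 3
randomized = randomized_search(1)   # randomness escapes it: 1
```

The randomized variant can never do worse than the greedy one here, since it starts from the greedy result and only keeps improvements, which is the intuition behind the Volume dealer observation above.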
While building the simulated model of the market, we interacted ex-
tensively with many market participants: market makers, brokers, traders,
(1) Decimalization (tick size reduction) will negatively impact the price
discovery process.
(2) Volume will increase, potentially ranging from 15% to 600%.
(3) Ambiguous investor wealth effects may be observed. (Investors' av-
erage wealth may actually decrease in the simulation, but the effect is
not statistically significant.)
(4) Phase transitions will occur in the space of market-maker strategies.
(This was observed in the transactional data).
(5) Spread clustering may be more frequent with tick size reductions.
(6) Parasitic strategies3 may become more effective as a result of tick size
reductions.
3 Parasitic strategies are defined and discussed later. Briefly, we say that a strategy
is parasitic when it attempts to use information and liquidity provided by other market
There is strong, albeit indirect, evidence that reducing the tick size
negatively impacts the price discovery process.
There are strong indications that not all investor types benefited from
decimalization. (There are ambiguous wealth effects, like those ob-
served in our simulation.)
There is unambiguous support for our prediction of phase transition in
market maker strategies as a result of decimalization.
Nasdaq Economic Research (2001) reports that there is a significant
natural quote clustering in the decimalized market.
That same source reports that there is evidence of more stepping in
ahead of the customers' orders (in particular on electronic communica-
tion networks, or ECNs) and more time when ECNs are on the inside
market, which we interpret as indications of increased activity of par-
asitic strategies.
participants, without providing those itself. We do not offer a value judgement by calling
a strategy parasitic, but rather use commonly accepted terminology.
compensated for that role and risk by the spread), to a new world in
which their primary activity is more negative, usually extracting liquidity
from the market by front-running and similar strategies.
The next step is to bring this type of modeling from the realm of research
to the practice of real-world markets. We believe models of this kind
can be useful for both regulatory and trading or market-making purposes,
because they offer an unparalleled ability to understand market dynamics,
to incorporate user knowledge and proprietary data sources into the
simulation, and to calibrate the model in real time.
oriented programming. However, agents can also be implemented using
other programming techniques.
(1) Agents
(a) Set of actions available (buy, sell, change contract price, etc.).
(b) Decision-making rules and algorithms (how an agent makes decisions;
say, sell when the price of a stock starts to go down and buy when it
starts to go up). Those decision-making rules are often called
agent strategies. In practice, they can be implemented as separate
strategy objects and used by agents by means of the strategy pattern
(see Gamma et al. (1995) for example).
(c) Scheduling of decisions and actions - at which intervals decisions are
made (say, every week), or on which signal an action may
be executed (say, after a trade occurred). Different agents may have
different time scales: some can make decisions every day and others
every month or every second. An agent's schedule may incorporate
actions the agent may choose to take (say, analyze the last hour's prices
and volumes) as well as obligations the agent may have to fulfill (say,
return the shares it borrowed). Scheduling, however simple it may
sound, is a critical component in agent-based simulations. It affects
such aspects of the simulation as repeatability of results, determin-
ism, and effects of randomness, particularly when the simulation uses a
distributed architecture.
(d) State - how the agent's state is described: amount of capital, inventory,
production capabilities at present, etc. Most often the agent's state
is changed as a result of interaction with other agents or with the
environment.
(e) Information in- and out-flows - what information agents take into
account when making decisions, what information they provide to
others.
(f) What other agents/external factors this agent interacts with or is
affected by.
(2) Interactions and information flows
(a) Who is interacting with whom.
(b) What are the flows of information.
(c) What are the results of agents' interactions: monetary payoffs, flows
of commodities, money, etc. Example: say, agent A interacts with
agent B, provides information to agent C, and pays money to agent
B.
(3) Data available (what historical and other data is available for the whole
system and individual agents).
(4) External factors (environment). Normally these are the factors over
which agents do not have direct control, but may want or may have
to take into account when making their decisions. For an individual
investor agent such factors can be the market price of a security, Federal
Reserve Board interest rate changes, etc.: all those factors that an
agent is unlikely to be able to affect through its actions and therefore
may have to take as given.
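The ingredients listed above can be sketched in code. This is a minimal illustration of ours, not the book's actual implementation; all class and method names (Agent, TrendStrategy, etc.) are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Observation:          # information in-flow, item (e)
    price: float

class Strategy:             # decision rules, item (b), pluggable per the strategy pattern
    def decide(self, agent, obs):
        raise NotImplementedError

class TrendStrategy(Strategy):
    """Toy rule: buy when the price rises, sell when it falls."""
    def decide(self, agent, obs):
        action = "buy" if obs.price > agent.last_price else "sell"
        agent.last_price = obs.price
        return action

@dataclass
class Agent:
    strategy: Strategy                  # item (b)
    every: int = 1                      # scheduling, item (c): act every `every` ticks
    cash: float = 0.0                   # state, item (d)
    inventory: int = 0                  # state, item (d)
    last_price: float = 0.0

    def step(self, tick, obs):
        if tick % self.every != 0:      # not this agent's turn to act
            return None
        action = self.strategy.decide(self, obs)   # choose among actions, item (a)
        if action == "buy":
            self.inventory += 1
            self.cash -= obs.price
        elif action == "sell":
            self.inventory -= 1
            self.cash += obs.price
        return action                   # information out-flow, item (e)

agent = Agent(TrendStrategy(), every=1)
actions = [agent.step(t, Observation(p)) for t, p in enumerate([10.0, 11.0, 10.5])]
```

Swapping in a different `Strategy` subclass changes an agent's behavior without touching the agent or the market code, which is the practical payoff of the strategy-pattern design mentioned in item (b).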
Note that the very nature of modeling interactions between the agent
participants in the system is a step away from most classical economics.
The ability to model interactions explicitly allows the system to contain
or exhibit localized, time-dependent events, and to understand how those
events might propagate throughout the system.
For many practical and theoretical purposes we consider agent-based
modeling an extension of game theory.5 Similarities include the concepts of
agents, strategies, payoffs, information flows, and interaction structures.
The main differences are as follows:
von Neumann and Morgenstern (1944). An overview of game theory is outside the
scope of this book. See, for example, Fudenberg and Tirole (1991) for an introduction.
maker buy price and the lowest sell price are called the best bid and best
ask, respectively. The difference between the best ask and the best bid is
called the bid-ask spread, or just the spread. The spread is an important
measure, because it affects the profits made by the dealers and the costs
incurred by the trading public.6
Tick size is the smallest price increment. Historically, on Nasdaq and
other US markets, tick sizes were quoted in fractions (for example, $1/8).
Until 1997, the Nasdaq market tick size was $1/8. In 1997 it was changed
to $1/16. Our research started in 1998 with the goal of understanding
how future tick size reductions might affect the markets. Decimalization
(a change to expressing quotes and prices in the decimal system, rather than
in fractions) and a tick size reduction were implemented by Nasdaq in the
spring of 2001. Decimalization affected the market dramatically, directly
or indirectly. Some of its effects are discussed in Chapter 8.
Nasdaq also has order-driven components: for example, ECNs, and in
some cases the market makers display public (individual investors') limit
orders. Therefore, there are two types of orders: market orders and limit
orders. Market orders require execution at the best available price; limit
orders, on the other hand, carry either the maximum price the order origi-
nator is willing to pay (in case of a buy order) or the minimum price the order
originator is willing to accept (in case of a sell order), hence the name "limit".
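The definitions above can be made concrete with a small sketch. The dealer names and quote values below are invented for illustration; note the quotes sit on a $1/16 grid, as in the pre-decimalization market.

```python
# Invented quotes: dealer -> (bid, ask), in dollars, on a $1/16 tick grid.
quotes = {
    "MM1": (20.0625, 20.1875),
    "MM2": (20.1250, 20.2500),
    "MM3": (20.0000, 20.1875),
}

best_bid = max(bid for bid, _ in quotes.values())   # highest posted buy price
best_ask = min(ask for _, ask in quotes.values())   # lowest posted sell price
spread = best_ask - best_bid                        # the inside bid-ask spread

def limit_buy_executable(limit_price):
    """A buy limit order trades only if some dealer asks no more than its limit."""
    return limit_price >= best_ask
```

With these numbers the inside market is 20.125 bid, 20.1875 ask, so the spread is exactly one tick ($1/16 = 0.0625); a market buy order would execute at 20.1875, while a buy limit order at 20.125 would rest unexecuted.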
In addition to being regulated by government agencies, such as the SEC,
Nasdaq is also regulated by the NASD (National Association of Securities
Dealers). When we began our research, NASD was the parent organization
for Nasdaq, and Nasdaq was a self-regulating entity. Back in 1996, NASD
embarked on radical changes that would allow NASD to concentrate on
market regulation and investor protection, and Nasdaq to become re-
sponsible for market operations. At present this process is complete, and
Nasdaq operates as a separate entity. More information can be found on
the NASD and Nasdaq websites and elsewhere.
dealer profits earned from the spread (and the spread itself) generally declined signifi-
cantly.
model many features of real-world markets. The main actors in the model
are dealers, whom we interchangeably call market makers, and investors.
We model them as separate, autonomous entities whose interactions within
the marketplace produce price discovery and determine the market's dynamics.
The market contains a single security whose fundamental value, also
sometimes referred to as "true value", is exogenously specified: the un-
derlying value of the security is assumed to fluctuate according to a ran-
dom process. We will use "fundamental" and "true" value interchangeably
throughout this book. In the context of the simulation, "true value" is per-
haps more appropriate: while in the real world the fundamental value is not
directly observable, its analog is observable in the simulation, thus "true
value".
Investors receive a noisy version of that fundamental value and act on
the basis of that information; thus the fundamental value in the simulation
can be interpreted as an aggregate source of quasi-information. There are
investors in the simulation who make their decisions based solely on the
perceived fundamental value of the security, as well as those who follow
trends, etc. Market makers in the simulation do not have access to the
fundamental value of the security, but can extract information from the trades
that are coming their way and from other market observations; as a result,
they may, and most of the time do, possess information that is superior to
that of the individual investors.
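The information structure just described can be sketched in a few lines. This is a toy version of ours; the specific distributions and parameter values (Gaussian steps, the noise scale, etc.) are illustrative assumptions, not the book's calibrated settings.

```python
import random

rng = random.Random(42)

def true_value_path(v0=100.0, steps=50, sigma=0.25):
    """Exogenous 'true value' following a random walk."""
    path, v = [], v0
    for _ in range(steps):
        v += rng.gauss(0.0, sigma)
        path.append(v)
    return path

def investor_signal(v, noise=0.5):
    """What a fundamental investor sees: true value plus idiosyncratic noise."""
    return v + rng.gauss(0.0, noise)

path = true_value_path()
signals = [investor_signal(v) for v in path]
```

Investors act on `signals`; market makers never see `path` at all and must infer it from the trades the signals induce, which is the informational asymmetry the simulation is built around.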
Market makers and investors are represented in the simulation as au-
tonomous agents that behave according to their individual strategies: these
strategies may be built in, or can arise as a result of learning or evolution-
ary selection. The built-in strategies were mainly derived from interviews
with market experts (market makers in particular), as well as extracted
from the market data. In the learning domain, we use neural networks
and reinforcement learning to generate strategies for agents. This creative
element is important because it allows us to investigate the possibilities
resulting from strategies that have not yet been discovered by players in
the real-world market.
In our research we initially concentrated on the following topics:
(3) Validating the model (this encompasses the previous two points).
(4) Designing learning agents, and investigating the behaviors they learn
and their ability to perform profitably in the market.
Our results are significant in two respects. First, the model is robust
in that the simulated market, investors, and market makers perform real-
istically under a wide variety of conditions. Second, the market dynamics
produced by the model have the same qualitative properties as those ob-
served in real markets. Thus the model provides a test bed in which to
investigate the effects of changes in market rules and conditions. For exam-
ple, we have derived results pertaining to volatility, liquidity, spread sizes,
and spread clustering, with our main focus being on the market impact of
reducing the tick size.
One important feature of our results is that often they are not a conse-
quence of a uniquely identifiable feature of the model, or of the actions of
certain market participants. Rather, they result from a relatively complex
set of interactions of market makers, investors, market rules, and market
infrastructure. Thus, even in a relatively simple setting, we can observe
unintended consequences of the market's design.
Through deeper analysis of systems like this, and a commensurately
greater understanding of the relationships between liquidity, volume,
volatility, etc., it may well be possible to build simpler models than ours
which bring the results down to a more fundamental set of criteria. This is
a promising avenue of current research, with Doyne Farmer and his collab-
orators perhaps bringing the best combination of practical experience and
theoretical expertise together in promising new ways (see also Section 1.5
of this chapter).
Particularly early in that time period, not much literature on agent-based
modeling was available.
achieve that goal, adaptive strategies (defined as strategies that use ratio-
nal as well as heuristic rules and change them according to the players'
experiences in the game) and local interactions have to be modeled. He ana-
lyzes the effects of local interactions and imitation strategies in the Prisoner's
Dilemma game, and finds that in an equilibrium where cooperation co-
exists with non-cooperation (a mixed equilibrium), there are cooperators who
receive better payoffs than any non-cooperator. Using simulation studies, it
is shown that the dynamics of such models can exhibit chaotic behaviors, self-
replicating structures ("comets"), as well as convergence to both complete
cooperation and complete non-cooperation (see also Outkin (2003)). The conclud-
ing part of the dissertation analyzes analytically the stochastic dynamics
of a population of locally interacting best-response players and shows that
the stationary distribution is of the Gibbs-Boltzmann type and the long-run
equilibria are the maxima of a potential function.
Chapter 2
Based on a Bios Group report written by Vince Darley with Isaac Saias1
2.1 Introduction
We specified analysis of the Glosten-Milgrom (GM) model of Glosten
and Milgrom (1985) as an initial point of departure for building the model
of the Nasdaq stock market. The Glosten-Milgrom model is of interest as
an idealized framework for understanding the emergence of spreads based
on a disparity of information between informed and uninformed traders.
The aim in beginning with the one-shot or single-period Glosten-Milgrom
model was to assure ourselves that our agent-based model would behave in
the same manner as this analytically solvable model. Thereafter, Nasdaq
expressed interest in extending the basic Glosten-Milgrom model in several
ways:
It is hypothesized that there are three primary economic sources for the bid-
ask spread: order-processing costs, inventory costs, and adverse selection
costs. The first consists of the setup and operating costs of trading and
record-keeping; the second is the cost associated with carrying inventory,
including the cost of holding inventory subject to price fluctuations. Adverse
selection costs arise because some investors are better informed about a
security's value than the market maker is, and trading with such investors
will, on average, be a losing proposition for the market maker. Since mar-
ket makers have no way to distinguish between informed and uninformed
traders, they are forced to engage in losing trades and must be compensated
accordingly. Therefore, a fraction of the bid-ask spread can be viewed as
compensation for having to trade occasionally against informed traders.
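The adverse-selection component can be seen numerically with the standard binomial Glosten-Milgrom posterior updates. All numbers below are toy values of ours: a two-point asset, informed fraction mu, and uninformed buy/sell probabilities of 0.5.

```python
# Toy binomial GM quotes: ask = E[V | Buy], bid = E[V | Sell], where the
# posterior accounts for informed traders buying only when V is high.

def gm_quotes(theta, mu, v_hi=101.0, v_lo=99.0, g_buy=0.5, g_sell=0.5):
    # Posterior P[V = v_hi | Buy]: informed traders buy only when V = v_hi.
    t_buy = theta * (mu + (1 - mu) * g_buy) / (theta * mu + (1 - mu) * g_buy)
    # Posterior P[V = v_hi | Sell]: informed traders sell only when V = v_lo.
    t_sell = theta * (1 - mu) * g_sell / ((1 - theta) * mu + (1 - mu) * g_sell)
    ask = t_buy * v_hi + (1 - t_buy) * v_lo    # zero-profit ask
    bid = t_sell * v_hi + (1 - t_sell) * v_lo  # zero-profit bid
    return bid, ask

bid, ask = gm_quotes(theta=0.5, mu=0.3)   # 30% informed traders
bid0, ask0 = gm_quotes(theta=0.5, mu=0.0) # no informed traders
```

With 30% informed traders the quotes straddle the prior expectation (bid 99.70, ask 100.30); with no informed traders the adverse-selection spread collapses to zero, exactly as the argument above suggests.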
computes $P[V = \overline{V} \mid \text{Buy}] = \dfrac{\theta(\mu + (1-\mu)\gamma_B)}{\theta\mu + (1-\mu)\gamma_B}$.
same beliefs), not risk-averse, and involved in perfect competition, the bid-
ask prices quoted by the market makers are the fair value of the asset,
taking into account all the information available to the market makers. A
semi-mathematical transcription of this is:
$$\text{Bid}, \text{Ask} = E[V \mid \text{information about trade execution}].$$
We discuss here some of the complications that are buried in the estimation
of this term.
In this expression, $V$ is the random variable describing the value of the
underlying asset. The expectation $E$ is computed with the a-priori proba-
bility describing the dynamics of the system as seen by the market maker.
This probability measure takes into account all the many details of the mar-
ket environment: how $V$ is selected by nature, how the market operates,
and how traders are selected and act. As we see, much is hidden behind
this apparently innocuous expectation! "As seen by the market maker" in-
dicates that the market maker might not know, in general, all the market's
dynamics. Similarly, the "information about trade execution" is the his-
torical information available to the market makers: the market makers in
general only partially know the past execution. These details illustrate the
intricacies of the specifications of a market simulation. The computation of
the term $E[V \mid \text{information about trade execution}]$ is correspondingly very
challenging in any extension of the Glosten-Milgrom model.
The notion of information/knowledge is very important. One must dis-
tinguish between the a-priori knowledge of the dynamics of the system and
the on-line information revealed as the simulation unfolds. The dynam-
ics of the system encompass the rules and regulations of the market, the
strategies of the players, the laws of evolution of the population of players,
and the a-priori dynamics of the price(s). These concepts correspond to
parameters that are typically loaded before any simulation begins. The on-
line information has to do with the resolution of non-deterministic choices
as they occur during the execution of the simulation. Such resolution of
non-deterministic choices comes in two guises. On the one hand, coins
are being flipped; more formally, random variables are being realized
according to their prescribed probability law. On the other hand, some
game-theoretic choices are being made. This last category often includes
the scheduling choices: which player takes the next step. It also includes
the branching points of the computation.
These complications appear in the iterative version of the multinomial
Fig. 2.1 Convergence times: defined as the length of time between when a new under-
lying price is set in the market, and the time by which the market maker has adjusted
its bid and ask prices to within a small neighborhood of that true price. The horizontal
axis represents the parameter $\mu$, and the vertical axis the mean time to convergence.
Error bars of one standard deviation are shown.
The standard GM model assumes that the asset's price $V$ can take only
two values, $\overline{V}$ and $\underline{V}$. In this section, we extend the GM mechanism to
situations where the asset can take any (finite) number of values. With
appropriate discretization, this extension therefore allows us to account for
continuous distributions of the asset value $V$. The multinomial situation differs
significantly from the binomial situation in that an informed trader will
buy the asset only if $V$ is bigger than the Ask price, and sell only if $V$
is less than the Bid price. Hence, if the value $V$ lies between Bid and
Ask, the informed trader will do nothing: we say that it "sits". Note that,
if $\gamma_B + \gamma_S = 1$, then uninformed traders will always trade (never sit),
so that a sitting trader must be informed. We assume here that trades
are anonymous; this implies that the market maker cannot infer over time
which trader is likely to be informed, and preserves the GM setting.
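The informed trader's rule just described is simple enough to state as a one-line sketch:

```python
# The multinomial informed trader's decision rule described above.
def informed_action(v, bid, ask):
    if v > ask:
        return "buy"
    if v < bid:
        return "sell"
    return "sit"   # v lies between bid and ask: trading would lose money
```

For instance, with quotes 99 bid / 101 ask, a trader who knows the value is 102 buys, one who knows it is 98 sells, and one who knows it is 100 sits.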
As the execution unfolds and information about trades trickles in, the mar-
ket makers update both their bid-ask prices and the probability laws used
for their Bayesian updates. Section 2.4.2 discusses the probability updates.
Section 2.4.3 discusses the bid-ask price updates. Both are based on similar
equations.
a Buy, a Sell, or a Sit. The market maker estimates the probability that
$V$ is equal to a value $v_i$ by $P[V = v_i \mid T_1 = t_1, \ldots, T_k = t_k]$. For simplicity,
we let $P_k[V = \cdot] \stackrel{\text{def}}{=} P[V = \cdot \mid T_1 = t_1, \ldots, T_k = t_k]$ denote the conditional
probability of $V$ conditioned on the first $k$ trades: $P_k[V = \cdot]$ depends
implicitly on the values $t_1, \ldots, t_k$. Let $\text{Ask}_k$ and $\text{Bid}_k$ denote the ask and
bid prices after the first $k$ trades.
The distribution $P_k$ is updated into $P_{k+1}$ as follows. Assume that
$T_{k+1} = \text{Buy}$. A simple application of 2.1 and 2.4 gives:
$$P_{k+1}[V = v] = \frac{\mu P_k[V = v;\, v > \text{Ask}_k] + (1-\mu)\gamma_B P_k[V = v]}{\mu P_k[V > \text{Ask}_k] + (1-\mu)\gamma_B}. \qquad (2.7)$$
Similarly, if $T_{k+1} = \text{Sell}$, combining 2.2 and 2.5 gives:
$$P_{k+1}[V = v] = \frac{\mu P_k[V = v;\, v < \text{Bid}_k] + (1-\mu)\gamma_S P_k[V = v]}{\mu P_k[V < \text{Bid}_k] + (1-\mu)\gamma_S}. \qquad (2.8)$$
And if $T_{k+1} = \text{Sit}$, combining 2.3 and 2.6 gives:
$$P_{k+1}[V = v] = \frac{\mu P_k[V = v;\, \text{Bid}_k \le v \le \text{Ask}_k] + (1-\mu)(1-\gamma_S-\gamma_B) P_k[V = v]}{\mu P_k[\text{Bid}_k \le V \le \text{Ask}_k] + (1-\mu)(1-\gamma_S-\gamma_B)}. \qquad (2.9)$$
This shows that, if $T_{k+1} = \text{Sit}$, then $P_{k+1}[\text{Bid}_k \le V \le \text{Ask}_k] = 1$, i.e.,
the distribution $P_{k+1}$ is concentrated on the interval $[\text{Bid}_k, \text{Ask}_k]$. This
means that, after the $(k+1)$-th trade $T_{k+1} = \text{Sit}$, the market maker has learned
that the value $V$ is inside the interval $[\text{Bid}_k, \text{Ask}_k]$. This phenomenon can
be easily explained as follows: the event $T_{k+1} = \text{Sit}$ means that the $(k+1)$-th
trade was refused by the trader involved. The assumption $\gamma_B + \gamma_S = 1$
implies that this trader must be informed. As informed traders refuse to
trade precisely when $\text{Bid}_k \le V \le \text{Ask}_k$, the market maker learns from that
refusal that the value $V$ is located in the interval $[\text{Bid}_k, \text{Ask}_k]$.
Our formulae show that subsequent probability measures $P_l$, $l \ge k+1$, are
derived from the measure $P_{k+1}$. They will consequently all be concentrated
$$\text{Ask}_k = E_k[V \mid T_{k+1} = \text{Buy}] = \frac{E_k[V;\, T_{k+1} = \text{Buy}]}{P_k[T_{k+1} = \text{Buy}]} = \frac{\mu \sum_i v_i P_k[V = v_i;\, v_i > \text{Ask}_k] + (1-\mu)\gamma_B E_k[V]}{\mu P_k[V > \text{Ask}_k] + (1-\mu)\gamma_B}$$
and
$$\text{Bid}_k = E_k[V \mid T_{k+1} = \text{Sell}] = \frac{\mu \sum_i v_i P_k[V = v_i;\, v_i < \text{Bid}_k] + (1-\mu)\gamma_S E_k[V]}{\mu P_k[V < \text{Bid}_k] + (1-\mu)\gamma_S}.$$
Thus the new prices Bidk and Askk are defined by an implicit relation. This
can be solved dynamically by using root finding methods. These methods
are very intricate, so we implement instead the following iterative approach,
where Askk is derived from Askk1 and Pk , and where Bidk is derived from
January 26, 2007 10:50 WSPC/Book Trim Size for 9in x 6in NasdaqBook2004ws
Bidk1 and Pk :
and
Bidk = Ek [V |Tk+1 = Sell]
i vi Pk [V = vi ; vi < Bidk1 ] + (1 ) S Ek [V ]
P
= .
Pk [V < Bidk1 ] + (1 ) S
Our simulations show that this iterative scheme converges well and rapidly. The imprecision of the scheme is therefore very small, and it still allows the market maker to learn the true value of V efficiently from its trades.
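A minimal sketch of this iterative quote update, under the same naming assumptions as before: each new quote is obtained by evaluating the conditional-expectation formula at the previous quote, rather than solving the implicit equation exactly.

```python
# Sketch of the one-step iterative quote update: evaluate E[V | Buy]
# (resp. E[V | Sell]) at the *previous* ask (resp. bid) instead of
# solving the implicit fixed-point equation exactly.

def iterate_quotes(P, prev_bid, prev_ask, eta, gamma_b, gamma_s):
    ev = sum(v * p for v, p in P.items())          # E_k[V]
    # Ask: condition on a Buy at the previous ask
    num_a = (eta * sum(v * p for v, p in P.items() if v > prev_ask)
             + (1 - eta) * gamma_b * ev)
    den_a = (eta * sum(p for v, p in P.items() if v > prev_ask)
             + (1 - eta) * gamma_b)
    # Bid: condition on a Sell at the previous bid
    num_b = (eta * sum(v * p for v, p in P.items() if v < prev_bid)
             + (1 - eta) * gamma_s * ev)
    den_b = (eta * sum(p for v, p in P.items() if v < prev_bid)
             + (1 - eta) * gamma_s)
    return num_b / den_b, num_a / den_a            # (bid, ask)

# Example: uniform prior, previous quotes collapsed at the mean
prior = {v: 1 / 11 for v in range(10, 21)}
bid1, ask1 = iterate_quotes(prior, prev_bid=15.0, prev_ask=15.0,
                            eta=0.5, gamma_b=0.25, gamma_s=0.25)
```

With a symmetric prior and γ_B = γ_S, the resulting quotes straddle the expected value symmetrically.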
We show here a situation where the approximation costs money. Consider the case where γ_B + γ_S = 1, and assume that the (k+1)th trade is a Sit. We saw in the previous section that the probability measure P_{k+1} is then concentrated on the interval [Bid_k, Ask_k]. The new price Ask_{k+1} is set by the Buy-update formula above. Define

f_B(x) := x (η + (1 − η) γ_B) / (η x + (1 − η) γ_B)
and

f_S(x) := (1 − η) γ_S x / (η (1 − x) + (1 − η) γ_S).
As stated in Section 2.3, f_B (respectively f_S) is the function allowing the market maker to update the a priori probability that V equals V̄, using the a posteriori information that a Buy (respectively a Sell) just occurred. The function f_B is concave and increasing from 0 to 1, with f_B(x) ≥ x. The function f_S is convex and increasing from 0 to 1, with f_S(x) ≤ x. The probability distribution conditioned on the first k trades T_1, T_2, …, T_k is then obtained by composing these functions in the order of the observed trades.
Once we have extended the multinomial theory to deal with the additional complication of a Sit, we observe that convergence occurs very rapidly: within a small number of time steps the probability density is concentrated at a single price point (which coincides with the true value), with all other price points having probability at or near zero.
Note, however, that the GM model, in assigning zero probability to a large number of price points, assumes a stationary world. Once we impose a random walk on the underlying price, such zero probabilities are no longer valid: at isolated times the GM market maker's model is suddenly invalidated. Within the basic GM framework, invalidation is not an issue, but in our extensions it certainly is: the problem is that events that occur, but to which the agent assigns zero probability, are simply ignored. This means that, once the price has moved away from the converged value, it is too late for the agent to learn that fact.
It is tricky to decide the best way to resolve this issue: the GM market
maker agent must notice when its model has become incorrect, and then
adjust its probabilities accordingly. If the agent knows that the price is
undergoing a random walk, perhaps it should model that walk and attach
non-zero probabilities to a range of prices (this in turn would encourage a
non-zero spread, as we would expect). Alternatively, if the agent does not
know about the random walk, and instead simply knows that its old model
is most likely incorrect, it should reset its beliefs to a uniform state.
A separate issue is exactly how the agent should make its decision on
whether or not its model has been invalidated. The most sophisticated
approach would be for the agent to estimate the probability of the sequence
of events observed (buys, sells, and sits) in the context of its model and
its knowledge of the proportion of informed to uninformed traders. When
the probability drops below a given sensitivity threshold, the agent can
conclude that its model is no longer correct.
A less mathematical but perhaps more general approach would be for
the market maker to maintain a number of different models of the world (of
GM and other types), and use the model that has been the most accurate
historical predictor over the recent past.
We postpone such decisions for future work. We have implemented a simple approach in which the agent estimates the probability of the last observed event. If that probability is sufficiently low in the context of its model (although the event occurred), the agent resets its beliefs to a uniform state with a given low probability (its belief sensitivity). This simple solution allows the agent to continue to learn in dynamic environments, but has the drawback that it will sometimes reset the agent's beliefs even when its model is still accurate. One can adjust the sensitivity parameter of the agent's beliefs to compensate in either direction.
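One reading of this reset rule can be sketched as follows, with the belief sensitivity acting as a probability threshold; the function name and threshold value are illustrative, not the book's implementation.

```python
# Sketch of the belief-reset rule: if the model assigned the event that
# just occurred a probability below `sensitivity`, reset beliefs to a
# uniform state. Names and default threshold are illustrative.

def maybe_reset(beliefs, observed_prob, sensitivity=1e-3):
    """beliefs: dict {price: prob}. Returns (new_beliefs, was_reset)."""
    if observed_prob < sensitivity:
        n = len(beliefs)
        return {v: 1.0 / n for v in beliefs}, True   # uniform reset
    return beliefs, False

# Example: converged beliefs invalidated by a zero-probability event
converged = {10: 0.0, 11: 1.0}
after, was_reset = maybe_reset(converged, observed_prob=0.0)
```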
Fig. 2.3 Basic empirical results for a GM two-market-maker system with inventory and fixed agent priority. (a) Bid-ask time series for the market maker with priority; (b) bid-ask time series for the market maker without priority.
In this system, the priority rules were chosen so that the first market maker would always take priority over the second if their bid or ask prices were equal.²
This implies that the first agent is more likely to accumulate inventory,
and hence more likely to adjust his prices in unusual ways.
When we change the priority rule so that ties are broken randomly, we
can see that both agents adjust their prices in disadvantageous ways to deal
with their inventory problems (Figure 2.4).
We also note that market makers with inventory tend to accumulate
less profit (in fact, they lose money most of the time). This leads to the ob-
servation that an intelligently strategizing market maker who cares about
inventory will be forced to increase his spread to compensate for his inven-
tory aversion, or else lose money. Under evolutionary dynamics, therefore,
we would expect agents like those we have modeled here to be replaced over
time by agents who maintain larger spreads.
Fig. 2.4 Basic empirical results for a GM two-market-maker system with inventory and randomized priority. (a) Bid-ask time series for market maker 1; (b) bid-ask time series for market maker 2.
Information about that true value diffuses (at a rate studied above) from the informed traders to the market makers.
Clearly, in reality, prices are not fixed in this way. We must therefore
relax this assumption. Ideally we would let price emerge from the mar-
ket itself, from the accumulation of trades between traders and dealers,
representing a flow of supply and demand. The price would then be an en-
dogenous observable of the simulation. However, such a situation requires
significantly more sophisticated models of traders and dealers than the GM
model provides.
Rather than leaping directly to such an extension, we consider an in-
termediate case: that in which the price is still given exogenously, but
fluctuates over time in a random walk.
In the basic GM model, fluctuations can consist only of flips between the two extremal values, V̲ and V̄, so this model is not so interesting in the random-walk case. Therefore, we investigate this behavior in the multinomial GM case. We use fixed price points between 10 and 20 in unit steps, and impose a random walk on that range: at each time step, with probability p, the price is adjusted by ±1 (paying attention to end effects).³ We sample the observed volatility of the market maker's bid and ask prices as a function of the average interval between price jumps, 1/p.
³ We also consider the case in which updates occur deterministically every f time steps, which is approximately equivalent to the p = 1/f case.
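The walk on the fixed price grid can be sketched as follows; clamping at the boundaries is one possible treatment of the end effects.

```python
import random

# Sketch of the exogenous random walk on fixed price points 10..20:
# with probability p per time step the true value moves by +/-1,
# clamped at the ends of the range ("paying attention to end effects").

def step_value(v, p, lo=10, hi=20, rng=random):
    """Advance the exogenous true value by one time step."""
    if rng.random() >= p:
        return v                    # no jump this step
    v += rng.choice((-1, 1))        # jump up or down by one price point
    return max(lo, min(hi, v))      # clamp at the boundaries

# Example: a 500-step path starting from the middle of the range
rng = random.Random(0)
path = [15]
for _ in range(500):
    path.append(step_value(path[-1], 0.5, rng=rng))
```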
Chapter 3
3.1 Introduction
¹ "Consciously" means that their rules may not explicitly imply any parasitic behavior.
3.2.1 Market

The market contains a single security whose value is exogenously specified: in our models, the underlying value of the security is assumed to fluctuate according to a random walk with a drift. While the fundamental value of a company may be affected by the market dynamics, specifying it exogenously is sufficient for the purposes of our research. We've developed and experimented with a large number of objects representing the fundamental value.
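A minimal sketch of one such fundamental-value object, assuming a log-normal random walk with drift; the drift and volatility parameters below are illustrative, not calibrated values.

```python
import math
import random

# Sketch of an exogenous fundamental-value process: a log-normal random
# walk with drift. mu and sigma are illustrative parameters.

def fundamental_path(v0, mu, sigma, n, rng):
    """Return n+1 values of a log-normal walk starting at v0."""
    path = [v0]
    for _ in range(n):
        path.append(path[-1] * math.exp(mu + sigma * rng.gauss(0.0, 1.0)))
    return path

# Example: 100 steps of a driftless walk with 1% log-volatility
rng = random.Random(1)
path = fundamental_path(15.0, 0.0, 0.01, 100, rng)
```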
3.2.2 Strategies

The second phase of model development was to develop agents' fitness functions and strategies. We have two types of strategies: Dealer strategies and Investor strategies. Both types of agents are concerned with making money; however, Dealers, as the mediators of all trading, have much more latitude in their strategies. Dealers must set both bid and offer prices and volumes. Investors must decide only on the course of action at any given moment, namely, whether to buy or sell at a given price and how many shares.
² An interesting issue which we leave to the future is to investigate information time lags as well as errors.
A Basic Dealer only reacts to trades which come his own way. He maintains his quotes until he receives a certain number of trades on the buy (or sell) side, and then adjusts the quote on that side appropriately. It is a simple strategy which manages to track price very effectively, in the absence of parasites.

A variant of the Basic Dealer strategy, called Basic Inventory Dealer, uses hardwired rules to adjust the center of its quotes to balance its inventory. This is a simple, robust strategy which pays close attention to inventory and can cope well with nearly all market scenarios. It isn't very pro-active, however, so if the market isn't doing very much, neither will this strategy. The components of this strategy's decision-making process are: recent order flow, the number of quote changes in the same direction, inventory, and its current spread.

The Basic No Inventory Dealer is a variant of the Basic Inventory Dealer strategy: it uses similar decision rules, but does not attempt to balance its inventory.
³ As discussed later, there is not always a clear-cut distinction between the two.
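An illustrative toy version of the Basic-Dealer-style quote rule described above; the trade threshold and tick size here are our assumptions, not the values used in the book.

```python
# Toy Basic-Dealer rule: hold quotes until a set number of trades arrives
# on one side, then shift both quotes one tick in that direction.
# Threshold and tick size are illustrative assumptions.

class BasicDealer:
    def __init__(self, bid, ask, threshold=3, tick=0.0625):
        self.bid, self.ask = bid, ask
        self.threshold, self.tick = threshold, tick
        self.buys = self.sells = 0          # trades received on each side

    def on_trade(self, side):
        if side == "buy":                   # an investor bought at our ask
            self.buys += 1
            if self.buys >= self.threshold:
                self.bid += self.tick       # demand pressure: shift quotes up
                self.ask += self.tick
                self.buys = 0
        else:                               # an investor sold at our bid
            self.sells += 1
            if self.sells >= self.threshold:
                self.bid -= self.tick       # supply pressure: shift quotes down
                self.ask -= self.tick
                self.sells = 0

# Example: three buys in a row trigger one upward quote adjustment
dealer = BasicDealer(bid=14.0, ask=14.5)
for _ in range(3):
    dealer.on_trade("buy")
```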
The generally more successful Spread Learner and Q-Learner strategies are described below and in Chapter 5.
⁶ This strategy is due to Bill Reynolds.
We should point out that the names of these strategies are somewhat arbitrary and even misleading: for instance, the New Volume dealer turned out to be a very effective parasite⁷, which was certainly not our intention when we designed it.
(1) The reward structure: how short-term outcomes translate into reward or punishment.
(2) The actions available to the dealer.
(3) What information about the market and the dealer will be part of the state for learning.
(4) The method used for learning what rewards to expect from particular states and actions.
The reinforcement-learning agent attempts to learn actions that will maximize the value of the stream of reward signals expected in the future. However, the values of future rewards are discounted by an exponential factor based on the agent's discount-factor field.
The type of reward given to the agents can have a very strong effect on the strategies they develop. It is particularly difficult to create well-behaved dealer strategies whose rewards are based on the market value of their stock holdings at any given time; however, dealers whose reward signal is derived from an asset valuation based on the purchase price of stocks tend to be better behaved. This can be attributed to the stability of the reward signal, as the valuation of the dealer's assets is insulated from market fluctuations and from the impact of the dealer's own actions on the market value of the stock.
We have developed several kinds of reinforcement-learning dealers (for
example Spread Learner and Q-Learner, as well as Classifier and
Matching dealer). All share the same infrastructure; they differ mainly
in what aspects of their internal state and the state of the market they
pay attention to, and in what actions they can choose between. For these
dealers, the possible actions are raising or lowering their bid and offer quotes
by some amount. This creates a very large number of possible actions, so
the actions are typically limited to raising or lowering quotes by a small
amount (e.g., one tick) or changing the spread of the quotes by a small
amount.
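The four design elements above can be sketched as a small Q-learning dealer: a Q-table over (state, action) pairs, epsilon-greedy action choice, and a discounted one-step update. All names and parameter values are illustrative, not the book's implementation.

```python
import random

# Sketch of a Q-learning dealer: actions move the quotes by one tick,
# rewards update a Q-table with an exponentially discounted target.

ACTIONS = ("bid+1", "bid-1", "ask+1", "ask-1")   # quote moves by one tick

class QDealer:
    def __init__(self, alpha=0.1, discount=0.95, epsilon=0.1, rng=random):
        self.q = {}                               # (state, action) -> value
        self.alpha, self.discount, self.epsilon = alpha, discount, epsilon
        self.rng = rng

    def choose(self, state):
        if self.rng.random() < self.epsilon:      # explore occasionally
            return self.rng.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, reward, next_state):
        # one-step Q-learning update with exponential discounting
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.discount * best_next - old)

# Example: one rewarded step makes the rewarded action preferred
dealer = QDealer(epsilon=0.0, rng=random.Random(1))
dealer.learn("balanced", "bid+1", reward=1.0, next_state="balanced")
```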
Reinforcement-learning dealers take some time to learn, and they go
bankrupt quite frequently at the beginning of learning. By default, they are
resurrected after bankruptcy and are thus given the opportunity to continue
learning, rather than being replaced by a clone of some other dealer. Each
time a dealer is resurrected, this is said to be a new incarnation of the dealer.
The most interesting region for the dealer to learn in is at the beginning
of an incarnation, because there it is most vulnerable to mistakes. Once a
dealer has become successful, it has more assets and is far less vulnerable to
mistakes. Consequently, reinforcement-learning dealers can be configured
to commit suicide if they survive for some number of market steps.
In order to compare the viability of the learning strategies with that of the hard-wired ones, we compared the assets of different market makers during the simulation. We found that, under a variety of conditions, learning
strategies outperform the hard-wired ones. However, there are also market
scenarios, in particular with many parasites or with high volatility, where
the learning strategies at times find it difficult to survive. In such scenarios, relatively simple strategies can perform better than very sophisticated
ones. We conjecture that this is not an artifact of the model, but rather an indication of the difficulty of learning from scratch in a complex market environment. There, evolution may provide better guidance than individual learning: we had rather more success with strategies that embodied simple rules and only required the agents to learn which parameters allowed those rules to function effectively.
Our results so far indicate that the choice of strategy building blocks is
of vital importance in allowing evolutionary learning approaches to function
effectively (analogous results are well known in the fitness landscape and
optimization literature). The literature supports our result that a sophisticated strategy is by no means certain to be more successful. For instance, the Double Auction Market work by John Rust et al. found that the simplest parasitic strategy was the most effective. Furthermore, as suggested by Darley and Kauffman (1997), there may be some bound on the sophistication of a strategy for it to be effective in a system of co-evolving agents. We discuss the development of learning strategies in more detail in Chapter 5 of this book.
At the highest level, the simulation consists of four types of software objects:
Price, Dealers, Investors, and the Market itself. At the next level down,
all of these objects have member objects that provide communication and
bookkeeping capabilities.
The timing of the simulation is determined by a scheduling agent, which
assigns to every player (dealer or investor) a period of time in which the
player is able to take his next action⁸. A dealer can also update his quotes
after every transaction he performs. There is much flexibility in the model
with respect to the relative frequency with which each dealer and investor
acts. In most simulation runs, dealers acted at a fast rate, with investors
having a larger range of rates, from very fast to quite slow. The scheduling
process could be adjusted to model the order flow of different securities.
We have enabled the ability to replicate the results of a simulation by setting a random seed (or a set of random seeds) that controls the random variables used in the simulation. This capability is used, for example, to display the effects of different tick sizes against the same randomly generated scenario.
⁸ An action for a dealer is to choose his bid and ask prices, and for an investor is to buy or sell at the best currently available price.
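Seed-controlled replication can be sketched as follows: the same seed reproduces the same sequence of scheduling draws, so two runs differing only in, say, tick size face identical randomness. The event structure below is an illustrative stand-in for the simulation's scheduler.

```python
import random

# Sketch of seed-controlled replication: one seeded random source drives
# all scheduling draws, so identical seeds give identical event sequences.

def run_schedule(seed, n_events):
    rng = random.Random(seed)       # a single seeded source for all draws
    return [rng.choice(("dealer", "investor")) for _ in range(n_events)]
```

Two runs with the same seed produce the same schedule, which is what makes side-by-side comparisons of market rules meaningful.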
and many of its outputs. Below we describe the analysis that was carried
out.
3.4 Results
Our results fall into a number of different areas. The primary results, reflecting the focus of our investigation, pertain to tick size effects, i.e., how the market's performance depends on the tick size. However, we have also investigated a variety of other issues, and we present some of those results in this book. The results presented in this chapter are based on a population of dealer strategies that we believe is representative of the market; this belief is also supported by our analysis of trading and quote data in Chapter 6 on Calibration.
Fig. 3.2 Fundamental value tracking in the presence of parasitic strategies (tick size
$0.01), yellow = offer price, red = bid price, green = fundamental value. For successful
price discovery, the fundamental value falls between the bid and offer prices.
Fig. 3.3 Fundamental value tracking with the same market conditions as in Figure 3.2,
except that the tick size is $1/16.
Value tracking is significantly improved when the tick size is increased from $0.01 to $1/16.
The analogous Figures 3.4 and 3.5 demonstrate that, under the same market conditions and with a smaller number of parasites, market tracking is almost unaffected by the same change in the tick size. At first glance, Figures 3.4 and 3.5 even look identical; however, there are subtle differences between them.
We would like to point out that, although we investigate parasitic dealer strategies in our models, in the real market the strategies that one might call parasitic are also often observed in day traders operating via electronic execution systems. However, the lack of a limit-order quotation and electronic execution system in the simulation model limits us to investigating parasitism in the context of dealer strategies. We consider this a reasonable approximation to the more complete scenario. Additionally, we found
Fig. 3.4 Fundamental value tracking with a few weakly parasitic strategies, and a tick
size of a penny.
Fig. 3.5 Fundamental value tracking with the same market conditions as in Figure 3.4,
except that tick size is $1/16.
out, while calibrating the simulation to trading and quote data, that many real-world ECNs (which provide a capability for individual investors, including day traders, to submit limit orders) can be described well by parasitic strategies in our simulation.
Since we are using a true value for the price, we are able to quantify the impact of different strategies and market structures. In particular, we have examined data on the statistical distribution of (best bid − true value)², (best offer − true value)², the average value of the spread, and the standard deviation of the spread over time as estimates of the market's ability to track price.
The preceding statements and analysis about price discovery and track-
ing the market are intuitively clear. A quantitative criterion, nevertheless,
would be useful, especially to distinguish similar cases. We will define mar-
ket tracking as consisting of two different components:
January 26, 2007 10:50 WSPC/Book Trim Size for 9in x 6in NasdaqBook2004ws
(1) The market bid and ask prices are close to the true value;
(2) The true value lies between the market bid and ask prices.
As one can see, these two conditions do differ: it is possible for the
market to follow the true price while the true price is outside the spread.
It is also possible for the market to react very slowly to the true price even though the true price stays inside the bid-ask spread, essentially because the spread is large. Obviously, some regimes of tracking the price are more beneficial for Dealers and others for Investors. One potential goal for future research would be to find a set of criteria for good market rules, where the interests of Dealers are balanced against the interests of Investors, and where the market's underlying goal of performing price discovery is reasonably satisfied.
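The two tracking components defined above can be estimated from simulation output roughly as follows; the series names are ours.

```python
# Sketch of the two tracking criteria: (1) mean squared distance of the
# quotes from the true value, (2) fraction of time the true value lies
# inside [bid, ask]. Inputs are equal-length time series.

def tracking_metrics(bids, asks, values):
    n = len(values)
    mse_bid = sum((b - v) ** 2 for b, v in zip(bids, values)) / n
    mse_ask = sum((a - v) ** 2 for a, v in zip(asks, values)) / n
    inside = sum(b <= v <= a for b, a, v in zip(bids, asks, values)) / n
    return mse_bid, mse_ask, inside

# Example: quotes straddle the value in period 1 but not in period 2
mse_bid, mse_ask, inside = tracking_metrics([9.0, 10.0], [11.0, 12.0],
                                            [10.0, 13.0])
```

As the two criteria in the text suggest, the quote-distance measures and the inside-the-spread fraction can disagree, which is why both are reported.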
To quantify the previous fundamental value tracking results, we have run similar scenarios 100,000 times. In particular, we calculated the standard deviation of the differences between the fundamental value and the average market price. Those results are presented in Figure 3.6. The standard deviation of (price − fundamental value) for a population with an insignificant number of parasites remains basically flat. The same data, with a significant number of parasitic market makers in the population, is different: up to a certain level, decreasing the tick size reduced the standard deviation of the price-value difference (effectively improving the market's tracking and price discovery abilities); however, the standard deviation reaches a minimum at a tick size of $1/32, and then starts rising sharply with further decreases in tick size.
This leads us to the conclusion that, in our simulation, there exists a certain tick size that is close to optimal in terms of the market's ability to track the price. Decreasing the tick size significantly below that level, in the presence of parasites, will greatly impair the market's ability to perform price discovery.
In addition, we observed that most parasitic strategies also seem to
create excess volatility in the market. Visually, it looks as if the true value is
roughly approximated by the dealers quotes, but the swings become larger
in magnitude and at times appear unjustifiable from the point of view of the
underlying true value dynamics. These excess volatility phenomena may
be a contributing factor to the poor value tracking in the presence of parasites, and are illustrated in Figure 3.7.
Fig. 3.6 The standard deviation of (Price − Fundamental Value), with a significant number of parasites (magenta line) and a few weakly parasitic strategies (blue line).

Fig. 3.7 Market dynamics in the presence of a significant number of parasites. Tick size $1/4.

We therefore conclude that there are two main effects of parasitic strategies on the price discovery process: (1) they impede the price discovery
process, especially with smaller tick sizes, and (2) they seem to contribute
to the creation of excess volatility in the market when the tick size is large.
It is necessary to note that this excess volatility is present only in certain
scenarios; it is affected by market conditions and the populations of investor
and market maker types.
The above results were obtained for a population of investors who use only fundamentals to make their decisions. Thus there is always noisy information available, whether the market tracks the true value or not. However, ways to use this information to the agents' advantage are not straightforward. Even if there are detectable deviations of the market price from the true value, they may be relatively short-lived or insufficient in magnitude, and, most importantly, they do not necessarily indicate the direction that the market price and true value will take in the next period. It may be possible to develop a strategy that can take advantage of significant deviations of the market price from the true value, but in practice this does not seem to be easy to do, and it may not always be theoretically possible.
There thus appear to be two main impediments to the price discovery
process in the model with fundamental investors:
turn depend upon the tick size. The exact nature of this connection, if any,
will require further investigation.
One of the main factors affecting, and at times facilitating, the process of
the price discovery is the behavior of investors. Roughly, there are two main
types of investors: those who make decisions according to the fundamentals,
and those who do not. Of course, there are various shades of gray between
these two, but the important factor is whether or not the investors rely on
fundamentals to make their decision.
Most of our results are derived under the assumption that investors have access to the fundamental value of the security, plus some noise. We also discuss in this book the implications of various non-fundamental strategies for the market, but that discussion is necessarily more arbitrary, for reasons of model calibration. Besides, adding non-fundamental types of traders tends in general to negatively affect the price discovery process, and thus strengthens our main results.
Our model can help us understand how global characteristics of the market (such as interaction structure and rules) affect market behavior. These effects often propagate through the market by affecting individuals' behavior, and only then does their effect appear on the global level. Interaction structure matters: for example, it seems that many effects we observe in the real-world stock market (such as spread clustering and fat tails) can be partly explained just by the market's particular form of organization (the existence of dealers, spreads, etc.).
It is clear that the market's infrastructure and rules are among the most important determinants of global market performance. They are also important factors affecting the strategies of individual market participants, and the profitability of those strategies. As we noted above, often the outcome of a strategy is not uniquely associated with any particular feature of the model or the behavior of a set of market participants, but rather with a complex set of their interactions.
Certain patterns in the real-world stock market seem to be important attributes of market dynamics. These include the existence of fat tails, persistent volatility, and spread clustering (although the reasons for the latter are still much debated). To begin to establish the validity of our model, we have conducted a few computer experiments to see whether we can observe these effects in our model. We have observed fat tails and persistent volatility.
Current stock market literature (see Lux and Marchesi (1999), for example) asserts that variations in stock price follow a distribution that is similar to a Normal one, but has a higher probability of extreme events ("fat tails"). In other words, fat tails are considered aberrations from normality in price data.
The precise meaning of normality is somewhat unclear: where is the assumed normality derived from? In the simulation setting, however, normality is easy to derive. Among other possible alternatives, we investigated various scenarios with the underlying fundamental value following a log-normal walk. We have observed that, in many cases, the resulting market price dynamics resemble a Lévy distribution rather than a log-normal one. For example, Fig. 3.8 is a histogram of the differences in the logarithms of the true value. This looks (and is designed to be) normally distributed.¹⁰
Fig. 3.8 Frequency histogram of the distribution of differences in the logarithms of true
value.
¹⁰ The data for most of the histograms was derived from running the simulation for 100,000 periods. The details of most of the histograms (such as the fatness of their tails) will depend on the exact composition of the dealer and investor populations, as well as on other market characteristics.
Fig. 3.9 Frequency histogram of the distribution of the differences of the logarithms of
the average price.
Even if there are no herding effects, and every market participant is acting based only on the true value (or a noisy version of it), the resulting price distribution may differ from a Normal one just because there is a bid-ask spread. This is a factor that has not been well addressed in the current literature.
With or without explicit herd effects, market participants are affected
by the actions of others, and even by their own actions in previous
periods. The mechanism for this is very simple: When an agent makes
a decision, even if it is based purely on the fundamental value, he
takes into account the current price of the security, which is potentially
affected by his own previous behavior, and by actions of other agents.
Sampling effects: There is no reason to believe that the market price is
updated synchronously with the underlying true value (for instance, in
the simulation, the former will be affected by the actions of the market
participants). Therefore, sampling effects by themselves can plausibly affect the observed distribution of price changes.
The difference is that, in Fig. 3.12, the price data is sampled 50 periods
apart.
Fig. 3.12 Histogram of the differences in the logs of average price, with data points set
50 periods apart.
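The bid-ask-bounce point made above can be illustrated with a toy transaction-price series: even with a constant true value, trades alternating between bid and ask produce non-zero price changes, so the observed price-change distribution differs from that of the value itself. The spread value below is illustrative.

```python
import random

# Toy illustration of bid-ask bounce: with a *constant* true value,
# each trade executes at the bid or the ask with equal probability,
# so observed price changes take values in {-spread, 0, +spread}.

def bounce_prices(value, half_spread, n, rng):
    """Transaction prices when trades randomly hit the bid or the ask."""
    return [value + rng.choice((-half_spread, half_spread)) for _ in range(n)]

rng = random.Random(7)
prices = bounce_prices(15.0, 0.25, 200, rng)
changes = [b - a for a, b in zip(prices, prices[1:])]
```

Even though the underlying value never moves, the transaction-price changes form a three-point distribution, which is one mechanism by which the spread alone reshapes observed price statistics.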
fied, we will see clustering of the mean. The next chapter discusses and
analyses spread clustering in more detail.
In the real world, different market making strategies will have different
levels of profitability for a given set of market conditions. In addition,
depending on these conditions, both the absolute and the relative levels of
profitability for different strategies will change. These changes in profitabil-
ity are deeply affected both by the investors and market makers learning
processes, and by the evolution of the system as a whole.
In a larger setting, changing market circumstances may bring not just an incremental change in the profitability level, but rather a more dramatic one: some strategies may become extinct and others, potentially qualitatively different, may appear. (This may be similar to the methods followed by the large Nasdaq market maker, Knight Securities, who seem to use information in an innovative way that differs from that of traditional dealers and is often superior in profitability.)
This research will not only help us to better calibrate the model, but will also potentially allow us to apply the model to developing better strategies for dealing with changing market circumstances.
First, our model can help us understand how global characteristics of the market (such as interaction structure and rules) affect market behavior. These effects often propagate through the market by affecting individuals' behavior, and only then does their effect appear on the global level. Interaction structure does matter: for example, it seems that many effects we observe in the real-world stock market (such as spread clustering and fat tails) can in part be explained just by the market's particular form of organization (the existence of dealers, spreads, etc.). Potentially, the infrastructure may be very important for the market's aggregate behavior, and this may have implications for environments other than markets.
Our main goal in this section is to compare our model, not necessarily to the real world, but rather to some of the stylized facts in the literature. And for a good reason: it is highly debatable whether the real world is in an equilibrium state. However, under controlled conditions in the simulation, we can at least enforce the necessary conditions for the existence of equilibrium, and then determine whether our market participants will be able to find it.
Even though our simulation is relatively simple compared to the real world,
the behavior of the market price does not appear to be easily (if at
all) predictable. We investigated the relationship between the temporal
differences in the excess volume (the difference between buys and sells) and
differences in the logarithms of the market price. The resulting scatter
plot, depending on the tick size, sometimes resembles a slightly rotated
ellipse and sometimes a sphere centered on the origin. From
this we conclude that on average (over 100,000 simulation runs)
differences in excess volume are not a very good predictor of the temporal
differences in prices. Similar results have been obtained for the differences in
the total volume and logarithms of prices.
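The comparison described above can be sketched as a correlation between the two difference series; a value near zero matches the finding that excess-volume changes predict price changes poorly. The function names and input series here are hypothetical stand-ins for simulation output, not part of the original model code.

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length series.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def predictive_power(excess_volume, prices):
    """Correlation between first differences of excess volume (buys minus
    sells) and first differences of log price; a value near zero means
    excess-volume changes are a poor predictor of price changes."""
    dv = [b - a for a, b in zip(excess_volume, excess_volume[1:])]
    dp = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    return pearson(dv, dp)
```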
It is clear from the discussion above that the market's infrastructure and
rules are among the most important determinants of global market perfor-
mance. They are also important factors affecting the strategies of individual
market participants, and the profitability of these strategies. As was noted
above, often the outcome of a strategy is not uniquely associated with any
particular feature of the model or behavior of a set of market participants,
but rather with a complex set of their interactions.
Our primary results concern the influence of tick size and parasitic
strategies on the market's effectiveness at price discovery. Within the current
model, the underlying value of the single stock is exogenously specified,
which allows a precise measurement of how effectively the market follows
that underlying value, as we will discuss later.
Many of the hard-coded dealer and investor strategies can evolve in the
current system: when an agent goes bankrupt, the market selects one of
the other agents, with more profitable agents more likely to be selected,
and clones it. A small random mutation is applied to the parameters of
the new clone. If the mutated parameters lead to higher profitability for
the new clone, then it is more likely to be selected for cloning to replace a
bankrupt agent in the future. This constitutes a blind evolutionary search
for parameters that result in high profitability. The search is blind because
no feedback about performance is considered when adjusting parameters;
the only feedback is from the agent's survival and long-term profitability.
3.15 Conclusions
(1) the ability of the market to track price increases somewhat with tick
size, and
(2) parasitic strategies impede the process of price discovery, especially for
a small tick size.
developed. This and potentially other parts of the project would require
collaboration with the NASD and one or more current Nasdaq dealers,
especially in the initial stages.
Other possibilities which could perhaps be incorporated in a richer
model are to allow the investors to choose between the single stock and
a surrogate of the market index.
Simply increasing the complexity of some parts of the model without
pursuing a more complete, realistic market would not be too productive.
The current model provides very suggestive results on tick size and para-
sitism in a simplified context, but these must be examined in as realistic
a situation as possible to draw useful conclusions that may be applied to
policy issues. Simply investigating a slightly more complex version of the
present model would not accomplish this goal. Although we have pointed
out a few other questions that the current model could be used to address,
it is clear that a subsidiary benefit of a more precise, realistic model would
be its ability to address a much wider range of interesting market ques-
tions than the model as it stands. The end product could be considered a
powerful What if? tool for market regulation.
At a higher level, using such a model, one could attempt to discover
those market rules and structures which most enhance the market's ability
to perform price discovery, impede parasitism, and balance the interests of
Dealers and Investors.
Chapter 4
Spread Clustering
4.1 Introduction
mean that spread clustering can occur naturally in certain markets, not
only by the mechanism of explicit collusion, but also as a result of specific
strategies employed by individual market participants. We discuss briefly a
variety of mechanisms that may contribute to clustering. We cannot argue
convincingly that collusion is not taking place in the markets, but we can
argue strongly against those who leap to the conclusion that illegal collusion
has occurred when presented with evidence of clustering.
be found in Chapter 3 and the User's Manual for Nasdaq Market Simulation, 1999.
Looking at Figure 4.1, one cannot help noticing the obvious cluster-
ing effects. Sometimes the effect is even more dramatic: for a population
including the same Basic Inventory Dealers or Learning Dealers, we have
observed histograms where a high value on one tick is almost always fol-
lowed by a low value on the next tick. An example of such a histogram is
shown in Figure 4.2. The observed pattern definitely looks like a form of
collusion.3
Fig. 4.2 Spread clustering with Learning (and other) Dealers. Frequencies are shown
on a logarithmic scale; on a linear scale, the smaller bars would be almost invisible.
3 Christie and Schultz's (1994) explanation, however, does not apply to our model: just
by the simulation design, we can definitely eliminate any explicit (but not necessarily
implicit or emergent) collusion among the Market Makers.
Fig. 4.3 Spread size histogram. Dealer population consists of Basic No Inventory Dealers.
Figure 4.4 shows the same dealer population with only one (fundamen-
tal) type of investor:
Fig. 4.4 Dealer population of Basic No Inventory Dealers; investor population of only
one type of investor.
Fig. 4.5 Same as Figure 4.4, with two parasitic dealer strategies added to the simulation.
Fig. 4.6 Same as Figure 4.3, with two parasitic strategies added to the simulation.
Fig. 4.7 Spread histogram for a population of BasicPriceDealers and a few parasites.
sells when it goes down, trying to limit his inventory exposure at the same
time.
(1) Merely showing that a set of agents using hard-wired strategies (and
with no collusion/cooperation whatsoever) may achieve a certain degree
of implicit coordination in choosing certain spread sizes may diminish
claims that the same effect in the real world necessarily derives from
malicious and intentional (explicit or tacit) behavior.
(2) It seems that part of the explanation for endogenous generation of
spread clustering in the simulation lies in the interaction of various
rules followed by Market Makers and investors. (An example of a rule
would be: "If such and such conditions are satisfied, I will do so and
so"; for example, raise the offer by 2 ticks and leave the bid the same.)
From the information we have, derived from various conversations with
Market Makers and other market participants, it seems that human
Market Makers use a set of rules that are arguably ad hoc, but apparently
profitable. In addition, Market Makers are constrained by the rules of
the market as well; thus these market rules may also imply a specific re-
sponse to a specific trading situation. Further investigation is necessary
into how human Market Makers make their decisions. Encoding these
rules into our simulation could very well explain some of the patterns
observed in the real world.
(3) We observe clustering in populations with Learning agents as well as
in simulations with evolutionary selection. It is too soon to draw a
conclusion, but perhaps clustering may positively affect the profitability
and survivability of the agents' strategies.
4.6 Conclusions
Chapter 5
1 Based on a Bios Group report written jointly with Tony Plate.
(1) Learning and evolution should allow us to better calibrate the model
so that it will be more representative of real-world markets.
(2) Learning can produce types of behavior that are improvements over
hard-wired agent behaviors in terms of realism and profitability. This
is very important for validating our model and for its potential appli-
cations.
(1) The reward structure: how short-term outcomes translate into reward
or punishment.
(2) The actions available to the dealer.
(3) What information about the market and the dealer will be part of the
state for learning.
(4) The method used for learning what rewards to expect from particular
states and actions.
(1) The current market value, i.e., the mean of the best current bid and
best current offer.
(2) The current market value as above, but disregarding the dealer's own
bids and offers.
(3) The average purchase price of stock in the inventory, i.e., the price paid
at the time stock was added to the inventory (with suitable modifica-
tions for negative inventory).
Once we have a method for calculating the value of the assets held by
a dealer, we can derive a reward signal to be given at time step t from
the assets a_t at time step t and the assets a_{t-1} at the previous time step,
in one of the two following ways:
(1) The change in assets, i.e., a_t - a_{t-1}.
(2) The logarithm of the proportional decrease or increase in assets, i.e.,
log(a_t / a_{t-1}).
Both of these reward signals will be positive if the value of assets has
increased and negative if the value of assets has decreased. The second is
more appropriate when we expect that the earnings of a dealer should be
proportional to their assets. The first is more appropriate when we expect
the earnings of a dealer to be unrelated to their assets. In the current
state of the simulator and agents, dealers have limited opportunities for
trading in each time-step, so the absolute change in value of assets is more
appropriate than the log-change.
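The two reward signals above can be written directly; the function name is a hypothetical convenience, but the two formulas are exactly those given in the text.

```python
import math

def reward(assets_now, assets_prev, use_log=False):
    """Reward signal at time step t from assets a_t and a_{t-1}: either
    the absolute change a_t - a_{t-1}, or the log proportional change
    log(a_t / a_{t-1}). Positive when assets grew, negative when they fell."""
    if use_log:
        return math.log(assets_now / assets_prev)
    return assets_now - assets_prev
```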
The method chosen for valuing stocks can be critical. If stocks are valued
at the overall bid/ask mean, including the dealer's own quotes, dealers
tend to learn that the best course of action is to raise prices uncondition-
ally. This results in the dealer having a large positive inventory. The dealer
achieves high rewards by constantly pushing up the value of the stock in
its inventory. Eventually the dealer reaches such a highly leveraged posi-
tion that it is very vulnerable to small movements in the market value of
the stock, and the dealer goes bankrupt. At bankruptcy there is a large
negative reward. However, the prospect of a large negative reward is not
sufficient to discourage dealers from this strategy of price inflation because
Many of the hard-coded dealer and investor strategies can evolve in the
current system: when an agent goes bankrupt, the market selects one of
the other agents and clones it, with more profitable agents more likely to
be selected. A small random mutation is applied to the parameters of the
new clone. If the mutated parameters lead to higher profitability for the
new clone, then it is more likely to be selected for cloning in the future to
replace a bankrupt agent. This constitutes a blind evolutionary search for
parameters that result in high profitability. The search is blind because no
feedback about performance is considered when adjusting parameters; the
only feedback is from the agent's survival and long-term profitability. If a
mutation results in a good set of parameter settings, the agent will survive
and be cloned, thus spreading its parameters; if the adjustment is bad, the
agent will go bankrupt and die off, taking its parameters with it.
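The clone-and-mutate replacement described above can be sketched as follows. This is a minimal sketch, not the simulation's actual code: the agent representation, the mutation scale, and the profit-proportional weighting details are all assumptions.

```python
import random

def replace_bankrupt(agents, profit, mutation_scale=0.05, rng=random):
    """Replace a bankrupt agent: pick a surviving agent with probability
    proportional to its profitability, clone its parameters, and apply a
    small random mutation. The search is blind: the only feedback is the
    clone's eventual survival and long-term profitability."""
    survivors = [a for a in agents if not a["bankrupt"]]
    # more profitable agents are more likely to be selected for cloning
    weights = [max(profit(a), 0.0) + 1e-9 for a in survivors]
    parent = rng.choices(survivors, weights=weights, k=1)[0]
    mutated = {k: v + rng.gauss(0.0, mutation_scale * abs(v) + 1e-6)
               for k, v in parent["params"].items()}
    return {"bankrupt": False, "params": mutated}
```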
The reinforcement-learning dealers learn from their experience of differ-
ent internal states and states of the market, and from the results of different
actions. At each time step, the reinforcement-learning dealers are given a
reward (or punishment) signal related to the change in value of the dealer's
total assets. In effect, these dealers learn the long-term profitability of
different states or actions. After sufficient learning, the reinforcement learning
dealers have some idea of which states and actions lead to greater rewards,
and, at each time step, they choose the action that they think will lead to
the greatest long-term rewards.
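A reinforcement-learning dealer of this kind can be sketched with tabular Q-learning. The book does not specify the learning algorithm used, so the update rule, the epsilon-greedy action choice, and all parameter values here are assumptions chosen for illustration.

```python
import random

class QDealer:
    """Minimal tabular Q-learning sketch of a reinforcement-learning
    dealer: states and actions are abstract labels, the reward is the
    change in total assets, and the dealer mostly picks the action with
    the highest learned long-term value (epsilon-greedy exploration)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1, rng=random):
        self.q = {}  # (state, action) -> estimated long-term reward
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = rng

    def choose(self, state):
        # Occasionally explore; otherwise exploit the best-known action.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, reward, next_state):
        # One-step Q-learning update toward reward plus discounted future value.
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)
```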
Dealers that can learn strategies are very important in the future de-
velopment of the model because self-tuning strategies will increase market
realism and the robustness of the agents under a variety of market condi-
tions. Having adaptive agents in the model is also very useful for discovering
unknown loopholes and unintended consequences of changes in market rules
and agents' incentive structures.
We have also investigated learning market makers' behavior and strate-
gies, and found that, under a variety of conditions, artificial learning strate-
gies outperform those extracted from data or from expert knowledge.
However, there are also market scenarios, in particular with many
parasites or with high volatility, where the learning strategies at times find
it difficult to survive.
Chapter 6
Calibration
1 Based on a Bios Group report written jointly with Manoj Gambhir.
6.1 Introduction
The previous chapters describe the workings of a robust and realistic model
of the Nasdaq stock market and the results obtained by using it. That
model allowed us to obtain many important and often unexpected results
pertaining to decimalization in particular, as well as to other aspects of
the Nasdaq stock market in general. One major remaining criticism of the
model is that, while it represents the market's structure and rules in a
very realistic fashion and is capable of reproducing many real-world stock
market patterns and observations, otherwise it is largely not calibrated to
real-world data.
Certainly using the uncalibrated model we can investigate periods of
high volatility and low volatility; populations of agents with different strate-
gies; agents who learn and adapt over time; agents with parasitic strategies,
etc. However, even though each of the scenarios we investigate is designed
and tuned to represent a particular real-world situation, it does not rely
on real data to calibrate the rate of arrival of new information, the number
and type of each agent strategy being used, the assets available to each
agent, and similar issues.
Our results are sufficiently robust across many such possible scenar-
ios, so we do not believe that this is an important criticism of the general
validity of our results. But it is a valid criticism of the utility of our re-
sults under some circumstances: we cannot, for example, take trading data
1 See Darley, Outkin et al. (2000).
from a given day or week, calibrate our model using that data, and then
answer questions about how the market would have behaved during that
time period if the tick size had been different. Hence, to use our model
to investigate particular circumstances in detail, and then to reach quanti-
tative measures of risk, uncertainty, and probabilities over possible future
price/volatility, etc., is currently a difficult task.
The goal of the current chapter is twofold: 1) describe the process of
calibrating the model with real-world trade and quote data, and 2) evaluate
tick size effects in the calibrated model; in particular, provide an estimate
of changes in possible volume and number of transactions as a result of
decimalization.
We performed the calibration process in two steps: 1) Ensure that the
distribution of trade sizes in the simulation is similar to that in the real
world. 2) Extract the behaviors of real-world market makers and investors
from data sets of historical quotes and trades, and use those behaviors in
the simulation. The latter is a very important part of the process: as we
learned in previous phases, the results of the simulation crucially depend
upon the strategies used by the agents.
Our study of decimalization effects is very different from existing ones,
because we approached the problem on the level of individual strategies and
fine-tuned market microstructure. This approach allows us to investigate
evolutionary, non-linear, and emergent properties of the system. This is
especially important for a system that may undergo a phase transition
(as demonstrated below), when standard approaches may be completely
inapplicable.
We have also developed a version of the simulation that can semi-
automatically calibrate itself against real-world transactional data.
This capability allows investigating the market in far greater detail than
the manual data-mining process we've used in preparing the results
described in this chapter.
6.2 Results
undergone a phase transition as a result of the tick size change (see the next
chapter for details). This has very significant implications for any type of
research that tries to predict what will happen to the market as a result of
decimalization: the observed phase transition implies that there is a great
deal of unpredictability in how the market will change its state. Therefore,
standard linear or non-linear extrapolation techniques will probably prove
unreliable for this task; techniques based on analysis of behavior, like those
described in this report, have a better chance of capturing the essential
features of the post-transition state. In using any technique, however, we
must point out that there is a great deal of uncertainty in how the market
will adjust to change.
During previous phases of the project, we have extensively investigated
the effects of tick size reduction on aggregate market behavior. Our main
finding was that, in the presence of parasitic strategies, reducing the tick
size below a certain level may severely impede the price discovery process.
Those effects are discussed elsewhere in the text.
In this chapter, we discuss building calibrated strategies and evaluating
possible volume changes as a result of decimalization. We have made very
significant progress in extracting strategies from the quotes and trades data
set, as well as in identifying some of the existing simulated strategies in the
same data set. We were pleasantly surprised that generally the results
described before hold for the calibrated version of the simulation. As for
volume changes, when the tick size moved from $1/16 to $1/64, we observed
volume increases ranging from a modest 15% up to a few hundred percent
(500-600%) depending on the strategies involved and a variety of other
parameters in the simulation. Similar changes have been observed with respect
to the number of trades, with an indication that the increase in the number of
trades will probably be significantly larger than the increase in volume;
however, we consider the volume data more reliable.
We investigated how the investors' average assets will be affected by
decimalization. The results are not so clearly in favor of the small tick
size as is normally assumed. In a simulation with no parasitic strategies,
the investors' average assets may increase (albeit not uniformly) as a result
of going to a smaller tick size, while in a simulation with parasitic market
makers the investors' average assets may actually decrease as a result of the
reduced tick size (as illustrated in Figures 6.6 and 6.7 below). We believe
that the main contributing factors are the impeded price discovery and
information transmission resulting from decreased tick size and parasitic
activity. In other words, the market is not a zero-sum game; rather, it is
makers will stay away from the best quotes if the tick size is really
small. Many of the strategies in the original simulation were derived
using this approach. More importantly, and somewhat unexpectedly,
we have been able to identify those strategies in the real data using
data mining techniques and optimization (see the following section of
this report).
(2) Quantitative behaviors. These are derived from real-life behavior using
proper data-mining techniques (particularly learning neural networks),
regression analysis, and optimization.
Basic Dealer
Quotes a bid and ask price and a quantity available for sale and
purchase.
The quantity of stock which has been purchased from the dealer at
the ask price and sold to the dealer at the bid price is logged.
When the quantity bought from the dealer equals or rises above
a certain value, the bid and ask are raised by the same quantity
(price increment), thus maintaining the spread between them.
When the quantity bought by the dealer at the bid equals or rises
above the same value, the bid and ask are lowered by the price
increment, maintaining the spread.
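The Basic Dealer rule above can be sketched in code. This is a minimal sketch under stated assumptions: the class and method names are hypothetical, and resetting the running totals after each quote shift is an assumption the text does not spell out.

```python
class BasicDealer:
    """Sketch of the basic dealer quote rule: track how much has been
    bought from us at the ask and sold to us at the bid; when either
    running total reaches the volume threshold, shift both quotes by the
    price increment (up after net buying at the ask, down after net
    selling at the bid), keeping the spread constant."""

    def __init__(self, bid, ask, threshold, increment):
        self.bid, self.ask = bid, ask
        self.threshold, self.increment = threshold, increment
        self.bought_from_us = 0  # quantity traded at our ask
        self.sold_to_us = 0      # quantity traded at our bid

    def on_trade_at_ask(self, qty):
        self.bought_from_us += qty
        if self.bought_from_us >= self.threshold:
            self.bid += self.increment
            self.ask += self.increment
            self.bought_from_us = 0  # assumed reset after shifting quotes

    def on_trade_at_bid(self, qty):
        self.sold_to_us += qty
        if self.sold_to_us >= self.threshold:
            self.bid -= self.increment
            self.ask -= self.increment
            self.sold_to_us = 0
```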
Analytic Parasite
In what follows, the simple volume dealer and analytic parasite are both
referred to as volume dealers.
Note that the volume threshold parameters of the basic and volume
dealers govern their behavior in a similar way but differ because the basic
strategy takes its own trading volume into account, whereas the volume
dealers monitor the trading volume of the entire market.
Quotes
Trades
In this sequence taken from the data, MMM1's bid is hit at 10:04:57,
and it subsequently changes its quotes by lowering both bid and ask in two
stages. A statistical analysis of this data sequence, of the type described
above, will result in a value for the change in the quote of -0.375 (three
ticks, since the tick size was 0.125 for this data set) as a result of MMM1's
purchase of 1000 units of stock. A data point of (-0.375, 1000) is therefore
logged.
The results obtained for two of the data sets are shown in the results
section below. We have also calibrated the simulated strategies against the
third data set, but omit those results here. The tables contain specific market maker
identification codes and are organized according to the percentage of the
quotes data that may be explained by basic dealer behavior. In other words,
the percentages signify the proportion of the quotes data for each market
maker that is explained by assuming that they are basic dealers. One would
never expect a market maker to act according to the basic dealer strategy
all of the time, since quote changes can occur as a consequence of many
pieces of information which may come to a dealer from sources external to
the market.
Market makers at the top of each table, due to their high percentages of
data accounted for by the basic dealer strategy, have much of their behavior
explained by this strategy, and market makers at the bottom of the tables
are less well explained by this strategy. In all cases, it is possible and,
indeed, likely that any external data sources might be modeled as noise on
top of the systematic strategy.
The second and third columns of the tables contain the price increment
value that most often occurs (see the basic dealer strategy outline above)
and the corresponding trade volume that most often triggers price changes.
These values most often occur in the sense that they correspond to the
peaks of histogram plots of the data sets obtained using the above method.
The values in these columns may be used as the parameters governing basic
dealer behavior for those market makers that seem to be well explained by
this strategy.
where q_{i,real} is the i-th quote in the time series data of the real market and
q_{i,constructed} is the i-th quote constructed by the model strategy using the
trades as inputs.
The aim now becomes the minimization of this distance, with respect to
the parameters, yielding parameter values that generate the set of quotes
that best fit the real-world data. Such a procedure was attempted for the
discovery of volume dealer parameters. As outlined in the introduction
to this section, the simple volume dealer's behavior is governed by three
parameters: the trading volume threshold (Vol_threshold) above which the
level of trading is considered significant, the time decay constant (λ), and
the price change parameter (δ).
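The fitting procedure can be sketched as follows. Equation 6.1 itself is not reproduced here, so the sum-of-squared-differences form of the distance is an assumption, as are the function names; `simulate(p)` stands for any routine that replays a candidate strategy with parameters `p` against the observed trades.

```python
def quote_distance(real_quotes, constructed_quotes):
    """Distance between the real quote series q_i,real and the series
    q_i,constructed generated by a candidate strategy from the observed
    trades. Squared error is an assumed functional form."""
    return sum((r - c) ** 2 for r, c in zip(real_quotes, constructed_quotes))

def fit_parameters(real_quotes, simulate, parameter_grid):
    """Minimize the distance over a grid of candidate parameter values;
    simulate(p) must return the quote series the strategy would have
    produced with parameters p."""
    return min(parameter_grid,
               key=lambda p: quote_distance(real_quotes, simulate(p)))
```

Grid search is used here rather than a gradient method because, as noted below, the distance landscape is very rugged, with many local maxima and minima.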
Following a time discretization for the quotes time series data, wherein
quote values were interpolated for five-second intervals throughout each
day, the distance minimization procedure was attempted for several market
makers with data from May 1997. It was found that the distance function
landscapes, explored by altering the parameter values, were very rugged
with many local maxima and minima. An example of a typical landscape
profile is shown below.
Fig. 6.1 A profile of the error function outlined above (equation 6.1) over the decay
constant range shown (i.e., 0-0.1). The volume threshold parameter is set at 200 shares,
and the price change constant at 0.1. The market maker code is MMM26 and the data
set consists of four trading days in May 1997.
\sum_i^{N_1} V_i e^{-\lambda(t_{q2} - t_i)} \ge Vol_{threshold}    (6.2)

\sum_i^{N_1} V_i e^{-\lambda(t_{q2} - t_i)} = Vol_{threshold}    (6.3)

\sum_i^{N_2} V_i e^{-\lambda(t_{q3} - t_i)} = Vol_{threshold}    (6.4)

\sum_i^{N_1} V_i e^{-\lambda(t_{q2} - t_i)} = \sum_i^{N_2} V_i e^{-\lambda(t_{q3} - t_i)}    (6.5)
All of the quantities in the above equation are known, with the exception
of λ, and so a solution for the decay constant can be found. However, due
to the inequality of equation 6.2, equation 6.5 does not always hold and
may not always yield a solution. Therefore, with the time t measured in
seconds, the method used to find the decay constant from equation 6.5 is to
calculate the left-hand side for values of λ between 0 and 0.1 and to search
for either a value of zero or, failing this, a minimum of the left-hand side
function.
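The search just described can be sketched as follows; the function names are hypothetical, each trade list is assumed to be (volume, time) pairs, and the grid resolution is an illustrative choice. The zero-crossing check matters: far from the root the absolute gap can also become small, so "zero first, minimum as fallback" is the order the text prescribes.

```python
import math

def eq65_gap(lam, trades_q2, t_q2, trades_q3, t_q3):
    """Left-hand side minus right-hand side of equation 6.5: decayed
    trade volume accumulated by quote time t_q2 minus that accumulated
    by t_q3. Each trades_* argument is a list of (volume, time) pairs."""
    lhs = sum(v * math.exp(-lam * (t_q2 - t)) for v, t in trades_q2)
    rhs = sum(v * math.exp(-lam * (t_q3 - t)) for v, t in trades_q3)
    return lhs - rhs

def find_decay(trades_q2, t_q2, trades_q3, t_q3, lo=0.0, hi=0.1, steps=1000):
    """Scan the decay constant over [lo, hi]: return a zero of the gap
    when one exists (interpolated between bracketing grid points),
    otherwise the grid value minimizing the absolute gap."""
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    gaps = [eq65_gap(l, trades_q2, t_q2, trades_q3, t_q3) for l in grid]
    for l0, l1, g0, g1 in zip(grid, grid[1:], gaps, gaps[1:]):
        if g0 == 0.0:
            return l0
        if g0 * g1 < 0.0:  # a sign change brackets a root
            return l0 + (l1 - l0) * g0 / (g0 - g1)
    return min(zip(gaps, grid), key=lambda p: abs(p[0]))[1]
```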
Once a suitable value of λ is found, it can be inserted back into equations
6.3 and 6.4 to find the implied values of Vol_threshold. Both of these
quantities, the decay constant and the threshold volume, are then collected
over the entire data set of quotes for a particular market maker.
The price change parameter δ, which scales the size of the price change
for a volume dealer when a market volume signal is significant, may be
estimated by simply observing the distribution of price changes for a market
maker over a period of time.
It remains to identify any market makers that act according to rules sim-
ilar to those of the analytic parasite strategy described in the previous
section. A clear pattern of behavior of such a market maker is to keep its
bid and/or ask at very unattractive levels for investors when the opportu-
nity for clear profits (by taking the best bid/ask positions) does not present
itself. Extremely large spreads and jumps in quote levels would therefore
be in keeping with this strategy.
June 1997
MM        % BD    Price inc.    Vol. Thresh.
MMM3 55.2 0.25 10
MMM1 51.8 0.375 10
MMM35 45.4 0.375 10
MMM8 42.6 0.25 10
MMM12 40.6 0.25 10
MMM6 38.8 0.25 10
MMM4 36.2 0.25 10
MMM11 35.8 0.25 10
MMM7 34.4 0.25 10
MMM14 33.5 0.625 10
MMM18 29.9 0.25 10
MMM17 29.8 0.25 10
MMM5 27.8 0.25 10
MMM23 26.1 0.25 10
MMM13 26.0 0.375 10
MMM15 25.6 0.25 10
MMM22 25.5 0.25 10
MMM9 23.7 0.5 10
MMM36 22.2 NONE 10
MMM10 19.7 0.25 10
MMM21 18.4 0.25 10
MMM20 13.8 0.25 10
MMM19 13.7 0.25 10
MMM16 13.4 0.25 10
MMM28 12.7 0.375 10
MMM25 11.9 0.25 10
MMM24 11.4 0.25 10
MMM26 10.9 0.25 10
MMM37 9.1 NONE 10
MMM27 7.0 0.375 10
MMM29 5.6 0.25 10
MMM32 3.7 0.125 10
MMM33 2.9 NONE 10
MMM34 2.0 NONE 10
The methods that were used to extract information about the parameters
governing volume dealer behavior are also statistical. For each market
maker, a distribution of possible parameter values is obtained. For example,
the left-hand side of equation 6.5 was calculated for every consecutive pair
of quotes of a market maker, and for all trades occurring between these
quotes, for the time series of May 1997. For each pair, an attempt was
made to find a value of the decay constant that caused the function to
be either at a local minimum or at a value of zero (which would satisfy
equation 6.5). A collection of values of the decay constant over the entire
time series is the result. A histogram representing such a set of results for
the decay constant is shown in Figure 6.2. As can be seen, a value of about
0.002 is much more common than the others and may therefore prove to be
a good estimate of the time decay constant for this market maker during
May 1997. In the tables that follow in this section, the peak, or mode, of
this distribution is entered for each market maker.
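Taking the peak (mode) of such a collection of values can be sketched by binning into a histogram and reading off the tallest bin; the function name and bin width are hypothetical choices, not values from the study.

```python
def histogram_mode(values, bin_width):
    """Estimate the most common value in a sample by binning it into a
    histogram and returning the midpoint of the tallest bin, as done for
    the decay constant distribution described above."""
    counts = {}
    for v in values:
        b = int(v // bin_width)
        counts[b] = counts.get(b, 0) + 1
    tallest = max(counts, key=counts.get)
    return (tallest + 0.5) * bin_width
```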
Through equations 6.3 and 6.4, values of the volume threshold for vol-
ume dealers are implied by values of the time decay constant. Working
each decay constant data set through these equations, a distribution of the
volume threshold parameter is obtained. Such a distribution is shown in
Figure 6.3. The distribution shown is typical for the market makers and is
multi-modal, clustering around certain values of the parameter. Therefore,
in order to give a fair representation of the likely values of this parameter,
the values corresponding to the three tallest peaks of the distribution are
noted.
The final values included in the volume dealer parameter tables are the
mean and standard deviation of the distribution of the absolute value of
the price change for each market maker. Two price change distributions are
shown in Figures 6.4 and 6.5. These price changes have been calculated by
simply subtracting consecutive ask quotes from one another, then doing the
same for bid quotes. The true distributions are shown and not the absolute
value distributions.
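The price change series just described can be computed as follows; the function names are hypothetical, and pooling the bid and ask changes into a single distribution is an assumption about how the histograms were built.

```python
def quote_changes(asks, bids):
    """Signed changes between consecutive quotes: consecutive ask quotes
    subtracted from one another, likewise for the bids, pooled into one
    list (the signed distribution, not the absolute values)."""
    d_ask = [b - a for a, b in zip(asks, asks[1:])]
    d_bid = [b - a for a, b in zip(bids, bids[1:])]
    return d_ask + d_bid

def mean_abs_change(asks, bids):
    """Mean absolute price change, one of the summary statistics entered
    in the parameter tables; unusually large values are a possible
    signature of parasitic quoting."""
    changes = [abs(c) for c in quote_changes(asks, bids)]
    return sum(changes) / len(changes)
```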
The distribution for the market maker MMM28 (Figure 6.4) is roughly
centered on zero and has two large peaks on either side, indicating very
common price changes of about +/- 0.125, which correspond to one tick
up and down for the sample period (May 1997). There is a striking dif-
ference in the distribution for the market maker MMM33 (Figure 6.5): a
cluster of price changes occurs around zero, indicating shifts of a few ticks;
but there are two extra sub-distributions, centered at roughly +/- 5.0.
These common price changes are completely compatible with the behavior
expected of a parasitic dealer, who moves into the best bid-ask spread when
there are immediate profit-making opportunities and moves well out of the
market when there are not. Values of the mean price change and standard
deviation much larger than are common for the other market makers may
therefore act as quite good signatures for analytic parasite behavior, as the
tables of results show.
Fig. 6.2 Histogram illustrating the number of occurrences of each of the values of the
time decay constant given by the midpoint of the bars. The code of the market maker
here is MMM28 and the statistics were gathered from 21 trading days in May 1997.
Fig. 6.3 Histogram showing the distribution of the volume threshold parameter, calcu-
lated by inserting values of the time decay constant, whose distribution is shown in
Figure 6.2, into equations 6.3 and 6.4. The market maker code is MMM28 and the time
series is 21 trading days in May 1997.
Those market makers whose behavior is less than 20% accounted for
by the basic dealer strategy were considered to be candidates for other
strategies. Therefore, their volume dealer attributes were analyzed. In
each case, a distribution of the parameters was calculated, and the values
given in the tables below are the mode or the mean and standard deviation
of the distribution. The measure used in each case is specified.
May 1997
June 1997
Fig. 6.4 Histogram showing the distribution of changes in bid and ask between consec-
utive quotes for the market maker MMM28 during 21 trading days in May 1997.
The results in the rightmost columns of the tables above show that
the majority of market makers have low values of the mean price change
and standard deviation (generally <1). But a few have markedly larger
values, for instance MMM34 and MMM33 from May 1997, and MMM34,
MMM33, and MMM37 from the June 1997 data set. These high values are
accompanied by the kind of price change distribution shown in Figure 6.5
that, as explained above, may indicate parasitic behavior.
The number of market makers falling into the three dealer categories
whose strategies are explored here may be estimated for any given data set
of quotes and trades. In addition, a substantial amount of information on
the parameters that govern the dealers' behavior may also be determined,
and this information can be fed directly into a run of the market simulation.
Such a run is populated by a group of market makers, defined by their
strategy types and the parameters of those strategies, calibrated to a
particular data set, so that the results obtained from the run bear directly
on a real-world scenario.
Fig. 6.5 Histogram showing the distribution of changes in bid and ask between consec-
utive quotes for the market maker MMM33 during 21 trading days in May 1997.
We have investigated how the average assets of the investors are affected
by decimalization in the calibrated version of the simulation. While more
research is necessary, particularly because the bankruptcy rules in the
simulation are somewhat different from those in the real world, the observed
effect seems to be quite striking. While reduced tick size may have
beneficial effects on the investors' wealth when there are no parasites, when
there are parasites the effect is completely reversed: investors' average
wealth may actually decrease with smaller tick sizes (it is necessary to note
that both effects are relatively small in comparison to the investors'
average assets). We explain that effect by a combination of two factors:
impeded price tracking and poorer information transmission in the presence
of parasites. Those effects are illustrated in Figures 6.6 and 6.7.
Fig. 6.6 Effect of tick size on investors' average assets in a simulation with no parasites.
Fig. 6.7 Effect of tick size on investors' average assets in a simulation with parasites.
6.6 Conclusions
Chapter 7
¹ Based on a Bios Group report written jointly with Vladimir Makhankov.
The transactions for a given trader are of two types, sells and buys, which
can be treated as separate series. For the preliminary analysis reported
here, we disregarded this difference. For the no-trades windows we set

s = v = 0,   p(i) = p(i - 1),

where i stands for the window number. For the trade series there are only
a few such windows.
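The windowing rule above can be sketched as follows. This is an illustrative reconstruction, not the original analysis code; the Trade fields, the window length, and the sample data are assumptions.

```java
// Illustrative sketch: aggregate a raw trade stream into fixed time windows.
// For windows with no trades we set s = v = 0 and carry the previous price
// forward, p(i) = p(i-1), as described in the text.
import java.util.*;

public class WindowRegularizer {
    public static class Trade {
        public final double timeSec, price, volume;
        public Trade(double timeSec, double price, double volume) {
            this.timeSec = timeSec; this.price = price; this.volume = volume;
        }
    }

    // Returns, per window i: [0] trade count s, [1] volume v, [2] price p.
    // Trades must be sorted by time.
    public static double[][] regularize(List<Trade> trades, double windowSec,
                                        int nWindows, double initialPrice) {
        double[][] out = new double[nWindows][3];
        double lastPrice = initialPrice;
        int t = 0;
        for (int i = 0; i < nWindows; i++) {
            double end = (i + 1) * windowSec;
            double s = 0, v = 0, pSum = 0;
            while (t < trades.size() && trades.get(t).timeSec < end) {
                s++; v += trades.get(t).volume; pSum += trades.get(t).price; t++;
            }
            // No-trade window: s = v = 0 and the price carries forward;
            // otherwise use the average traded price within the window.
            double p = (s > 0) ? pSum / s : lastPrice;
            out[i][0] = s; out[i][1] = v; out[i][2] = p;
            lastPrice = p;
        }
        return out;
    }

    public static void main(String[] args) {
        List<Trade> trades = List.of(new Trade(10, 100.0, 500),
                                     new Trade(400, 101.0, 300));
        // Three 3-minute (180 s) windows; the middle one has no trades.
        double[][] w = regularize(trades, 180.0, 3, 100.0);
        System.out.println(Arrays.deepToString(w));
    }
}
```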
A statistical analysis of the price distributions inside the windows has
also been performed. We obtained the mean, standard deviation, kurtosis,
and skew for the majority of intervals. Deviations from the normal
distribution are rare; for instance, for 10/04/99, the daily average kurtosis
is 1.73 and the average skew is -0.49. A few windows, however, showed
substantial deviations.
In order to judge the system's memory, we obtained the autocorrelation
functions of the price time series described above for five consecutive
days in October 1999. Excluding Monday the 4th, the rest were very similar
in shape and showed the memory in price to range from half an hour on
Mondays up to nearly two hours on Fridays, with one and a half hours on
Tuesdays and Wednesdays. The Monday curve has an additional hump, which,
however, lies outside the 90% confidence interval.
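A minimal sample autocorrelation function of the kind used to estimate this memory might look like the following (an illustrative sketch with synthetic data, not the original code):

```java
// Illustrative sketch: sample autocorrelation function r(k). The "memory"
// of the series is roughly the lag at which r(k) decays below the
// confidence band around zero.
public class Autocorrelation {
    public static double[] acf(double[] x, int maxLag) {
        int n = x.length;
        double mean = 0;
        for (double v : x) mean += v;
        mean /= n;
        double var = 0;
        for (double v : x) var += (v - mean) * (v - mean);
        double[] r = new double[maxLag + 1];
        for (int k = 0; k <= maxLag; k++) {
            double c = 0;
            for (int i = 0; i + k < n; i++)
                c += (x[i] - mean) * (x[i + k] - mean);
            r[k] = c / var;   // normalized so that r(0) = 1
        }
        return r;
    }

    public static void main(String[] args) {
        // A slowly varying series has high autocorrelation at small lags.
        double[] x = new double[100];
        for (int i = 0; i < 100; i++) x[i] = Math.sin(i / 20.0);
        double[] r = acf(x, 5);
        System.out.println(r[0]);        // 1.0 by construction
        System.out.println(r[1] > 0.9);  // strong short-lag memory
    }
}
```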
Characteristics
Results can be formulated using the two most essential characteristics: the fit
and the scatter. The first is seen from the curve giving the regression of
the real data vs. the simulated ones defined by the response function. In the
linear approximation it is

f_i(\mathrm{resp}) = \beta_0 + \sum_{j=1}^{n} \beta_j x_{ji}   (7.1)
Trades
Here we have three time series for each day: s = N(i), v = V(i), and p = P(i).
The standard statistical analysis of these series, e.g., for 10/04/99, shows
sufficiently strong correlations between the first two series (N and V), on the
order of 0.75, and very weak correlations between the price and the other two,
i.e., 0.13 and 0.16. This means that the linear regression 7.1 for the price
response as a function of N and V turns out to be poor, with a correspondingly
low R².
The fit and scatter become better if we introduce a time lag (allowing for
the system's memory) while constructing the fit

p_3(i + k) = \beta_0 + \beta_1 P_1(i) + \beta_2 P_2(i)

for different k. The best fit turns out to be at values of k between 3 and 4,
i.e., an effective time lag of 9-12 minutes. A nonlinear approximation
also (not surprisingly) improves the fit and scatter. For example, using
a polynomial of up to fifth order, we achieve ten times better
performance of the model, although it involves about 20 constants instead
of two or three.
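The lagged fit above amounts to ordinary least squares with shifted targets. A self-contained sketch, using a synthetic series rather than the actual trade data, might look like this:

```java
// Illustrative sketch: OLS for the lagged fit
// p3(i+k) = b0 + b1*P1(i) + b2*P2(i), solved via the normal equations.
public class LaggedFit {
    // Solve A x = b by Gaussian elimination with partial pivoting.
    public static double[] solve(double[][] A, double[] b) {
        int n = b.length;
        for (int col = 0; col < n; col++) {
            int piv = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(A[r][col]) > Math.abs(A[piv][col])) piv = r;
            double[] tmpRow = A[col]; A[col] = A[piv]; A[piv] = tmpRow;
            double tb = b[col]; b[col] = b[piv]; b[piv] = tb;
            for (int r = col + 1; r < n; r++) {
                double f = A[r][col] / A[col][col];
                for (int c = col; c < n; c++) A[r][c] -= f * A[col][c];
                b[r] -= f * b[col];
            }
        }
        double[] x = new double[n];
        for (int r = n - 1; r >= 0; r--) {
            double s = b[r];
            for (int c = r + 1; c < n; c++) s -= A[r][c] * x[c];
            x[r] = s / A[r][r];
        }
        return x;
    }

    // OLS coefficients [b0, b1, b2] for y(i+k) = b0 + b1*x1(i) + b2*x2(i).
    public static double[] fit(double[] x1, double[] x2, double[] y, int k) {
        int n = y.length - k;
        double[][] ata = new double[3][3];
        double[] atb = new double[3];
        for (int i = 0; i < n; i++) {
            double[] row = {1.0, x1[i], x2[i]};
            for (int a = 0; a < 3; a++) {
                atb[a] += row[a] * y[i + k];
                for (int b = 0; b < 3; b++) ata[a][b] += row[a] * row[b];
            }
        }
        return solve(ata, atb);
    }

    public static void main(String[] args) {
        // Synthetic series where y(i+2) = 1 + 2*x1(i) + 3*x2(i) exactly,
        // so the fit should recover b0=1, b1=2, b2=3.
        int k = 2, n = 20;
        double[] x1 = new double[n], x2 = new double[n], y = new double[n];
        for (int i = 0; i < n; i++) { x1[i] = i; x2[i] = (i * 7) % 5; }
        for (int i = 0; i + k < n; i++) y[i + k] = 1 + 2 * x1[i] + 3 * x2[i];
        double[] b = fit(x1, x2, y, k);
        System.out.printf("b0=%.2f b1=%.2f b2=%.2f%n", b[0], b[1], b[2]);
    }
}
```

Scanning over k and keeping the value with the smallest residual reproduces the "best lag" search described in the text.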
Quotes
Now we can address one of our chief problems: building a response function
for an individual trader, contingent on the whole environment. We studied
this problem using the example of ask price quotes for the months of May
and June 1997. First, the above procedure was applied to create a regular
time series via averaging over 3-minute windows. As a result, we obtained
35-40 series for various days, reflecting the dynamics of the three overall
factors mentioned above (the N, V, and P of trades), with the rest of the
series mirroring the behavior of individual traders. The preliminary results
are the following. The behavior of any sufficiently active trader is described
with very good accuracy by the linear regression 7.1, with a nearly perfect
fit and large R² (approximately 95%). It is interesting to note that the most
influential factor is the series of the trader's active colleagues. The effect
of the first three factors (the trades) is surprisingly small. Moreover, a
trader almost instantly follows environmental changes, and the information
contained in the other-traders' series is sufficient to build a very good
linear approximation (regression). The behavior of passive traders is nearly
impossible to predict from the endogenous environment, which is likely a
reflection of the fact that their behavior is mainly defined by exogenous
factors.
This technique is based on real data, and the statistical analysis derives
from data mining techniques developed at Bios Group in 1998.
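The R² figures quoted in this section measure the fraction of variance in the observed series explained by the fitted one. A minimal sketch (the data here are illustrative, not the trader series):

```java
// Illustrative sketch: coefficient of determination R^2, the fraction of
// variance in the observed series explained by the fitted (predicted) one.
public class RSquared {
    public static double rSquared(double[] observed, double[] predicted) {
        double mean = 0;
        for (double v : observed) mean += v;
        mean /= observed.length;
        double ssRes = 0, ssTot = 0;   // residual and total sums of squares
        for (int i = 0; i < observed.length; i++) {
            ssRes += Math.pow(observed[i] - predicted[i], 2);
            ssTot += Math.pow(observed[i] - mean, 2);
        }
        return 1.0 - ssRes / ssTot;
    }

    public static void main(String[] args) {
        double[] obs = {1, 2, 3, 4, 5};
        double[] near = {1.1, 1.9, 3.0, 4.2, 4.8};
        // A near-perfect fit, like the active-trader regressions above,
        // gives R^2 close to 1.
        System.out.println(rSquared(obs, near) > 0.95);
    }
}
```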
To proceed with the phase transitions discussion, a note on what constitutes
a phase transition is in order. Historically, phase transitions have
been studied in physics, particularly in statistical physics. While the
concept is applicable to the social sciences, some of the mathematical
formalisms employed originally in physics may not be applicable to agent-based
simulation. Here we follow Weidlich and Haag (1983) and define a phase
transition as a radical change in the global state of the system as a result
of a change in a control parameter.
The procedure is as follows:
(1) We used a data set with quotes and trades data for May and June 1997,
so the data set includes the time when the tick size changed from
$1/8 to $1/16. There are approximately 30-35 sufficiently active market
makers in the data set.
(2) A 3-minute regularization procedure was applied to all the series
mentioned, so that we obtained a homogeneous short-time-scale series.
(3) We found the regression coefficients of the effects of the market
makers' quotes on the behavior of two other market makers and on the
amount of shares traded. (In our previous report, we showed that an
We were able to reduce the linear regression from thirty factors to only
three. We obtained the coefficients of linear regression for every market
maker and every day in the data set. Those linear regression constants
constitute a sort of state space of the system. Investigating how those
constants changed with the tick size change allows us to understand whether
a phase transition in market makers' behavior happened concurrently
with the tick size change. There are two possible cases here:
As we see from the following two figures (Figure 7.1 and Figure 7.2),
there is very strong evidence that a phase transition in the market occurred
at the same time as the tick size change.
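The comparison behind those figures can be sketched numerically: each day's regression coefficients form a point in a small state space, and a phase transition shows up as the June cluster sitting far from the May cluster relative to the within-cluster scatter. The sketch below is an assumed illustration with hypothetical coefficient vectors, not the study's analysis code.

```java
// Illustrative sketch: compare two clusters of daily regression-coefficient
// vectors. A centroid separation much larger than the within-cluster spread
// is consistent with a phase transition between the two periods.
public class CoefficientShift {
    public static double[] centroid(double[][] pts) {
        double[] c = new double[pts[0].length];
        for (double[] p : pts)
            for (int j = 0; j < c.length; j++) c[j] += p[j] / pts.length;
        return c;
    }

    public static double dist(double[] a, double[] b) {
        double s = 0;
        for (int j = 0; j < a.length; j++) s += (a[j] - b[j]) * (a[j] - b[j]);
        return Math.sqrt(s);
    }

    // Average distance of points from their own centroid.
    public static double spread(double[][] pts) {
        double[] c = centroid(pts);
        double s = 0;
        for (double[] p : pts) s += dist(p, c);
        return s / pts.length;
    }

    public static void main(String[] args) {
        // Hypothetical daily coefficient vectors (b1, b2, b3) for each month.
        double[][] may  = {{0.9, 0.1, 0.0}, {1.0, 0.2, 0.1}, {0.8, 0.1, 0.1}};
        double[][] june = {{0.2, 0.7, 0.5}, {0.1, 0.8, 0.6}, {0.3, 0.6, 0.5}};
        double shift = dist(centroid(may), centroid(june));
        double noise = 0.5 * (spread(may) + spread(june));
        System.out.println(shift > 3 * noise); // well-separated clusters
    }
}
```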
7.3 Conclusions
Using the data from the previous tick size change, from $1/8 to $1/16, we
have discovered strong indications that the market underwent a phase
transition at the time of the change (see the chapter Phase Transitions
Analysis). An important characteristic of a system undergoing a phase
transition is that the resulting state is very uncertain and the system
is highly vulnerable to small influences and noise. We expect that, in the
case of the transition from $1/16 to $0.01, the effects will be much greater
than in the previous tick size change, simply because of the magnitude of
the change.
Fig. 7.2 The fact that the groups of coefficients for May and June 1997 look significantly
different supports the notion of a phase transition in the market during that time period.
Chapter 8
8.1 Introduction
When we started this endeavor we did not know whether and when we would
be able to compare the predictions made using our model with real-world
outcomes, because the future of decimalization (at least the timing of
Nasdaq decimalization) was uncertain. Nasdaq implemented decimalization over
a six-week period starting on March 12, 2001. This allowed us to compare our
model with the initial results of the decimalization study provided by
Nasdaq Economic Research (2001). We found that the majority of our
predictions are strongly supported by the available data. We calibrated the
model in two steps to:
1) ensure that the simulated distribution of trade sizes, volumes, prices,
and other statistical parameters is similar to that observed in the real world;
2) simulate real-world behaviors of market makers and investors using data
sets of historical quotes and trades¹.
The following information summarizes our most important predictions:
(1) Decimalization (tick size reduction) will negatively impact the price
discovery process.
(2) Volume will increase, potentially ranging from 15% to 600%.
(3) Ambiguous investor wealth effects may be observed. (Investors' average
wealth may actually decrease in the simulation, but the effect is not
statistically significant.)
¹ In some cases, much of the quote change behavior (and thus of the market
maker behavior) can be explained by very simple strategies. For example, for
certain market maker strategies, the precision of calibration was 60-70%.
² We conjectured in the Bios Group (2000) report that tick size reduction can
bring increased price volatility. As reported by Nasdaq Economic Research
(2001), intraday price volatility did not increase in the first two weeks of
decimalization; however, more research is necessary to see whether price
volatility was affected on longer time scales (which would relate more
directly to our conclusion).
... and market impact, and the speed and quality of information propagation,
as parameters that are indicative of the quality of the price discovery
process.
Chapter 9
The model has possibilities for use as a tutorial and trainer: a Sim-Market
with a pull-down menu of scenarios such as a new IPO, a stock market crash,
Far East disasters, and lesser events like breaking news, stock splits, etc.
Additions to this model may include data gathering, analysis tools, and
capture of simulation data for whatever purposes the users wish. These
features would enable sophisticated users to examine fat tails, persistent
volatility, and GARCH, so that they could understand how the market
microstructure, agent composition, and market scenario might explain these
macro-observables.
firms producing such Soups daily for use by firms to improve their risk
assessment and market impact modeling, as well as less frequent models for
longer-term planning or training.
Such an approach would provide an environment/tool for sophisticated
risk analysis: not just probabilistic VaR, but also including herd effects,
fat tails, and agent-based responses: "We don't model the numbers, but
the people behind the numbers."
Once such tools are the norm, the boundary between models and markets is
likely to blur further, extending the existing trend driven by the extensive
use of automated trading programs in today's markets.
Appendix A
In addition to these two analysis modes, the simulation model can also
be run as a networked game (with individual investor or dealer agents
running on different machines on the network), and people can enter
and play the market in the role of either investor or dealer.
This section will first explain the control of the simulation via the graph-
ical user interface, followed by the batch mode features.
Fig. A.1 The high level control window. This provides easy access to more information
about the Nasdaq market and the modeling technology, and allows the user to launch a
market simulation.
The software provides quick access to learn more about Nasdaq, about
the technology behind the software, etc. To run a simulation, one clicks on
the first button, which then prompts the user to choose a parameter set or
scenario of interest, as shown in Figure A.2, followed by specification of the
desired simulation duration and speed, Figure A.3.
Fig. A.2 Choosing a parameter set or scenario of interest. These represent
different populations of dealer and investor agents, and different scenarios
for the information stream which drives the market. Each can have a short
explanatory text attached.
Finally, the main control window, Figure A.4, appears. This window
displays interactive feedback on the current price and dealer behaviour
during the run, and can be used to adjust parameters and launch a range of
analysis and charting tools. The title of this window shows the parameter
set or scenario which was used to create it (it is possible to run multiple
such simulations simultaneously, so this feature makes it easy to
differentiate between windows which are otherwise superficially similar).
This main control window has the Quote Montage at the bottom. The quote
montage contains each dealer's bid and each dealer's offer, ranked
vertically from best to worst (so the largest bid and smallest offer are at
the top).
Fig. A.3 Specifying the duration of the simulation and the speed. This can also be
controlled during the simulation.
Fig. A.4 The main control window, which mimics the Nasdaq Workstation II screen
used on Wall Street.
the simulation (pressing either Step or Run will continue from that
point). As the simulation is run or stepped, the quote montage and current
price are continually updated to reflect the current situation in the market.
The main screen also contains two checkboxes which allow the user to
enter the market and participate within it, either buying and selling as an
investor, or setting bid/ask quotes as a dealer. This is described later.
Finally, the current time is shown towards the top of the window.
A.2 Menus
The main simulation control panel contains one main Action menu and a
Help menu, which can be used to get detailed instructions on the use of the
software (in more detail than we provide here).
The Action menu, as shown in Figure A.5, allows the user to perform
a variety of simple actions:
A.3 Charts
Three sets of charts are available within the agent-based simulation itself.
In addition, extremely rich data can be captured to file during the
simulation to allow further analysis and charting. The three sets of charts
are:
All of these are accessed through the Show Charts menu item. This
presents a charting desktop on which the user can create all of the various
charts using a number of reasonably self-explanatory menu items. Once
created, the charts can be moved, minimised, closed, resized, etc., to
enable easy comparison and analysis.
We should note that the charts shown here are for a small market with
just a few participants. This helps to make it easier to see what is going
on, but should not of course be considered realistic.
Fig. A.8 The trades that have been executed, with best bid and offer superimposed.
Fig. A.9 Trades, best bid and offer and the current true value.
as shown in Figure A.9. Note that, since within each individual time-step,
the available best bid/offer may change as trades come in, not all of the
trades plotted will lie directly on either the bid or the offer traces.
Finally, the individual bids and offers of each dealer may be superimposed
as shown in Figure A.10 (the charts shown here are for a simple
market with just three dealers, all of the Basic Inventory type). These
can be colour-coded in various ways (by individual, by type, etc.).
(a) All 3 dealers' cash and asset positions. (b) The dealers' inventory
through time. (c) The dealers' assets. (d) The dealers' cash, inventory,
and assets at the end of the simulation.
Set Human Dealer Prices will, if the current market contains a human
dealer, open a control window that allows the user to control that dealer
and thus join in the simulation. By clicking with the mouse, the user can
adjust either bid, offer, or both up or down by a single tick. Clicking
multiple times will adjust by progressively larger amounts. To run a
simulation with a human dealer, double click on the HumanDealerMarket icon.
Due to the memory-consuming nature of Java interfaces, the simulations
(c) Investor assets through time. (d) Cash, inventory, and assets for a
single investor.
Almost all of the data we examined to draw our conclusions were generated
by running the simulation in batch mode. This has several advantages: first
To carry out this work we (with our colleagues at Bios Group) developed
an object-oriented simulation of a single-security market using the Java
programming language. Three principal high-level classes make up the
simulation: the Market class, the Dealer class, and the Investor class. In
addition, there are numerous support and communications classes to
implement the different kinds of trades (market order, limit order, etc.)
that can take place, the quote montage, etc. Finally, there are data capture
and analysis objects, scenario management objects, and of course those which
tie everything together into a simulation run with an optional graphical
user interface.
In fact, since Dealers and Investors share basic requirements for keeping
track of their inventory and capital and executing trades, these capabilities
are implemented in a base class called Player.
Dealers have the additional ability to interact with the Market's Quote
Montage object, while Investors are able to query the Market for the current
"true value" of the security. Both Dealers and Investors have strategy
functions that determine their trading desires. Investors decide whether to
buy or sell the security, while Dealers decide what bids and asks to post on
the Market's Quote Montage.
A relatively sophisticated set of objects is provided to manage the Market
parameter set-up, which includes initialization of the number and types
of dealers (BD = basic dealer, PD = parasitic dealer, etc.) and the number
and types of investors (FI = foolish investor, etc.); setting the number of
time iterations for a simulation; setting initial cash and inventory for
investors and dealers and their response time statistics; and setting the
"true price" of the stock. This is all documented within the software
package itself.
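The class structure described above can be summarized in a schematic sketch. Only the names Player, Dealer, Investor, and the Market/Quote Montage concepts come from the text; every method signature and strategy body below is an illustrative assumption, not the Bios Group implementation.

```java
// Schematic sketch of the described class hierarchy. Signatures are assumed.
abstract class Player {
    protected double cash;
    protected int inventory;

    public Player(double cash, int inventory) {
        this.cash = cash; this.inventory = inventory;
    }

    // Shared bookkeeping: every trade updates cash and inventory
    // (shares > 0 buys, shares < 0 sells).
    public void executeTrade(double price, int shares) {
        cash -= price * shares;
        inventory += shares;
    }

    public double assets(double markPrice) { return cash + inventory * markPrice; }
}

class Dealer extends Player {
    public Dealer(double cash, int inventory) { super(cash, inventory); }

    // Strategy function: decide what bid and ask to post on the montage.
    public double[] quote(double lastPrice, double halfSpread) {
        return new double[]{lastPrice - halfSpread, lastPrice + halfSpread};
    }
}

class Investor extends Player {
    public Investor(double cash, int inventory) { super(cash, inventory); }

    // Strategy function: buy (+1), sell (-1), or hold (0), here naively
    // comparing the market's "true value" with the best ask/bid.
    public int decide(double trueValue, double bestBid, double bestAsk) {
        if (trueValue > bestAsk) return 1;
        if (trueValue < bestBid) return -1;
        return 0;
    }
}

public class MarketSketch {
    public static void main(String[] args) {
        Dealer d = new Dealer(10_000, 100);
        Investor inv = new Investor(5_000, 0);
        double[] bidAsk = d.quote(100.0, 0.5);
        if (inv.decide(101.0, bidAsk[0], bidAsk[1]) > 0) {
            inv.executeTrade(bidAsk[1], 10);   // investor buys 10 at the ask
            d.executeTrade(bidAsk[1], -10);    // dealer sells 10 at the ask
        }
        System.out.println(inv.assets(101.0)); // marked to the "true value"
    }
}
```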
Bibliography
Shu-Heng Chen, Thomas Lux, and Michele Marchesi. Testing for non-linear
structure in an artificial financial market. Journal of Economic Behavior
& Organization, 46(3):327-342, 2001.
W.G. Christie and P.H. Schultz. Why do Nasdaq market makers avoid odd-eighth
quotes? Journal of Finance, 49:1813-1840, 1994.
Michel M. Dacorogna, Ramazan Gencay, Ulrich Muller, Richard B. Olsen,
and Olivier V. Pictet. An Introduction to High-Frequency Finance. Academic
Press, London, 2001.
Vince Darley, Alexander Outkin, Tony Plate, and Frank Gao. Sixteenths
or pennies? Observations from a simulation of the Nasdaq stock market.
In Proceedings of IEEE/IAFE/INFORMS 2000 Conference on Compu-
tational Intelligence for Financial Engineering (CIFEr), 2000a.
Vince Darley, Alexander Outkin, Tony Plate, and Frank Gao. Learning,
evolution and tick size effects in a simulation of the Nasdaq stock mar-
ket. In Proceedings of The 5th World Multi-Conference on Systemics,
Cybernetics and Informatics, 2001.
Vincent Darley. Towards a theory of autonomous, optimizing agents. PhD
thesis, Harvard University, 1999.
Vincent Darley, Alexander Outkin, Manoj Gambhir, Vladimir Makhankov,
and Gerard Weisbuch. Calibrating the Nasdaq stock market simulation:
A report on effects of decimalization. Technical report, Bios Group,
2000b.
Vincent Darley, Alexander Outkin, Tony Plate, and Frank Gao. A Nasdaq
market simulation. Phase III. Technical report, Bios Group, 1999.
Vincent M. Darley and Stuart Kauffman. Natural rationality. In
W. Brian Arthur, Steven N. Durlauf, and David Lane, editors, The Economy
as an Evolving Complex System II. Addison-Wesley, Reading, Massachusetts,
1997.
Sanmay Das. Intelligent market-making in artificial financial markets.
Master's thesis, Harvard University, 2003.
J. M. Epstein and R. Axtell. Growing artificial societies: Social science
from the bottom up. The Brookings Institution, 1996.
J. D. Farmer, L. Gillemot, G. Iori, S. Krishnamurthy, D. E. Smith, and
M. G. Daniels. A random order placement model of price formation in
the continuous double auction. In L. Blume and S. Durlauf, editors, The
Economy as an Evolving Complex System, III, pages 133-173. Oxford
University Press, New York, 2005a.
J. D. Farmer, P. Patelli, and I. I. Zovko. The predictive power of zero
Index
Java, 40, 41, 49, 69, 140, 142
learning, 3, 6, 9, 14, 15, 19, 32, 48, 55, 56, 65, 67, 68, 72, 73, 77, 78, 81, 93
limit order, 69, 142
liquidity, 24, 29, 30, 50, 57, 65, 121, 123
log-normal, 59, 60, 62
market
  dealer-mediated, 12, 13
  quote-driven, 12
market clearing, 2, 18, 39
market clearing price, 2
market efficiency, 17
market infrastructure, 2, 76
market maker, 6, 12, 119
  definition, 12
  functions, 12
  learning, 72
  responsibilities, 12
market maker strategy
  Analytic Parasite, 45, 94
  Basic Dealer, 43, 93
  Basic Inventory Dealer, 43, 72
  Basic No Inventory Dealer, 44, 74
  Classifier Dealer, 45
  Dynamical System Dealer, 46
  Learning, 72
  Matching Dealer, 45
  Parasitic Dealer, 44
  Price Volume Dealer, 44
  Q-Learner, 47, 82
  Spread Learner, 47, 82
  Volume Dealer, 44, 94
market making, 8, 12, 17, 124
market making strategies, 4, 65, 66
market microstructure, 18, 23, 90, 131
market regulation, 13
market rules, vii, 2, 4, 13, 15, 39, 53, 68, 70, 76, 87
market structure, 52, 66, 69
Nasdaq Stock Market, 12, 16, 21, 89
non-equilibrium dynamics, 79
parasitic strategies, 6, 7, 18, 39, 40, 43, 44, 50, 53, 55, 57, 67, 69, 74, 91, 103, 106, 120, 123-125, 139, 142
phase transition, 3, 4, 6, 7, 90, 110, 111, 120, 124
price discovery, viii, 6, 7, 14, 40-44, 50, 69, 70, 91, 110, 119, 121
random walk, 32, 33, 37, 42
randomness, 26
rationality, 8, 22
reinforcement learning, 14, 46, 67, 81, 86, 87
signal-to-noise ratio, 55
simulated annealing, 5
simulation
  supplemental materials, viii, 131
spread, 13
spread clustering, 3, 6, 15, 58, 59, 63, 66, 71, 72, 76, 77, 79, 120, 124
  and learning, 72
  and market maker strategies, 72
strategy, 3
unintended consequences, vii, 41, 68, 87
unpredictability, 1, 91
what-if scenario, 5
zero intelligence, 18