
Dynamics of Neural Field Models

Nikola A. Venkov, BSc.

Thesis submitted to The University of Nottingham for the degree of Doctor of Philosophy September 2008

Dedicated to my late father who set me on this path

Abstract
In recent years significant progress has been made in understanding brain electrodynamics using mathematical techniques. At the level of neural tissue the brain cortex can be represented as a two-dimensional sheet of densely interconnected neurons. A suitable mathematical description in this context is given by nonlinear integro-differential equations called neural field models. We study the capability of these models to exhibit complex spatio-temporal dynamics. To this end we adapt techniques developed for the analysis of nonlinear PDEs. We focus on the class of models including space-dependent delays. Delays arise due to the finite propagation speed of information along the axon, or the diffusive nature of propagation in the dendritic tree. In the first part of the Thesis we develop the analysis of Turing-type pattern formation in one dimension. Thanks to the delay mechanism patterns can be dynamic: global oscillations, travelling or standing waves. We use multiple scales analysis to derive the normal form of the bifurcation. We obtain the mean-field Ginzburg-Landau equations. We establish the parameter windows of various dynamical regimes and secondary instabilities. In the second part, we look at neural fields defined over two spatial dimensions, again including delays. We propose a method for constructing PDE systems that approximate the dynamics but are far easier to solve numerically. We also consider pattern formation in a model with anisotropic patchy connectivity, in view of the novel concept of a crystalline micro-structure of neocortex. We complete the Thesis with some results about 1D localised solutions (bumps), which are analogues of autosolitons in reaction-diffusion systems. We clarify the scope of validity of the Amari method for establishing bump stability. For a solution composed of a number of weakly interacting bumps, we derive reduced equations of motion which govern the bumps' positions.

Acknowledgements
To my supervisor Stephen Coombes for being kind and positive in all sorts of situations that I have served to him. I am especially grateful for his unselfish approach to us, his PhD students. He insisted that we undertake many activities which other supervisors often consider as distracting from project work: to produce our academic websites, to publish our results in due course, to attend a variety of summer schools, to present at as many international conferences as we could win funding for! I think this not only improved our career opportunities but also made the PhD more enjoyable. To my second supervisor, Paul Matthews, for helping me out on a number of occasions when I got stuck with the theory. Also to my internal examiners Helen Byrne and Paul Houston for their involved and constructive criticism at the end of each year. Finally, I still remember the exceptionally cordial meeting with Dr Richard Tew the very first time I came to Nottingham to look for a PhD position.


Contents
1 Introduction and contents . . . 1

2 Context to theoretical neuroscience . . . 4
   2.1 Brief philosophical background . . . 4
   2.2 Brief biological background . . . 11
      2.2.1 The typical neuron . . . 11
      2.2.2 Subcellular processes . . . 13
      2.2.3 Networks and microcircuits . . . 17
      2.2.4 Neural populations and brain areas . . . 18

3 Introduction to neural network modelling . . . 21
   3.1 Single neuron . . . 21
      3.1.1 Action potential generation . . . 22
      3.1.2 Integrate-and-fire neuron . . . 27
      3.1.3 A simple synapse . . . 30
      3.1.4 Cable theory for the dendrite . . . 32
   3.2 From spiking networks to rate models . . . 36
      3.2.1 Derivations for the firing rate model . . . 37
      3.2.2 Advantages and limitations of rate models . . . 40
   3.3 Neural field models . . . 41
      3.3.1 Two-population models and Mexican hats . . . 43
   3.4 Pattern formation in neural field models . . . 44
      3.4.1 Globally periodic patterns: Turing instability . . . 45
      3.4.2 Localised solutions: bumps . . . 48
   3.5 Applications of neural fields . . . 51
      3.5.1 Travelling waves in slices . . . 52
      3.5.2 Functional architecture of visual cortex . . . 54
      3.5.3 Pattern formation in 2D and the visual system . . . 60
      3.5.4 Relevance and validity of neural field models . . . 64

4 Neural fields with space-dependent delays . . . 68
   4.1 The route to delays . . . 69
   4.2 Neural field with time-dependent connectivity - formulation . . . 70
   4.3 Conditions for Turing instability . . . 74
   4.4 Types of Turing instability . . . 77
   4.5 Examples: axonal and dendritic models . . . 79
      4.5.1 The model with axonal delay . . . 81
      4.5.2 The models with dendritic and axo-dendritic delays . . . 84
   4.6 Conversion to PDE. Brainwave equations . . . 85
   4.7 Scales at the bifurcation point . . . 88
   4.8 Chapter summary . . . 92

5 The amplitude equations . . . 93
   5.1 Derivation . . . 94
      5.1.1 Scale separation . . . 94
      5.1.2 Fredholm alternative . . . 96
      5.1.3 The mean-field Ginzburg-Landau equations . . . 101
   5.2 Analysis of the amplitude equations . . . 103
      5.2.1 Travelling wave versus standing wave solutions . . . 104
      5.2.2 Structure of the TW and TW/SW generators . . . 106
      5.2.3 Cubic and sigmoidal firing rate function . . . 107
      5.2.4 Benjamin-Feir instability . . . 109
      5.2.5 Examples revisited . . . 114
   5.3 Spike frequency adaptation . . . 117
      5.3.1 A linear combination of convolutions . . . 119
      5.3.2 Analysis of models with SFA . . . 121
   5.4 Recapitulation . . . 125

6 Planar neural fields with delay . . . 128
   6.1 Integral and PDE form in 2D . . . 129
      6.1.1 Derivation of the PDE form in 2D . . . 130
      6.1.2 Improved approximations . . . 133
   6.2 Linear instability analysis . . . 139
   6.3 Spatially modulated connectivity . . . 144
      6.3.1 Example with modulation on a square lattice . . . 146
   6.4 Chapter summary . . . 149

7 Localised solutions in a model with dynamic threshold . . . 151
   7.1 Comparison of Evans and Amari analysis . . . 151
      7.1.1 Evans approach . . . 152
      7.1.2 Amari approach . . . 154
   7.2 Weakly interacting bumps . . . 158
      7.2.1 A model with threshold accommodation . . . 159
      7.2.2 Derivation of equations of motion for bumps . . . 161
   7.3 Chapter summary . . . 171

8 Discussion . . . 172

9 Appendix . . . 176
   9.1 The dispersion relation for 1D models . . . 177
      9.1.1 Axonal delay . . . 177
      9.1.2 Axo-dendritic delay . . . 180
      9.1.3 Models with adaptation . . . 182
   9.2 1D neural field numerics in integral framework . . . 183
   9.3 Simulation of the PDE form . . . 192
      9.3.1 Models with rational Fourier transform . . . 192
      9.3.2 Dendritic delay with fixed synapses . . . 198
      9.3.3 Axo-dendritic delays . . . 200
      9.3.4 Dendritic delay with correlated synapses . . . 201
   9.4 The adjoint operator . . . 205
   9.5 TW-SW . . . 207
      9.5.1 Data at the linear order . . . 208
      9.5.2 The nonlinear step . . . 218
      9.5.3 Two-parameter plots . . . 223
   9.6 Linear analysis of neural fields in 2D . . . 227
      9.6.1 Isotropic neural field models . . . 228
      9.6.2 The optimal model . . . 231
      9.6.3 An anisotropic neural field . . . 234
   9.7 2D neural field numerics in PDE framework . . . 238
   9.8 Methods for localised solutions . . . 238

References . . . 241

CHAPTER 1

Introduction and contents


The present work belongs to the field of study often termed theoretical neuroscience. Workers in the field are interested in building hypotheses about how exactly the anatomical complexity of the brain gives rise to the electro-chemical functionality observed in fine-tuned experiments. The hypotheses often serve as a basis for arguing how the observable electro-chemical functionality could be implementing specific steps of the processing of information by the brain. For example, in experiments of electrical stimulation of brain slices in a dish, or in EEG and MRI scans in vivo, one can observe large-scale coherent oscillations or well-shaped travelling waves of neuronal activity. In the past various interpretations and speculations have been put forward relating to the causes of such highly organised activity and its possible significance in the processes of thought. Recently, by proposing models describing generalised neural tissue and analysing them mathematically, it has been shown that such sorts of activity are a natural consequence of the tissue structure and are simpler than might have been expected. A useful tool for modelling the brain at the level of neural tissue, where individual neurons are invisible, are the so-called neural field models. These are simplified, mathematically elegant descriptions of the very large neuronal networks (containing hundreds of millions of neurons) that make up the vertebrate brain. They are written as one or a small number (depending on the number of neuronal types we differentiate in the model) of integral or integro-differential equations. Their dynamic variable is the local activity, the averaged dynamics of many hundreds of neurons at a network location. The mammalian cortex,

in particular, could be thought of as a two-dimensional sheet of tissue through which spatially structured impulses of neuronal activity propagate. Thus the continuous neural field models could be used to interpret experiments showing activity synchronisation throughout the cortex, the spread of waves or the formation of stable patterns of activity. It therefore becomes necessary to know more about the mathematical properties of neural field equations, especially about the dynamics of their spatially inhomogeneous solutions. The theory of pattern formation provides methods to determine inhomogeneous stable states (which could also be dynamic in time, for example periodic) in spatially extended systems. Most of the methods are developed for systems described by partial differential equations. The focus of this Thesis is to adapt these methods to nonlocal integral equations, in parallel with providing new insights into the general dynamics of neural field models. The main thread connecting the novel contributions in all Chapters is the analysis of pattern formation in neural fields with non-trivial delay terms. Here follows a summary of the Thesis contents by Chapters. In Chapter 2 we introduce readers to the ideas that motivate the endeavour of building a mechanistic theory of the brain. We give a short historical exposition of the philosophical arguments shaping the directions of inquiry pursued by theoretical neuroscience. We also review some basic information about the biology of the brain which the reader will need in order to understand the models found throughout the Thesis. In another introductory chapter, Chapter 3, we extensively discuss a derivation of the neural field framework. We start off with models of the individual building blocks of a neuron, pass on to single neuron models, and then to the statistical averaging of these when connected in very large networks. For every transition from one type of model to another we discuss the experimental and theoretical evidence for its validity. Our main concern in this Chapter is to show that, despite the large degree of abstraction, neural fields are well-justified models that give valid results relating to brain dynamics. With the same aim we review a number of studies where they have been successfully applied for the tentative resolution of specific problems in neuroscience. In the same Chapter we also review the classical studies on the pattern formation properties of neural field equations with one spatial dimension (describing a line brain). It is these

studies on patterns that we expand on and elaborate in our work. Our own results are presented in Chapters 4 and later. We introduce a more complicated 1D neural field equation accounting for the delays associated with the finite conduction speed of the neuronal signal across the network. The necessity of these delay terms has been appreciated since early on, but the mathematical difficulties they bring about have led to only a partial analysis until now. In this Chapter we present and discuss the linear stability of the homogeneous steady state to periodic non-homogeneous patterns. We go beyond linear analysis in Chapter 5 where we develop the apparatus that can differentiate between several types of solutions. A lot of attention is given to translating the theoretical predictions back to the original parameters of the model, plotting portraits of the parameter space, as well as illustrating these with simulations of actual model solutions. In Chapter 6 we extend our analysis to neural field models defined in two spatial dimensions. It is discussed there why a planar brain model is largely sufficient to describe the dynamics of the most interesting part of the mammalian brain, the cortex. Again, we are most interested in investigating the instabilities to periodic inhomogeneous solutions (only at the linear level this time). Additionally, we go into the history of replacing the integral neural field equations with partial differential equations that approximate the former's dynamics but are easier to analyse. We suggest an improved approximation. Finally, in Chapter 7 we complement the discussion about globally periodic pattern formation with a review of some of the methods for analysing the local properties of patterns. We focus on the emergence and movement of isolated bumps or spots of activity in 1D neural fields with adaptation. We review a new type of neural field model describing dynamic nonlinear properties of the constituent neurons. Unlike simpler models it can support interesting bump dynamics in two dimensions as well, including ring solutions and labyrinths. We do not develop the analysis of bumps in 2D models, but the Chapter sets the stage for a future project to do that. A short review of our results and possibilities for further research is given in Chapter 8. Finally, in the Appendices we list and amply discuss the computer codes that solve and analyse the models in this Thesis. We wish you an interesting and gratifying read!

CHAPTER 2

Context to theoretical neuroscience


This general introductory Chapter is composed of two broadly complementary Sections: one on the epistemological arguments that set the directions of development of theoretical neuroscience (Section 2.1), and another that introduces the field's object of study, the brain (Section 2.2).

2.1 Brief philosophical background

According to Daugman [1] (see also [2, 3]) every technological era brings its metaphor for imagining the structure and operation of the brain. In the Greco-Roman world, with its catapult, crane, lever and water technology (fountains, pumps, water clocks and other hydraulic contrivances), the theory for the physiology of mind reached its most developed form with Galen (129 - ca. 200). He was the first to demonstrate experimentally that the body is controlled not by the heart but by the brain, by severing nerve fibres at the neck and observing that the respective muscles were paralysed [4]. He construed man as a hydraulic machine where a thin rarefied fluid, called animal spirits, flowing in the hollow nerve fibres passed messages mechanistically between the brain and the organs [5]. In the Renaissance the most widespread metaphor is again the technology of the time, the mechanical clock, reflected in Descartes' animal automata and De Mettrie's L'Homme Machine [5]. Soon after the advance of Newton's theory of the ether and his reduction of the multiplicity of physical events to a single principle, David Hartley envisages the multiplicity of mental phenomena as resulting from vibratory motions in the brain and

nerves. He develops a theory of the organism essentially as an elastic machine, the external events acting upon it to give rise to specific vibratory responses. The internal resonances are interpreted as a biological basis for the mental and psychic states [4, 5]. Another example from more recent times, those of the Industrialisation and the steam engine, is Freud's psychodynamic theory. The individual's conscious experience and behaviour are the manifestation of a surging unconscious libidinal struggle between desire and repression. Similar to steam pressure, which cannot build up indefinitely, the internal psychic pressures must inevitably express themselves in one form or another [1]. It is speculated that every metaphor is beneficial toward better understanding of reality, as it brings new viewpoints, new investigations and therefore insights. While a metaphor staying for too long might stifle scientific enquiry, it is its replacement with a new metaphor that triggers large scientific dynamism, as it signifies a shift of the scientific paradigm [6]. The 20th century metaphor for the brain has definitely been The Computer [1]. The central nervous system (CNS) is imagined by the majority of scientists as a very complicated, highly parallel, noisy yet robust to interference by noise, computing machine. This notion has led to the birth of an entire discipline cross-seminating the Neuro- and Psychological sciences with Mathematics and Computer Science: the discipline of Computational (or Theoretical) Neuroscience. Its main object of study might be summarised as understanding how the brain and the various brain subsystems represent and process information as physical systems. The present Thesis also belongs to that tradition of thought prevailing today. At the inception of the computer metaphor, from the 1940s up to the 1970s, it was taken quite literally by many eminent researchers. Thus, fashionable were models composed of simple abstract neurons (often binary cells holding 0 and 1 values) interacting in artificially prescribed manners. They were successfully shown to be able to perform many computational tasks, in principle. In 1943 McCulloch and Pitts [7] demonstrated that any input-output relation could be realised in a circuit of such binary neurons, and further that it is equivalent to a Turing machine with finite storage capacity [8], which in turn is equivalent to any digital computer. In 1954 Uttley built a two-layer network with a simple unsupervised learning rule (postulated earlier in relation to the CNS by the neu-

ropsychologist Donald Hebb [9, 10]: if input from neuron A led to neuron B producing output, the connection between A and B is strengthened; otherwise, it is weakened). The neural network was capable of categorising input and of learning the rules generating a pattern, through trial and error [11]. The latter led to a model for Pavlovian conditioning [12, 13]. Rosenblatt created the Perceptron with a supervised learning rule in 1958, which could be trained to categorise novel input [14]. For a survey of the important early steps in the study of neural networks as building blocks for intelligent systems, see [15]. While this approach today might look too simplistic and abstract [16] (but for a more recent defence of it, see [17]), the initial ideas developed in those models continue to be very influential and inform many of the current directions of investigation in Neuroscience. In fact, the unexpected discovery that simple neural networks can recognise and sort unfamiliar patterns, an ability previously reserved only for biological brains, led to a wave of enthusiasm that science could penetrate into the principles of operation of the brain and possibly recreate them [18]. It also led to the branching out of the field of Artificial Neural Networks, which has since brought to us a rich harvest of results and applications [9, 19]. In the 1970s David Marr [20] characterised a hierarchy of three levels for the computational theory: 1) the level of abstract problem analysis, wherein are determined the constituent properties of the task which the neural system has to solve; 2) the level of the algorithm, a formal procedure which for given input will return the correct output and thus performs the task; 3) the lowest level of physical or biological implementation of the algorithm. Marr's claim was that each level was largely independent of the ones below it. Thus, discovering the algorithms with which the brain accomplishes the tasks at hand was possible despite our scant knowledge of its anatomical and functional details. However, although Marr's program fit well with the spirit of the times, it gradually became clear that if we want to understand the algorithms which the brain uses, we need to give serious consideration to the actual biological implementation [21]. Importantly, one has to note that the biological brain is an evolved system, which a) has had to face and deal with new types of problems by small adaptations of already existing structures and b) has evolved as a single unit, with an unclear, possibly infinite-dimensional, optimisation function. Arguably, it

is designed to distribute efficiently an unknown-to-us multitude of tasks sharing the same resource, rather than solve any one separate task efficiently. Thus, algorithms used by the brain for specific problems might be very different and in fact far less direct and efficient than the ones proposed by the scientist or the engineer. Today it is still widely believed in the community that the brain computes. However, it is explicitly recognised that a computer need not operate in any way similar to the serial symbol-logic devices we have on our desktops [21, 22]. Notably, it is recognised that, while algorithms can be transferred between hardware implementations, they are most efficient and elegant in their native architecture. Considering algorithms of computation separately from their possible biological implementation has led mostly to developing ideas informed by the principles of symbol logic that are familiar from Computer Science [16]. These could be quite inefficient in a biologically evolved architecture and thus seem less likely candidates for a truthful description of how the brain operates. In the past thirty years a new conceptual viewpoint has come to the rescue. It stems from the advances in Nonlinear Dynamics and Dissipative Systems, namely, the mathematics of emergent behaviour and complexity [23]. Since these concern the unexpected properties of complex systems emerging as one passes from a finer to a larger scale, they are particularly suitable for application to the neural brain [24]. Computations here might involve many scales, from the molecular and subcellular dynamics, to neural circuits, neural populations, to the scale of the whole brain (in Section 4.6 we discuss Haken's brainwaves, which emerge on the latter scale). Models of all of these have been steeped more deeply in the non-symbolic and messy side of our brain as a physical system. The shift of focus to the lower levels of Marr's hierarchy was helped also by advances in experimental and observational techniques. The anatomical organisation of the brain is established in fairly good detail thanks to plenty of specialised tracer methods, microscopy studies and new technologies, such as diffusion tensor imaging (DTI). Previously the only instruments available to the scientist in order to study the operating living brain were lesion studies, single-cell single-electrode recordings or electro- and magnetoencephalography (EEG and MEG). The latter had very low spatial resolution and problematic source determination [25, 26]. Lesion studies are often overlooked today as too prim-

itive; see [27] for a discussion of how they can still benefit Computational Neuroscience. Single-cell recordings provide very precise and accurate information, however there are severe problems arising with the interpretation of that data: what is the role of the other, unmeasured cells; how much of the cell's activity is in connection to the computation under study? Today, it is somewhat easier to resolve these questions using techniques with a large observational area and yet very fine spatial and temporal resolution; these include optical imaging [28] and massively parallel recording by multi-electrode arrays [29]. When studying human subjects one has to contend with non-invasive techniques like positron emission tomography (PET) [30], functional magnetic resonance imaging (fMRI) [31, 32] and DTI [33], with medium spatial but poor temporal resolution. A comparison of the relative spatial and temporal scopes of different experimental methods is sketched in Figure 2.1. Finally, see [34] for a review of the exciting new imaging techniques that are just beginning to enter the neuroscientific investigation! The above numerous instruments of investigation have made possible the testing of hypotheses about the low-level implementation of algorithms in the neural circuitry and have fostered close collaboration between experimentalists and theoreticians. This has led to a sharp increase of our knowledge of the functional organisation of the brain. In fact, for sites popular among experimentalists, such as the primary visual cortex (V1), starting with the work of Hubel and Wiesel [36] in the 1970s there is a very detailed description of the structure even on the microscopic scale and quite good understanding of its functional organisation [37, 38, 39, 40, 41, 42, 43, 44]. We will mention V1 in some detail in Section 3 as an application area for neural field modelling. As of yet, it is not clear whether the shift of interest in Theoretical Neuroscience to models more firmly based in the biological basis of neural systems, and making use of the mathematics of continuous dynamical systems, reflects the introduction of a new metaphor, the Dynamical Brain, or merely an extension and refinement of the computing metaphor [45]. At one end of the spectrum, authors have often conceptualised their low-level dynamical models simply as providing mathematical and physical mechanisms for embedding certain types of computation within the neuronal (or sub-neuronal) architecture. This view resonates with advances in a more abstract field of mathematics, analog com-

[Figure 2.1 near here: experimental techniques (EEG & MEG, fMRI, PET, lesions, optical imaging, 2-deoxyglucose, microlesions, multi-electrode recording, single-cell recording, light microscopy, patch clamp recording) arranged by log spatial scale (mm), from synapse to whole brain, against log temporal scale (sec), from milliseconds to days; invasive and non-invasive methods are distinguished.]

Figure 2.1: Left: Comparison of the spatial and temporal resolution and scope of some imaging techniques used in Neuroscience. The shading signifies the degree of damage inflicted to the subject. After Cohen and Bookheimer [35].

puting, where traditional continuous dynamical systems have been studied as a means to solving algorithmic problems, such as sorting and linear programming [46, 47, 48]. On the other end of the spectrum lie views [49, 50, 51] that the functionality of the neural systems is established in ways that cannot be captured by computational paradigms (or at best can be captured only partially), because they overlook an important part of the nature of the biological neural systems. For example, Globus [50] proposed that neural function is produced hierarchically at each level by pressures: cytoplasmic, chemical, adaptive or dynamic. Importantly, these are involved in two processes seen as inherently noncomputational [49]: neural Darwinism and functional productivity. The term neural Darwinism stands for a self-organisation process within the neuronal pool driven by pressures arising from the variance, diversity and imprecision of the synaptic contacts between neurons. During brain development it leads to the formation of neural groups with specific functional properties through

degeneration of the initial pool of variability [52]. However, in the mature brain both structural and functional modules are still modified continuously, as the Darwinian process is coupled to an adaptive pressure termed functional productivity. The latter denotes the changing of a neuron's properties evoked by its own and the surrounding activity [50]. This change may be realised at the edge between the molecular and the cellular level through activation of various mechanisms modulating the neuronal excitability or the efficacy of its synaptic contacts to other neurons. While a computational theory necessarily works with predefined or pre-existing rules and algorithms governing the evolution of the neural system, neural Darwinism and functional productivity are multihierarchically driven self-organisation processes that are highly malleable to changes in the neural environment. Yet, there is already a long list [45] of reduced dynamical models in neuroscience which seem to capture quite plausibly essential qualities of isolated brain subsystems at various hierarchical levels. Erdi [53] believes the abovementioned seemingly noncomputational processes would not be an obstacle to a more comprehensive theory of neurodynamics cutting across several hierarchical levels. In this vein, a formalism developed by Cheuvet with the aim to integrate several levels of organisation [54] is a starting point. Thus a Dynamic Brain metaphor might not be divorced from the Computational Brain. In the present Thesis we subscribe to the moderate dynamical/computational view. We are concerned with establishing in more detail the properties of a somewhat simplified and abstracted dynamical system relating the level of neural populations/cortical macrocolumns with the next higher level of neural tissue on the centimetre scale. Using methods from the mathematical field of Pattern Formation we predict the emergent phenomena at the larger scale. In Section 3.5 we will review how these have been put to use in computational algorithms by various authors, as well as compare them with some experimental observations. It is fitting at this point to conclude the Section with the following quote of Piccinini [55]: By itself, developing computational models of neural phenomena does not commit neuroscientists to the conclusion that brains compute any more than developing computational models of, say, the Big Bang commits cosmologists to the view that the universe

is a computer. If neural systems compute, this needs to be established by more than the existence of computational neuroscience.

2.2 Brief biological background

In this Section we will describe how the brain looks at different scales. We introduce the minimum background information necessary for the theoretician prior to embarking upon modelling. We begin with the scale which seems the most natural to us, that of the basic unit building up the central nervous system: the neuronal cell. In Section 2.2.2 we will dip briefly into the universe of subcellular processes that bring about the neuron's unique properties. Finally, in Sections 2.2.3 and 2.2.4 we will progress on to the coarser scales of neuronal populations and brain areas as a prelude to later chapters.

2.2.1 The typical neuron

The brain contains two major classes of cells. The neurons are the active signalling elements, while glial cells have various servicing roles such as mechanical support, maintenance of a balanced chemical environment in the intercellular space, etc. Neurons number around 100 billion, while glial cells are about 10 times as many [56]. Let us note here that a typical neuron does not exist. Until now neuroanatomists have identified an enormous number of neuronal types which might differ both anatomically (Figure 2.2, left) and electrodynamically (Figure 2.2, right). The number is so high in fact that, recently, automated clustering algorithms have been used for classification of neurons by shape [57]. Still, the basic features of which we will take account in our models are shared by all neurons. We describe those here. The thicker part of the cell which contains the nucleus and most of the protein-synthesising machinery is called the cell body or soma. Out from it protrude from one up to tens of thin filament-like processes, called dendrites. They branch extensively, forming the dendritic tree, with up to 400 tips in some types of cells [59]. Dendrites collect the signal input to the neuron through contacts with other neurons. The neuron's output travels down another type of filament
a.

b.

Figure 2.2: Left: Examples of diverse morphology of neuronal types. Right: Diverse behaviours of neurons from mouse barrel cortex under the same stimulus. Classied as regular spiking (RS), intrinsically bursting (IB) and fast spiking (FS) in Agmon and Connors [58].

called axons. The signal in the nervous system has an electrical nature. It is carried by the difference of local ionic concentrations across the cell membrane and has a relatively small velocity typically around or below 10m/s [56]. There are two important differences between dendrites and axons. The transmission of electric signals along the dendrite is mostly passive (linear), see Section 3.1.4. Since such a signal decays with distance dendrites cannot be very long. In the axon this hurdle is overcome by employing a nonlinear, or active, transmission process (described mathematically in Section 3.1.1). The nerve impulse propagating along the axons length is a solitary waveform. Thus, dendritic inputs are summed linearly and continuously, while axonal output consists of discrete events, with a maximum frequency determined by the time necessary for the nonlinear mechanism to restore to operational range. The second important difference is that the points of contact between neurons, called synapses, pass information unidirectionally a dendrite holds the postsynaptic site while an axon terminal the presynaptic. For our purposes it sufces to imagine that the integration of synaptic input occurs at the soma. We should mention that the activation of synapses in the 12

dendritic tree could either contribute to increasing voltage, in which case the synapses are called excitatory, or detract from it (inhibitory synapses). Nonlinear transmission is triggered at the axonal base if the combined voltage there exceeds some threshold value. In simplied models where the neuron is treated as a single input-output unit, this can be interpreted as a nonlinear gating property.

2.2.2 Subcellular processes


The models we will develop in this Thesis are on the level of neural populations discussed in Section 2.2.4 and necessarily will use a very simplified notion of a neuron. In that regard the naive description we gave above more than suffices for us. We would like to mention however that that description is a few decades old. With the improvement of tools, the accumulation of experimental and theoretical data on the subcellular scale has shown that in reality the neuron is a vastly more complex entity. The complicated processes supporting the transport of signal across the cell make it capable of carrying out chains of computations on its own. The mechanism of exchanging signals across the synaptic cleft makes possible several very different types of informational and functional input which interact with each other. First, let us summarise the molecular basis of the neuronal signal. As with most other cells in the body, the concentrations of certain ions (Na+, K+, Ca2+, Cl-, ...) differ greatly in the intra- and the extracellular space. This is due to specialised ion channels in the cell membrane which pump out some ions (Na+, Ca2+) and bring in others (K+) in order to maintain cellular homeostasis. This creates concentration gradients for these ions, each of which can diffuse back through the semi-permeable membrane at a different rate (depending on the number of active ion channels and static pores for that ionic species), which is called selective membrane permeability. As ions diffuse across the membrane they also transfer charge across it and create a gradient of electric potential. In general the interior of cells has an excess of negative charge. Mobile charges such as anions repel each other and build up on the inside of the cell membrane. Here electrostatic forces attract an equal density of positive charges to the outside of the membrane; thus it takes the role of a capacitor. The

concentration and potential gradients for a single ion species act in opposite directions and balance out at some value determined by the selective membrane permeability for that ion. This value of the potential is called the reversal potential. For example, Na+ has a higher concentration in the extracellular fluid and therefore is balanced by a positive reversal potential, around 50 mV (extracellular potential is 0 mV). Conversely, for K+ the reversal potential is around -77 mV. Because all ions interact with the electric field, the effective resting potential at which the neuron is at equilibrium is an average of the individual reversal potentials with weights determined by the selective membrane permeabilities. It is about -65 mV. Signals arrive at a postsynaptic site in the form of chemical messengers (neurotransmitters) diffusing across the cleft separating the two neurons. These temporarily bind to membrane receptors and alter the dynamics of ion channels. The intracellular ionic balance is locally disturbed and consequently the membrane potential deviates from rest. Since the membrane is generally impermeable to ions, the disturbance in ionic concentrations and electric potential propagates along the length of the dendrite. Its spread is governed by a diffusion equation with an additional leak term, which is known as the cable equation (Section 3.1.4) because it also describes the propagation of electrical charge along a wire. Illustratively, the axial resistivity in the dendrite is measured in the range of 70 to 250 Ω cm while the membrane resistivity lies between 5000 and 50000 Ω cm^2 [59] (the latter is still 10 000 times lower than the values for a lipid bilayer without permeating ion channels [60]). Charge propagates mainly along the dendritic core. Some ion channels have voltage-dependent properties. At -65 mV their operation maintains the equilibrium; at higher voltage values their conductance increases, allowing more cations in and raising the voltage further, until a threshold value is reached at which they shut off. Thus, if a signal arrives locally they create a positive feedback loop that amplifies it and helps it spread further. Other channels are then activated which bring the voltage back to rest. This is at the heart of the active signal transmission typical for axons (Section 3.1.1). There are a number of channel types with different time constants and activation/deactivation thresholds (as well as channels that are not voltage-dependent but Ca2+-activated) [61]. The interaction of these creates the diverse output dy-

namics of neurons, a set of which was shown in Figure 2.2, right. Moreover, today it is known that voltage-dependent channels are also common in stretches of the dendritic tree [62, 63]. This provides for a complex interplay between passive and active transmission [64, 65] which is just beginning to be explored. For example, dendrites with distributions of active ion channels have been found to support propagation of the nerve impulses generated at the axon hillock also back toward the dendritic tips [66], a process that might underlie the Hebbian learning scheme mentioned in Section 2.1. The Hebbian rule is the only computational hypothesis proposed for the realisation of long-term memory and learning in the brain. It can be summarised as: if the recent input to neuron B from A has correlated with the output produced by B, then the synaptic connection between A and B is strengthened; otherwise, it is weakened. A detailed discussion of the rule in the synaptic and neuronal modelling context is given in [67], while more about its limitations and extensions can be found in [68]. In order to have change of synaptic properties (synaptic plasticity) in this way, the synapse needs to obtain information about whether the neuron produced output shortly after the synaptic activation or not. Backpropagation is the first biophysical mechanism uncovered which permits implementation of the Hebbian scheme. The dendritic tree can also execute computations due to its spatial structure (the importance of dendritic geometry was demonstrated well by [69, 70]). Its most obvious and earliest recognised function is integration of input from thousands of sources [71, 72]. Depending on the neuron type it might contain between 500 and 200 000 synaptic contacts [59]. However the position of the point of contact along the tree is also of significance. Dendritic trees are electrically distributed rather than isopotential, and various voltage gradients exist along the branches. The signal in passive dendrites is very strongly attenuated before it reaches the soma. This implies that many synapses have to be activated at similar times if an output nerve impulse is to be triggered. Thus the dendritic tree may serve as a coincidence detector. Additionally, due to the smearing out of the signal in passive dendrites, the time windows for coincidence integration are different for different parts of the tree. A number of authors have proposed the single-neuron coincidence detector property for solving various higher-level processing tasks in the brain [73, 74, 75]. The modularity of the dendritic tree can be exploited also as a nonlinear pattern recognition system, for example to perform

orientation tuning [76] and binocular disparity [77] in the visual cortex. An important role is played by inhibition. Between 10% and 40% of the synapses are inhibitory and they are mostly placed closer to the soma, where they can be effective in suppressing input from more distal excitatory synapses. A strategically placed inhibitory synapse can block certain parts of the tree and not others, as was confirmed by [78, 79]. The current perspectives on dendritic computation studies are discussed in [80]. Detailed modelling of dendrites is one of the present challenges to neuroscience: geometric shape, distribution of channels and synapses all make important contributions to the dynamics [81]. The mathematical methods employed in neuroscience are not yet powerful enough to analyse the effects of internally interesting neurons when incorporated into neural networks or populations. For now, either one studies the properties of the complex neuron on its own, forgetting completely about the higher hierarchical layers of the nervous system, or one replaces all its internal dynamics by a few simple differential equations (see Section 3.1) or even by an input-output device with predetermined behaviour (Section 3.2). As we learn more and more about the richness of cellular dynamics, the notion of successfully approximating it with a set of input-output relations becomes ever more distant. What is not clear yet, and gives some hope for the reductionist method, is whether the irregularities on a microscopic scale have significant functional impact on the dynamics of large networks and neuronal populations, especially ones operating at high levels of noise and spontaneous activity. If microscopic effects turn out to be important then Poznanski ([49] and Section 2.1) may well be correct that the computational paradigm has little value for understanding the brain. In this Thesis we will shun such thoughts and ignore intra-neuronal dynamics or replace them with very simple relationships. Our aim will be to derive properties of large assemblies of neurons which we hope will survive in some modified form even in the context of fully realistic neurons. To use the noncomputationalists' language, large assemblies are likely to generate dynamical pressures acting toward the lower hierarchical levels, forcing certain types of large-scale order onto the seemingly haphazard bottom-up dynamics. A number of experimental paradigms, some of which will be presented in Chapter 3, seem to justify such an approach. Neuronal assemblies (networks, popula-

tions and beyond) are introduced next.

2.2.3 Networks and microcircuits


Neurons can form very complex networks. It is generally believed that the extraordinary abilities of the central nervous system are realised mostly through these networks, although higher and lower hierarchical levels also contribute with various types of computation. Even in a small-sized network the ways to combine excitatory and inhibitory connections with different conduction delays are virtually endless. Additionally, the dynamical behaviour for any given network is very difcult to establish. There is a lot of work devoted to studying such specic local networks (often called microcircuits). As an illustration, a prototypical microcircuit is the central pattern generator in the lamprey (where it is most well studied) [82]. The swimming motion of the lamprey, an eel-like marine animal, one of the most primitive extant vertebrates, consists of lateral undulations of the body propagating from the head towards the tail. The lampreys spinal cord has about a hundred segments each of which is a simple neural circuit capable of inducing alternating contraction and relaxation of the muscles on either side of the body. The swimming motion is generated by a rhythmic pattern of activity of those segments. It has been modelled as a row of coupled oscillators and shown to naturally lead to a travelling wave of activity [83]. Central pattern generators (CPGs) have been identied in many species, from invertebrates to mammals. In more than 50 investigated species a single or a combination of CPGs is responsible for the habitual locomotor movements swimming in marine animals and gaits in terrestrial ones [84, 85]. Recently, a mathematical model for these has been proposed by [86]. CPGs are often employed by nature where rhythmical activity has to be generated more or less autonomously from the brain for example gastric contractions or breathing. Circuits generating rhythms are also under study within the brain in cerebellum, neocortex and in hippocampus (rhythms generated in the latter are for example the commonly known gamma and theta frequency brainwaves) [87]. Networks with rhythmic properties are more prominent in the literature because the regularity of their dynamics makes it easier to make inroads into the 17

Figure 2.3: Left: Different types of cortical architecture: a. neocortex is horizontally isotropic; b. in cerebellar cortex all three dimensions are distinct; c. mossy fibres run unidirectionally across otherwise isotropic hippocampus; d. nuclei often are isotropic in three dimensions. Reprinted from Schüz [92]. Right: Simplified depiction of a cortical macrocolumn showing the layer arrangement with the predominant neuronal types. In reality a macrocolumn might contain hundreds of minicolumns and tens of thousands of neurons. Reprinted from Szentagothai [93].

network's boundaries and role, both experimentally and theoretically. However, many more microcircuits have been investigated in invertebrate organisms [88]. As these simple systems might have ganglia of hundreds of neurons or even down to tens, they are amenable to complete tracing of the connectivity and detailed mapping of the network processing, for example during a flight reflex [89, 90, 91].
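As an illustration of the "row of coupled oscillators" picture mentioned above, the following minimal sketch (not one of the cited lamprey models; the frequency, coupling strength and preferred phase lag are assumed values) shows a chain of phase oscillators with nearest-neighbour coupling relaxing to a state with a constant phase lag between segments, i.e. a travelling wave of activity along the chain.

    import numpy as np

    # Chain of N phase oscillators; each pair of neighbours prefers a phase lag delta.
    N, omega, K, delta = 20, 2 * np.pi, 1.5, 0.3   # assumed parameters
    rng = np.random.default_rng(1)
    theta = 0.1 * rng.standard_normal(N)           # small random initial phases
    dt = 0.001

    for _ in range(20000):                          # integrate for 20 seconds
        dtheta = np.full(N, omega)
        # Coupling to the right and left neighbours, favouring theta[i+1] - theta[i] = delta
        dtheta[:-1] += K * np.sin(theta[1:] - theta[:-1] - delta)
        dtheta[1:] += K * np.sin(theta[:-1] - theta[1:] + delta)
        theta += dt * dtheta

    # Neighbouring phase lags settle near delta = 0.3: a wave travelling along the chain
    print(np.round(np.diff(theta), 2))

Reading each oscillator as one spinal segment, the constant phase lag along the chain corresponds to the head-to-tail travelling wave of muscle contractions described above.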

2.2.4 Neural populations and brain areas


There are some typical structural features in various parts of the brain that can easily be recognised at both the macroscopic and the microscopic level. For example, neuronal circuits can aggregate into lumps, which are often called nuclei and assigned specic information-processing or cognitive roles lateral geniculate nuclei, pontine nuclei etc. Or they can show a two-dimensional, layered arrangement as in the so called cortices the outermost shells (lat. cortex) of sev18

eral parts of the brain. The models on which this Thesis focuses, neural elds, while usually quite generic and overlooking anatomical specics, are most suitable for modelling just such laminar architecture. Therefore we will give some more information about the role and structure of cortices. The cerebellar cortex is known to be responsible for motor control ne-tuning and adaptation, procedural memory and learning [94]: acquiring and perfecting of skills, such as playing the violin or riding a bicycle, is likely realised by plasticity in the cerebellum. Another laminar cortex, the hippocampus is implicated in new memory consolidation and in exible or novel use of memories [95]. Cerebral cortex (neocortex) is involved in high-level processes such as cognition and intelligence. All sensory information apart from olfaction (smell) arrives here, while motor areas activate and control the muscles for voluntary movements. The cerebral cortex is subdivided in numerous spatially dened cortical areas depending on the specialisation of tasks they perform - primary visual cortex (V1), visual association cortex (V2), primary motor cortex (M1), etc. Typically sensory areas contain orderly two dimensional topographic maps of the respective receptor space. Conversely, the motor cortex contains a map of skeletal muscles of the body such that stimulating a specic location contracts an individual muscle or group of muscles with a specic direction. Indeed, the layered modular arrangement of cortices suggests that the same kind of operation is performed over the whole surface of an area [92]. According to Mountcastle, the brain is a system of interconnected neurons organised into local microcircuits, which are then linked into distributed systems [96]. The different computational roles of these cortices are reected in some differences of architecture, illustrated on Figure 2.3, left. Within the folded hippocampal layer the dendritic trees and part of the axons are spreading isotropically (meaning that one cannot distinguish directions other than vertical across-layer and a horizontal plane that of the layer), however there are some axonal systems (mossy bres) superimposed onto these which run in one direction only following the folds of hippocampus. In cerebellum the dendritic trees are vertical and attened into two dimensional planes while the axons run perpendicular to these planes. The neocortex seems most unspecialised and versatile; it is isotropic in the horizontal plane, being made up of stacked minicolumns, narrow vertical chains 19

of neurons interconnected across the layers of the cortex. In primates minicolumns typically contain around 80 to 100 neurons. Minicolumns are bound together into macrocolumns by extensive local horizontal connections. Macrocolumns (Figure 2.3, right) have a diameter between 0.3mm and 0.6mm which is invariant to the size of the brain between species. In evolution larger cortical surface has been generated by an increase in the number of macrocolumns, not their size [97]. All cerebral cortices of extant mammals consist of six layers and display a modular organisation [98]. Mammals of several radiations have reached sizeable cortices by enlarging primary sensory and motor areas and elaborating new cortical areas. Identied areas vary from ten to twenty in small mammals to perhaps one hundred in humans [96]. Areas are differentiated by their functional specialisation for example the parts of cortex devoted to processing visual stimuli are topographically well-dened (visual cortex) and can be subdivided into areas that respond to different hierarchical features of the visual scene (area V1, V2, etc.) Macrocolumns are thought to be the main computational units of the neocortex. Such modular organisation allows more than two variables to be mapped recursively onto its two-dimensional surface. We will describe in detail this mechanism in Section 3.5.2 as it is at its scale that neural eld models are most readily applied. On the other hand, the vertical dimension of the column might be a way to realise the richness of nonlocal connectivity to very diverse output destinations. Cells from different layers in the same column are known to project axons to different brain areas and subcortical structures. However, relatively little is known about the distribution of a computation in depth in the column (although a typical pattern of cell arrangement and connectivity has been partially identied sketched in Figure 2.3, right, but not discussed here). To read further about the columnar structure of neocortex consult Mountcastle [99].

20

CHAPTER 3

Introduction to neural network modelling


In this Chapter we review a large portion of the established models in theoretical neuroscience. Our aim is to gradually build from the easily verifiable models of single neuron dynamics (Section 3.1) up towards the level of continuous neural populations where the focus of our work lies. On the way we discuss the arguments that justify the introduction and use of each next level in the hierarchy of models, as well as their limitations. Section 3.2 does this specifically for the neural field models, our final goal. Some classical neural field models are reviewed in more detail in Section 3.3, and their basic pattern formation properties are demonstrated in Section 3.4. Finally, we conclude this introductory Chapter with a different kind of argument for the use of the neural field model: the many applications it has found in interpreting neuroscientific data and the useful hypotheses it has generated about the functional organisation of cortex (Section 3.5).

3.1 Single neuron

Here we present some classical models for the electrodynamics of a single neuron. It is abstraction from these that will lead us later, in Section 3.2, to develop models for neuronal populations. We discuss a realistic model of the output spiking dynamics (Section 3.1.1), a commonly used simplified one (the integrate-and-fire neuron, Section 3.1.2), a model of the synaptic action (Sec-

tion 3.1.3) and of the dendritic dynamics (Section 3.1.4).

3.1.1 Action potential generation


Due to active channel transport there are different ion concentrations in and out of the cell, as described in Section 2.2.2. The impermeable cell membrane becomes a capacitor. If an input current I(t) is injected into the cell it will either add further charge to the capacitor or leak through the ion channels:

    I(t) = I_cap(t) + \sum_k I_k(t),

where the sum runs over the different types of ion channels. Differentiating the definition of capacitance, C = Q/V, where Q is the charge and V the voltage across the capacitor, we find the charging current I_cap = C dV/dt. Hence

    C dV/dt = - \sum_k I_k(t) + I(t).        (3.1.1)

This is the standard dynamical system, based on conservation of electric charge, that is used to model a section of axonal length. It also describes an isopotential cell (a point neuron), where I(t) will be the sum of synaptic currents.
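Before adding the voltage-dependent channels of the Hodgkin-Huxley model below, it may help to see equation (3.1.1) in action for the simplest case of a single passive leak channel. The following sketch (illustrative only and not part of the Thesis codes; the leak parameters and current pulse are assumed values) integrates the point-neuron equation with the forward Euler method.

    import numpy as np

    # Passive point neuron: C dV/dt = -g_L (V - V_L) + I(t)
    C, g_L, V_L = 1.0, 0.3, -65.0                  # uF/cm^2, mS/cm^2, mV (assumed values)

    dt = 0.01                                      # time step in ms
    t = np.arange(0.0, 100.0, dt)                  # 100 ms of simulated time
    I = np.where((t > 20) & (t < 60), 5.0, 0.0)    # 5 uA/cm^2 current pulse (assumed)

    V = np.empty_like(t)
    V[0] = V_L                                     # start from rest
    for i in range(len(t) - 1):
        dVdt = (-g_L * (V[i] - V_L) + I[i]) / C
        V[i + 1] = V[i] + dt * dVdt                # forward Euler step

    print(f"peak depolarisation: {V.max():.1f} mV")

With only the leak current the response is a simple exponential relaxation, with time constant C/g_L, towards V_L + I/g_L during the pulse; the action potential appears only once the voltage-dependent Na+ and K+ conductances described next are added.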

HodgkinHuxley neuron To explain their measurements from the squid giant axon Hodgkin and Huxley had to hypothesise [100] three types of channels sodium (Na+ ), potassium (K+ ) and a membrane leakage. These are sufcient to generate an actionpotential, the large soliton-like excursion of membrane voltage that constitutes a nerve impulse, and to t very accurately the data. All channels can be characterised by their resistance R or, equivalently, conductance g = 1/R. The conductance of sodium and potassium channels is time- and voltage-dependent, with maximum values gNa and gK . The leakage channel groups unknown channel types and passive ion pores and can be described by a voltage-independent conductance gL . If all channels are open, they transmit current at their maximum conductance, normally however some of the channels are blocked. The probability of channels being open is described by the additional gating variables m, n, h, taking values between 0 and 1 and approaching asymptotic values 22

m (V ), n (V ), h (V ) with time constants m (V ), n (V ), h (V ) respectively. They can be specied by the differential equations dX = X (V ) X, X = m, n, h. (3.1.2) dt The combined action of m and h controls the Na+ channels, and n the K+ chanX (V ) nels. By using Ohms law one can write the three membrane current components as

Ik = gNa m3 h(V VNa ) + gK n4 (V VK ) + gL (V VL ),


k

(3.1.3)

where VNa , VK , VL are the effective reversal potentials. At the reversal potential for given ionic species the electric forces exactly balance its cross-membrane concentration gradient. When the membrane voltage goes through that value ions reverse from inow to outow or vice-versa. The surprising form of the gating variable terms can be interpreted physically by assuming that K pumps can transport only packets of four K atoms at a time, and Na pumps packets of three Na atoms if they are not inactivated by a single control molecule from a pool with concentration described by 1 h. The six functions X (V ) and X (V ) are obtained from ts with experimental data and have been plotted on 1 , X (V ) + X (V ) X (V ) X (V ) + X (V ) Figure 3.1. They are typically written in the form X (V ) = with m (V ) = 0.1(V + 40) , 1 exp[0.1(V + 40)] 0.01(V + 55) , n (V ) = 1 exp[0.1(V + 55)] 1 h (V ) = , 1 + exp[0.1(V + 35)] h (V ) = 0.07 exp[0.05(V + 65)], m (V ) = 4.0 exp[0.0556(V + 65)], n (V ) = 0.125 exp[0.0125(V + 65)]. X (V ) =

The remaining constants are C = 1F cm2 , gL = 0.3mmho cm2 , gK = 36mmho cm2 , gNa = 120mmho cm2 , VL = 54.402mV, VK = 77mV and rents in A per cm2 ). Dynamics of the HodgkinHuxley model We see from Figure 3.1 that with rising membrane voltage, the asymptotic values m and n increase whereas h decreases. Thus, if there is some positive 23 VNa = 50mV. (All potentials are measured in mV, all times in ms and all cur-

10

0.5

0 -65 -15 35

0 -65 -15 35

8(mV)

(mV)

Figure 3.1: Asymptotic values (left) and time constants (right) for the three gating variables
m, n, h in the HodgkinHuxley model (3.1.1), (3.1.3). From [67].
0.16
40

f
0.12

0 V -40

0.08 0.04
-80 0 20 40 60 t 80 100

0 0 20 40 60

I 80

100

Figure 3.2: Left: Regular spike train generated by the HH neuron during a constant stimulus.
Right: The frequency of spikes (spikes per millisecond) as the stimulus strength is increased. HH is a type II model.

external input (excitatory synaptic current, signal arriving from neighbouring axonal region, or intracellular current injected by the experimentalist) the conductance of Na channels increases, positive Na+ ions ow into the cell and raise the membrane potential even further. At high values of V the process is shut down because the inactivation variable 1 h becomes large. However the time constant of h is always larger than that of m, so there is enough time for the volt-

age to raise signicantly, what is called an action potential (typical amplitude of 100mV as membrane potential reaches +35mV). As the further inow of Na channels is blocked, the outow of positive K + ions is activated by n evolving on a similar slow time scale. This brings down V back to negative values. An action potential is followed by a longer refractory period with membrane potential lower than the resting value of 65mV until the equilibrium concentrations of both Na+ and K+ are gradually recovered by the ion channels and

passive pores grouped in the leak current. The periodic spike train generated

24

subcritical Hopf

supercritical Hopf

Figure 3.3: Bifurcation diagram of the HodgkinHuxley (left) and the MorrisLecar (right)
models with regards to the external drive I0 . Black circles show amplitude of stable limit cycle, open circles indicate unstable limit cycle. Thick line shows stable xed point, thin line shows unstable xed point.

by a constant external input I (t) = I0 stimulus is shown on Figure 3.2, left. On the right is plotted its frequency in spikes per ms as I0 is varied. One can perform numerical bifurcation analysis of the dynamical system specifying the model and plot its bifurcation diagram, Figure 3.3, left. The action potential is stability to a limit cycle with nite frequency of about 55Hz (number of action potentials per second). We note here that one can use observations from Figure 3.1 to reduce the fourdimensional HH model to a system of two variables, without losing much accuracy. Namely, since the time constant m is much smaller than the others, one could replace the gating variable m(t) by its asymptotic value m (V (t)). This quasi steady-state approximation has also physical grounds since the Na+ activation generally responds more rapidly than the membrane capacitative time constant C which sets the limit to how rapidly the membrane potential can change in response to an applied current [101]. Further, one could note the fact that stants n (V ) and h (V ), and attempt to construct a linear mapping between n the graphs of n (V ) and 1 h (V ) are quite similar, as well as the time cona manifestation of a Hopf bifurcation at I0 6A/cm2 , the steady state loses

and h. The advantage of the two-dimensional model is that its dynamics can be studied analytically. The mathematical details of the reduction and subsequent model analysis can be found for example in [102].

25

MorisLecar model Neuron models which, as stimulus strength is increased, generate spike trains by a discontinuous jump in frequency from zero to a nite value (Figure 3.2, right) are called type II models. In that case the action potentials appear as a result of Hopf bifurcation from a stable xed point (Figure 3.3, left). Hodgkin Huxley was modelled on the giant axon of the squid and is of Type II. However neurons in the mammalian brain typically show smooth transition of frequency from zero as the driving current is increased. Models with such property are called type I. A simple type I model of two variables is the MorrisLecar neuron [103]. It was initially developed for the barnacle giant muscle bre which seemed to have only two important membrane conductances, K+ and Ca2+ , yet demonstrated a wide range of dynamics. It has the form dV = I gL (V VL ) gK w(V VK ) gCa m (V )(V VCa ) I + f 1 (V, w), dt dw = w (V )(w (V ) w) f 2 (V, w) dt with w (V ) = cosh V V1 , 2V2 w (V ) = 0.5 1 + tanh V V3 V4 . V V1 V2 ,

m (V ) = 0.5 1 + tanh

Here w represents the fraction of K+ channels open and evolves with voltagedependent rate w (V ) to an asymptotic value w (V ). The Ca2+ channels respond to V so rapidly that we assume instantaneous activation and specify them simply by m (V ). As before, gL is the leakage conductance, gK , gCa are the maximum potassium and calcium conductances and VL , VK , VCa are corresponding reversal potentials. The gating functions w (V ) and m (V ) are described by sigmoid functions derived from the experimental data and similar to the shapes seen on Figure 3.2 for HudgkinHuxley. The constants are given by VL = 0.5, VK = 0.7, VCa = 1, gL = 0.5, gK = 2, gCa = 1.33, = 1/3, V1 = 0.1, V3 = 0.145, V3 = 0.01 and V4 = 0.15. Note that here neither of the voltage-dependent conductances is ever inactivated (in HH the inactivating variable is h). The voltage dynamics depends entirely on the relative strength of gCa compared to the other two conductances, 26

gL and gK . Despite this simple setup the MorrisLecar model is able to produce a wide range of behaviour, including both type I and II (analysis of the phase portrait reveals that both nullclines are nonlinear and can intersect in a variety of ways). Because of this, MorrisLecar is often used also as a phenomenological model for cortical neurons. There are very few types of dynamics seen in cortex that necessitate a model of higher dimension. For example, chaotic spiking or bursting (illustrated on Figure 2.2, right, IB) cannot be generated by a phase-plane system. A type I scenario is realised through a saddle-node on a limit cycle bifurcation [104]. It is illustrated in Figure 3.3, right, by a bifurcation diagram for the MorrisLecar model.

3.1.2 Integrate-and-re neuron


In the preceding section we presented simple examples of conductance-based neuron models. Such detailed models can reproduce electrophysiological measurements to a high degree of accuracy. However due to their intrinsic complexity they are difcult to analyse, especially when used as building blocks of neuronal networks. In such cases, one prefers to replace them with very simple phenomenological spiking neuron models. These go back to the vision of neurons from Section 2.2.1 as simple integrators and nerve impulse emitting devices. Spikes are generated whenever the membrane potential V (t) crosses some threshold h from below. They are stereotyped events fully characterised by their ring times. The classical example of spiking neuron model is the integrate-and-re (IF) neuron. The subthreshold regime is governed by a current balance equation similar to (3.1.1) C dV = F (V ) + I ( t ), dt Tm < t < Tm+1 . (3.1.4)

Here F (V ) replaces the ionic conductances of previous models with some predened dynamics. If it is a linear function F (V ) = V, then the model is called leaky integrate-and-re. The spiking time Tm is determined when the membrane potential reaches a threshold h, instantaneously resetting to a new value V0 < h, Tm = inf{t | V (t) h ; t Tm1 }, 27
t Tm ; t> Tm

lim

V (t) = V0 .

(3.1.5)

V oltage (mV)

200

400

600 Time (ms)

800

1000

Figure 3.4: Example of voltage evolution of an integrate-and-re neuron. In this case the
external input terms were modelled as both excitatory and inhibitory synaptic conductances. The resting potential is -65mV, the spike threshold -54mV and the resetting voltage -70mV. The inputs to the synapses were generated by Poissonian processes. Nine resets after a spike are visible.

One might also introduce articially a refractory period abs , interrupting the dynamics after a spike and restarting the integration from V0 at time Tm + abs . Suppose that a leaky IF neuron is driven by a constant input current I (t) = I0 . The full time course of the membrane potential after a spike at time Tm is easily determined by integrating Eq. (3.1.4). Namely, it is V (t) = I0 + (V0 I0 ) exp t Tm C .

It approaches the asymptotic value I0 for t . Thus, for I0 h no further spikes can occur. For I0 > h the next spike will occur after a time T := Tm+1 Tm = C ln I0 V0 . I0 h (3.1.6)

At Tm+1 the membrane potential is again reset to V0 and the integration process starts again. Thus, if the stimulus I0 remains constant the IF neuron will re regularly with period T. For the model with absolute refractory period the 28

ring period will be T + abs . For a time-dependent stimulus the formula for the membrane potential can be generalised to V (t) = V0 exp t Tm C

1 C

t Tm 0

exp

s I (t s) ds. C

Figure 3.4 has been calculated using a similar scheme. It shows the subthreshold voltage evolution for an IF model integrating both excitatory and inhibitory input Poisson trains of spikes (in this case the external input was in the form of synaptic conductances gsyn (V (t) Vreversal (t))). Nine spikes are clearly visible. Another often used form of (3.1.4) is the quadratic integrate-and-re neuron C dV = a0 (V Vrest )(V Vc ) + I (t), dt Tm < t < Tm+1 ,

with parameters a0 > 0 and Vrest < Vc . For I = 0 and initial condition V < Vc the voltage decays to the resting potential Vrest . For V > Vc , it increases until the threshold. The parameter Vc can be interpreted therefore as the critical voltage for initiation of action potential by a short current pulse. It is possible to make careful quantitative reduction of the biologically realistic HodgkinHuxley model to a quadratic IF model or other extensions of the IF concept (for example spike response models [67] which we do not discuss here). This allows one to establish a mapping between the parameters of the more formal spiking neuron and those of the HH neuron, and to ask how much the dynamics is altered by the replacement. It turns out that not only the ring rates compare well, but also the coincidence between the timings of individual spikes can be quite high. Depending on the input properties it could reach up to 70-90% matching spike timings as shown on Figure 3.5; for more details see [67] and [105]. It becomes important to validate whether the parameter regimes displaying high coincidence rate are the biologically relevant ones. In another important study [106] the authors subjected an extended IF neuron (with additional properties involving adaptation) to the same in-vivo-like input as an isolated biological neuron, more precisely a pyramidal cell from rat neocortex. They showed that the response of both had the same statistical properties. Keat et al. [107] observed that the responses of in-vivo neurons from the early visual system (retina and LGN) when the anesthesised animal is presented with a natural stimulus, are reproduced by IF neurons spike by spike, close to the natural neurons own reliability. That is, the mismatch between articial and biological 29

voltage (mV) coincidence rates time (ms)

input amplitude

Figure 3.5: Left: The output of a spike response model (dashed) compared that of Hodgkin
Huxley model (solid). Right: Measure of the coincidence of model spikes with those of HH depending on the input intensity for a quadratic IF (solid), spike response model (dashed), a HH with articial threshold (dotted). Value of 1 implies that the output is identical, 0 implies that any coincidences can be explained by chance. From [67].

neuron approaches the variability of response of the latter to the same stimulus from trial to trial. Brette and Gerstner [108] introduced a two-variable adaptive IF neuron that has spike matching rate upwards from 95% for all input intensities (compare with Figure 3.5). They also developed a systematic procedure for determining the model parameters by experimental recordings from a real neuron. Thus today it is convincing that the reduced IF models can reliably be used instead of conductance-based ones when only the spike-timings are important, as in synaptically coupled neuronal networks.

3.1.3 A simple synapse


At a synapse, presynaptic ring results in the release of neurotransmitters that cause a change in the membrane conductance of the postsynaptic neuron. This postsynaptic current may be written in the same way as the ionic conductances described in Section 3.1.1 as Is = gs s(Vs V ). (3.1.7)

V is the voltage of the postsynaptic neuron, Vs is the reversal potential for the synaptic channels and gs is a constant. The variable s corresponds to the probability that a synaptic receptor channel is in an open conducting state. This 30

probability depends on the presence and concentration of neurotransmitter released by the presynaptic neuron. The sign of Vs relative to the resting potential determines whether the synapse is excitatory (Vs > Vrest ) or inhibitory (Vs < Vrest ). The release of neurotransmitter, its diffusion, and kinetics of binding to the receptor sites is a complex process. A summary of detailed biochemical models can be found in [109]. However, most commonly it is approximated by a simple xed shape of the time course of postsynaptic response triggered by an isolated presynaptic action potential. This time course, also known as post-synaptic potential (PSP), can be measured experimentally, as well as derived from the detailed models. Its most important feature is a rise to a peak at some characteristic time 1/ followed by a slower decay to zero. This reects mainly the fact that after release neurotransmitters have to diffuse across the synaptic cleft. The decay is caused by another biophysical mechanism which reabsorbs free neurotransmitter back into the synapse. If the response shape is some (t) ( (t) = 0, t < 0), we can write the dynamics of the postsynaptic current Is (t) as a linear superposition of such shapes, one for each presynaptic spike at time T m , Is (t) =

( t T m d ).
m

T m are the times of non-linear threshold crossing at the presynaptic soma, and d the communication delay introduced by the axon. In a real synapse, spikes that are coming close together will not be summed linearly due to a depletion of stored neurotransmitter as well as involvement of various secondary biochemical processes leading to synaptic modulation and short- or long-term adaptation. We will overlook these effects as they require far more complicated modelling. The shape of synaptic response (t) (PSP) is often approximated by a simple exponential decay (t) = exp(t) H (t), (3.1.8)

of being the Greens function of a rst order differential operator 1 + t /. The dynamics of Is (t) can be expressed simply in the form 1 Is = Is + (t T m d ), m 31 Is (0) = 0.

(H = 1, t 0, H = 0, t < 0 is the Heaviside function) which has the advantage

However this approximation ignores the characteristic delayed peak and introduces discontinuity at t = 0. A better choice is to dene (t) as a difference of exponentials (t) = 1 1
1

[exp(t) exp( t)] H (t),

where and are, respectively, the inverse rise and fall times of the synapse response. This shape matches quite precisely experimental observations. In this case Is (t) can be evolved by the equation Is + ( + ) Is = Is + (t T m d ),
m

Is (0) = 0,

Is (0) = .

In this Thesis we will use the latter form of (t) with which can be written as (t) = 2 t exp(t) H (t). (3.1.9)

It is known as an alpha function and it is the Greens function of the operator

(1 + t /)2 . It is plotted on Figure 3.7, left.


For all specications above we chose normalisation such that
0

(t) dt = 1.

3.1.4 Cable theory for the dendrite


Axons and dendrites are thin tubes of cellular membrane, often idealised as cylinders. What is important conceptually is that for short lengths the resistance to electric current ow across the membrane is much higher than the resistance along the interior core or the surrounding medium. Therefore the current inside tends to ow parallel to the cylinder axis for a considerable distance before a signicant fraction leaks out. This brings about the similarity between dendrites and electric cables, both are sometimes referred to as core conductors. A mathematical abstraction for a passive dendrite (no signicant number of active ion pumps in the membrane) leads us to the cable equation generally describing a 1D core conductor, V V 2 V = + D 2 + I (z, t). t D z 32 (3.1.10)

Here V represents the voltage difference across the membrane as a deviation from its resting value Er , e.g.V = Vi Ve Er , the spatial variable z is the distance along the axis of dendritic cylinder, I (z, t) is some driving input. The derivation of this equation is discussed in much detail in [110, 111]. The formulation (3.1.10) incorporates the neurobiological parameters for space () and time (D ), with a diffusion constant given as D = 2 /D . These represent respectively the length and time over which membrane potential decays an order of magnitude. In terms of the electrical parameters of the dendrite and its diameter d, they are expressed as D = Rm Cm , = dRm /(4Ri ). The electrical parameters are Ri (resistivity of the intracellular uid length), Rm (resistivarea). Plausible values for these parameters gained from physiological experiments [112, 113] are as follows: Ri between 170 and 340cm, Rm between 100 and 200 kcm2 , Cm between 0.7 and 2 F/cm2 . The boundary condition at a cable end could be set as a sealed end, that is there is no core current at that point, V (z1 , t)/z = 0. This is suitable for the tip of the dendrite. A voltage clamp boundary condition is formally written as V (z1 , t) = VC . Choosing V (z1 , t) = 0 means that the membrane potential difference Ve Vi is clamped to its resting value Er . The boundary condition at a soma end is 1 V V (0, t) = Csoma (0, t) + V (0, t) Gsoma . ri z t In this case the core resistance ri = 4Ri /(d2 ) takes account of the dendritic diameter. The left-hand side is the current leaving the dendrite, it has to be equal to the current change in the soma. Csoma and Gsoma are the overall soma membrane capacitance and conductance. If several dendrites branch from the soma, the left-hand side would contain the sum of their currents. When two or more cables are connected at a dendritic branch point, the boundary conditions are chosen to full a balance of currents (Kirchoffs law) in a similar way. Breaking down dendrites in sections, each described by a cable equation with boundary conditions matched in that way, one can build compartmental models of entire dendritic trees [114]. Each compartment can have different electrical constants gleaned from detailed physiological measurement, active (nonlinear) properties can also be added. Numerical and analytical methods for solving such systems of cable equations are today quite advanced and efcient (see for 33

ity of the dendritic membrane area) and C (membrane capacitance per unit

example [115]) and allow for quite realistic simulation of dendritic trees.

Semi-innite cable Of all the dynamics effects introduced by the dendrites, in this Thesis we are concerned primarily with the delay effect. It is the most important dendritic effect that remains as cells are averaged together in neuronal populations. This motivates us to pose the model in way that simplies the mathematical analysis, possibly losing most of the computational capabilities of the dendritic tree but preserving the delay effect with its diffusive nature. Namely, we consider rst it might seem as too gross a simplication, however it is possible to link in a mathematical way models of branched dendritic trees to a model with a single cable. Rall and Agmon-Snir [110] list the conditions on the tree geometries for which this can be done. Further, although dendrites are of course nite, it can be noted that when the cable termination is more than four times distant from a point of observation, there is negligible difference from the semi-innite case. The boundary condition for the innite end is boundedness,
z

equation (3.1.10) over a single uniform semi-innite cable, thus, z, t R + . At

lim |V (z, t)| < .

Solution to the linear cable equations are functions of both position and time. However, if the current being injected I (z, t) is held constant in time, the membrane potential settles to a steady-state solution. It would satisfy the timeindependent version of (3.1.10). Assume that instead of external current injection, a voltage clamp maintains V = V0 at the soma end. Then the problem is formally D d2 V V = 0, 2 D dz V (0) = V0 , lim |V (z)| < ,

with easily obtained solution V (z) = V0 exp(z/). In other words, in an innite uniform cable the voltage decreases exponentially with distance from the injection site, with a decay constant as expected from the denitions. To solve the time-dependent equation it is best to use the method of Greens function. The Greens function E(z, t, z0 , t0 ) is essentially the time-dependent response of a cable at rest to an instantaneous pulse of current injected at point 34

press the solution to this latter problem, taking t0 = 0. The initial condition is V (z, 0) = 0. We will simplify the soma boundary condition to a sealed end. The equation is D E 2 E E = ( z z0 ) ( t ), t D z2 d2 E dz2 E z Its solution is E(z, ) = 1 e( )(zz0 ) , D( ) 2 ( ) = 1 + iD . DD
z =0

z = z0 and time t = t0 , that is, I (z, t) = (z z0 )(t t0 ). We will now ex-

z R+ , t R

and taking the Fourier transform in time, we get the ODE 1 + iD DD E= 1 ( z z0 ), D

= 0,

lim | E| < .

The parameter ( ) is often called propagation constant of the cable. Finally, by applying inverse Fourier transform we get the solution E(z, t) = 1 4Dt et/D e(zz0 )
2 /4Dt

describing the activity prole along the cable at a time t after the injection at z0 of instantaneous current pulse with norm one. It is positive, symmetric about z0 and satises
R

E(z, t) dz = et/D , E(z, 0) = (z z0 ), E(z, t) = 0, z = z0 , t = 0.

2 1 D 2 t D z

This is the Greens function for the innite cable equation. Its form allows us to use the notation E(z, t, z0 , t0 ) = E(z z0 , t t0 , 0, 0) = E(z z0 , t t0 ), with E(z, t) = 1 4Dt et/D ez
2 /4Dt

H ( z ) H ( t ),

(3.1.11)

In that notation, the dynamical response to arbitrary input I (z, t) and initial potential V (z, 0) is obtained by convolving them with E(z, t):
t 0

V (z, t) =

E(z z , t t ) I (z , t ) + V (z , t )(t ) dt dz .

The second term in the brackets gives the evolution of the initial condition. 35

3.2

From spiking networks to rate models

Ultimately, we are interested in the properties of large networks of neurons. Many authors have investigated numerically and analytically the dynamics of arrays of interconnected integrate-and-re neurons (IF, Section 3.1.2) a short review is available in [116]. There is also some work with large scale simulations of IF populations with detailed anatomically based structure [117, 118, 119, 120]. These have strived to show that experimentally observed functionality of a specic cortical region may arise from its known anatomical and physiological architecture. A number of interesting phenomena have been noted in IF networks with a spatial topology, such as synchronisation into bursts [121, 122], segregation of neurons with higher and lower frequency of spiking (neuronal activity) into travelling or spiral waves [123, 124], rings [125] or solitary patches of active neurons [126]. These are the same types of phenomena that we will be looking at in neural elds. A suitable model composed of IF neurons can be written as follows. Let an array of identical IF neurons with intrinsic dynamics F (V ) as in (3.1.4), be labelled by the indices i = 1, . . . , N such that neuron j projects an axon to neuron i. Then, dVi = F (Vi ) + Xi (t), dt Xi (t) = I + Wij J (t Tjm ),
j =1 m N

(3.2.1)

where Tjm is the mth ring time of the jth neuron, Wij is the strength of the synapse from neuron j to neuron i (set as zero if there is no connection) and tion 3.1.3). Note that we can include synaptic, dendritic and axonal effects by appropriate choices of the function J (t). For example, using the theory from Sections 3.1.3 and 3.1.4, we can write J (t) = 2
t 0

the function m J (t Tjm ) describes the shape of the resulting PSP (see Sec-

E(zij , t t ) (t Tjm d) dz dt .
m

Recall that (t) is the standard PSP shape, d is an axonal conductance delay (which, if we prescribe spatial topology to the network, can also be made spacedependent), E(z, t) is the Greens function of the semi-innite cable equation (3.1.10), and zij species the position of the synapse from neuron j along is dendrite. 36

3.2.1 Derivations for the ring rate model


While it is possible to analyse uniform IF networks mathematically and to study more complicated networks numerically, there is another paradigm for modelling of large ensembles of neurons which can provide more simple and elegant understanding of some of their properties. It is the paradigm of population dynamics rather than interconnected neuron dynamics, and ring rates rather then individual spike timings. In this paradigm it is possible to relate some of the self-organising behaviour observed in biological preparations and in network simulations to very simple principles well known from Nonlinear Physics. Now we make the last reduction and approximation of our description of neurons, with the aim to dene models operating in that paradigm, namely, neural elds. In the system (3.2.1) we would like to cast away the internal voltage dynamics. We are interested instead in the neuronal mean ring rates and would like to construct a simpler relationship between the known ring rates of the presynaptic neurons j (t) and a neurons output ring rate i (t) which are dened as some moving average over the spike count, i (t) = 1 r
t

t r m

(t Tim ) dt .

We apply this time averaging to both sides of (3.2.1). For the synaptic term we obtain 1 r
t

t r m

J (t Tjm ) dt =
1 r

t r
m

J (s)(t Tjm s) ds

dt =

J (s)j (t s) ds.

For an alpha function and no dendritic processing the averaged stimulus input becomes
N 0

< Xi (t)> = I + Wij


j =1

2 (t )et j (t t d) dt

(3.2.2)

Next, we have to assume that the synapses in the network are slow (although they still sum input linearly). In that case we pick a PSP shape (t), for example the alpha function (3.1.9), with a synaptic rise time 1/ signicantly longer than all other time scales in the model. Since the neuronal dynamics is now much 37

faster than that of X (t), the neurons ring rate will quickly relax to its steady state value. Then, we can use our calculation of the steady state interspike interval for constant input (3.1.6). The ring rate at any time t is fully specied as i (t) = ln Xi (t) V0 Xi ( t ) h
1

H ( Xi (t) h) =: f ( Xi (t)),

where H (t) is the Heaviside function. The function f ( X ) is usually referred to as ring rate function, or input-output transfer function. It gives us a direct relationship between the strength of input to the neuron and the momentous frequency of its spike output response. The assumed slow evolution of the synaptic input Xi (t) also allows us to drop the averaging brackets on the lefthand side of (3.2.2). With the denition of a ring rate function we have obtained a closed system of nonlinear equations for the synaptic currents Xi (t). By using the Greens function property of the alpha function (Section 3.1.3) we can write it out as a system of differential equations 1 d 1+ dt
2

Xi (t) = Wij f ( X j (t d)).


j =1

(3.2.3)

This system describe an analog or ring rate model. Here we presented the timeaveraging argument for its justication developed by [116, 127, 128]. The main necessary assumption is slow synaptic dynamics. The slow synaptic response essentially is the mechanism to average over the incoming spikes. There are several more approaches to deriving a rate model, each requiring somewhat different restrictions on the spiking network. However they all arrive at essentially the same equation. It was originally introduced by Wilson and Cowan [129] who used population averaging. Suppose that the network is divided into N populations (ensembles), each with large number of densely interconnected neurons (for example, these could be the cortical columns, Section 2.2.4). The variable i (t) is interpreted as the proportion of neurons in the ith population that have red during an instant dt i.e. a mean population ring rate. Xi (t) is the mean synaptic activation of neurons in population i. Due to the high neuronal connectedness in a population, the mean Xi (t) will depend rather weakly on in-population dynamics and can be treated as a deterministic variable dened by (3.2.2). It is controlled by the ring rates j (t) bombarding 38

population i through a large number of neuron-to-neuron connections summed in the mean coupling constants Wij . The ring rate function f ( X ) converts the mean synaptic activation Xi (t) into a proportion of ring neurons i (t). Assume that the neurons in a population have a peaked distribution of some of their properties, for example the value of the ring threshold (3.1.5). Since f ( Xi ) integrates that distribution from to + it will have the characteristic sigmoidal form (Figure 3.7, right), with inection point at the distribution peak. We note here, that to obtain f ( X ) in the simple form used in (3.2.3) one has to again use some form of temporal averaging (see [129, 130]). Shriki et al. [131] have shown that slow time scale in the spiking model neurons (e.g. slow synapses) is not necessary for the validity of the general rate model (3.2.3). But it is necessary that the spiking network dynamics does not enter a state with high degree of synchrony, which would result in fast oscillations of the ring rates. Gerstner [67, 132, 133] has constructed a mathematically rigorous argument by considering identical spiking neurons with an additional noise term (intrinsic noise is ubiquitous in the brain and it is a major omission in all the single neuron models presented in this Chapter) in the limit of innite number of neurons. Assuming that neurons display no adaptation he is able to utilise the theory of renewal processes [134]. Briey, a ring rate function can be expressed by solving the rst-passage time problem for the neuronal voltage crossing the threshold (solving it in general is not possible, but one can obtain its rst moment in closed form). In [133] he shows that a population rate model in a state reecting asynchronous neurons is able to react to abrupt changes of input with fast transients. Thus, such a description can capture faithfully signal transmission performed by the population. The optimally asynchronous state (phases are distributed uniformly around the unit circle) is in fact a steady state for the rate model with the population activity being a constant over time Xi (t) = Xi0 . Other approaches for justifying a rate equation have been developed by Knight [135], Amari [136], again Knight and coworkers [137] and Brunel and Hakim [138].

39

3.2.2 Advantages and limitations of rate models


An obvious theoretical advantage of rate models is that they are simpler and easier to analyse compared to spiking networks. There is another important advantage related to applications. Real neurons are inherently stochastic and neural networks even more so. Networks embedded in an operating brain receive large amounts of background input which is often treated as independent noise, as well as, chemical and electrical disturbances from physically close neurons (which might not be part of the same network). All of the models we considered in this chapter are deterministic, they can predict spike sequences accurately if their input is known. However this is unlikely to be the case in most practical studies and the benet from a deterministic model becomes unclear. One can add stochastic terms to a spiking model and compare means with averaged trial-to-trial data. There is much theoretical work on stochastic spiking models [67, 111]. However, in a rate model this averaging is already done the equation variables are the means of neuronal activity (spike rates or synaptic conductances). It is much more straightforward to compare these with experimental results. Another practical advantage is that we have averaged not only the dynamics but the network itself. A spiking neuron network contains many parameters with physiologically unknown values, including the connections between individual neurons. A common strategy is to draw these parameters from some random distribution but it might not be clear which distribution would mimic most closely the real system. A rate model, specically one in which a unit represents a population has far fewer parameters to constrain. These can often be set phenomenologically by comparing with some anatomical macromeasurements. By their nature, rate models cannot account for effects of spike timing and spike correlation, aspects that are important for understanding the function in at least some parts of the nervous system. A disadvantage is that ring rate models can be mathematically linked to the microscopic spiking dynamics only for some quite restrictive assumptions. In many situations their use remains phenomenological. Importantly, current derivations require uncorrelated and independent presynaptic inputs. As a consequence rate description breaks down

40

G(z,t) w(|x-y|)f(u(y,t|x-y| v z

))

u(x,t) y f(u(y,t)) w(|x-y|) x

f(u(x,t))

Figure 3.6: Components of the common neural eld model. The output of neurons at x
is f (u( x, t)) which arrives at y as a weighted and delayed input w( x y) f (u( x, t + d)). It is then being summed into the overall activity u(y, t). processed by the synaptic lter (t) and the dendritic lter E(z, t) (Sections 3.1.3 and 3.1.4),

if synchronicity in the population increases. Also, neurons have to be assumed non-adaptive although this is not true for the vast majority. This precludes modelling of Hebbian learning. Adaptation is included instead in an ad hoc manner at a macroscopic level, as we will see in Section 5.3. Another necessary assumption is either very large population sizes of functionally equivalent neurons or slow synapses. Direct computer comparisons [133, 139] between spiking and rate descriptions of the same system have shown that the dynamics are faithfully reproduced, with the restrictions outlined above. Some of these comparisons [131, 140] have involved realistic HudgkinHuxley type models instead of simplied spiking neurons.

3.3

Neural eld models

We need to make one more transformation to get to the equations we are interested in in this Thesis. Instead of considering a population rate model with N or N discretely numbered populations as in (3.2.3) (in the Wilson-Cowan interpretation of population averaging), we replace the discrete topology with a continuous one, the real line R or the plane R2 . To each point x is assigned a population of neurons with innitely many neurons. The connectivity weights 41

1/

w
-1 0

x-y

Figure 3.7: Left: The alpha function response curve (3.1.9) of a 2nd order synapse. Right: Plot
of the sigmoidal ring rate (3.3.4). Bottom: Inverted wizard hat function (3.3.5) describing the coupling between the neuron at position zero and its neighbours.

Wij are replaced by a continuous coupling kernel w( x, y) and the presynaptic inputs sum over j = 1, . . . , N becomes an integral. We write the mean population activity Xi (t) as u( x, t), and from now on we will omit references to its exact physiological meaning in order to let it take any real values. Led by the results from [133] (see end of Section 3.2.1) we postulate a homogenous steady state u( x, t) = u to signify the asynchronous background activity of the network. For example, if u = 0 then negative values of u represent suppression of the natural level of activity. In effect, the model is formulated for the deviations of the population activity from its normal physiological state. We obtain a neural eld model (Figure 3.6) 1+ 1 t
2

u( x, t) =

w( x, y) f (u(y, t d)) dy.

(3.3.1)

The interpretation of the ring rate function f (u) remains the same as in Section 3.2.1. If we consider the conductance delay to depend linearly on the distance between neuronal populations (i.e. all axons take paths of similar shape), we can simply introduce space-dependent delay by setting d = | x y|/v where v is the axon conduction velocity. The differential on the left-hand side as in

(3.2.3) represents the synaptic dynamics and depends on our choice of PSP shape 3.1.3 (in this case alpha function). There is no problem to incorporate 42

also dendritic dynamics, we do this in Section 4.2. In the following Sections we give some important examples of neural eld models and illustrate the methods for nding interesting inhomogeneous solutions that we will use in other Chapters.

3.3.1 Two-population models and Mexican hats


Wilson and Cowan in their original paper [130] considered two types of populations coupled together. In most parts of the brain, it is possible to divide neurons into excitatory and inhibitory depending on the effect of their output on other neurons. The former form predominantly excitatory connections with their postsynaptic neurons, the latter predominantly inhibitory connections. In neural eld theory it is accepted to label the neuronal types that are distinguished in the model as populations. This terminology might be slightly confusing but we will not refer anymore to the neuronal populations we considered to this point, as we have now collapsed them into points. Thus, Wilson and Cowan formulated a two-population model, which in our notation can be written as 1 u E ( x, t) = u E ( x, t)+ E t
R

(3.3.2)
R

1 u I ( x, t) = u I ( x, t)+ I t
R

wEE ( x y) f E (u E (y, t)) dy

w IE ( x y) f I (u I (y, t)) dy, (3.3.3)

wEI ( x y) f E (u E (y, t)) dy

w I I ( x y) f I (u I (y, t)) dy.

We have chosen exponential synapses (3.1.8) and set the connectivity strength to depend only on the distance between neurons | x y|, i.e. it is translationally invariant. Within a cortical sheet more distant points are connected by fewer axons, that is why a common modelling choice is wab ( x ) = e| x|/ab , 2ab a, b { E, I }

with some spread parameter ab > 0. We know from Section 3.2.1 that the ring rate function should have a sigmoidal shape. A convenient choice is f (u) = 1 1 + e ( u h ) 43 . (3.3.4)

plotted on Figure 3.7, right. The parameter h is referred to as a ring rate threshold, while denes its steepness (the slope at the inection point is /4). The system (3.3.2),(3.3.3) is suitable for analysis and it has been studied for example by [141, 142, 143]. One could consider also a single population of identical neurons, where excitation and inhibition are encoded within the connectivity kernel w( x ). Equation such a one-population model. If the spread of the excitatory connections is smaller than that of the inhibitory, EE < EI , the connectivity is called Mexican hat due to its characteristic shape. In the case, EE > EI we will refer to it as inverted Mexican hat. In Figure 3.7, bottom, is shown a function with the same properties, inverted (for w0 = +1) wizard hat, dened as w( x ) = w0 (1 | x |)e| x| , w0 = 1. (3.3.5) (3.3.1) with a difference of exponentials w( x, y) = wEE ( x y) wEI ( x y) is

We will work with it through most of this Thesis. In some modelling studies it is appropriate to choose short-range excitation and long-range inhibition (w0 < 0), for example within a cortical column, see Section 3.5. When modelling the cortical sheet however, one needs to consider an inverted wizard hat with local inhibition and distal excitation. This is a case that has been explored relatively little (see Section 4.1) and we will concentrate on it in Chapters 4, 5 and 6. We expect that a single-population model will have poorer dynamics than a twopopulation one, however it is more convenient to work with due to the lower dimensionality. In fact, it is one of the purposes of this Thesis to show that with some extensions of the single-population model reecting important cortical features, one obtains all the richness of solutions previously reserved for the two-population neural elds.

3.4

Pattern formation in neural eld models

At the beginning of Section 3.2 we mentioned that large arrays of integrate-andre neurons (with connections of the type (3.3.5)) can display dynamics such as, travelling waves, spirals, rings and spots. In the continuum framework of neural eld description it becomes easy to employ methods of nonlinear dynamics collectively called pattern formation, and show that the observed phenomena 44

of network organisation are in fact caused by the principles known from pattern forming systems from Physics. Examples of such systems are convection patterns in a layer of uid heated from below, Faraday waves in vertically vibrated sand, some chemical reactions involving inhibitor and activator species, dislocation distributions in solids under stress. All these can be modelled as stable inhomogeneous solutions (or sometimes, long transients) of the equations describing the respective system. Importantly, while the physical realisation of such systems may differ vastly, the interesting dynamics is governed by the same universal classes of equations. Important prerequisites are nonlinearity, symmetry, spatial coupling and competition between two processes, one of which is usually inux of energy that keeps the system away from equilibrium (i.e. the system is dissipative). For more examples and background on pattern formation see [144, 145, 146]. Most physical systems are described by systems of partial differential equations (PDEs). The universal equations relevant to the patterns (more on these in Chapter 5) are also differential. One of the contributions of Theoretical Neuroscience to mathematical theory is the adapting of methods for use with integral equations of the type (3.3.1) and showing that the same universal equations hold not only for PDEs. We note here that a similar problem has been addressed in another eld as well the physics of phase transitions and solid-liquid phase separation, see for example [147]. In this Section we summarise the two most widespread mechanisms of pattern formation and illustrate their action in the simple neural eld (3.3.1). The bulk of this Thesis is devoted to revealing the same mechanisms in more biologically relevant and mathematically interesting extensions of that model.

3.4.1 Globally periodic patterns: Turing instability


A classical setup for studying pattern formation is the reactiondiffusion system of two agents [145, 146, 148]. An activator promotes the production of the agents, while the other, an inhibitor, depletes their concentrations. In the absence of diffusion there is a stable steady state at which production and depletion are balanced. However if the inhibitor diffuses faster than the activator it escapes to regions further away from where it is produced (the activator45

rich regions) and creates bands of depleted activator there. A globally periodic stable solution emerges with some intrinsic wavelength dependent on the diffusion coefcients ratio. This process is referred to as a Turing instability (due to the pioneering results in [149]). More generally, a homogeneous steady state becomes unstable to spatial perturbations with certain wavenumber k as some system parameter is varied (in the above case the inhibitor diffusion coefcient). In the neural context the two competing processes are the synaptic excitation and inhibition. In correspondence to reactiondiffusion systems, an instability is possible in the case of local excitation and distal inhibition i.e. a wizard hat connectivity (w0 < 0). Let the model u( x, t) =
R

(t s)

w( x y) f (u(y, s)) dy ds

(by using the Greens function property for the synaptic lter (t) (3.1.9), the equation is equivalent to (3.3.1) without delays) have a spatially homogenous steady state u( x, t) = u. We are using f (u) as dened in (3.3.4). The steady state satises the equation u = f (u) (s) ds w(y) dy.

The connectivity kernel (3.3.5) has balanced excitation and inhibition, that is
R

w(y) dy = 0. It can be interpreted as the microscopic excitatory and in-

hibitory activity cancelling out when averaged across macroscopic scales. In that case, from above we have u = 0. The conditions for growth of inhomogeneous solutions can be obtained through simple linear stability analysis of u we nd for which parameter values it becomes unstable to spatial perturbations of globally periodic type eikx , and what is the intrinsic wavelength 2/k of the growing patterns. Linearising the model (the ring rate f (u) is the only nonlinearity) we get u( x, t) = (t s) w( x y)u(y, s) dy ds. (3.4.1)

The order parameter for our bifurcation analysis is therefore the neuronal gain = f (0). To dene the conditions for linear stability with regards to perturbations of characteristic spatial scale eikx we make the ansatz u = et+ikx which turns the integrals in (3.4.1) into Laplace and Fourier transforms respectively. 46

We use the notation () = which gives us 1 = ()w(k ), an algebraic equation known as the dispersion relation from which we can determine the growth rates of periodic modes and their dependence on wavenumber, = (k ). Note that a neural eld without space-dependent delays typically has a structure of two convolutions (essentially, a connectivity lter and a synaptic lter). This yields the separability of space and time frequencies in the dispersion relation. The uniform steady state u is linearly stable if Re < 0 for all wavenumbers k. The transforms of the alpha function and the wizard hat are, respectively () = 1 , 2 w(k ) = 4k2 . (1 + k 2 )2
R+

(s)es ds,

w(k ) =

w(y)eiky dy

(1 + /)

Substituting and rearranging in the dispersion relation we obtain = w(k ) 1 .

Since w(k ) is non-negative the condition for stability becomes: w(k ) < 1/ for all k. The maximum of w(k ) is at k c = 1 and therefore the value c = 1/w(k c ) = 1 of the bifurcation parameter is the threshold for Turing instability. For larger 1 a band of wavenumbers would be associated with positive eigenvalues (k ); those periodic modes would grow forming the pattern. This is illustrated in Figure 3.8. Exactly at the point of instability 1 = c , the eigenfunctions associated with the positive eigenvalues are just eikc x . The solution is a linear combination of sines and cosines with period 2/k c , u( x, t) = A1 (t) cos(k c x ) + A2 (t) sin(k c x ). (3.4.2)

A physiological interpretation of the Turing instability is that as the intrinsic excitability of neurons is raised (for example by delivering drugs suppressing inhibition) a critical point can be reached beyond which macroscopic network self-organisation process takes over the microscopic dynamics. Implications of this mechanism to physiology are discussed in Section 3.5. 47

Re

Re

=0

Activity
c

1 1

>
c

-1 0

kc

Space

Time

Figure 3.8: Left: An illustration of the mechanism for a Turing instability: as c is increased a
range of wavenumbers around k c become linearly unstable. Right: The emerging Turing pattern in a simulation of the neural eld model. The wavenumber is k c = 1.

Here we sketched only the linear instability analysis of the uniform solution u. To determine the exact form of the new solution we need to go beyond linear order. This is the main topic of Chapter 5.

3.4.2 Localised solutions: bumps


Turing instability is a global mechanism for pattern formation that brings the system into spatial segregation of more and less active regions, tiling the system domain at a dened scale. A different aspect of interest is the interactions that inuence the shape of these regions locally. A major role here plays the theory of interface dynamics [150] which attempts to extract equations of motion only for the region boundaries (interfaces, fronts) from the description of the full system. Again, it is possible to adapt the methods of interface dynamics to integral equations and apply them to determine for example the evolution of localised spots of activity (bumps) in neural eld models. In a 1D system the interface curvature does not play a role and the problem is much simpler than in 2D. Here, as a way of introduction we sketch the construction and stability analysis of a 1D bump that was done rst by Amari [151]. In Chapter 7 we will deal with extensions of this model. For mathematical simplicity we take the limit of model (3.3.1) to one with innite steepness of the nonlinearity , that is we set the ring rate as a 48

u(x)

W(x)
0.3

h h
0.2

0.1

x1

x2
0 0 2 4 6 8 10

x1-x2

the wizard hat function (3.3.5), w0 = 1. The intersection points with the Heaviside threshold h give existence conditions for bumps with corresponding widths.

Figure 3.9: Left: A static bump solution with width x2 x1 . Right: The primitive W ( x) of

by observing that if the bump activity is above the threshold h in an interval

Heaviside function f (u) = H (u h). We construct a bump solution manually,

[ x1 , x2 ] the time independent steady state equation reduces to


u( x ) =
x2 x1

w( x y) dy =

x x1 x x2

w(y) dy.

This expression has to satisfy the conditions u( x1 ) = u( x2 ) = h. The integral W (x) =


x 0

w(y) dy can be readily calculated, for a wizard hat function it is

W ( x ) = w0 xe| x| , plotted in Figure 3.9, right. Note that it is an odd function and W (0) = 0. By using this notation the bump shape is simply described as u( x ) = W ( x x1 ) W ( x x2 ) while both conditions become u( x1,2 ) = W ( x2

x1 ). Since the system is translationally invariant the bump solution is entirely we can see from Figure 3.9, right, depending on the parameter h the neural eld might be able to support two types of bump with different widths. For a higher threshold bumps do not exist but note here that we obtained the condition for existence of single bumps only. It does not give us information about other inhomogeneous solutions such as a pair of bumps or N-bumps. The next natural question is to investigate the bump stability. Let u( x, t) be a solution close to the stationary bump solution u( x, t) = u( x ) + u( x, t). If 49 determined by nding its width as = x2 x1 , as a solution to W () = h. As

the perturbation u( x, t) decays everywhere with time, this would imply that the bump is stable. The time-dependent solution u( x, t) crosses the threshold h at time-dependent perturbations xi (t) = xi + xi (t), i = 1, 2 which satisfy the equations u( xi (t), t) = h for every t. Therefore we can differentiate these conditions by t to get dx u u ( xi (t), t) i (t) + ( xi (t), t) = 0. x dt t We can use the original neural eld equation (choose exponential synapse for simplicity) to replace u/t with the integral on its right-hand side, namely 1 u ( x , t ) = u ( xi , t ) + t i
x2 ( t ) x1 ( t )

w( x y) dy = i = 1, 2.

h + W ( x i x1 ) W ( x i x2 ) = h + W ( x2 x1 ),

where xi are everywhere time-dependent. Here we need to make an approximation replacing the gradient of the perturbed solution with that of the steady analysis as it is accurate to rst order. We obtain a system of ordinary differential equations (ODEs) for the coordinates of the crossing points u x ( xi ) xi = h W ( x2 x1 ), i = 1, 2. state bump u x ( xi (t), t) u x ( xi ). This does not affect the following stability

The points x1,2 are naturally the steady states of this ODE system. Summing together the two equations i = 1, 2 and using the symmetry of the steady state for the bump width: bump (so that u x ( x1 ) = u x ( x2 ) = c > 0) we end up with a simple equation c d = h W ( ). 2 dt By standard linear analysis the stability condition for is simply

W () = w() < 0

(3.4.3)

The bump is stable if its width ts in the inhibitory range of the model connectivity. On Figure 3.9 is immediately seen that the wider bump is the stable one. with speed 2w()/c. An unstable bump would either collapse to = 0 or grow in width

Amaris approach [151] sketched here is an illustration of the methods used in interface dynamics and the possibility of their application to nonlocal equations. By assuming that no new regions of activity (bumps) can appear, split 50

or merge, one can successfully describe the system only through the evolution of the region boundaries. We will consider extensions of the this work to more complicated models in Chapter 7.

3.5

Applications of neural elds

In light of the long sequence of reduction steps we had to make to set up the stage for neural eld models, studying them would have had little justication were there no evidence that they are a useful description of brain operation at some level. In this Section we provide references to experimental studies whose results seem to match well with dynamics observed in neural elds. We also discuss examples of the neural eld paradigm being used to gain insight into computation algorithms likely used by the brain. At rst it might seem rather far-fetched to try to model a neuronal network with an equation over innite domain where every point represents a neuronal ensemble of innite size. However on average per 1mm2 of cortex there are on the order of 104 105 neurons and 108 109 synapses [152], so approximating these numbers with

innity might be more plausible than with any network of size that can be simulated on an average computer. In fact in some cases it is easier to relate the experimental scale with that of neural eld models rather than realistic net-

works. When measuring macroscopic electrical activity in neural tissue (with resolution on the order of hundreds of micrometres) one can identify the coordinate x with the measurements physical positions. In relation to topographic maps of feature representation (see Section 3.5.2) in sensory and motor cortical areas, x can be identied with the feature parametrisation. Neural elds t well with the idea that the brain may be using population coding, that is information is represented not in the spike train of any single neuron but in the graded collective response of a neuronal group (see [60] and references therein; [153] for review of recent experimental evidence). Maybe the most direct link between models and experiment should be sought in controlled in-vitro experiments.


3.5.1 Travelling waves in slices


In Section 3.4 we showed that stationary localised pulses of higher activity or periodic patterning of the network domain are characteristic properties of the neural field models. Typically they appear in parameter regimes of elevated excitability. In the next Chapters we will show that under the same conditions natural extensions of the models also allow stable solutions that are non-constant in time, the simplest examples being travelling waves, moving bumps and oscillatory solutions. We mentioned that corresponding dynamics has been observed in discrete large networks of spiking neurons. In fact Roxin et al. [140] compared a continuous neural field model with a network of realistic conductance-based (Hodgkin–Huxley) neurons and showed that the parameter spaces for the types of observed dynamics corresponded qualitatively. Demonstrating such dynamics in in-vitro preparations of neural tissue was also sought by experimentalists. Travelling waves in in-vitro slices (of rat hippocampus and cortex) were first evoked in the 1980s [154, 155, 156], following up on EEG observations of drug-induced epileptiform activity in-vivo (see references in [154]). Today it is a standard technique in experimental neuroscience: usually the cortex is cut along the columnar structure to preserve maximum local connectivity, and bathed in agents that suppress inhibitory synapses. A slice normal to the cortical surface, and thus containing all cortical layers, but only 400 μm thick (on the scale of the macrocolumn diameter, Section 2.2.4), represents a 1D system well. 2D patterns can be studied in tangential slices, whose areas in different experiments could vary between a mm² and a cm². In these preparations a local stimulation by a short current pulse from an electrode can be observed to amplify and propagate with speeds of 20–100 mm/s without spreading apart and losing shape, although neither its speed nor amplitude is uniform owing to the cortical inhomogeneity (it may also experience jumps of subthreshold propagation) [155]. In some brain areas waves propagate equally well in both directions, while in others they are unidirectional, suggesting anisotropic connectivity. Typical data from single trials are shown on Figure 3.10. The measured quantity is the local field potential at an array of electrode locations. The local field potential is an extracellular variable that integrates the intensity of synaptic and dendritic


Figure 3.10: Top, left: A travelling wave is clearly visible in the traces of local field potentials in rat somatosensory cortex. Bottom, left: The same data in pseudo-continuous form. Right: Responses of a slice to two different stimulus intensities (20 μA and 80 μA), with and without disinhibition (10 μM and 0 μM PTX, respectively). The stimulus position is denoted by the small circle. From [157].

currents of all neurons in a neighbourhood of the electrode. Thus it can be readily associated with the variable Xi(t) in (3.2.3). Figure 3.11 shows more complicated dynamics that occurs spontaneously in a suitably treated slice [158, 159]. An initial epileptiform spike (top row on the left of the Figure) is followed by a lower-intensity periodic wavetrain (bottom row) with a duration of up to 200 pulses. Later, taking a tangential slice, Huang et al. [160] found that these wavetrains may be part of a rotating spiral wave (right column). More importantly for us, they performed numerical simulations of a 2D neural field with some small extensions and with realistic boundary conditions. It reproduced qualitatively all the observed phenomena. Other authors have undertaken neural field modelling hand-in-hand with experiments [161, 162], establishing also good quantitative correspondence between various control parameters and neuronal properties. They could infer theoretical predictions from their calibrated models which they confirmed in further experiments. For example Pinto and Ermentrout [163] first showed

Figure 3.11: Left: Snapshots of a spontaneous epileptiform excitation (top row) followed by a pulse travelling upwards (bottom row); in the last frame the leading edge of the next pulse can be seen. Center: The position of the optical sensors on the side of the slice from rat somatosensory cortex. Voltage-sensitive dyes and optical imaging allow two-dimensional data collection with high resolution. In this case the horizontal coordinate is in fact the depth of cortex. From [158]. Right: Two-dimensional patterns in a tangential slice. Left column, part of an expanding ring wave; right column, a spiral wave. From [160].

theoretically that the initiation, propagation and termination of the travelling pulse must involve distinct network mechanisms, and designed experiments to confirm that [157]. In another study, using the dependence of the speed of a moving Amari-style bump on the firing rate threshold, Richardson et al. [164] demonstrated that one can control the speed of activity pulses by applying electrical fields to the experimental preparation (an applied electrical field alters the effective threshold of neurons).

3.5.2 Functional architecture of visual cortex


In this Section we will describe how the neural field paradigm can be used to link the experimentally known functional organisation (the organisation of information processing) of cortex with its fine anatomical architecture. Sensory areas of cortex typically display a well-defined spatial organisation of the responses to elementary stimuli. For example the area of rat somatosensory cortex devoted to processing input from individual whiskers is divided into a neat two-dimensional grid repeating the relative locations of whiskers on the

rat muzzle. In auditory cortex adjacent neuronal populations respond to adjacent sound frequencies. The 2D coordinates of a visual stimulus projected on the retina correspond to physical coordinates of the activated region in the visual cortex. There is further fine-grained dependence of the cortical coordinate on basic stimulus features, such as the angle of orientation of its shape, or its direction of movement. All these relationships are examples of topographic maps and form a general principle of organisation for the sensory and motor cortices [165]. Heightened firing activity of a local population encodes the presence of a stimulus with certain features; the features it responds to constitute its receptive field. Each population's receptive field is a subset of the feature space (which might have high dimensionality). Topographic maps form when populations at a given distance from each other have adjacent or overlapping receptive fields in the feature space. Typically the connectivity between populations depends strongly on the distance between their receptive fields, thus they become very suitable for modelling with neural fields. In this case the spatial variable x describes position in feature space, a quantity that can easily be identified in psychophysical experiments (for example, for visual stimuli, their (x, y)-coordinates on the screen on which they are presented to the animal).

Some topographic maps in detail

To give examples of neural field modelling in relation to feature selectivity, we will first describe some of the known functional architecture of primary visual cortex (V1). By imposing a Cartesian coordinate system (x, y) on the retina and another one (X, Y) on the surface of the cortical sheet, one can determine a retinotopic map linking each cortical position with the receptor location in the retina that activates it. It turns out that the retinotopic map is conformal and is approximated very well by a complex logarithm

Z = k ln(z + a),   with z = x + iy, Z = X + iY,   (3.5.1)

where the constants k, a are species-specific. If a = 0 it is akin to a polar change of coordinates (X = ln|z|, Y = arg z) that transforms circles centred at the origin to vertical lines. The transformation for a positive parameter a is shown on Figure 3.12, next to an experimental observation in macaque brain for comparison. See [166] for a review of the complex logarithm and more sophisticated fits

Figure 3.12: Left: Visual stimulus shown to an anaesthetised macaque, and a radioactive trace of the activity evoked by it on the left side of V1. From [168, 169]. Right top: Conformal mapping of the unit half disc by ln(z + a) with a = 0.3. In the central region |z| < a the map is asymptotically linear. Right bottom: Stimuli that map into travelling waves of neural activity in visual cortex of human subjects. From [167].

to the retinotopic map, and for associated experiments. More recently retinotopy was demonstrated also in humans, by eliciting a travelling wave of fMRI activity in V1 when subjects were presented with a dynamic visual stimulus whose shape was calculated by applying the inverse of the logarithmic map to a moving stripe pattern [167]. Two such stimuli are presented on the right bottom in Figure 3.12. Superimposed onto the retinotopy are additional maps such as those of orientation selectivity and spatial frequency selectivity. That is, neurons at a given location respond maximally to visual stimuli within their receptive field that expose variation in contrast along a certain axis and over a certain spatial extent. At the same time, for every retinal location there is the full set of neuronal populations representing every direction and a range of spatial frequencies. It is possible to achieve this thanks to the modular architecture of cortex (Section 2.2.4). In fact, the work of Hubel and Wiesel [170], who first mapped orientation selectivity in cat visual cortex, helped to define the notion of macrocolumns. Abstracting away the biological variability one can think of the cortical macrocolumn as a cylinder. As one walks along the periphery the orientation selectivity of neurons smoothly changes to describe all stimulus angles from 0 to π.

Figure 3.13: Left: Crystalline model of primary visual cortex. The angle gives the range of orientations [0, π), the radial direction the range of spatial frequencies (high at the centre, low at the periphery). Stripes of ocular dominance (left eye / right eye) alternate. Right: The real picture, from cat area 17. White and gray patches are the ocular dominance zones (with width of around 1 mm). Grey contours show neurons with similar orientation preference. These iso-orientation contours come together in pinwheels (spatial frequency preference not plotted). From [171].

The range of spatial frequencies is covered by the radial coordinate, with the center of the cylinder (where orientation selectivity is unresolved) being responsive to stimuli with small spatial detail, while the center of a neighbouring macrocolumn responds to the lowest spatial frequency. This scheme is pictured on Figure 3.13, left. The neighbouring macrocolumns in a perpendicular direction to that are sensitive to input from the other eye, thus dividing V1 into stripes of ocular dominance (another map). Thus four columns are necessary for the full representation of one location of visual space. Hubel and Wiesel and follow-up workers reached this description by the laborious process of probing neurons locally with a single electrode. More recently optical imaging (refer to Figure 2.1) has allowed observation of the full picture of activity in V1 and revealed that it is not so neatly geometrical. The functional architecture extracted from experiments is shown on the right of Figure 3.13. Indeed, iso-orientation contours come together almost radially in point discontinuities, known as pinwheels. They can be postulated as the centers of macrocolumns. Ocular dominance stripes can be highly irregular, but almost everywhere iso-orientation contours meet their boundary at right angles, meaning that it is also a boundary of the macrocolumns. The mean width of ocular

dominance stripes is reported as 0.8 mm to 1 mm [171, 172]. Interestingly, macrocolumns are arranged so that there are no line discontinuities of the neuronal orientation selectivity. The evidence about the spatial frequency map has been more contradictory, but the latest study [173] is supportive of the design outlined above: the highest and the lowest spatial frequency preference is found at the pinwheels, with a smooth transition in between. Other known maps superimposed on the V1 functional architecture are those for colour preference [174] and direction of motion of the stimulus [175].

Neural fields and orientation selectivity

By using a neural field theory Bressloff and Cowan [176] have implemented a model that suggests how the orientation and spatial frequency selectivity arise from assumed local connectivity. For simplicity here we will illustrate the ideas with a model only for orientation selectivity, proposed earlier by Ben-Yishai et al. [177]. Their purpose was to explore two possibilities: that the neuronal responses in V1 stem directly from the retinal and thalamic input, or that recurrent connectivity within the cortical macrocolumn plays a key role in shaping orientation selectivity properties. This is an example of a model where one does not need to work in cortical coordinates; the independent variable is taken to be the orientation preference θ, as illustrated on Figure 3.13, left. The connectivity strength and sign depend only on the difference between orientation preferences, J(θ − θ'). To cover one macrocolumn the domain of the model is taken as periodic from −π/2 to π/2; in this case long-range connections and coupling to other parts of the visual field are considered of no importance. Following the ideas behind equation (3.3.1) they set up

(1/τ) ∂u(θ, t)/∂t = −u(θ, t) + f(I(θ, t)),

with a synaptic input

I(θ, t) = (1/π) ∫_{−π/2}^{π/2} J(θ − θ') u(θ', t) dθ' + Iext(θ),

where Iext(θ) is the subcortical input (the stimulus is assumed constant in time). Note that here the sigmoidal nonlinearity f(I) is outside the integration and the dynamical variable u(θ, t) is interpreted as the firing rate rather than the synaptic input as in the equations we looked at earlier, (3.2.3) and (3.3.1); one could say

it is a proper firing rate model. This need not bother us, as the two formulations have been shown to be dynamically equivalent under common conditions [178]. Here considering axonal delay is not necessary as all connections are local, less than 0.6 mm in length. As we expect from the analysis in Section 3.4.1, for a coupling function J(θ) with local excitation and distal inhibition one could obtain a dynamical regime with spatially periodic intrinsic activity. In this case the domain is finite and only unstable modes with wavelengths that fit in it an integer number of times can grow. To have a periodic connectivity kernel we choose J(θ) = −J0 + J2 cos(2θ). The term J0 represents a uniform all-to-all inhibition, while J2 gives the amplitude of the orientation-specific excitatory coupling. For 0 < J0 < J2 the connectivity is of Mexican hat type, similarly to the wizard hat function we used in Section 3.4.1. In this case it represents that neurons with more similar orientation preference are more excitatorily connected. The subcortical feedback can also be taken as sinusoidal, with a peak centred at the stimulus orientation θ0, Iext(θ) = c cos 2(θ − θ0). The coefficient c measures the amplitude of the input, which depends on the contrast between the visual stimulus and the background illumination. In the scenario with negligible cortical recurrent processing, we set J0 = J2 = 0. As the contrast c becomes very large the firing rate f(Iext) of more and more neurons reaches its saturating value. The peak of cortical response becomes broader and orientation selectivity is lost. When J0 > 0, J2 = 0 (uniform inhibition scenario), at higher contrast the inhibition may provide a sufficiently potent response to sharpen the orientation peak. Nevertheless there is a strong dependency of its width on the contrast. In the case of Mexican hat connectivity, the pattern formation mechanism kicks in. Ben-Yishai et al. used as bifurcation parameter J2, the magnitude of the excitation/inhibition difference. When J2 is large enough there is a spontaneous generation of orientation tuning due to a Turing instability. The mode with largest wavelength, cos 2θ, grows to an amplitude predefined by the nonlinearity. Without external input this solution has a marginal phase, i.e. any orientation is equally stable. However, adding even weakly anisotropic input pins the orientation to it. Importantly, in this case the shape and width of the peak is independent of the contrast c, so orientation selectivity is equally

sharp for large and for very weak contrast of the visual stimulus. This property is well documented experimentally [179]. The predicted Mexican hat-type connectivity was more recently confirmed by a physiological study [180]: connections between neurons with 0–20° difference in orientation tuning were found to be predominantly excitatory, while for those with a difference of 20–40° predominantly inhibitory. Another difference between the scenarios with and without cortical processing is that of virtual rotation. Without cortical processing, if the visual stimulus is instantaneously shifted to another position the activity at the previous position will decay as another peak rises at the new position. In the regime of pattern formation, the bump of activity will smoothly shift to the new location, activating the intermediate neurons. This might be a neuronal basis for the psychophysical perception of smooth rotation when two stimuli are abruptly exchanged. See [181] about the role of V1 in the more general phenomenon of apparent motion. The important point for us is that neural fields are a useful modelling framework for generating and testing hypotheses in cognitive science. Adding the example in the next Section, we will see that they can be a unifying framework as well, explaining different phenomena by the same mechanisms.
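A minimal simulation of this ring model is easy to set up; the sketch below (Python) is our own illustration rather than the implementation of [177], and its parameter values are chosen only so that the tuning instability condition f'(Ī) J2/2 > 1 is met. Starting near the uniform state, a tuning peak grows and is pinned at the stimulus orientation θ0 even for weak contrast c.

import numpy as np

# Discretised ring of orientation preferences, theta in [-pi/2, pi/2).
N = 180
theta = np.linspace(-np.pi / 2, np.pi / 2, N, endpoint=False)
dtheta = np.pi / N

# Illustrative parameters (not the values used by Ben-Yishai et al.).
J0, J2 = 0.5, 2.0                 # uniform inhibition and orientation-specific excitation
beta, h = 8.0, -0.25              # sigmoid gain and threshold
c, theta0 = 0.05, 0.3             # weak-contrast stimulus centred at theta0

f = lambda I: 1.0 / (1.0 + np.exp(-beta * (I - h)))
J = -J0 + J2 * np.cos(2.0 * (theta[:, None] - theta[None, :]))   # J(theta - theta')
I_ext = c * np.cos(2.0 * (theta - theta0))

rng = np.random.default_rng(0)
u = 0.5 + 0.01 * rng.standard_normal(N)   # firing rate, near the uniform state
dt = 0.01
for _ in range(5000):                      # forward Euler integration of du/dt = -u + f(I)
    I = (J @ u) * dtheta / np.pi + I_ext
    u = u + dt * (-u + f(I))

print("tuning peak at theta =", theta[np.argmax(u)])   # pinned near theta0
print("peak / trough activity =", u.max(), u.min())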

3.5.3 Pattern formation in 2D and the visual system


It would be interesting to apply the neural field paradigm to the retinotopic map. We already know from Section 3.4.1 that in a 1D system with Mexican hat connectivity one can obtain static periodic wavetrains by increasing the network excitability beyond a critical value. The natural domain for a patch of cortex is two-dimensional. The rotationally invariant wizard hat function is

w(r) = (r − 1) e^{−r},   r = |r|,  r = (X, Y),

using the cortical coordinates defined in Section 3.5.2. In a 2D system the crests of the periodic solutions would lie on a planar lattice. It can be defined by two linearly independent generator vectors l1,2 ∈ R² as

L = {n1 l1 + n2 l2 | n1,2 ∈ Z}.

The doubly-periodic patterns will satisfy u(r + l, t) = u(r, t) for every l ∈ L. Upon Fourier transforming, such a doubly-periodic solution would have a wavevector k lying on a lattice dual to L, generated by k1,2 such that the scalar products satisfy <ki, lj> = 2π δij, i, j = 1, 2,

L* = {n1 k1 + n2 k2 | n1,2 ∈ Z}.

At the bifurcation point, due to the rotational symmetry of the system, a continuous ring of wavevectors {k : |k| = kc} becomes unstable simultaneously (see Chapter 6). However, the periodic solutions that will grow are those with wavevectors contained in the intersection of the unstable ring and the nodes of the lattice L*. This limits the excited modes to a finite number. The extension of (3.4.2) for a planar system is

u(r, t) = Σ_j Aj(t) e^{i<kc^j, r>} + cc,

where {kc^j} are all linear combinations of k1 and k2 that have a modulus kc. The term cc stands for the complex conjugate of the expression preceding it (so that we obtain a real function). In systems with Euclidean symmetry such as ours (invariant to translations, reflections and rotations) the common patterns that become stable are associated to lattices with |k1| = |k2|. These are the square lattice, for orthogonal generators, the hexagonal lattice, for an angle of π/3, and a rhombic lattice for any other angle. Their duals are illustrated on Figure 3.14 together with a wavevector ring of radius kc to show how many modes become excited in each case. These are the simplest cases of pattern formation in an infinite 2D system; for more details see for example [148]. On a single lattice one could get several types of solutions depending on which of the modes will grow. To determine the solution which will win over the others one needs to pursue nonlinear analysis. By colouring the periodic solution differently above and below its midline, we obtain a geometric tiling of the plane which is easier to refer to. Then, the solutions that can fit on the square lattice are stripes or squares (checkerboard pattern); on a hexagonal lattice, stripes, different types of hexagons, rectangles, triangles, ... (see Figure 3.15 left, for some examples). In Section 3.5.2, following a number of researchers, we hypothesised that neural fields of the type (3.3.1) may be relevant to describing the primary steps of cortical

Figure 3.14: Planar lattices with |k1 | = |k2 |. A rhombic lattice (left) and the special cases,
hexagonal (center) and square (right). The circles represent a ring of marginally stable wavevectors with lengths k c . The thick dots mark the wavevectors that will contribute to the pattern. In the last case we have illustrated that these are only the most simple types of doubly-periodic solutions and larger sets of modes may be excited.

processing of visual information. Then, if geometrical tilings of the plane are common solutions for 2D neural field systems beyond a Turing bifurcation point, we would expect to be able to find manifestations of this process in the operating visual system, possibly in some more extreme conditions (earlier we hypothesised that the normal physiological state of the brain corresponds to the homogeneous solution u(r, t) = ū, Section 3.3). Indeed, in Section 3.5.1 we saw that it is possible to observe travelling waves (i.e. moving stripes) and spiral waves in pharmacologically treated slices of cortex in-vitro. Referring back to Figure 3.12, right bottom, which presents moving visual stimuli that trigger a travelling wave in V1 during observation, we get a tentative feeling of what would be the visual experience related to manifestations of a Turing instability in V1. It was first suggested in 1979 by Ermentrout and Cowan [182], who applied the inverse of the retinotopic map to the geometric patterns discussed above to obtain visual coordinates. Since we perceive the world as undistorted, it is likely that as the activity at V1 is taken up in the brain, at some later stage of processing, the inverse of the logarithmic map (3.5.1) is performed. Ermentrout and Cowan considered only the retinotopic map, interpreting high activity at cortical position (X, Y) as a perception of a bright spot at retinal coordinate (x, y). Stripy and checkerboard patterns are transformed by the retinotopy to spiral and tunnel-like images (Figure 3.15).
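The effect of the inverse map on cortical patterns is easy to visualise directly; the following sketch (Python, with illustrative map constants rather than fitted ones) applies z = e^{Z/k} − a, the inverse of (3.5.1), to a periodic pattern written in cortical coordinates (X, Y) and saves the corresponding visual-field image.

import numpy as np
import matplotlib.pyplot as plt

# Illustrative constants for the map Z = k*log(z + a); not species-calibrated.
k_map, a = 1.0, 0.3

# Cortical coordinates (X, Y) covering one hemifield of V1.
X, Y = np.meshgrid(np.linspace(0.0, 3.0, 250), np.linspace(-np.pi / 2, np.pi / 2, 250))
Z = X + 1j * Y

# A periodic "Turing" pattern in cortical coordinates: oblique stripes.
kc = 6.0
pattern = np.cos(kc * X + kc * Y)

# Invert the retinotopic map: z = exp(Z/k) - a gives the retinal position whose
# stimulation would excite cortical point Z.
z = np.exp(Z / k_map) - a
x, y = z.real, z.imag

plt.figure(figsize=(4, 4))
plt.scatter(x.ravel(), y.ravel(), c=pattern.ravel(), s=0.5, cmap="gray")
plt.gca().set_aspect("equal")
plt.title("cortical stripes seen in visual coordinates")
plt.savefig("hallucination_pattern.png", dpi=150)
# Vertical stripes (cos(kc*X)) map to concentric rings (a 'tunnel'), horizontal stripes
# to radial arms, and oblique stripes to spiral-like forms.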


Figure 3.15: Left: Different patterns fitting on a square (a, b) or hexagonal lattice (c, d). Single lines show the underlying lattice, shading the part of the periodic solution above the midline. From [182]. Right: Transformation of geometric patterns by the inverse retinotopic map. Originals: e. horizontal stripes; f. hexagons; g. squares (checkerboard); h. squares (rotated). From [183].

More recently, the problem was reexamined with a more sophisticated model [183, 184] incorporating the modular orientation preference map. It consists of macrocolumns endowed with dynamics as in Section 3.5.2 and coupled by lateral connections. Thus it takes into account that activity in V1 is mostly related to processing of edges and contours when converting it to possible visual imagery. Due to more complicated symmetries this model has a substantially larger number of pattern types. A few of them are shown on the top row of Figure 3.16, as they would be experienced in the visual field. The reviewed models suggest that highly geometric patterns, particularly with spiral- or tunnel-like appearance, might have a prominent place among visual hallucinations. Those would be linked to the primary processing stages of the visual system. The first systematic studies of visual hallucinations were conducted by Klüver in 1926 [185, 186] with the hallucinogenic drug mescaline. He identified four types of simple imagery that repeated in all his subjects: tunnels; spirals; cobwebs; and lattices, honeycombs, fretworks and checkerboards. Since then, these basic forms have been found to be almost ubiquitously a part of drug-induced hallucinations, including by marijuana, LSD, ketamine, cocaine, ... [187], Figure 3.16 middle. Experiences of tunnels and cobwebs of

light are also very common in near-death experiences [188]. Hallucinations are usually the result of changes in brain operation that span many levels and subsystems; the geometric symmetry predicted by our simple models is typically woven into much more complex and dynamic designs, as illustrated on Figure 3.16, bottom. We conclude by quoting from an eloquent Victorian account [189] of the hallucinogenic effects of the peyote cactus (traditionally used in parts of North America, with psychoactive ingredient mescaline), which affects primarily the sensory modalities leaving the mind clear:

The visions never resembled familiar objects; they were extremely definite, but yet always novel; they were constantly approaching, and yet constantly eluding, the semblance of known things. I would see thick, glorious fields of jewels, solitary or clustered, sometimes brilliant and sparkling, sometimes with a dull rich glow. Then they would spring up into flower-like shapes beneath my gaze, and then seem to turn into gorgeous butterfly forms or endless folds of glistening, iridescent, fibrous wings of wonderful insects; while sometimes I seemed to be gazing into a vast hollow revolving vessel, on whose polished concave mother-of-pearl surface the hues were swiftly changing. I was surprised, not only by the enormous profusion of the imagery presented to my gaze, but still more by its variety. ... On the whole, I should say that the images were most usually what might be called living arabesques. There was often a certain incomplete tendency to symmetry, as though the underlying mechanism was associated with a large number of polished facets. The same image was in this way frequently repeated over a large part of the field; but this refers more to form than to color, in respect to which there would still be all sorts of delightful varieties, so that if, with a certain uniformity, jewel-like flowers were springing up and expanding all over the field of vision, they would still show every variety of delicate tone and tint.

3.5.4 Relevance and validity of neural field models


In this Chapter we have tried to demonstrate the relevance of studying neural field models. While they are often introduced as ad hoc models, we have shown that there exists a mathematical route for deriving them from biophysically realistic models in neuroscience (Sections 3.1 and 3.2); in particular, they can be linked to the well-established single neuron models. It is true that, compared with models in the physical sciences, an inordinate number of approximations

Figure 3.16: Top: Visual patterns generated by the coupled macrocolumn model, [183]. Middle: a, b, c. Drawings of geometrical hallucinations caused by marijuana, from [187]; d. a solution to the model constructed to resemble the more complicated imagery in (c), [183]. Bottom: Artists' impressions: e. gud by dempa, after psychedelic drug experiences; f. Ascent into the Empyrean by Hieronymus Bosch, possibly informed by folklore knowledge of near-death experiences.


and restrictive assumptions are accumulated along this route. At each stage extensive simulation comparisons and experimental work check that the new paradigm gives a reasonably good match to reality and to the rest of the theory, despite possibly limited mathematical validity. Indeed, often the resultant models are later evoked in circumstances well outside the assumptions needed for their correct mathematical justification. That is why in many cases authors prefer to consider them as phenomenological constructions, direct translations of some neuroscientific hypotheses of current interest into mathematical language. For formulating hypotheses about the brain at a coarse scale, for very large networks where connectivities are only statistically or qualitatively known, neural field models are one of the main tools currently at our disposal, and the most agreeable to analysis. In the current Section we showcased a few examples of neural fields contributing to neuroscience theory. These have both suggested mechanisms by which the phenomena might be generated and by which cortex might be operating in general, and lent credibility to the neural field paradigm itself. Importantly, they have suggested the process of pattern formation as an important ingredient of brain dynamics, and established a similarity with nonliving nonlinear physical systems. The laws and properties of pattern formation are quite similar across systems and often taking note of their presence has prompted the selection of a model. This makes us believe that studying the general properties of the equations of neural field type is important in its own right. The accumulated knowledge can then be used by neuroscientists concerned with a particular problem. They would be able to foresee the properties and the overall dynamics of the model they may choose for the neuronal substrate. This is the point of view that motivates the work in this Thesis. There are a few more examples of employing neural fields for their pattern formation properties which we have not included in this Section. The tuning properties of cells for head-direction in rats [190] resemble the mechanism offered for orientation selectivity. Interestingly, there is no topographic map for head-direction preference: neurons with different properties are intermixed. While the model in Section 3.5.2 could be stated both in feature space and cortical coordinates (cf. Figure 3.13 left), here only the former makes sense.

Another promising application for neural fields is short-term working memory. It is thought to be stored as bumps of localised activity in prefrontal cortex that may last as long as minutes [191, 192]. Theoreticians have proposed Amari-type mechanisms (Section 3.4.2), although in the discrete version of neural fields: arrays of integrate-and-fire [126] or conductance-based neurons [193]. There is also a line of research towards establishing a link to cortical rhythms [194, 195] or epileptiform activity in-vivo [196, 197].


CHAPTER 4

Neural fields with space-dependent delays


Most research on pattern formation has been done in the framework of partial differential equations (PDEs), while spatially extended models in neuroscience often are integro-differential. Yet, one of the general results that emerged as early as the first work on pattern formation (Alan Turing [149]) remains true: instability is driven by spatial competition of exciting and inhibiting agents acting at different characteristic lengths. In reaction-diffusion systems, for example, the activator must have a smaller diffusion constant than the inhibitor. In neural field equations modelling patches of cortex (e.g. Sections 3.5.1 and 3.5.3) local interactions are provided by the intracolumnar connections and distal ones by efferent axons with lengths of a few millimetres or more. Both effects are incorporated in the function describing the network connectivity, and the desire that the equation should exhibit pattern formation results in certain constraints on the function's form: usually it has to be chosen with local excitation and distal inhibition (the Mexican hat function), as in the classical papers of Wilson and Cowan [130] or Amari [151]. As we discussed in Section 3.5.2, within a macrocolumn local excitation and lateral inhibition is an accurate description of connectivity. However, at a larger scale, while intracolumnar connections are mostly inhibitory, it is known that about 80% of long-range axons are excitatory [37, 198, 199, 200], i.e. connectivity is more accurately described by an inverted Mexican hat function. The results in this Chapter demonstrate that upon taking a more realistic neural field model than previously studied (by considering the finite speed of signal propagation) the biologically plausible connectivity is also

able to support pattern formation of Turing type.

4.1 The route to delays

The importance of conduction delays in cortex has been appreciated since early on. Space-dependent delays have been included in the models formulated by Wilson and Cowan [130], by Nunez [194] and the later unification and generalisation of those by Jirsa and Haken [178]. We also initially included axonal delays in Sections 3.2 and 3.3. However, owing to the mathematical complexity of working with equations with delays, analysis so far has customarily been performed on simplified cases omitting the delay terms. Only recently have studies concentrated on the effects that might arise due to the delay mechanisms. Hutt et al. [201, 202] considered the axonal conductance delay suggested in previous works. Bressloff [203, 204] analysed a model of a neural field with a semi-infinite dendritic cable (Section 3.1.4). Roxin et al. [140] looked at a model with a small discrete delay (a delay independent of the distance between sites). All of these papers center on linear Turing instability analysis of single population models in one spatial dimension. They were the first to show that delay mechanisms typically lead to novel properties. Namely, dynamic Turing patterns are generic, whereas the corresponding models without delay terms can only support static patterns or no patterns at all. These findings motivated our interest in further exploring the role of delays for the spatio-temporal dynamics of neural fields. In Section 3.4.1 we performed linear stability analysis of a simple neural field and saw that a Turing instability sets in when some of the dispersion relation roots (eigenvalues) become positive. In a model with discrete or space-dependent delay the roots can be complex, and depending on where they cross the imaginary axis different types of instability could occur: a stationary periodic pattern, a uniform shift to another homogeneous steady state, an oscillatory periodic pattern (travelling wavetrain), or a uniform oscillation of the whole domain. The types will be discussed in more detail in Section 4.5. At least two processes acting at unequal characteristic lengths are necessary for the generation of patterns. The introduction of a second characteristic time is


necessary in order to evoke oscillations. Note that the original two-population model without delays developed by Wilson and Cowan [129] (Section 3.3.1) does have two temporal parameters: the different membrane time-constants for the excitatory and inhibitory synapses. However, in order to decrease the dimensionality of the system most authors assume that inhibitory synapses act much faster than the excitatory ones [178, 205, 206]. The unequal finite spatial scales are preserved in the form of the connectivity kernel of the new single equation (the Mexican hat), but the second temporal scale is lost. This is in essence equivalent to a reduction to one neuronal population with two types of connections, the model that we study here. As examples of the treatment of full two-population models and the resulting oscillatory dynamics see Tass [141] and Bressloff and Cowan [142]. In recent publications the one-population model has been extended with various features in order to reintroduce an interaction with a second characteristic time length. Hansel and Sompolinsky [207] enhanced excitatory neurons with spike-frequency adaptation; Bressloff [203] undertook the incorporation of dendritic tree diffusion effects; Hutt et al. [201] suggested axonal time-delays. In this Chapter we formulate a neural field model that generalises these models and allows us to treat them in a unified way. We determine the linear conditions for Turing instability for this generalised model, as a stepping stone to extending those results into the nonlinear domain in Chapter 5. We apply the results to examples with axonal conductance delay and with dendritic diffusive delay and delineate in detail their bifurcation plots. Also we show how, for such particular choices of the connectivity, one can construct an equivalent PDE formulation of a neural field, the so-called brain wave equation.

4.2 Neural field with time-dependent connectivity: formulation

The neural field model with space-dependent axonal conductance delay, studied by Hutt et al. [201], is

u(x, t) = ∫_{−∞}^{t} ds ∫_{−∞}^{∞} dy η(t − s) w(x − y) f(u(y, s − |x − y|/v)).   (4.2.1)

Here we briefly recap the model components; they are introduced thoroughly in Chapter 3. The equation is defined on the 1D domain x ∈ R and evolves in time t ∈ R+. The dependent variable u(x, t) is identified as the synaptic activity at a population x and time t. The firing rate activity generated as a consequence of local synaptic activity is f(u). A common choice for the firing rate function is the sigmoid (3.3.4)

f(u) = (1 + exp(−β(u − h)))^{−1},

with steepness parameter β > 0 and threshold h. The spatial kernel w(x) = w(|x|) describes not only the anatomical connectivity of the network, but also includes the sign of synaptic interaction. For simplicity it is also often assumed to be isotropic and homogeneous, as is the case here. We will have in mind primarily the wizard hat function (3.3.5)

w(x) = w0 (1 − |x|) e^{−|x|},

which is balanced: ∫_R w(y) dy = 0. It simplifies calculations without loss of generality. The parameter w0 can take two values, 1 and −1 (inverted and standard wizard hat, respectively). The boundedness of f(u) and the connectivity w(x) decaying exponentially to zero at infinity guarantee that the integrals are well defined. The temporal convolution involving the kernel η(t) (η(t) = 0 for t < 0) represents synaptic processing of signals within the network. In examples, we choose to work with a second-order alpha synaptic response (3.1.9)

η(t) = α² t exp(−αt) H(t)

(H(t) is the Heaviside function). The delayed temporal argument to u on the right-hand side of (4.2.1) represents the delay arising from the finite speed of signals travelling between points x and y, namely |x − y|/v where v is the velocity of the action potential. In models with dendrites there is a further space-dependent delay associated with the processing of inputs at synapses away from the cell body (see below).

Analysis of pattern formation properties usually proceeds in Fourier space. In Section 3.4.1 we easily converted the model (3.4.1) without conduction delays to Fourier space because its right-hand side is composed simply of two convolution operations, a spatial one with the connectivity function w(x) and a temporal one with the synaptic filter η(t). A delay term presents the difficulty that (4.2.1) no longer has a convolution structure. We propose to resolve this by defining a two-dimensional spatio-temporal kernel

K(x, t) = w(x) δ(t − |x|/v),   (4.2.2)

where δ(t) is the delta-function, having the property ∫ g(y) δ(y − x0) dy = g(x0). Then by introducing a two-dimensional convolution operation

(K ⊗ g)(x, t) = ∫_{R²} K(y, s) g(x − y, t − s) dy ds,

along with a temporal convolution for the synaptic filter

(η ∗ g)(x, t) = ∫_R η(s) g(x, t − s) ds,

we can write (4.2.1) in the succinct form

u = η ∗ K ⊗ f ∘ u

(the symbol ∘ denotes composition, f ∘ u = f(u)). The kernel K(x, t) can be interpreted as a more general time-dependent connectivity.
R2

(4.2.3) (4.2.4)

(s)w( x y, z) f v(y, 0, t s) dy ds.

Here v( x, z, t) is the potential along the cable z R + at position x R and time t R. The cable equations (4.2.3) are only coupled through their input I, with w( x, z) specifying the axo-dendritic connectivity. We identify the soma activity

with the potential at the dendritic terminal, u( x, t) = v( x, 0, t) assuming that there is no ow of current back from the axon to soma i.e. choosing a sealed end boundary condition z v
z =0

= 0. We can express the potential at z = 0 directly


1 4Dt

in terms of the input to the dendrite by using the Greens function (3.1.11) E(z, t) = et/D ez 72
2 /4Dt

H ( t ),

Figure 4.1: The two neural field architectures with a dendritic cable that we consider. Left: All synaptic inputs impinge on the same site at a distance z0 from the soma. Right: Here the distance of the synapse from the soma is linearly correlated with the spatial separation |x − y| between neurons.

for the infinite cable equation derived in Section 3.1.4. Namely, we have

v(x, 0, t) = ∫_0^∞ ∫_{−∞}^{t} E(z', t − t') I(x, z', t') dt' dz'.

It allows us to eliminate the dendritic dynamics altogether and work only with v(x, 0, t) = u(x, t), the activity at the soma. The resulting equation is

u(x, t) = ∫_{R⁴} E(z', t') H(z') η(s) w(x − y, z') f(u(y, t − t' − s)) dt' dz' dy ds

(H(z) is the Heaviside function making sure we are working with a semi-infinite cable). Here, the time-dependent connectivity K(x, t) is

K(x, t) = ∫_{R+} E(z', t) w(x, z') dz'.   (4.2.5)

It incorporates the anatomical axo-dendritic structure, together with the passive evolution of the signal along the dendrite until it arrives at the soma. We consider two types of axo-dendritic connectivity w(x, z). For both we assume that a neuron can make only a single synaptic contact with another neuron, which allows us to simplify the expression (4.2.5). For the first model, to keep compatibility with previous studies [203] and to keep a small number of parameters, we postulate that synapses are all at a fixed distance z0 from the cell body: the connectivity function is w(x, z) = w(x) δ(z − z0). Then, the generalized connectivity function (4.2.5) is separable in time and space, taking the form K(x, t) = w(x) E(z0, t). We will also look at a non-separable kernel with space-dependent dendritic delays, w(x, z) = w(x) δ(z − z0 − ρ|x|). Here we propose a linear correlation between the neuronal distance |x − y| and the distance of the synapse from the cell body. This model better corresponds to anatomical data showing that axons from more distant neurons arborize further away in the dendritic tree [204, 209]. In this case K(x, t) = w(x) E(z0 + ρ|x|, t), where z0 and ρ are parameters. In both cases, which are illustrated on Figure 4.1, w(x) is the previously used Mexican hat-type function specifying the profile of axonal connectivity.

To summarise, by introducing the generalised connectivity kernel K(x, t), we can unify in one framework the models presented in this Section. They differ in the character of their delay mechanism. While both can be space-dependent, the axonal conductance delay preserves the shape of the signal, while the dendritic delay is diffusive in nature. Being able to cast them in the same formal equation hints that their dynamics would not be so different. We will investigate this in the next Sections. Importantly, we can now develop the analysis for all neural field models with time-dependent connectivity in one dimension. The equation that we study in this and the next Chapter is given as

u = η ∗ K ⊗ f ∘ u   (4.2.6)

with the restrictions

η(t) = 0, t < 0;   K(x, t) = 0, t < 0;   K(−x, t) = K(x, t),   (4.2.7)

and a nonlinearity f ∈ C∞(R), monotonic, with lim_{x→∞} f(x) = 1. The convolution kernels are taken in L¹(−∞, ∞), as well as their Fourier transforms, and t²η(t), t²x²K(x, t) also in L¹(−∞, ∞) (for the nonlinear analysis in Chapter 5). We will consider the particular models given in this Section as examples and for illustration of our results.
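To get a feel for the diffusive delay, the generalised kernel of the correlated architecture can be evaluated directly; the snippet below (Python, with illustrative parameter values that are not calibrated to any figure of this Thesis) computes K(x, t) = w(x) E(z0 + ρ|x|, t) and prints the time at which the input from a neuron at distance |x| peaks at the soma.

import numpy as np

# Illustrative parameters for the correlated axo-dendritic kernel (assumptions of this sketch).
D, tau_D = 1.0, 1.0            # cable diffusion coefficient and membrane time constant
z0, rho = 0.5, 1.0             # synaptic offset and slope of the distance correlation
w = lambda x: (1.0 - abs(x)) * np.exp(-abs(x))        # wizard hat axonal profile

def E(z, t):
    # Cable Green's function E(z, t) = exp(-t/tau_D - z^2/(4 D t)) / sqrt(4 pi D t), t > 0.
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = np.exp(-t[pos] / tau_D - z**2 / (4.0 * D * t[pos])) / np.sqrt(4.0 * np.pi * D * t[pos])
    return out

t = np.linspace(1e-4, 10.0, 4000)
for xval in (0.0, 0.5, 2.0, 4.0):
    kern = w(xval) * E(z0 + rho * abs(xval), t)       # K(x, t) for the correlated architecture
    print("|x| = %.1f : |K| peaks at t = %.2f" % (xval, t[np.argmax(np.abs(kern))]))
# The time-to-peak grows with |x|: input from more distant neurons arrives at the soma
# later and more smeared out, a diffusive space-dependent delay, in contrast to the
# shape-preserving axonal delay |x|/v.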

4.3 Conditions for Turing instability

We are interested in the ability of the general neural field equation (4.2.6) to undergo a spontaneous breaking of the continuous symmetry as some bifurcation parameter of the system is varied. Then, a spatially homogeneous solution loses its stability and spatial perturbations will begin to grow in amplitude. Due to the system's translational invariance the growing inhomogeneities preserve

discrete symmetry and the resulting pattern is globally periodic. Since the equation (4.2.6) is translationally invariant and isotropic, the new periodic solution has no preferred phase (or orientation, in a 2D model); it depends only on the initial conditions. If a weak non-homogeneous or anisotropic external stimulus were to be added to the system, it would pin the phase of the pattern so that one of its peaks coincides with the stimulus peak (see Section 3.5.2, [151, 201]). Effects of external input will not concern us here. However, in Chapter 6 we will give attention to anisotropic connectivity and see how it pins a preferred pattern wavelength. Different parameters in the model can be chosen as bifurcation parameters. For example Hutt [202] and Roxin et al. [140] determined the regions of pattern formation depending on the shape of the connectivity kernel w(x), i.e. the relationship between excitation and inhibition in the network. Therefore we will keep the connectivity fixed (allowing only for the general switch between the Mexican hat and inverted Mexican hat type arrangements, through w0 = ±1 in (3.3.5)) and consider instead the steepness of the firing rate function f(u) (its first derivative). Experimentally, this is the most controllable parameter because it can be interpreted as the global intrinsic excitability of the neural tissue, which can be tweaked by administering suitable drugs (for examples, see Section 3.5). Let u(x, t) = ū be the spatially-homogeneous steady state of (4.2.6). The conditions for growth of inhomogeneous solutions can be obtained through simple linear stability analysis of ū: we find for which parameter values it becomes unstable to spatial perturbations of globally periodic type e^{ikx}, and what is the intrinsic wavelength 2π/k of the growing patterns. The steady state ū satisfies the equation

ū = f(ū) ∫_R η(r) dr ∫_{R²} K(y, s) dy ds.   (4.3.1)

Using the following Laplace transform in time and Fourier transform in space,

η̂(λ) = ∫_R η(t) e^{−λt} dt,   K̂(k, λ) = ∫_{R²} K(x, t) e^{−(ikx+λt)} dx dt,   (4.3.2)

it can be written as ū = η̂(0) K̂(0, 0) f(ū) (note that the hat stands for a two-dimensional transform corresponding to the two-dimensional convolution in (4.2.6)). We Taylor-expand about ū the only

Figure 4.2: Left: A typical dispersion curve in the complex (ν, ω)-plane described by a complex conjugate pair of roots as k is varied. Plotted at the critical value γ1 = γc. As γ1 is increased the dispersion curve will shift to the right and a continuum of pairs (ν, ω)(k) will enter the positive half-plane; refer to Figure 3.8, left. Right: An example of a highly nonlinear solution when γ1 is large: periodic boundary conditions, localised initial perturbation of maximum amplitude 0.01, v = 1, γ1 = 40. Note that the amplitude saturates at a finite value (in this case near 0.05).

nonlinearity in the system, the firing rate

f(u) = f(ū) + γ1(u − ū) + γ2(u − ū)² + ...,

and obtain the linearised model

u − ū = γ1 η ∗ K ⊗ (u − ū).   (4.3.3)

The chosen parameter for our bifurcation analysis is γ1 = f'(ū). To define the conditions for linear stability with regards to perturbations of characteristic scale e^{ikx} we make the ansatz u − ū = Re e^{λt+ikx} where λ = ν + iω. Although the system and its solutions are real, for conciseness we will often work with the complex extension and restrict our attention to the real case when this has implications for the analysis. After substitution we obtain a dispersion relation for the pair (k, λ),

L := γ1 η̂(λ) K̂(k, λ) − 1 = 0.   (4.3.4)

This is an equation linking the wavelengths of the perturbations to their growth rates. A plot of the solution λ(k) for a fixed γ1 in the (ν, ω) plane gives a curve parameterized by k (Figure 4.2, left). Turing instability sets in when the real part for the first time becomes positive for some kc, that is, the dispersion curve lies in the negative half-plane with only its maximum touching the imaginary axis. Denote the value of the bifurcation parameter for which this happens by γc. For simplicity we will assume that at γc the dispersion curve touches the imaginary

axis only at a single kc (up to a sign change). Then, the triple (γc, kc, ωc) satisfies the equation (4.3.4) with ν = 0, namely

1 = γc η̂(iωc) K̂(kc, iωc).   (4.3.5)

Since K(x, t) is an even function of the space variable, wavenumbers will always come in pairs ±kc. Because the problem is real the dispersion curve has reflective symmetry around the real axis, and frequencies ±ωc will also come together as solutions. The eigenspace of the linear equation will be real and at γc a complete basis is given by

{cos(ωc t + kc x), cos(ωc t − kc x), sin(ωc t + kc x), sin(ωc t − kc x)}.

Further on we will write eigenvectors in complex notation

A1 e^{i(ωc t + kc x)} + A2 e^{i(ωc t − kc x)} + Ā1 e^{−i(ωc t + kc x)} + Ā2 e^{−i(ωc t − kc x)} = c1 cos(ωc t + kc x) + c2 cos(ωc t − kc x) + c3 sin(ωc t + kc x) + c4 sin(ωc t − kc x),   (4.3.6)

with c1,2 = 2 Re A1,2, c3,4 = −2 Im A1,2. These are the vectors that first become unstable as γ1 is increased beyond γc.

4.4 Types of Turing instability

Different types of dynamics can be observed depending on the values of kc and ωc at which the eigenvalues cross the imaginary axis. According to this, Turing instabilities are broadly divided into three types [144, 210], illustrated in Figure 4.3 with simulations of the axonal model (4.2.1). If we have ωc = 0 the instability is of static type. If kc = 0 the system merely shifts to a homogeneous steady state different from ū. If kc ≠ 0, a stationary spatially periodic pattern with wavelength 2π/kc develops after a transient period of time (Figure 4.3c). This was the case in Section 3.4.1 where we obtained an expression for the eigenvalue that was real. If ωc ≠ 0 and kc = 0, the mode that is excited is oscillatory but has no spatial periodicity: the whole domain of the equation synchronises into limit cycle oscillations with temporal frequency ωc with the same phase (Figure 4.3b). This is an analog of the Hopf bifurcation in ODEs. Finally, if both kc ≠ 0 and ωc ≠ 0 the instability is usually called dynamic Turing or Turing–Hopf. Then the system settles down to a global pattern

Figure 4.3: Space-time plots (horizontal axis x and vertical t) of solutions to the model with axonal delay (4.2.1) as specified in Section 4.2; however, here we used a cubic nonlinearity f(u) = γ1 u − 0.2u³ (see Section 5.2.3 for the cubic form of f(u)). Similar results are obtained for the sigmoidal function (3.3.4). The model is simulated with periodic boundary conditions and random initial conditions. Legend: a. travelling wave (a standing wave is initially selected due to the numerical discretisation, but it is unstable as we will show in Section 5.2); b. bulk oscillation (Hopf bifurcation); c. a static pattern; d. travelling waves with a large wavenumber kc (again the solution evolves first to a standing wave which later drifts off).

with wavenumber kc, which however is moving coherently sideways with a speed c = ωc/kc (Figure 4.3a). As is apparent from the eigenmodes (4.3.6), this is in essence a superposition of two infinite trains of travelling waves, one left- and the other right-going. Note that if the two wavetrains balance each other, one would observe a standing wave, as in the initial stages of Figure 4.3a. To determine the conditions under which this may occur it is necessary to go beyond a linear analysis and determine the evolution of the mode amplitudes Ai. The techniques to do this will be described in Chapter 5. There we also see why the growing solutions predicted by the linear analysis converge to patterns with certain finite amplitudes. More complicated scenarios are also possible. For example the two imaginary maxima +iω and −iω may collapse to the real axis just as they cross to

or from the positive half-plane. This is the Takens–Bogdanov instability, which offers a broader range of outcomes for the system, although it occurs only for a very restricted choice of parameters. It is the transition interface between static and dynamic Turing patterns. It has been studied in detail by Curtu and Ermentrout [206] for the two-variable model of Hansel and Sompolinsky with adaptation [207]. In Section 5.3 we will confirm their results in a more general setting. At a Takens–Bogdanov bifurcation one has a double zero of the dispersion relation (4.3.4) (since two complex-conjugate eigenvalues coincide at zero). It is easy to check that for the one-variable model that we work with here this situation cannot occur for any choice of parameters. Another case is when two eigenvalues (or two complex eigenvalue pairs) cross the imaginary axis at the same parameter values with different wavenumbers kc. One needs to use again weakly-nonlinear analysis to determine the result of interactions between the two excited modes. We have not pursued these cases since they require very carefully selected parameters, which is unlikely to occur in reality. It should be underlined that in this and the next Chapters we are discussing solutions of the model dominating in the weakly-nonlinear regime only. The bifurcation parameter is very close to the critical value, and additionally, the domain is infinite or finite with periodic boundary conditions. In a more complicated and realistic setting several patterns might superimpose or coexist in separate regions of space, and different types of geometrical defects might develop due to the boundary conditions [145]. For values of γ1 far from γc the amplitude equations which are derived through perturbation techniques no longer give accurate information about the system, which most often will display very irregular spatio-temporal behavior as illustrated in Figure 4.2, right.

4.5 Examples: axonal and dendritic models

Unlike the example in Chapter 3 the conditions for Turing instability for the models we are interested in are generally not resolvable by hand. We describe here how to prepare them for a numerical solution. Since most numerical packages cannot work with equations of complex variables, we split the dispersion


relation (4.3.4) into real and imaginary parts, G(ν, ω) = Re L(k, ν + iω) and H(ν, ω) = Im L(k, ν + iω), to obtain a 2 by 2 system for the components of λ,

G(ν, ω) = 0,   H(ν, ω) = 0.   (4.5.1)

Solving this system gives us (for a fixed parameter γ1) a curve in the plane (ν, ω) parameterised by k. We couple to this system the conditions for a bifurcation: imposing ν'(k) = 0 gives a local maximum of the dispersion curve (we have to pick manually the one that is also a global maximum); the condition

ν = 0   (4.5.2)

ensures that instability is just triggered. These two additional equations pin down values for k and γ1 as well. The expression of ν as a function of k is not available to us, so we have to use the formulas for differentiation of implicit functions. By taking the differential of (4.5.1),

G_k dk + G_ν dν + G_ω dω = 0,   H_k dk + H_ν dν + H_ω dω = 0,

and solving this 2 by 3 system we obtain

(G_k H_ω − G_ω H_k) dk = −(G_ν H_ω − G_ω H_ν) dν,   (G_k H_ν − G_ν H_k) dk = (G_ν H_ω − G_ω H_ν) dω.   (4.5.3)

Since dν = 0 at the curve maximum, we can infer the condition

G_k H_ω − G_ω H_k = 0.   (4.5.4)

Using numerical continuation we can track the set of solution points (γc, kc, ωc) of the system (4.5.1), (4.5.2), (4.5.4) as some parameter is varied (e.g. the axonal propagation speed v). They will describe in (v, γ1)-parameter space a critical curve separating the regions of stable resting state from those of complicated spatio-temporal behaviour. To chart these curves we employ the package XPP with AUTO continuation [211]. Codes and listings are given in the Appendices.


4.5.1 The model with axonal delay


The Fourier–Laplace transform of (4.2.2) is calculated as

K̂(k, λ) = ∫_{R²} K(x, t) e^{−λt} e^{−ikx} dx dt = ∫_{R²} w0 (1 − |x|) e^{−|x|} δ(t − |x|/v) e^{−λt} e^{−ikx} dx dt = w0 ∫_R (1 − |x|) e^{−|x|(1+λ/v)} e^{−ikx} dx.

With ∫_R e^{−a|x|} e^{−ikx} dx = 2a/(a² + k²), we get

K̂(k, λ) = 2w0 [a²(a − 1) + k²(a + 1)] / (a² + k²)²,   a = 1 + λ/v.   (4.5.5)

The Laplace transform of the alpha function (3.1.9) is η̂(λ) = (1 + λ/α)^{−2}. Then, the dispersion relation γ1 η̂ K̂ = 1 can be brought to a polynomial form. The coefficients of this polynomial of sixth order, Σ_{n=0}^{6} a_n(α, v, γ1, k) λ^n = 0, are given in Appendix 9.1.1. Partial numerical analysis for this model has been done earlier by Coombes [212], but he was interested only in the possibility of a dynamic Turing instability for inverted wizard hat connectivity (see Section 4.1). Here we track all six roots of the dispersion relation across parameter space, for both signs of the connectivity function. We construct critical curves in the (v, γ1) plane delimiting the region of stability of the homogeneous steady state. We find that for w0 > 0 (inverted wizard hat) two pairs of complex roots can cross into the positive half-plane as γ1 is increased. One of these pairs (shown on Figure 4.2, left) crosses with nonzero wavenumber kc(v) and the other with kc = 0. The latter comes first for small values of v (large delays), giving a Hopf instability, and the former for moderate and small delays, giving the Turing–Hopf instability. A plot of all critical curves γ1 = γc(v) is shown in Figure 4.4. For the standard wizard hat connectivity (w0 < 0) we see, as expected, that the static Turing instability is independent of the axonal velocity v. Several important qualitative features for the case w0 > 0 become clear from Figure 4.4. For any finite non-zero v there exists a positive threshold γ1 = γc(v) at which modes with some wavenumber kc(v) become excited; on the other hand, as v → ∞, γc also tends to infinity. The latter observation is consistent with the failure of models without axonal delay to demonstrate dynamic pattern formation when the connectivity is of inverted wizard hat type.

25

20

15

d Dynamic Turing

10

Hopf
k =0,
c

0,
c

0
c

b Rest state

0
k =0

c
1

k =0, c

Static Turing
c

-5 0 2 3 4 5

Figure 4.4: The curves show the parameter values where eigenvalues of the model with
They indicate the instability threshold c as a function of the axonal speed v. The points a. to d. denote the parameter values for the respective solutions in Figure 4.3. axonal delay cross the imaginary axis. The sections with smallest |1 | are the critical curves.

For = 1 (as we will assume throughout) the dynamic instability is rst met for v = 1 at c = 8 with a wavenumber k c = 2 and dominates for medium and large speeds, the bulk oscillation (k c = 0) also appears rst at c = 8 for axonal speed v 0.46. At v 0.62 the two modes become unstable simultaneously, while we have not investigated analytically what the result of their interaction, simulations suggest that whenever the eigenvalues with k c = 0 are unstable, the bulk mode is mainly determining the dynamics even deep beyond the TuringHopf instability. As v becomes smaller than 0.1 the pattern wavenumber quickly, though continuously, shifts from k c = 0 to very large values. Direct numerical simulations of the full model (Section 4.6) show excellent agreement with the predictions of the linear instability analysis. Compare Figure 4.4 and 4.3.

82

Static instabilities and unbalanced kernels In all models we look at in this Chapter we choose a connectivity with global balance between excitation and inhibition in the sense could pick more generally w( x ) = w0 (1 A| x |)e| x| , where A controls the relative strength of excitation. The Fourier transform is w(k ) = 2w0 A ( k 2 1) + k 2 + 1 ( k 2 + 1)2
R R

w( x ) dx = 0. One

(4.5.6)

and the overall effect of the kernel is

have motivated our decision for xing A = 1, by arguing that it simplies the calculations by setting the steady state to zero, u = 0. Additionaly, examination

w( x ) dx = w(0) = 2w0 ( A 1). We

of the effect of different proportions of excitation and inhibition (the shape of the connectivity function) has been already done by [201, 202] for this model and by [140] for a model with discrete axonal delay that is small. Another reason is that we are more interested in dynamic Turing instabilities. It is easy to see that a balanced kernel provides the best circumstances for obtaining these. Set c = 0 to get a static instability. The dispersion relation (4.3.5) reduces to 1 = c w(k ). The bifurcation occurs when maxk w = 1/1 for some 1 . Investigating the function w(k ) we obtain two possible extrema. The rst is k = 0, w(0) = 2w0 ( A 1).

This represents a shift to another homogeneous solution when the parameter however it does not affect the critical curves for other instabilities since the linear analysis in Section 4.3 does not depend on the particular value of u. As gives a bifurcation point at k2 = c A 1 the bifurcation point moves to innity, c . The other extremum 3A 1 , A+1 c = 1 4A = w0 . w(k c ) ( A + 1)2 1 > 0 crosses c = 1/[2w0 ( A 1)]. This is a fairly uninteresting possibility,

This is the static Turing pattern we detected for w0 < 0. For A = 1 it appears at the highest value of c = 1, approaching c = 0 as A is increased to innity or decreased to zero. Thus a balanced kernel gives the simplest structure for the steady states from which we start our analysis, as well as providing the largest parametric space in which to look for primary dynamic instabilities. 83

4.5.2 The models with dendritic and axo-dendritic delays


The FourierLaplace transform of the axo-dendritic connectivity kernel (4.2.5) is K (, k ) =
R+

w(k, z ) E(z , ) dz .

In Section 3.1.4 we calculated the Fourier transform by time of the function E(z, t), the Laplace transform is, accordingly,
1 z +/D e , E(z, ) = D + /D

1 . DD

For a xed synaptic distance z0 we have K = w(k ) E(z0 , ) which is separable. The transform of the axonal connectivity is real, w(k ) = 4w0 k2 /(1 + k2 )2 and independent of . Hence, the critical wavenumber at all instabilities is a constant, the maximum of w(k ) gives k c = 1. The dispersion relation for dendritic delays is transcedental and there are countably many eigenvalues. However numerically (Appendix 9.1.2) we discover that only two eigenvalues can cross to the positive half-plane. Indeed, in the dispersion relation we have that the product () E() has to make up to a real number (at an instability point this real number is 1/w0 c ), it appears that this gives all but two roots having a real part = D, which is negative. For a synaptic position linearly correlated with the distance between neuronal much like the one for space-dependent axonal delays. One obtains E(z0 , ) multiplied by (4.5.5), however with a = 1 + + . Indeed, K (k, ) =
R2 R

sites, K ( x, t) = w( x ) E(z0 + | x |, t), the integral transform is calculated very

w( x ) E(z0 + | x |, t) H (t) et eikx dx dt


R

w( x ) E(z0 + | x |, ) eikx dx =

w ( x ) E ( z 0 , )e | x |

+ ikx

dx,

due to the specic form of E. We have plotted the critical curves for this system in dependence of the synaptic spread parameter in Figure 4.5, right. Note that here larger delay is to the right of the graph, unlike Figure 4.4. Comparing the two we can see that they display strong similarities a Hopf bifurcation for larger delays, a transition to TuringHopf for smaller; a static Turing instability for w0 < 0, which however can become a shift to another homogeneous state for larger . We expect this since both types of delay are space-dependent. At = 0 the model reduces to a xed synaptic position. If we are varying the closest 84

w0

1
Dynamic Turing

40

30

30

Hopf Dynamic Turing

k c=
c

20

20

10

kc=

kc
c

Rest state

10

Rest state

kc

=0

kc

=0

Static Turing -10

-10

Static Turing
0.5 1 1.5 2

kc=
1

c Static homogeneous
2 2.5 3

=0

z0

2.5

0.5

1.5

the plot) for a model with a passive dendritic cable. Left: With xed synaptic point correlated with the distance between neuronal sites by z0 + | x |, z0 = 1.

Figure 4.5: The Turing instability curves (only the curves with smallest |1 | in

contact point at a distance z0 from the cell body. Right: With synaptic contact Other parameters are = D = D = 1.

contact point z0 , the entire curves translate without deformation following the trajectories of Figure 4.5, left. From the calculation above we can see that to further include axonal delays into the model, one has simply to write a = 1 + /v + + in (4.5.5). The code given in Appendix 9.1.2 is for the dispersion relation of the full axo-dendritic delay model. For medium and large v the critical curves with respect to look very much like for v (that is, Figure 4.5, right). As v is decreased the at v 0.8 it comes rst even for = 0. If we look at the critical curves along v Hopf curve moves down, making the region for bulk oscillations larger, until (for a xed ) they resemble those for the axonal delay model (Figure 4.4) with

Also there is no TuringHopf instability for small v.

the exception that as v the TuringHopf curve tends to a nite value of c .

4.6

Conversion to PDE. Brainwave equations

When rst confronted with the oscillatory nature of EEG data many modellers (who were often from Physics background) put together systems of ODEs [213, 214] or hyperbolic PDEs [195] as a description of the cortical dynamics. The latter became known as brainwave equations. Although they could reproduce many 85

of the EEG properties, they had no physiological basis. Thanks to the brainwave equations a number of metaphors (compare to Section 2.1) and sometimes speculations entered the neuroscientic discourse. Analogies were made between the cortex and various physical systems such as the Earths magnetic eld or a resonating violin [215]. EEG rhythms were seen as spatial resonance modes of the cortical activity. Importantly, this introduced the idea that computations could be performed by the brain on a scale larger than the neuronal population through the interactions of brain waves. It was not clear however how this representation of cortex could tie in and interact with its neuronal structure and the microscopic cell activity. In this Chapter and in Section 3.4 we found that one could have travelling waves and other global patterns also in neural eld models. In fact, we found that the same pattern-forming mechanisms, that were previously studied for physical systems described by PDEs, operate in integral equations like (4.2.6). This has its explanation since one can nd choices of the convolution kernels for which there is a mathematical correspondence between the integral equation and a PDE equation, typically a high-order damped wave equation. Thus a nonlocal neural eld model can be converted to a local PDE formulation. This also removes the mystery behind brainwave equations and establishes a link between EEG waves and the neuronal dynamics and connectivity. Conversion to a PDE framework has been used by several authors [212, 216]. Still, our method of working directly with the integral model (4.2.6) is advantageous because only for a subset of the possible kernel choices does the model have a PDE form. For example, the explicit choices for the connectivity function w( x ) and the synaptic lter (t) were guided by the desire that there exists a PDE form for our examples. A PDE equation is much easier to simulate numerically and most of the numerical solutions here have been provided by the PDE scheme (we simulated also one of the integral models to conrm that it gives the same solutions). For the remainder of this Section we show how to convert the example models dened in Section 4.2 to their PDE forms. In Chapter 6 we will showcase an approach for constructing PDE formulations that approximate the dynamics of an integral neural eld which has no exact PDE form. The theoretical stability analysis however does not require a PDE form.

86

PDE for axonal delays The kernel of the model with axonal delays (4.2.1) has a rational FourierLaplace transform (4.5.5). We apply both transforms dened in (4.3.2) on the full nonlinear equation (4.2.1) to get u(k, ) = K f u = 2w0 1 VD Q PD f u,

( Q = 1 )

for some nice class of solutions u. Rearrange the terms (multiply by Q PD) and apply the inverse transforms to get the nonlinear PDE Q PD u( x, t) = 2w0 VD f u( x, t) with differential operators Q= t 1+ VD = t v
2 2

(4.6.1)

PD = 1+ t v
2

t 1+ v

xx
t v .

xx 2 +

Equation (4.6.1) is of 6th order in time and 4th in space. Now one can apply all methods for solving nonlinear differential equations in the usual manner. A differential framework is especially advantageous for the numerical solution of the equations. We present in the Appendix the codes that we use for brute force simulations of the integral equation (Appendix 9.2) and the PDE equivalent (Appendix 9.3.1). However we use the integral code as a check only, since it is many times more demanding on computing power. This is particularly so for a model with space-dependent delays, where apart from integrating data taken from the entire spatial domain, one has to keep history of it back in time. Therefore most gures in the Thesis have been produced by way of the differential equations.

PDE for dendritic delays In this case we can start from the formulation (4.2.3), (4.2.4). The cable equation is already in differential form, we need to take a FourierLaplace transform (by x and t) of (4.2.4): I (k, z, ) = ()w(k )(z z0 ) f v(k, 0, ). 87

The Fourier transform of the wizard hat function w( x ) is given by (4.5.6) (with A = 1). By sending the denominator to the other side and by inverse transform we get differential equations for the two variables v( x, z, t) (cable voltage) and I ( x, t) (synaptic input at z0 ): (4.6.2) t 2 2 (1 + ) (1 xx ) I ( x, t) = 4w0 xx f v( x, 0, t). The implementation of MatLab code for evolving this equation is given in Appendix 9.3.2. If we have also an axonal delay, the expression for w(k ) is simply replaced by (4.5.5) and we get an equation for the synaptic input with the same form as (4.6.1): 1 Dzz ) v( x, z, t) = I ( x, t)(z z0 ), D

(t +

(t +

1 Dzz ) v( x, z, t) = I ( x, t)(z z0 ), D Q PD I ( x, t) = 2w0 VD f v( x, 0, t).

(4.6.3)

In the case of correlated synapses, in equation (4.2.4) the -function involves one of the variables of integration, with the unfortunate effect of stripping away that integral: I ( x, z, t) =
R2

(t s)w(y)(z z0 |y|) f u( x y, s) dy ds = z z0 f v( x z z0 z z0 , 0, t) + f v( x + , 0, t) ds.

(t s)w

This leads to a differential system where the equations both for the voltage and the input current are two-dimensional: 1 Dzz ) v( x, z, t) = I ( x, z, t), D z z0 z z0 z z0 t , 0, t) + f v( x + , 0, t)]. [ f v( x (1 + )2 I ( x, z, t) = w (4.6.4)

(t +

It is much more complicated to solve numerically, see Appendix 9.3.4.

4.7

Scales at the bifurcation point

A characteristic feature of the dynamics of systems near bifurcation points is the separation of scales. An instability occurs when the real part of some eigen88

value becomes positive. It follows that for bifurcation parameter values innitesimally close to the critical, the eigenvalue in question has innitesimal real part. The growth of the corresponding eigenmode is much slower than all other dynamics of the system. This is the essence behind the derivation of the amplitude equations in the next Chapter we discard information about the short-term behaviour of the system, reducing the model to a description only at the slower time-scale. The simpler equations we obtain as a result, allow us to make some precise theoretical conclusions about the nature of the long-term system evolution. In this Section we prepare the ground for the derivation of the amplitude equations by rst quantifying how the fast and slow time scales relate to each other and by determining in an exact way the set of active (unstable) modes. The latter dictate the long-term evolution of dynamics since the stable modes decay to zero (to u) in a time O(1). The information about scales can be obtained from the linearised model. Essentially, it depends on the curvature of the spectrum at its maximum. For a value 1 > c near to the critical, a small part of the spectrum will lie in the positive half-plane (see Figure 3.8, left). The range of k which parameterises this segment of the spectrum curve constitutes the set of active modes for that value of 1 . Competition between those eigenvalues with positive real part (k ) will set the scale of the periodic pattern. Note that in general the fastest growing mode k m at a 1 = c might not be the same as the critical mode k c , since the spectrum might change its maximum as 1 is increased. We do not use the formulation of the dispersion relation (4.3.4) to obtain information about the scales. The only property we need is that the spectrum is smooth and we are at a point of Turing instability, c = 0, c /k = 0. This implies that the same scales will be in force at any Turing bifurcation. The dispersion relation (4.3.4) generally denes a surface ((k, 1 ), (k, 1 )) in

(k, 1 , , )-space. We perform a two-variable Taylor expansion of the function


(k, 1 ) around the critical point, (k, 1 ) = (k c , c ) + (k k c ) + (1 c ) k 1

+
(k c ,c )

2 2 1 2 2 2 (k k c ) 2 + 2(k k c )(1 c ) + (1 c ) 2 2 k1 k 1

(k c ,c )

89

From the conditions for a critical point, the rst two terms are equal to zero. The coefcients of the other terms are in general non-zero, e.g. G Gk 1 = 2 = 0, etc. 2 1 G + H H Hk (4.7.1)

It follows that the rate of growth scales linearly with the distance of the bifurcation parameter from the critical value. Analogous expansion for (k, 1 ) shows that to lowest order it remains unchanged. Concisely, c 1 c , c = o (1). (4.7.2)

To nd the set of active modes we have to solve the inequality (k, 1 ) > 0 with respect to k. Again from the Taylor expansion we have 2 2 2 + 2( k k c ) + (1 c ) 2 + , < (1 c ) 2 1 k1 k2 1

(k k c )2

with 2 /k2 < 0 because the dispersion curve is convex at the maximum. Here, the second and third term on the right-hand side belong to the next higher order in the expansion and thus to the lowest order we have only k kc

1 c .

(4.7.3)

The derivation of this result did not depend on the concrete dispersion relation. It will apply to any bifurcation of the TuringHopf type (apart from degenerate cases, for example (4.7.1) being zero). This includes also the bulk instability with k c = 0. Although we often refer to it as a Hopf bifurcation, beyond it can stabilise also modes with very large but nite spatial wavelengths. We checked those relations numerically at the instability of the axonal model (4.2.1). We have plotted on Figure 4.6 left, how the maximum of the growth rate (k m ) depends on 1 , and on the right, the width of the set of active modes around k m . To plot the former we omitted in code 9.1.1 the condition (4.5.4) which forces the system to stay on the bifurcation point. Rather we tracked the solution to (4.5.1), (4.5.2) as 1 was varied. Similarly to estimate the functional relation of the active set width we tracked the solutions to the system (4.5.1),(4.5.4). One can see that the functional relation is approximately parabolic. We stated in Section 4.3 that the eigenvectors at the point of instability are of the type A1 ei(c t+kc x) + A2 ei(c tkc x) + cc (here cc stands for complex conjugate 90

2.4 2.3
0.015

2.2 2.1 2 the set of active modes

0.01

0.005

1.9 1.8
8.2 8.4 8.6

0 8

8.2

8.4

8.6

Figure 4.6: (We would like to offer to the reader and ourselves a short respite with these
colourful designs much needed at nearly a hundred pages, and to gather strength for the heavy calculation that awaits us in the next Chapter.) A numerical check of the amplitude growth for v = 1 (the instability occurs at c = 8) for the model with axonal delay. Left: The maximal growth rate depends linearly on the bifurcation parameter. Right: The width of the unstable k-band depends quadratically (to lowest order).

of the preceeding expression). To trigger an instability we used perturbations of the form et+ikx (an arbitrary sum or integral of those with different k and s). When 1 > c , using the scalings (4.7.2), (4.7.3) we can write et+ikx = et eic t+i(kkc +kc ) x e(1 c )t ei(c t+kc x) ei

1 c x

(4.7.4)

Let us denote 1 = c + 2 where > 0 is arbitrary and gives the distance from the bifurcation point. For a small , we can use it as a denition for the scales of the different variables. We can express (4.7.2), (4.7.3) as 1 c = 2 2 ,
2

2 ,

c ,

k k c .

In (4.7.4) the dynamics is separated into fast eigen-oscillations ei(c t+kc x) and a slow modulation e t ex . The eigen-oscillations are identical for all terms in the sum of (4.7.4). We set as additional independent variables, = 2 t for the time scale of slow modulation, and = x for the spatial scale (longwavelength) at which the interactions in between the excited nearby modes k become apparent. Then, by subsuming the sum of terms e e (in fact, an integral, because we have dened the neural eld models on an innite domain, hence the set of unstable modes beyond the instability is a continuous band) into a function A(, ), we obtain that the eigenvector at the instability should 91

be written as A1 (, )ei(c t+kc x) + A2 (, )ei(c tkc x) + cc. We will use this result in the next Chapter to show that the growing solutions born in the linear instability converge to patterns with certain nite amplitude due to the nonlinear properties of the model.

4.8

Chapter summary

As it becomes apparent from the introductory Section 4.1, modellers have engaged with the study of many different concretisations of the neural eld model. Our interest has been primarily on the effect of delay mechanism. Here we proposed a generalised time-dependent connectivity K ( x, t) which encompasses all these models. So far presented the linear analysis of dynamic Turing instabilities occuring in such systems. We applied it to a few examples that have been of interest in the community: models with a contuctance delay and a diffusive delay and a mix of the two. We found that the effect of different types of delay does not differ substantially. An additional time scale of interactions (here introduced by the delay) is important to introduce dynamic instabilities, and instabilities for an inverted Mexican hat connectivity. We also discussed how to construct an equivalent PDE formulation of the models, which is easier to solve numerically. We have used these to conrm all our analytical results. Finally, we discuss some general features of the Turing-Hopf bifurcation which provide a stepping stone for the multiple-scale analysis in the next Chapter.

92

C HAPTER 5

The amplitude equations


In this Chapter we derive the envelope equations for the emerging oscillations after the point of instability that we found in Section 4.3. These equations present the normal form of the bifurcation [217], which is the simplest way in which a dynamical system can be expressed in a neighbourhood of a steady state, containing only the systems resonant terms. These are the terms that dene the system dynamics at a given steady state. For ODEs there is a standard way for constructing all resonant terms of a given dynamical system, which is founded on the Poincare-Dulac theorem [217, 218]. At a bifurcation point the system has a zero eigenvalue and the resonances make an innite sequence. Therefore for any practical purposes the normal form is truncated to some order. It is necessarily an approximation and gives useful results only for values of the bifurcation parameter near the instability. Even in that case, there are regions in parameter space of marginal dynamics. In those regions higher order terms have to be included to resolve between some qualitatively different types of dynamics. While a general theory about normal forms of nite-dimensional systems (ODEs) has been established by the rst half of 20th century, for spatially-extended dynamical systems (PDEs and integral systems, such as ours) constructing a normal form has to proceed more on a case by case basis. The theory predicts the general form of the equations but their precise derivation is based on methods whose proof could be established so far only for particular types of bifurcations and systems. That is why studies such as ours, pursue not just the derivation and analysis of the normal form, but also computer simulations of the original

93

system, in order to conrm the results. The normal forms at bifurcations of spatially-extended systems are commonly called envelope or amplitude equations. In Section 5.1 we derive the amplitude equations for patterns emerging beyond the point of TuringHopf instability for an integral system of the type (4.2.6). We omit the simpler case for the static Turing instability since it has already been analysed in [202]. We ultimately obtain the normal form for a bifurcation to patterns that have a non-zero group velocity. This type of normal form was rst found in [219] in the context of travelling wave convection. In Sections 5.2 and later, we show how the normal form helps to unravel system dynamics. For several examples we delimit the parameter regions of different types of dynamics. We interpret this new information in a manner useful to neuroscientists.

5.1

Derivation

The general form of the terms participating in amplitude equations can be seen easily by considering the symmetries and resonances in a system. Indeed the normal form of the TuringHopf bifurcation is well-known to be a system of two coupled complex GinzburgLandau equations [144, 220]. We will arrive at a variant of those in the end of Section 5.1.3. However, to obtain the coefcients in front of those terms more involved calculations are necessary. The method that we employ is multiple-scales perturbation analysis.

5.1.1 Scale separation


In Section 4.7 we saw that beyond the point of instability the dynamics could be separated into fast and slow, the fast being composed of oscillatory terms ei(c tkc x) determined by the equation eigenvalues at the bifurcation, while the slow is a modulation of these terms that remained undetermined at the linear order. For values of the bifurcation parameter prescribed as 1 = c + 2 , the slow dynamics evolves at scales = 2 t and = x. We introduce also an intermediate time scale = t. It is necessary for a consistent derivation of the normal form when /k = 0, although this becomes clear only near the end of the derivation. Therefore we can view the activity u as a function of x, t, , 94

and , namely, u( x, t, , , ) = A1 (, )ei(c t+kc x) + A2 (, )ei(c tkc x) + cc. The model (4.2.6) may be written u( x, t, x, t, 2 t) =
R3

(t r )K ( x y, r s) f u(y, s, y, s, 2 s) dy ds dr.

(5.1.1)

with which we have been working until now. Now the integration acts on the last three variables as well, due to the -dependence. We have to assess the contribution of the operator towards the various scales in the model. Here we will adopt the most general approach for doing this it is to break down the operator into an explicit polynomial by . The coefcients of that polynomial will each be convolutions, each acting at a single scale only. We begin by substituting for u an asymptotic expansion by the powers of , u u = u1 + 2 u2 + 3 u3 + . . . , with as of yet unknown terms ui = ui ( x, t, , , ). Additionally, we make a Taylor expansion of the nonlinear ring rate function, main as free parameters that will participate in the nal amplitude equations. f (u) = f (u) + 1 (u u) + 2 (u u)2 + . . . . The coefcients 2 , 3 , . . . re-

Note that the operator on the right-side of (5.1.1) is not the convolution K

When we consider specic models in Section 5.2.5 we will map their ring rate parametrisations and restrict these coefcients. To separate the scales at which the integration acts, we add and subtract the variables that are not being integrated upon, , and , e.g. ui (y, s, y, s, 2 s) = ui (y, s, + (y x ), + (s t), + 2 (s t)). This enables us to make a Taylor expansion in the last three arguments to get ui (y, s, y, s, 2 s) = ui (y, s, , , ) + (y x ) 2 1 2 + (s t) u (y, s, , , )+ i u (y, s, , , ) + O(3 ). i

(y x )

+ (s t)

+ (s t)

Now the variables y, s on which the integration acts are not coupled anymore with the slow variables and we can take out of the integral. We have a sum of 95

is convenient to introduce the notation t (t) = t (t),

convolutions of the type K . In order to write down these convolutions it tt (t) = t2 (t), K t ( x, t) = tK ( x, t), K tt ( x, t) = t2 K ( x, t).

K xt ( x, t) = x tK ( x, t),

K x ( x, t) = xK ( x, t),

K xx ( x, t) = x2 K ( x, t),

By using s t = (r t) + (s r ) to step through the intermediate time scale, we can write for each term (t r )K ( x y, r s)ui (y, s, y, s, 2 s) dy ds dr = K ui + K x + t K + Kt ui +

R3

2 2 2 2 + tt K + 2 t K t + K tt 2 + K xx 2 + 2 t K x + K xt 2 2 t K + Kt u ( x, t, , , ) + O(3 ) i

M0 ui + M1 ui + 2 M2 ui + O(3 ).
We denote with M j the segregated action of the operator on the different scales. After putting together all expansions we dened so far, the model (4.2.6) separates into an innite sequence of equations by the powers of . We truncate it to third order: u1 = M0 c u1 , u2 = M0 c u2 + 2 u2 + M1 c u1 , 1 (5.1.2i) (5.1.2ii) (5.1.2iii)

u3 = M0 c u3 + 22 u1 u2 + 3 u3 + u1 + M1 c u2 + 2 u2 + M2 c u1 . 1 1

5.1.2 Fredholm alternative


One can see that each equation above contains terms of the asymptotic expansion of u only of the same order or lower. This means that we can start from the rst equation and systematically solve for ui . In fact, if we put L = I c M0 = 96 I c K the system (5.1.2) has the general form Lun = gn (u1 , u2 , . . . , un1 )

and the right-hand side gn will always contain known quantities. Thus to construct the solutions of any nite truncation of the system (5.1.2) it is enough to know the inverse L1 of a linear operator. In terms of L the rst equation (5.1.2i) is Lu1 = 0 and we see that our entire solution space is going to be some perturbation of the kernel of L. In fact, we have already solved (5.1.2i) in Section 4.3 it is the condition for linear stability evaluated on the critical curve (4.3.5). We found that ker L = span cos(c t + can be expressed as a complex linear combination k c x ), cos(c t k c x ), sin(c t + k c x ), sin(c t k c x ) i.e. any solution u1 ker L u1 = A1 (, , )ei(c t+kc x) + A2 (, , )ei(c tkc x) + cc, (5.1.3)

where cc stands for complex conjugate of the preceeding expression and A1,2 are arbitrary complex coefcients variable only on the slow time-scales. There is an important restriction on the vectors that could participate in the right-hand sides gn . It is that the image of a linear operator has to be orthogonal to the kernel of its adjoint. A precise form of this statement is known as Fredholms alternative. Let M be a compact linear operator and let the complex number be given. Then either the inhomogeneous equation

( I M) f = g
has a unique solution f for every given vector g, or the homogeneous equation

( I M) f = o
has non-zero solutions. In the latter case, the inhomogeneous equation has solution f only for those g orthogonal to the kernel of the adjoint operator

< g, ker( I M) > = 0,


[221]. The restrictions for gn = gn (u1 , , un1 ) following from this result will take in u1 . Those equations will govern the slow dynamics near the bifurcation. Dene the functions (t) = (t) and K ( x, t) = K ( x, t) ( and K reected 97 around the point t = 0). The adjoint of L is the operator L = I c K ,

the form of equations for the amplitudes A1,2 as they are the only free quantities

see Appendix 9.4. The Fredholm alternative can be written in a more intuitive form: for all u ker L it is true that <u, gn > = <u, Lun > = < L u, un > = 0. The vectors ei(c tkc x) form a basis of ker L (it is the same vector space as ker L, since the dispersion relation is invariant under the change t t). calculating the four projections Therefore we will arrive at equations for the amplitudes Ai (, ) simply by

<ei(c tkc x) , gn > = 0,

<ei(c tkc x) , gn > = 0.

(5.1.4)

To alleviate notation let u+ = ei(c t+kc x) , u = ei(c tkc x) . It is easy to see that K u = (ic )K (k c , ic ) u = K u , K K (k c , ic ) u = u , i i K K x u = (ic ) K (k c , ic ) u = u ik ik K t u = (ic )

(the bar denotes complex conjugation) and likewise for the other variations of and K. The scalar products (5.1.4) expand as

<u , g2 > = <u , M0 2 u2 + M1 c u1 > = 1 <u , K 2 u2 + K x 1


+ t K + Kt c u 1 > = c u1 >+ < K (u ), 2 u2 > + < K x (u ), 1 < t K + K t ( u ) , c u 1 > = K K u , K+ u , c u 1 > + < c u 1 > = ik i i K ( K ) + ik i

< Ku , 2 u2 > < 1

2 K <u , u2 > + c 1

< u , u1 > =

2 K <u , u2 > + c M1 K <u , u1 > = 0 (5.1.5i) 1

and
<u , g3 > = <u , M0 22 u1 u2 + 3 u3 + u1 + M1 c u2 + 2 u2 + M2 c u1 > 1 1

= 22 K <u , u1 u2 > + 3 K <u , u3 > + K <u , u1 >+ 1


M1 c K <u , u2 > + 2 K <u , u2 > + c M2 K <u , u1 > = 0. 1

(5.1.5ii)

98

Here the FourierLaplace transforms () and K (k, ) are all evaluated at the points k = k c and = ic . By M we denoted the differential operators j
M1 =

+ , ik i

1 M2 = M1 2 + . 2 i

(5.1.6)

Since from (5.1.3) u1 is itself a linear combination of the vectors u and u , most scalar products in (5.1.5) are easily found:

< u + , u1 > = A1 ,
2

< u , u1 > = A2 ,
2

<u , u2 > = <u , A2 u+ + A2 u + 2A1 A2 u+ u + 2 1 1


A1 A1 u+ u+ + A2 A2 u+ u+ + 2A1 A2 u+ u + cc> = 0. The latter is zero because inside the scalar product all exponents are either of second or zeroth order thus nothing projects onto u . The cubic terms give

<u+ , u3 > = 3A2 A1 + 6A1 A2 A2 = 3A1 | A1 |2 + 2| A2 |2 , 1 1 <u , u3 > = 3A2 2| A1 |2 + | A2 |2 . 1


The rst solvability condition (5.1.5i) reduces to
M1 KA1,2 = 0,

i.e.

( K ) A1,2 ( K ) A1,2 = . ik i

(5.1.7)

The general solutions of these rst-order PDEs are A1 (, , ) = A1 ( + v g , ) = A1 ( + , ), A2 (, , ) = A2 ( v g , ) = A2 ( , ), where vg = ( K ) ik ( K ) . i

This means that all slow amplitude modulations A1,2 (, ) will be travelling with an intermediate group speed v g = /k to the left and right respectively. Note that since the relations above are established at a point only, (5.1.7) does On the contrary,
not imply that the action of M1 2 on the basis vectors will also result in zero.
2

1 M1 M1 KA1,2 2

1 = 2

vg ik i

2 A1,2 2 A1,2 =d = 0. ( K ) 2 2 (5.1.8)

99

To calculate the last remaining product <u , u1 u2 > in (5.1.5ii) we need to nd u2 . We can do this from the 2nd order equation (5.1.2ii). Since all operators there are linear, u2 will be some quadratic form of the basis vectors (4.3.6): u2 = form in the equation and use that m,n{0,1,2} Bm,n ei(mc t+nkc x) . To nd the coefcients Bm,n we substitute this L Bm,n ei(mc t+nkc x) = Bm,n ei(mc t+nkc x) c K ei(mc t+nkc x)

= Bm,n 1 c (imc )K (nk c , imc ) ei(mc t+nkc x) ,


and on the right-hand side, for example M0 2 A2 u+ 1
2

+ M1 c A1 u+ = c M1 (ic )K (k c , ic ) A1 u+ = 0,

= 2 K A2 u+ = 2 A2 (2ic )K (2k c , 2ic )u+ , 1 1


etc.

By matching the coefcients in front of similar vectors we obtain Bmn 1 c (imc )K (nk c , imc ) = 2 2
||m||n|| 2

A1 2 A2 2 (imc )K (nk c , imc )

m+n

mn

for m, n {0, 2} (we have put A1,2 = A1,2 for ease of notation), and

B00 1 c (0)K (0, 0) = 2 ( A1 A1 + A2 A2 + cc ) (0)K (0, 0). The M1 term in (5.1.2ii) did not play a role because it evaluates to the expression (5.1.7). To sum up, we have B00 = Bmn = 2 2
||m||n|| 2
m+n mn

2 2(| A1 |2 + | A2 |2 ) (0)K (0, 0) 1 c (0)K (0, 0) ,

A1 2 A2 2 (imc )K (nk c , imc )

1 c (imc )K (nk c , imc )

{m, n = 0, 2} \ (m, n) = (0, 0).

termined using this approach because their corresponding modes are in ker L. They are acted on by M1 in (5.1.5ii) in the following fashion:
M1 KB11 =

All other coefcients are zero, except for { Bmn | m, n = 1} which cannot be de-

The scalar product <u , u2 > yields the undetermined coefcients B11 ( + , , ). ( K ) ik B11 B11 + + ( K ) i B B11 v g 11 +

vg

2
100

( K ) B11 , ik

and this expression will appear in the amplitude equations. The last remaining scalar products are calculated as

<u+ , u1 u2 > = A1 B00 + A2 B02 + A1 B22 + A2 B20 =


A1 22 (| A1 |2 + | A2 |2 )C00 + A2 22 A1 A2 C02 + A1 2 A2 C22 + A2 22 A1 A2 C20 = 1 1 22 A1 | A1 |2 (C00 + C22 ) + | A2 |2 (C00 + C02 + C20 ) 2

and similarly

<u , u1 u2 > = A1 B02 + A2 B00 + A1 B20 + A2 B22 =

1 22 A2 | A2 |2 (C00 + C22 ) + | A1 |2 (C00 + C02 + C20 ) , 2 (imc )K (nk c , imc ) 1 c (imc )K (nk c , imc ) . (5.1.9)

with Cmn =

We are now in position to write out the third order solvability condition which gives the amplitude equations.

5.1.3 The mean-eld GinzburgLandau equations

Substituting in (5.1.5ii) all the necessary scalar products (calculated above) we obtain
A1 (c 1 + b| A1 |2 + c| A2 |2 ) 2 A2 (c 1 + b| A2 |2 + c| A1 |2 ) + 2

2 A ( K ) A2 ( K ) B11 = 0, + c d 2 2 + c ik + i

( K ) B11 ik

+ c d

( K ) A1 2 A1 + c = 0, 2 i +

with
2 b = c 1 33 + 22 (2C00 + C22 ) ,

2 c = 2c 1 33 + 22 (C00 + C02 + C20 ) .

Here we have used that since c is real c K = 1 also holds, and due to (4.2.7), system, the even ones to the real part. Since the coefcient in front of the time derivative is complex we cannot get rid of it by rescaling . 101 Cm,n = Cm,n . The odd eigenvectors in (4.3.6) lead to the imaginary part of this

To eliminate the unknown coefcients B11 we follow [219] and average the

two equations by the and + variables, respectively. For example, the only average B11 will be independent of and thus the derivative term vanishes. After averaging we obtain A1 2 A = A1 ( a + b | A1 |2 + c | A2 |2 ) + d 2 1 , + 2 A A2 = A2 ( a + b | A2 |2 + c | A1 |2 ) + d 2 2 , a = D,
2 b = D 33 + 22 (2C00 + C22 ) , 2 c = 2D 33 + 22 (C00 + C02 + C20 ) ,

quantities in the rst equation that vary by are B11 and A2 . However the

(5.1.10)

with appropriately modied parameters:

2 d= cD 2
D = c 2

vg ik i ( K ) . i
1

(5.1.11)

( K ),

These are the coupled mean-eld GinzburgLandau equations describing a Turing Hopf bifurcation with modulation group velocity v g of order 1, as discussed by [219, 222, 223]. As a reminder of the interpretation of terms in the above equations note that = (ic ), K = K (k c , ic ) are the Fourier transforms of the kernels specifying the model, = (1 c )/2 is a measure of the distance from the bifurcation point, and 1 , 22 , 63 are the rst, second and third derivatives of the nonlinear processing function f (u) taken at the homogeneous solution u. The only parameters not participating in the linearised model are 2 , 3 and they remain as additional degrees of freedom once we have xed a bifurcating solution to and right-going amplitude modulation wavetrains, moving with group velocities v g . The denition of the averaging A depends on the type of solutions with periods P1,2 . In this case we are looking for - localised or periodic. Here we will assume A1,2 are periodic 1 P1,2
P1,2 0

study. The dependent variables A1 ( + , ) and A2 ( , ) are respectively the left

| A1,2 |2 =

| A1,2 |2 d .

102

The form of the cross-interaction term in (5.1.10) compared with the coupled complex GinzburgLandau equations (quoted below) reects that since v g = O(1), the two wavetrains are moving across each other too fast to be able to feel the others ner spatial structure. Not surprisingly the equations (5.1.10) governing the slow-mode evolution are of the GinzburgLandau type. See Walgraef [145] or Newell [224] for discussion of the importance of this equation in pattern formation. It reects the general symmetry class of the pattern-forming system rather than the specic choice of the starting equation (4.2.6). Because of this the form of the amplitude equations A1 = A1 ( a + b| A1 |2 + c| A2 |2 ) + dA1 , A2 = A2 ( a + b| A2 |2 + c| A1 |2 ) + dA2 could easily be deduced by symmetry arguments as, for example, Bressloff [183] thoroughly describes. However the expressions for the coefcients a, b, c, d depend on the physics of specic model and have to be derived by the approach we used here. To this moment, rigorous proof of the reduction to amplitude equations by multiple-scales expansion exists only for the 1D real Ginzburg Landau equation [225]. In a different approach one could prove that the solutions of the GinzburgLandau system approximate the solution of the full system by constructing error bounds [226, 227]. This works for a larger number of systems, and has been used to show the validity of the complex Ginzburg Landau equation as well [227, 228].

5.2

Analysis of the amplitude equations

The coupled complex GinzburgLandau equations are one of the most-studied systems in applied mathematics. They give a normal form for a large class of bifurcations and nonlinear wave phenomena in spatially extended systems and exhibit a large variety of solutions. However, many of these appear due to the inuence of boundary conditions. Here, our focus is on innitely-extended systems. In a 1D-system the most common type of solution are the periodic travelling wavetrain and the periodic standing wave. These could be relevant to types of brain activity observed in neuroscientic experiments (see Section 3.5), therefore we are interested to delimit the region of parameters in which the 103

neural eld models support this type of dynamics. In the following Sections we present the theory behind travelling wave (TW) and standing wave (SW) selection and stability, and apply it to the models of interest from Chapter 4. It turns out that the case of stable standing waves is rather rare. We introduce a few more models, which are more complicated and more realistic, with the aim of nding a theoretical example supporting standing waves in sufciently wide parameter region. That model could then be used as part of the neuroscientists modelling inventory.

5.2.1 Travelling wave versus standing wave solutions


First let us neglect the diffusive terms in (5.1.10). This allows us to determine the conditions for the selection of travelling vs standing wave behaviour beyond a TuringHopf instability. The space-independent reduction of (5.1.10) takes the form
A1 = A1 ( a + b | A1 |2 + c | A2 |2 ),

Such a reduction is valid in the limit of fast diffusion, and is particularly relevant to the interpretation of numerical experiments on a periodic domain (with fundamental period an integer multiple of 2/k c ). Following a polar change of coordinates A1 = r1 ei1 , A2 = r2 ei2 , we separate the real and imaginary parts:
2 2 r1 = r1 ( ar + br r1 + cr r2 ), 2 2 r2 = r2 ( ar + br r2 + cr r1 ), 2 2 1 = a i + bi r 1 + c i r 2 ,

A2 = A2 ( a + b | A2 |2 + c | A1 |2 ),

a, b, c C.

(5.2.1)

2 2 2 = a i + bi r 2 + c i r 1 ,

(5.2.2)

with a = ar + iai , etc. The moduli and the phases separate and we can consider only the system for the moduli. Its stationary states can be identied to solutions of the original eld model (4.2.6). The trivial solution r1 = r2 = 0 corresponds to the homogeneous steady state u. It is stable for ar < 0. The steady
2 2 state r1 = 0, ar + br r2 = 0, along with its symmetric counterpart ar + br r1 = 0,

r2 = 0, exists for ar /br < 0.For the phase we have


2 = a i ar

bi = : e1 . br

104

The amplitudes are A1 = 0, A2 =

The neural eld solution is given as u( x, t, ) =

ar /br eie1 , up to an arbitrary phase-shift.


ar cos (c + 2 e1 )t k c x ). br

ar ie1 i(c tkc x) e e + cc = 2 br

This is a travelling wave with speed (c + 2 e1 )/k c . Another stationary state of (5.2.2) is
2 2 r1 = r2 =

ar , br + cr

1 = 2 = a i ar

bi + c i = : e2 . br + cr

The two amplitudes are equal, A1 = A2 , which leads to a standing wave: u( x, t, ) =

ar ie2 i(c t+kc x) e e + ei(c tkc x) + cc br + cr ar =4 cos (c + 2 e2 )t cos k c x. br + cr

The linear stability analysis of the stationary states in (5.2.2) leads to the following conditions for selection between travelling wave (TW) or standing wave (SW): ar > 0, ar > 0, br < 0, TW exists (supercritical) and for br cr > 0 it is stable, SW exists (supercritical) and for br cr < 0 it is stable. (5.2.3)

br + cr < 0,

In the parameter regions where the stationary states of the ODE system (5.2.2) do not exist, the amplitude of the bifurcating solution still remains nite if the nonlinearity f is a bounded function. However, the bifurcation is subcritical and higher-order terms are needed in the normal form to give us information about the amplitudes. We will not concern ourselves with these regions. We would like to translate the conditions (5.2.3) into ones which have clearer physical meaning in terms of the neural eld model. Since the amplitude equations are only valid near the point of bifurcation, 1 is xed to the value c . The only free parameters are those describing the nonlinearity: 2 and 3 . This motivates us to write out ar , br , cr , using (5.1.11), as ar = e, br =
2 33 e + 22 f,

with

e = Re D,

f = Re D (2C00 + C22 ) ,

2 cr = 63 e + 42 g,

g = Re D (C00 + C02 + C20 ) . 105

(5.2.4)

The expressions e, f, g depend only on the linear part of the neural eld model. They contain FourierLaplace transforms of the model kernels K ( x, t) and (t) (and their derivatives) and they are xed by the model parameters. For ex ample, when analysing the model with axonal delays (4.2.6), (4.5.5), e, f, g will be known numbers for every point c , k c , c (, v) along the critical curve of Figure 4.4. The regions in the space of nonlinear parameters of TW and SW existence and stability are uniquely determined by (, v). An illustration is Figure 5.3, left, where is the nonlinear parameter (steepness of f (u), see Section 4.2).

5.2.2 Structure of the TW and TW/SWgenerators


The selection conditions (5.2.3) divide the (br , cr )-parameter space into sectors where the bifurcation of the TW and SW branches is sub- or supercritical, and in the cases of a supercritical branch subsectors where it is stable or unstable. This information is summarised in the bifurcation diagram on the left of Figure 5.1 (see also [148]). There are a few things to note from this diagram. First, the domains of TW and SW stability are complementary. Also, whereever one solution is supercritical and stable, the other is supercritical as well (but unstable). The empty white regions show where both branches are subcritical. There the amplitude of the pattern determined from the cubic amplitude equation will grow unbounded. For a specic model (in our case, the integral model (4.2.6)), the conguration of the (br , cr )-space preserves its features while mapped onto the given model parameters. Due to the quadratic form of the expressions (5.2.4), all regions in the

(2 , 3 )-space will have parabolic borders with a tip at the origin 2 = 3 = 0.


The parabolas may only differ in their curvatures (second derivatives of (5.2.4)). The preimage of their axis of symmetry 2 = 0 is the line cr = 2br . Since we have e > 0 (ar > 0) beyond the bifurcation point, the branches are supercritical along the negative half of the 2 = 0 line (3 < 0). Thus, the points where TW or SW existence hold are always on the underside of the respective parabolas, even if the parabolas point upwards. Similarly, supercritical TWs (SWs) will be stable below (above) the stability parabola. Only half of the (br , cr )-space is mapped 106

There is only two possible congurations of the (2 , 3 )-space. They are illustrated in Figure 5.1, centre and right. They correspond respectively to a mapping with domain either the half-plane on the right side or on the left side of the line cr = 2br . We can nd from the system (5.2.4) the condition distinguishing which of the two congurations takes place. One obtains the equality
2 42 ( g f) = cr 2br .

Since the parameter 2 is real, we have that for f < g the domain is on the left of the cr 2br line with a resulting image as in Figure 5.1, right; and vice-versa for f > g. When f = g the parabolas for the existence borders of TW and SW and for the stability, all coincide. It is the only scenario by which the system can switch from one conguration to the other. The two congurations are important for the later parameter study of the model, therefore we will give them names. In the rst case (Figure 5.1, centre, f > g) only travelling waves could be stable. We will call this case a TWgenerator. In the second case (Figure 5.1, right, f < g) there are both regions of stable TWs and of stable SWs. We will call that a TW/SWgenerator. The convenience of these designations becomes apparent for example, since it is sufcient to check for a given model parameters that f > g to preclude the nding of stable SWs. In the case of TW/SWgenerator, however, one has to analyse further the dynamics in dependence of the nonlinear model parameters. In the following Section we show how to relate these to 2 and 3 .

5.2.3 Cubic and sigmoidal ring rate function


The general case of independent 2 and 3 can be emulated by choosing a cubic nonlinearity in the model, i.e. f (u) = 1 u + 2 u2 + 3 u3 . Note that in this case there is no truncation of the amplitude equations at (5.1.2) and it turns out that the amplitude equation predictions match very closely the numerical results even far beyond the bifurcation point. Since the ring rate is now unbounded, patterns outside the existence regions grow unbounded beyond bifurcation from the homogeneous steady state. Inside, as well, one needs to work with solutions whose amplitude does not become too large, since the cubic f (u) emulates the sigmoidal threshold shape only beetween the cubic 107

cr

unstable br=0 TW only

2b

S W

TW
(stable)

stable TW unstable
+ unstable SW 0 SW only

SW
(unstable)

br

T W

ta

+ unstable TW

le

stable SW

ta b le

Figure 5.1: Illustration of the simple rules following from the TWSW selection conditions
(5.2.4), see the text. Left: The full bifurcation diagram in terms of br and cr . Centre and Right: The two possible congurations of the parameter space (2 , 3 ) for the model (4.2.6). 2 and 3 are the second and third coefcients respectively of the Taylor expansion of the nonlinearity around the homogeneous steady state. Lighter grey denotes stable TWs, darker grey stable SWs. In those regions the complementary solution is always supercritical but unstable. We use stripes (TWs) and squares (SWs) to show where only one solution is supercritical, and it is unstable. The regions where the dynamics is not predicted by the cubic amplitude equations are left white. The thick black line is the image of br = cr , where TWs and SWs exchange stability. Note that all regions apart from SW-stability are on the underside of their parabolic border, they centre around the negative half-line 2 = 0, 3 < 0.

b
r

0 = cr +

TW-generator

TW/SW-generator

extrema u1,2 = bounded.

2 2 31 3 . 33

Beyond those points the solution will grow un-

Choosing any other function f in effect species a locally homeomorphic mapping, , of the (2 , 3 )-plane to the space of parameters describing f . For the sigmoidal function (3.3.4), f (u) = 1 , 1 + exp( (u h))

this space is in fact lower-dimensional because it is specied by only two parameters (while the cubic f by three): the threshold h and the steepness . We have f (u) = f (u) = , (1 + )2 1 , = e ( u h ), 1+ 2 ( 1) 3 (2 4 + 1) . f (u) = , f (u) = (1 + )3 (1 + )4 (5.2.5) 108

At the bifurcation point the transcendental equation c = f (u) reduces the number of independent parameters by one. For example we may use this equation to write h = h( ) and then express both 2 = f (u)/2 and 3 = f (u)/6 as functions of just the steepness . The relationship between h and for a xed c = 1 is plotted in Figure 5.2, left (the relationship is proportionate to the value c ). The parameter portrait on the two branches is identical because the axis of symmetry of (2 , 3 )space maps onto the middle of the graph h = 0. Note that for < 4c , there is no corresponding value of h which solves the condition the system to be at the bifurcation point c . The range of relevant values of h is very small, its width not exceeding 0.05. Due to all these reasons in this Chapter we prefer to choose for the independent parameter. From (5.2.5) we have = 1 2c 1 2c
2

1.

One can also work with as an independent model parameter. We do this when we supply exact data for the models simulation. It avoids the inaccuracy arising from numerically solving a transcedental equation to nd h( ). See Appendix 9.3.1.

5.2.4 BenjaminFeir instability


We have determined some aspects of the nonlinear behaviour of the full system by analysing the reduced amplitude equations (5.2.1). Basically, the stable solution of the system (5.2.1) is composed of one (TW) or two (SW) periodic wavetrains moving with speed determined by the linear instability. However, the diffusion term present in the full space-dependent amplitude equations (5.1.10) alters somewhat the nature of solutions. The generic instability associated with such diffusion, in an innite 1D system, is an instability to longwavelength sideband perturbations. This is caused by the inuence of nearby excited modes on the primary mode, it is the so-called BenjaminFeirEckhaus instability. When the system is beyond the bifurcation point, 1 = c + 2 , > 0 there is a continuous set of unstable modes around k c , which has a width 2 (see Figure 3.8, left, and Section 4.7). Let us consider one of these modes, with a 109

tension of that in Section 5.2.1. We present the mode amplitude in polar coordinates, A(, ) = r (, )ei(, ) . The appropriate form of the polar phase is be seen in the travelling wave case: ( , ) = e q for the left- and right-going wavetrain respectively, as can uleft,right ( x, t, , , ) = A( , )ei(c tkc x) + cc =

wavenumber k = k c + q, q ( , ). The following analysis is an ex-

2 rei(e q(vg )) ei(c tkc x) + cc = rei(c +vg q+ e)ti(kc +q) x + cc.

The TW has a wavenumber k c + q as required, the temporal frequency of the pattern is also determined: c + v g q + 2 e. To nd e and r (, ) we substitute A = rei in the amplitude equations (5.1.10), to get the expression ie = a + and the phase: r2 = br2 dq2 . The real and the imaginary parts respectively give us the modulus

ar + dr q2 b d , e = a i + bi r 2 d i q 2 = e 1 + d r q 2 ( i i ) br br dr (where e1 is the TW modulation frequency in the space independent reduction,


Section 5.2.1)). By far, we have completely determined in dependence of the parameters a TW solution with a wavenumber in the active set around k c . Below we will determine its stability in respect with all other such solutions. Before that we will look at SWs as well. For the mean-eld GinzburgLandau (MFGL) equations (5.1.10) the sideband stability criterion for SWs is much easier to derive than for the more commonly studied coupled complex GinzburgLandau (CCGL) equations corresponding to a bifurcation with vanishing group velocity. It coincides with the criterion for stability of TWs [219]. The reason is that the coupling between the left- and right-going wavetrains is much weaker in the MFGL system. The real part of the rst equation in (5.1.10) will give us
2 2 0 = ar + br r1 + cr r2 dr q2 . 2 2 Therefore r1 could vary only by and r1 = r1 . Using the second equation as 2 2 2 2 well, we get dr q2 ar = br r1 + cr r2 = br r2 + cr r1 . The only non-trivial solution

of the system is indeed a standing wave:


2 2 r1 = r2 =

ar + dr q2 , br + cr
2

e=

2 a i + ( bi + c i ) r 1

d i q = e2 + d r q
110

d bi + c i i br + cr dr

(5.2.6) .

500
100

max

80

60
c

t
40

(k,t)

20

-0.02

0.02

0
0 1 2

Figure 5.2: Left: The dependence of the gain on the threshold h for the sigmoidal function
(3.3.4) as determined by the condition to be at the bifurcation point 1 = c . Centre: An example of a BenjaminFeirEckhaus instability in the model with axonal delay (4.2.6),(4.2.2) and cubic ring rate. Parameters are the same as for Figure 4.4a with v = 1 and 1 = c + 2.6. The primary unstable wavenumber when 1 = c is k c = 2. Shown is the Fourier transform, u(k, t), of u( x, t) illustrating the active wavenumbers. Initial wave-data with k = 2.64 can be seen to transition to a pattern with wavenumber k = 1.98 (the periodic domain size is 19.04). Right: The prole of perturbation growth-rates ( ) leading to a Benjamin-Feir instability of the primary mode k c .

Note that by formally setting cr = 0 here, we obtain the TW solution. Now we investigate the stability of the off-centre TW and SW solutions. We perturb both the modulus and polar phase of the amplitude: A(, ) = (r + (, )) ei(e +q +(, )) . Substituting in the amplitude equations, we get an equation for the perturbations, + ir = 2r2 (b + c ) + d 2 2 + ir 2 + 2iq 2 + ir .

The spatial average of the perturbations is zero. Therefore, the equation for stability of the SW is the same as that for the TW (in which cr = 0). We set = e +i , = e +i . After a laborious calculation we obtain two perturbation growthrates: 1,2 = r2 br 2iqdi dr 2 r2 b 1

(2iqdr di 2 )(2iqdr di 2 + 2r2 bi ) . 2 r4 br

111

To solve this expression by hand we limit ourselves to long-wavelength perturA Taylor expansion for small gives bations 0, i.e. to the disturbance caused by the neighbouring active modes. 1 ( ) = 2r2 br + O( ), 2 ( ) = 1 2q2 d2 2 2iq(br di bi dr ) + (br dr + bi di ) 2 + 2 2r (br + bi2 ) 2 + O( 3 ). br r br

The rst growthrate relates to the stability of the central k c -wavemode. It repeats the conditions found for the reduced system of Section 5.2.1. Note that there br < 0 is true whenever the TW or the SW solution exists and is stable. The stability towards long-wavelength sideband perturbations is determined by the requirement Re 2 < 0: br dr + bi di + 2q2 d2 r |b| < 0. 2 b2 r r

This is the condition for BenjaminFeirEckhaus stability. Substituting r2 from (5.2.6) and solving for q2 , we nd that there is a band of modes q centred around the primary mode k c , which are stable to sideband perturbations: q2 < 2d2 (br r br dr + bi di a b2 . 2 + b2 d ( b d + b d ) r r + cr )|b| i i r r r r (5.2.7)

A solution with wavenumber outside this band will break up in favour of a wave with wavenumber from inside the stable band. In Figure 5.2, centre, we show the Fourier transform of a wave solution to the model from Section 4.5.1 undergoing a BenjaminFeirEckhaus instability. If the condition br dr + bi di > 0 (5.2.8)

holds (sometimes called Newell criterion), there is no stable wavenumber at all and several different regimes of spatiotemporal chaos can develop. This case is commonly referred to as the BenjaminFeir instability. The parameter conditions determining the different types of chaos are described in [229]. Benjamin Feir stability depends only on the self-coupling and the diffusion constants b and d. One can plot the regions in parameter space where it holds, similarly to the discussions in Sections 5.2.2 and 5.2.3. In the (2 , 3 )-plane the region also will be parabolic. For the MFGL equations the criterion for BenjaminFeir instability coincides for the TW and SW solution. However, the issue is complicated by the validity of 112

the amplitude equations. For group velocity v g of a small order using modied CCGL equations would be more appropriate. Whether v g is nite or small, depends on the relative size of the amplitudes, as well as, that of the system domain. Which are the relevant equations does not matter for the TW sideband stability as the criteria are the same in both systems. However SW sideband stability is much more complicated in the CCGL case, where a short-wavelength instability is possible as well, or the band of Eckhaus-stable q-modes might not be centred around the primary mode k c , or might not even contain it (the group velocity shifts sideways the Eckhaus parabola (5.2.7)). These subtleties are thoroughly analysed by Riecke and Kramer [223]. They nd that the range of the bifurcation parameter values 1 beyond the critical c might be quite narrow for the MFGL system. As one gets further away from the bifurcation point and the pattern amplitude increases, the CCGL dynamics comes into play. It could even re-stabilise SWs that were unstable for smaller 1 s. Untangling the numerous SW sideband stability boundaries is beyond the scope of this dissertation, however the example of SW BenjaminFair instability shown in Figure 5.9 is probably of this re-stabilisation type. The Fourier-transforms of the model solutions show that the primary mode k c looses stability as predicted by our theory. However rather than evolving toward a chaotic solution, a new type of wave stabilises. Unfortunately, we were not able to test BenjaminFeir instability of a TW. In all the other models we look at, both TWs and SWs are BenjaminFeir-stable through the signicant part of the regions where they are stable to space-independent perturbations (see the next Sections). We need to express a few more quantities that we use to search for sideband instability in concrete models (they are implemented in the codes in Appendix 9.5). By setting q = 0, we concentrate on stability of the primary wavenumber k c i.e. a BenjaminFeir instability. In this case the Taylor expansion of 2 ( ) is simpler, we write it out to a higher order: 2 ( ) = d2 | b |2 2 br dr + bi di i 2 2 2 + O( 6 ). br 2r br (5.2.9)

This function is plotted on Figure 5.2, right. When we try to spot Benjamin Feir instability in simulations of the neural eld model, it is best to load (or evolve to) the stable TW or SW solution, and add a periodic perturbation with wavenumber 1 corresponding to the maximum 2 on this plot. The time nec113

essary to register the instability transient is 1/2 (1 ) (which could still be quite long and computationally expensive). From (5.2.9), it is given by 1 = 1 di

br dr + bi di ar b2 . |b|2 (br + cr ) r

(5.2.10)

The width of the band of growing perturbations is 2 ( ) = 0).

21 (found by solving

5.2.5 Examples revisited


Let us show how to apply the theory we developed so far to concrete neural eld models. Here we determine the regions of TWSW selection for the example models dened in Section 4.2. As in Chapter 4 we x the synaptic time constant to = 1, plotting along the parameters v, z0 or (respectively: axonal conduction velocity, synaptic distance from the soma and synaptic spread). We found the linear instabilities and plotted the critical curves for these models in Section 4.5. Now we are interested in the space of nonlinear parameters along the critical curves for TuringHopf instability. The parameters are 2 and 3 for cubic ring rate, or and h for sigmoidal ring rate. First, we look at the model with axonal delays (4.2.2). Using the scripts given in Appendix 9.5 we can plot (not shown) the two quantities f and g to discover that for all v we have a TW-generator ( f > g). Therefore, this model could support stable dynamic Turing patterns only of travelling wave (TW) type. For a cubic ring rate, the conguration of (2 , 3 ) parameter space is similar to the illustration in Figure 5.1, centre, at any value of v. For the sigmoidal ring rate, as we discussed in Section 5.2.3, there is just one independent nonlinear parameter. We can produce a two-dimensional plot showing the TWSW selection at every bifurcation point c (v). It is shown in Figure 5.3, left. It conrms that we have a TW-generator everywhere, but it shows the exact parameter boundaries in which the model will evolve to a travelling wave solution. Here the dashed line is the minimum for which the analysis is valid. It is given by = 4c , for a less than that there is no corresponding value of h which can give us a bifurcation parameter 1 = f (u) at the instability value (see Figure 5.2, left, and the remarks in Section 5.2.3). The 114

amplitude equations are not relevant also for the range of speeds v 0.15 to

a Hopf bifurcation (Figure 4.4). For parameters in the white space above the order to predict the system dynamics. We would like to have a convenient way to summarise information along the

v 0.65, since there another eigenvalue becomes rst linearly unstable through

SW-existence boundary, one requires higher order terms in the normal form in

instability curve c (v) also for the cubic ring rate with parameters 2 and 3 . More so because the parameter space of any other ring rate function is given by a mapping that preserves the conguration of selection regions that we have here. All region boundaries in the (2 , 3 )-plane are parabolas whose tip and axis of symmetry coincide, as seen in Figure 5.1. Therefore they divide the innite plane in a way similar to how straight lines would divide it into sectors. We decide on plotting a measure of the proportion of parameter space that the parabolic sectors take up. These region proportions for the axonal model are shown in Figure 5.3, right. A value of zero, means the parabola is at and divides the plane in two halves, along the 2 -axis. A negative value is a parabola curved downwards, with 1 being the limit of innitely thin parabola coincide with the positive 3 -axis. The TW- and SW-existence and the TWstability regions are always below the respective boundary parabola (refer to Section 5.2.1). Therefore, in Figure 5.3 the region widths are given by the space below the plotted lines, e.g. a value of 1 means that a region covers the entire plane. The formulas that we used to plot these widths are discussed in Appendix 9.5.1. Here we mention only that they are not a well-dened measure for comparing the parameter regions (regions which extend to innity). They are intended to give only an intuitive impression of the parameter space conguration. Looking at Figure 5.3, for example, one can conclude that it is not possible in the axonal model to obtain dynamics beyond a BenjaminFeir instability: the sideband stability line is everywhere above the existence boundary for TWs, while SWs are unstable. On the plot are shown also some isolines for . The sigmoidal function maps only a part of the (2 , 3 )-plane into ( , h). If a parabola has positive curvature and is very steep, it might not cross at all the domain of the map. We have plotted the largest regions whose boundary 115 coinciding with the negative 3 -axis. Correspondingly, for a value of 1 it will

120

0.2

100

Homogeneous oscillations

0.1

80

SW TW
0

60

40

-0.1 20

-0.2 0

Figure 5.3: TWSW selection regions along the TuringHopf instability of Figure 4.4, that is,
for the model with axonal delay and inverted wizard hat connectivity. Left: For a sigmoidal ring rate. The analysis is only valid for above the thick dashed line and a corresponding h (obtained from Figure 5.2, left). Notation is the same as in Figure 5.1: light grey region shows

stable TWs and unstable SWs, grey squares show that only unstable SWs are supercritical. The thick line is where br = cr (the stability line). The domain of BenjaminFeir stability coincides almost exactly with the TW existence region and has been omitted. Right: The region proportions in (2 , 3 )-space: TW existence boundary (light grey), SW existence boundary (dark grey), stability line (black), Benjamin-Fair line (thin). The dashed lines show isolines for the mapping to a sigmoidal ring rate parameter , from bottom to top: = 40, = 120, .

would appear into a parameter plot for the sigmoidal ring rate, when this plot sigmoidal map domain. extends to = 40, = 120 or . The latter line gives the end of the

Similarly, we follow the critical curve of the model (4.2.6), (4.2.5), with dendritic delays with a xed synaptic site z0 . The instability curve is shown in Figure 4.5, left, and the conguration of nonlinear parameter space in Figure 5.4, left. We nd that the only stable solutions for every z0 are travelling waves. The case of correlated synaptic distances is more interesting. On Figure 5.4, right, are shown the selection region proportions along the TuringHopf instability from from Figure 4.5, right. The point where the TW and SW existence borders and the stability line cross, is where the conguration changes from a TWgenerator to a TW/SWgenerator. Therefore, for a synaptic spread parameter > 4.23 one could, in principle, obtain stable SWs. There are also parameter windows of BenjaminFeir instability of the stable SW solution and even the stable TW solution, around = 1.3. However, the scale on the abscissa in Figure 5.4, right, points that the relevant spans of 2 and 3 are rather tiny. A SW solution 116

would be overwhelmed in model simulations by the proximity of TW and of unstable dynamics. Moreover, these regions are only available for a cubic ring rate. They fall well above the domain boundary of the sigmoidal ring rate map (dashed black line). The TWSW selection plot for the sigmoidal parameters in Figure 5.5, left, conrms this. In conclusion, the dominant dynamics for the model with dendritic delays is again a travelling waves. We can track the value of separating the TW and TW/SWgenerator conguration in a second parameter. For example, for the model with both dendritic and axonal delays mentioned in Section 4.5.2, we can track along the axonal conduction velocity v. How to do this is described in Appendix 9.5.3, the result is shown on the right of Figure 5.5. The diagram is consistent with the results we obtained for the special cases investigated until now. The left side of the diagram ( = 0) shows the model with axonal delay only (Figure 4.4), while the model with dendritic delay only (v , Figure 4.5, right) resembles a horizontal section of the upper part of the diagram. Now we can summarise the dynamics of neural eld models with delays beyond a dynamic Turing instability. There are synchronous homogeneous oscillations for large delays, whilst for small ones there are travelling waves. Intermediate between these two extremes there is a TW/SWgenerator. However, we have checked that for any value of v the regions of SWs there remain insignicant. To obtain standing waves, as well as, dynamics beyond a Benjamin Feir instability we have to look further to more complicated models. This leads us to the following Section.

5.3

Spike frequency adaptation

In real cortical tissues there is an abundance of metabolic processes whose combined effect is to modulate neuronal response. It is convenient to think of these processes in terms of local feedback mechanisms that modulate synaptic currents. Here we consider a simple model of so-called spike frequency adaptation (SFA): there is a negative feedback term that lowers the averaged neuronal response in periods of high activity in the population neighbourhood. It is a more general version of the models previously discussed in [163, 205, 206, 207]. For

117

0.06 0.08

0.04 0.04

0.02

0.5

z0

0 1.5 2 2.5 0 0.5 1 1.5

Figure 5.4: The region proportions in (2 , 3 )-space for the model with dendritic delay, along the TuringHopf instabilities from Figure 4.5, left and right, respectively. Notation is the same as in Figure 5.3, right: TW existence boundary (light grey), SW existence boundary (dark grey), stability line (black), Benjamin-Fair line (thin). Dashed isolines: = 100 and . biological background to SFA and modelling at the level of single neuron, see for example [230] and references therein. The neural eld setup of the model is u = [ K f u a a ] ,

t a = a +

wa ( x y) f a (u(y, t)) dy,

(5.3.1)

where a > 0 is the strength of coupling to the adaptive eld a and wa ( x ) = e(| x|/) /2 describes the spread of negative feedback. By solving for a with the Greens function a (t) = et H (t), where H (t) is the Heaviside function, we can reduce the model to a single equation for u( x, t) u = (K f a a wa f a ) u. Typically, the adaptative feedback is considered to depend linearly on the local activity, with a f a (u) = a u. However, a nonlinear adaptation term also could be a relevant model. In particular, the two population model presented in Section 3.3.1 can be reduced (assuming linear inhibitory feedback, f I (u) = u) to a single equation having f a f f E . This is the lateral inhibition model studied by [231]. We will not dwell on this relation and refer to our version below simply as a model with nonlinear adaptation.

118

BjF stable
4

400

300

SW

TW

TW-generator

TW/SW-generator

200

100

Homogeneous oscillations

0.2

0.4

0.6

0.8

1.2

0.2

0.4

0.6

0.8

1.2

Figure 5.5: Left: The regions of existence of TWs (light grey, stable) and SWs (light grey plus squares, unstable) and their BenjaminFeir stability (shaded) for the model with correlated dendritic delays and sigmoidal ring rate (a sigmoidal map of Figure 5.4, right). Right: Two-parameter continuation of the borders between instability types for the model with axo-dendritic delays, inverted wizard hat connectivity (w0 = 1) and z0 = = = D = 1. For a point

(, v) chosen from within the TWgenerator (TW/SWgenerator) the (2 , 3 )


parameter space has the form of Figure 5.1 centre (right).

5.3.1 A linear combination of convolutions


In view of discussing models with adaptation or separate excitatory and inhibitory neuronal populations (two-population models, Section 3.3.1) we will develop the theory for an equation with two (or more) convolution structures, u = (K f + Ka fa ) u. (5.3.2)

Since it is a straightforward generalisation of the calculations in Chapter 4 and Section 5.1 we give a rather schematic exposition. It may serve also as a resum of the steps of linear and weakly nonlinear analysis of a TuringHopf instability in a neural eld model. First, the models homogeneous stationary state is u = 0 K00 f (u) + Ka00 fa (u) . We write the indices K mn as a shorthand for the arguments, K (mk c , inc ), since everywhere in the amplitude equations participate only integer multiples of the 119

critical frequencies, k c and c . The Taylor-expansion of the ring rates gives u u = K 1 (u u) + 2 (u u)2 + 3 (u u)3 + + Ka a 1 (u u) + a 2 (u u)2 + a 3 (u u)3 + .

11 11 The eigenvalues are roots of the dispersion relation dr1 (k, ) = 1, where dr1 =

1 1 K11 + a 1 Ka11 . The derivation of the Turing conditions does not differ from that in Section 4.3 other than replacing the quantity 1 K in (4.3.4) with
11 11 the more general dr1 . As it can be seen from the form of dr1 , all parameter pairs

(1 , a 1 ) lying on a line with slope (Ka11 , K11 ) are the same distance from the
11 bifurcation point. Thus dr1 is a rst integral for the bifurcation parameter. The

system dynamics would be identical along this line. This includes the disper11 sion curve (k, 1 , a 1 ) = (k, 1 K11 + a 1 Ka11 ) = (k, dr1 ). The Taylor expan11 sion of the dispersion curve (k, dr1 ) formally corresponds to that of (k, 1 ) in

Section 4.7. As a result, we can write


11 11 dr1 drc = 2 . 11 11 From here, the equation dr = , where dr = 1 K11 + a Ka11 , gives us the

coordinates of along the two bifurcation parameters, i.e. 1 = c + 2 and a 1 = a c + 2 a . When we make an asymptotic expansion of u( x, t, , , ) and separate the scales of the neural eld, we obtain identical operators Mi as those dened in Section 5.1, and Ma i where , K are replaced by a , Ka . The sequence of equations, corresponding to (5.1.2), looks like Lu1 = 0, Lu2 = M0 2 u2 + M1 c u1 + Ma 0 a 2 u2 + Ma 1 a c u1 , 1 1 Lu3 = M0 22 u1 u2 + 3 u3 + u1 + M1 c u2 + 2 u2 + M2 c u1 + 1 1 Ma 0 2a 2 u1 u2 + a 3 u3 + a u1 + Ma 1 a c u2 + a 2 u2 + Ma 2 a c u1 , 1 1 ... with L = I c M0 a c Ma 0 . The group speed is found as vg =
11 drc ik 11 drc . i

The subscript of dr is the subscript of the participating s, i.e. dr1 is the linear part of the model (in Fourier space), while dr2 , dr3 , . . . are the higher nonlinear 120

20
Takens-Bogdanov

8
16

Dynamic Turing

Dynamic Turing
TW/SW-generator

12

Rest state
8

Static Turing

TW-generator

0
Dynamic

-2

Static Turing

Hopf

Turing

Homogeneous oscillations
TW/SW-generator

-4 0 2 4 6 8
0

Dynamic Turing

Figure 5.6: Left: The linear instabilities of the model with axonal delay v = 1 and localised ( = 0) linear adaptation with strength a . At a = 8 the steady state u looses stability for 1 = 0 and therefore does this independently of the systems spatial setup. Right: Organisation of emergent dynamics in a model with axonal delays and spike frequency adaptation with standard wizard hat connectivity (w0 = 1). Here = 0, = a = 1. orders, drimn = m i K mn + a i Kamn , the parameters are redened as
11 a = D dr ,

(5.3.3)

at the (m, n)th growing mode. The amplitude equations remain (5.1.10), only

11 11 b = D 3dr3 + 2dr2 (2C00 + C22 ) , 11 11 c = 2D 3dr3 + 2dr2 (C00 + C02 + C20 ) ,

1 d= D 2 with
11 drc D= i

vg ik i and Cmn =

(5.3.4)

11 (drc ),

mn dr2 . mn 1 drc

If fa (u) 0, so that a i = 0 above, we recover (5.1.11).

5.3.2 Analysis of models with SFA


We will investigate the model (5.3.1) for axonal delay (4.2.2) and a linear adaptation term, a f a (u) = a u. We have that u = 0 is a a homogeneous steady 121

state for any a . The analysis of its linear stability, implemented in Appendix 9.1.3, reveals critical curves with respect to the parameter v similar to those of the model without adaptation (Figure 4.4). Increasing the adaptation strength a shifts and compresses the critical curves toward the zero axis 1 = 0. For w0 = 1 the instabilities remain dynamic Turing leading to TWs. At a a =
c a := 8 they reach the zero axis. The critical curves do this simultaneously for c every v (they have become at). Let us resolve what happens at a = a . If we

dominant and it is local. The dispersion relation for the stability of u is simply for a = 1 and a = 8. Therefore, for a > 8 any solution will grow expo1 = a a . Solving this by hand we obtain that u = 0 would loose its stability

pick 1 0 the system is in fact not spatially extended, the adaptation term is

nentially, independent of the initial datas wavenumber. This was conrmed by simulations of the full model. On the left of Figure 5.6 we plot the critical curves with respect to a , for a xed v = 1 and = 0 (representing local feedback). The case with = 0 (non-local feedback) is qualitatively similar. The positive half-plane shows the linear stability portrait for the inverted wizard hat model (w0 = 1): both the Hopf and TuringHopf modes come lower with increasing a . The nonlinear analysis tells us that the Turing-Hopf instability remains a TWgenerator everywhere. However, the negative half-plane reveals new interesting dynamics
c for the standard wizard hat model (w0 = 1): for a sufciently large a < a

the static Turing instability is replaced by a dynamic TuringHopf instability. This statement is true for all v. Hence, dynamic instabilities are possible in a

model with short range excitation and long range inhibition provided there is sufciently strong adaptive feedback. Indeed, this observation has previously been made by Hansel and Sompolinsky [207]. In fact, in a model with Mexican hat connectivity and linear adaptation term but lacking space-dependent delays (v ) Curtu and Ermentrout [206] discovered homogenous oscillations, travelling waves and standing waves in the neighbourhood of a TakensBogdanov bifurcation. The latter is a codimension two bifurcation where two complex conjugated eigenvalues become real simultaneously with crossing the imaginary axis [232]. Therefore this is the interface where a dynamic TuringHopf instability turns into a static Turing instability. By tracking with respect to v the points where one type of insta122

0.4

12

TW
0.2

10

SW

sta

ble

TW
8

sta

ble

5.5

6.5

7.5

5.5

6.5

7.5

Figure 5.7: Model with linear localised SFA. TWSW selection along the Turing Hopf instability curve for standard wizard hat, v = 1 (see Figure 5.6, left). Here we have a TW/SWgenerator. Both TWs and SWs are BenjaminFeir stable. Left: Region proportions for cubic ring rate. Right: After mapping by the sigmoidal ring rate. bility is replaced by another in Figure 5.6, left, we can construct the diagram on the right (see Appendix 9.5.3). It shows the type of dynamics we would observe at each 1 = c (v, a ) for the model with standard wizard hat. We have located the Takens-Bogdanov point, as well as determined that through the larger part of TuringHopf instability parameter space we have a TW/SWgenerator. Here, the left edge of the diagram (a = 0) corresponds to the axonal delay model of Figure 4.4, (no dynamic instability possible), the upper edge section through v = 1 gives the instability portrait on the left of the Figure. The Takens-Bogdanov point rst appears for v 6.27 at a 2.51. We should check if the TW/SW-generator offers large regions of stable SWs or not. The plots in Figure 5.7 and 5.8 show that this is our rst instance of easily accessible SWs, also for the case of sigmoidal ring rate. Numerical simulations conrm this. The region proportions on the left of Figure 5.8 hint also that this might be the rst instance where we can observe a BenjaminFeir instability. There is one more subtle point to make when we are trying to simulate the model in this regime. In the linear instability portrait a Hopf eigenmode comes very close to the critical mode for larger a s. This is shown on the right of Figure 5.8. This eigenmode inuences very strongly the emerging patterns, prac123 corresponds to the model of Curtu and Ermentrout (v ), and a horizontal

0 0.8

-1 0.4

Rest state Hopf

Takens-

-2 0 -3

Bogdanov

Static Turing

point

Dynamic Turing
-4

2.5

3.5

4.5

5.5

-5

Figure 5.8: Model with linear localised SFA. Left: TWSW selection along the TuringHopf instability curve for standard wizard hat, v = 8, cubic ring rate. At a 3.38 the parameter space conguration switches from TW/SW to TWgenerator. In this model there are large regions where TWs or SWs are with v = 8.

Benjamin-Feir unstable. Right: Zoomed in linear stability portrait, w0 = 1, tically adding another slow scale and making the nonlinear analysis invalid. Thus, we cannot take advantage of the right half of the nonlinear parameter space, where TWs become BenjaminFeir unstable. Instead we will be looking for unstable SWs for a < 2.9. An example of such instability is shown in Figure 5.9. On the plots of Fourier space one can clearly see that the initially stable primary wavenumber k c slowly looses stability thanks to a sideband perturbation (for the setup, see Appendix 9.5.2). However, we were not able to observe chaotic solutions as expected from the theory. The question as to why the transition settles to modulated ordered solutions remains open (it is also possible that this is due to the relatively small spatial domains on which we were able run the simulations). We consider briey the example of nonlinear adaptation, with the choice f a = a f in (5.3.1). In the case of purely local feedback ( = 0) the linear instability portrait for the model with axonal delay (Figure 4.4) does not change qualitatively when a is increased from zero, although all instability curves, for both tion model they do not hit 1 = 0 but tend asymptotically to it with a . w0 = 1 and w0 = 1, move towards the zero axis. Unlike the linear adapta-

The steady state u at 1 = 0 remains stable for every a . A region of TW/SW 124

800

2000

t
1600 0 2400

54.8 0

200 0

x
a
0 0

54.8 0

c
u
2000 0 54.8 0 3

u(x,t) (k,t)

max max

Figure 5.9: A BenjaminFeir instability of standing waves in the model with linear SFA with a cubic nonlinearity. Parameters: v = 8, a = 2.66, 1 = with the primary unstable wavenumber k c = 1.144, but wavenumbers in its neighbourhood start to grow (a). After some time new nonlinear stability conditions come into play which, depending on the initial condition (respectively k = 1.0395 and k = k c ), might let only a single wavenumber survive (b) or stabilise a more complicated conguration (c). generator appears for medium speeds, shown in Figure 5.10, left, though the SW bands in (2 , 3 ) space are very small. However, they signicantly widen for nonlocal feedback and can also be observed for the case of sigmoidal ring rates. An example of the parameter space for nonlocal feedback is given on the right of Figure 5.10. c + 0.1, 2 = 1.3, 3 = 0.4. The initial condition favours a standing wave

5.4

Recapitulation

In this and the preceding Chapter we tried to systematise the knowledge about periodic patterns in one-dimensional neural eld models. Although the example models that we investigated have been discussed before, it was done by a multitude of authors who used varying specications and approaches. We introduced a generalised time-dependent connectivity K ( x, t) which encompasses all these models, and applied a unied method to reveal their dynamics.

125

Homogeneous oscillations

0.1

TW/SWgenerator
TW-generator
-0.1

Homogeneous oscillations

12

Figure 5.10: The dynamics of the model with axonal delay and nonlinear adaptation with strength a and range . Left: Twoparameter plot for local adaptation, = 0. TW/SWgenerator appears, but the SW stability regions are in fact small. Right: TW/SW selection for = 3, a = 1. Standing waves can be observed even for a sigmoidal ring rate. The parameter plots of different models in this Thesis are compatible with each other, often they are revealed as particular sections of a single parameter space. This makes explicit the dynamical unity of different neuronal mechanisms, such as dendritic and axonal delays. Using a time-dependent connectivity also holds potential for applying our theoretical results to novel types of neural elds for example incorporating dynamic synaptic connections, as would be the case for a network undergoing learning [233]. Many of the example models considered here, have been studied previously to the level of linear analysis only. We went further, applying weakly nonlinear theory and deriving the amplitude equations for a TuringHopf instability, in order to be able to differentiate travelling wave and standing wave solutions as well as conrm their sideband stability. The work of Curtu and Ermentrout [206] comes closest to our own, deriving amplitude equations for a neural eld with spike frequency adaptation. However it assumes that there is no longwavelength spatial modulation and arrives at the space-independent reduction of amplitude equations (which we give in Section 5.2.1). Our result that the normal form of TuringHopf bifurcation in neural elds may be described by nonlocal amplitude equations (MFGL equations), is new. Previous derivations of the normal form have been for simpler neural elds with separable spatial and

126

temporal interactions (e.g. without delay mechanism), in which case the normal form is local (CCGL equations), see for example [234]. Curtu and Ermentrout also demonstrated that their neural eld model has stable standing wave solutions in neighbourhood of a Takens-Bogdanov bifurcation. One advantage of our approach is that we implemented our theoretical results into computer tools (Appendices 9.1 and 9.5) allowing us to chart in parameter space the regions of different dynamics of the models. Thus, we could improve on their work by reporting whether standing waves are commonly found solutions for the model, or require a great deal of ne tuning for the parameters, which would be inpractical in real tissue experiments. Our conclusions are that static Turing patterns are reserved to neural elds with standard Mexican hat-type connectivity (local excitation, distal inhibition). Additional mechanisms such as delays or neuronal adaptation introduce dynamic Turing patterns to neural elds with inverted Mexican hat connectivity (local inhibition, distal excitation). All of these lead to travelling wave patterns, standing waves have rather small parameter windows, which for a sigmoidal ring rate are lost. The case of linear spike frequency adaptation is a major exception where dynamic Turing instabilities exist for a standard Mexican hat connectivity. These have substantial and easily accessible regimes of stable standing waves, as well as of BenjaminFeir unstable solutions. These observations have been thoroughly conrmed by simulations of the full neural eld models using the code given in Appendices 9.2 and 9.3.

127

C HAPTER 6

Planar neural elds with delay


Up to this point of the Thesis we have developed our understanding about neural elds and about their pattern forming abilities by using models dened on a one-dimensional topology. Notwithstanding the obvious inadequacy of line brains we have shown with examples in Chapter 3 that 1D models can be relevant to the theory of brain dynamics. One can test experimentally the predictions of the 1D modeling theory by preparing thin cortical slices in a suitable manner as described in Section 3.5.1. On the other hand, in Section 3.5.2 we gave an example of a specic functional neuronal system that is best represented mathematically as residing on a periodic 1D domain. Beside looking for specialised neuronal systems to which we could apply our admittedly simplied modeling tools, we would like to be able to model also the bulk dynamics of brain areas. This is especially interesting to do for cortex as there are accessible technologies that allow us to observe that bulk dynamics in the living brain: EEG, MEG and others, refer to Chapter 2. It is best to represent cortex as a two-dimensional sheet of neuronal populations. This is justied by the two-dimensional nature of cortical signal detected by most of these imaging technologies, as well as the strong vertical coupling between neurons within cortical microcolumns. Therefore, ignoring the depth of the cortical sheet, we would like to develop an ability to analyse neural elds with delays, dened on 2D domains. In this case the main difculty is not the theoretical analysis, which is rather similar to that for the 1D equations, but the numerical solution of the model. A numerical scheme for the model in integral form, if based on the ideas from Appendix

128

9.2, would involve at each time-step updating as well as summing over a vast three-dimensional array of the activity history. On the other hand, as we will see below it is not possible to transform the model to a PDE framework in an exact way, as we were able to do for many examples in the 1D case. The primary point of Section 6.1 is the procedure that we propose for constructing a PDE system approximating the dynamics of the integral model. Here we have to leave the generality of the two previous Chapters. We derive the PDE form for a 2D neural eld with axonal conduction delay and isotropic connectivity. Its linear instabilities and their numerical conrmation are presented in Section 6.2. As a nal contribution, in Section 6.3 we discuss pattern formation in a model with spatially modulated anisotropic patchy connectivity. This is relevant to modelling the visual cortex.

6.1

Integral and PDE form in 2D

ogous to (4.2.1). However, in the 2D case using a formulation with separable excitatory and inhibitory populations instead of a wizard hat function (see Section 3.3.1) results in more transparent calculations, as each neuronal population can be described by a simple exponentially decaying connectivity function. Let about r in a population a, induced by input from population b. The following denition holds for any number of neuronal populations, giving a possibility to include in the model the vertical layered structure of cortex. However, we will make use of only two populations, excitatory (a, b = E) and inhibitory (a, b = I). The synaptic activity is the result of the linear action of a synaptic response lter ab on the presynaptic averaged spike input ab : u ab = ab ab . The symbol represents a temporal convolution in the sense that
t

Here we dene a model on the plane r R2 , r = (r, ), r R, [0, 2 ), anal-

u ab = u ab (r, t), (t R + ) be the spatially averaged activity of synapses centered

(6.1.1)

( )(r, t) =

ds (s)(r, t s).

As previously, we will use the normalised second order synaptic lter (3.1.9), (t) = 2 tet H (t). 129

The variable ab (r, t) describes the presynaptic input to population a arriving from population b, which we write as ab (r, t) =
R2

wab (r, r ) f b hb

|r r | r ,t v ab

dr .

(6.1.2)

We take the connectivity function to be translationally invariant and isotropic, thus wab (r, r ) = wab (|r r |). We pick exponentially decaying synaptic footprints for all populations, wab (r ) = w0 ab er/ab , 2 r = |r|, (6.1.3)

differing only by their spread parameters ab . The function f a is the ring rate (3.3.4) of population a, and v ab is the mean synaptic axonal velocity along a bre connecting population b to population a. We take the variables h a to be a linear sum of all neuronal input, ha = with h0 a

uab + h0, a
b

(6.1.4)

a constant drive term. Liley et al. [235] have proposed a more sophisti-

cated version, where h a is interpreted as the average soma membrane potential governed by a nonlinear equation. However, we will stick to a linear sum for simplicity.

6.1.1 Derivation of the PDE form in 2D


The numerical solution of the model (6.1.1),(6.1.2) is challenging for two reasons, in particular. The rst being that the nonlocal presynaptic input term (6.1.2) is dened by an integral over a two-dimensional spatial domain, and the second, that it involves an argument that is delayed in time. Since this delay term is space-dependent, it requires keeping a memory of all previous synaptic activity. The huge numerical overheads in simulating such nonlocal systems have motivated investigators to look for ways to convert the model to a local PDE formulation. In the 1D case, as we saw in Section 4.6, the integral neural eld equations have PDE equivalents called brainwave equations, typically damped wave equations. To obtain a 2D version, initially, in the brainwave equation the spatial derivatives were simply replaced by gradients, improper results. 130
x

[236]. Later, the linking to neural elds revealed that this naive approach gives

2 ,

The following steps for deriving the correct form are analogous to those in Section 4.6. We introduce the 3D Fourier transform by (r, t) = 1 (2 )3
R3

(k, )ei(kr+t) dk d.

Introducing the generalised connectivity kernel Gab (r, t) = Gab (r, t) (where r =

|r|) with

Gab (r, t) = wab (r )(t r/v ab ),

(6.1.5)

allows us to rewrite (6.1.2) as ab (r, t) =

R2

Gab (|r r |, t s)b (r , s) dr ds,

(6.1.6)

where a = f a h a . This has a convolution structure, therefore its Fourier trans-

form is ab = Gab b . Note that for a xed axonal delay (6.1.5), Gab reduces to

a two-dimensional Fourier transform of a radially symmetric function i.e. to a Hankel transform [237], namely, H wab (r )eir/vab . Indeed, G (k, t) =
R3 2 0 0

w (r ) t
0

r i(kr+t) e dr = v

R2

w(r )ei(kr+r/v)

w(r )ei(kr cos +r/v) r dr d

=2

w(r )eir/v rJ0 (kr ) dr = H w(r )eir/v .


2 0

Here, J0 (z) = (2 )1

deiz cos is the Bessel function of the rst kind of

order zero. Importantly, we nd that Gab (k, ) = Gab (k, ) with |k| = k. If Gab (k, ) can be represented in the form VD ab (k2 , i )/ PD ab (k2 , i ) then we have that PD ab (k2 , i )ab (k, ) = VD ab (k2 , i )b (k, ). By identifying k2 model in terms of the operators 2 and t . However, unless the functions PD ab of these operators is unclear. For example, for the choice of exponential synaptic footprint (6.1.3), setting A ab = 1/ab + i/v ab we have [237] w0 A ( )r ab = w0 e ab Gab (k, ) = H ab 2 Introducing the operator A ab : A ( )
3/2 2 A ab ( ) + k2

2 and i t , then a formal inverse Fourier transform will yield a local

and VD ab are polynomial in their arguments then the interpretation of functions

(6.1.7)

A ab =

t 1 + , ab v ab 131

apart the 2D and the 1D case. At this point, Laing and Troy [216] proposed that one should approximate the nonrational Hankel transform Gab (k, ) with a rational function and work with the PDE system obtained by inverse-transforming the approximation. They suggest that one could use a Pad approximant (a generalisation of a Taylor series [238]), or a least squares t over a favourite range of values in the Fourier space. However, they studied a neural eld without delay terms which has a real Hankel transform. The time-dependent connectivity Gab (r, t) has a complex Hankel transform Gab (k, ) (since A ( ) is complex). How to match a given approximant to it is not obvious. More generally, Laing and Troy argue that the dynamics of an integral model with any connectivity that is constant in time can be approximated by a PDE system whose Fourier transform is a rational series. However, it remains unclear if a good approximation in the Fourier space will result in good matching of the dynamics. To avoid this point one may reconstitute a connectivity w(r) having the series as its Hankel transform, and dene that as the model of study. In that case there will be no approximation involved. Varying the parameters associated with the series one would move through the space of models with exact PDE formulations. The trade-off is that the connectivity functions of interest may fall outside that space. Unfortunately, this is exactly the case for the biologically important Mexican hatshaped and exponential functions. The approximation that has come into common use whenever researchers need 2D equations to simulate the mean-eld cortical dynamics does not dwell on these ne points. It is obtained simply by Taylor expansion in neighbourhood of k = 0 of the irrational denominator in (6.1.7):
L Gab (k, )

the problem arises as to how to interpret A2 2 ab

3/2

. This difculty sets

A ab ( ) + 3 k2 2

w0 ab

(6.1.8)

In effect, this approximation is only valid in the long-wavelength limit, yet the corresponding PDE has become the standard brainwave equation to use for modelling 2D cortical dynamics [235, 239, 240, 241]. For our model (6.1.1),(6.1.2) it is 3 A ab 2 ab = w0 b . (6.1.9) ab 2 The PDE system is completed by substituting ab = Q ab u ab , where Q ab =

(1 + t / ab )2 is the operator associated with the Greens function (t) in (6.1.1)


132

(see Section 3.1.3). We refer to (6.1.9) as the long-wavelength model. Higher order approximations can be obtained by expanding to higher powers in k [242], but all resulting higher order PDE models will still be long-wavelength approximations.

6.1.2 Improved approximations


A Bessel approximation To obtain a PDE model that side-steps the need to make the long-wavelength 4 (6.1.10) [K0 (z) K0 (2z)] , 3 and K0 is the modied Bessel function of the second kind of order zero. We B(z) = replace the exponentials in the model connectivity with this expression, which we will call Bessel approximation. It is particularly useful since K0 ( az) has a simple rational Hankel transform given by 1 . + k2 approximation we use the observation [243] that ez B(z), z C, where

H [K0 ( az)] =

a2

For the approximation of the Fourier transform Gab (k, ) we have


B Gab (k, ) =

w0 ab H er/ab ei/vab 2 w0 4 ab H K0 ( A ab ( )z) K0 (2 A ab ( )z) 2 3 4w0 1 1 = ab . 2 2 3 2 2 A ( ) + k 4 A ( ) + k


ab ab

(6.1.11)

Note that here we approximate after we have performed a Laplace transform on Gab (r, t). An inverse Laplace transform of B( A ab z) will not give the original generalised connectivity. In particular, the velocity of axonal conductance for
B the approximate model Gab will not coincide with the parameter v ab . This is

discussed in more detail below. The PDE system following from (6.1.11) is

A2 2 ab

4A2 2 ab = 4w0 A ab b . ab ab

(6.1.12)

We expect that it has dynamics similar to those of the original neural eld with axonal delay (6.1.1),(6.1.2), and that we can use this brainwave equation to replace the nonlocal model whenever simulations of 2D cortical dynamics are 133

required. The numerical scheme for evolving (6.1.12) is the PDE formulation
B for an integral model with time-dependent connectivity Gab , which we can cal-

culate below.

Connectivity and velocity proles, comparison of the approximations The approximation B(z) ez is proposed as a better match to the connectivity than the long-wavelength one. To compare them we can take inverse Hankel
L transform of Gab (k, ) and obtain the spatial prole L(z) that effectively re-

places er with the long-wavelength approximation. The inverse transform of (6.1.8) is


L H1 Gab (k, ) =

w0 2 ab K0 2 3

2 A ( )r 3 ab

therefore, ez L ( z ) =

2 K0 2/3 z . 3

(6.1.13)

The approximation L(z) is poor for both large and small values of z = A ab ( )r: 2 1 L(z) = lim ln = , z z 0 3 z 0 e z 2 L(z) e(1 2/3)z = , lim z = lim |z| 3 |z| e 2 2/3 z (using that K0 (z) ln z for |z| 0, and K0 (z) /2z ez for |z| , lim

[244]) while being acceptable for medium values (see Figure 6.1, left). More dis-

turbing is that L(z), which is required to model the spatial neural connectivity, is innite at the origin. The Bessel approximation B(z) on the other hand is nite by construction. It is comparably close to ez for medium values of z, but a much better approximation than L(z) for the extreme values: 4 B(z) = ln 2 0.924, z z 0 e 3 B(z) 4 lim z = lim = 0. 2z |z| e |z| 3 lim Note that in the last line the functions diverge linearly, rather than exponentially as for L(z). The comparison of the two approximations L(r ) and B(r ) to the exponential function er is shown in Fig. 6.1, left. While the better match of B(r ) should make us optimistic, a method to relate the approximation error in Fourier space with the degree of discrepancy between 134

/=>
1.5

0.5

0
0.5

0 0

-0.5
1 2 3

Figure 6.1: Left: Comparison of the exponent er (black) and the functions that we use to
approximate it: L(r ) (dashed), B(r ) (dotted) and O(r ) for 1 = 1, 2 = 0.937 (dashdotted). Right:
B L The Greens functions for propagation of synaptic activity Gab (r, t) (grey) and Gab (r, t) (black) at

a point r = 1 for parameters v ab = 1, ab = 1 (v ab = 1, ab = 1).

the dynamics of the models is lacking. The problem of determining the better of two different approximations to a neural eld has not been posed until now. We propose to use for comparison the very equations dynamics. Specically, we would match the points of linear instability in the models parameter space. We believe this is a useful criterion since the end goal of mathematical analysis of neural models is usually to nd their linear instabilities. As we saw in Chapter 4, there is no difculty in determining these for nonlocal neural elds. We would determine also the instabilities of each of the approximating systems, and at the end choose the better match. We follow the details of this procedure in Section 6.2. It turns out that the Bessel approximation (6.1.12) gives an exact match to the integral model (within computer error), while the long-wavelength approximation (6.1.9) has a quantitatively different instability portrait. This can be seen in
B Figure 6.4. However, there are other difculties with Gab that make it unsuit-

able for a model of neural dynamics. Thanks to Ingo Bojak and David Liley for pointing these out [personal communication]. Namely, if we calculate the timeB dependent connectivity function Gab (r, t) leading to the Bessel approximation

(we do this next), we nd that it generates an unphysiological negative second pulse of synaptic activity. In the original model Gab (r, t) = wab (r )(t r/v ab ) a pulse of synaptic activity 135

reaches a distance r from its source at the exact instant of time t = r/v ab . Any approximation in the Fourier space breaks this nice representation of G (r, t) and leads to a smearing out in time of the pulse (because Gab is a Greens function for the propagation of activity). Let us illustrate this by deriving rst the
L connectivity function Gab (r, t) corresponding to the long-wavelength approxi-

mation (6.1.8). An inverse 3D Fourier transform yields


L Gab (r, t) =

1 (2 )3

1 (2 )2

L Gab (k, )eit 0

2 0

eikr cos d dk d w0 v2 ab ab
2 3 2 2 2 k v ab

J0 (kr )

(v ab /ab + i ) +

eit d dk.

Using that the Fourier transform of a function of the form sin(t)e |t| is

sin(t)e( +i )|t| dt =

, ( + i )2 + 2

we infer that
L Gab (r, t) =

1 2

2 0 w v 3 ab ab

J0 (kr ) sin

3/2 kv ab t evab t/ab dk H t r 3/2 v ab ,

w0 evab t/ab ab 9t2 6r2 /v2 ab

where the Heaviside function ensures that we take into account only past time. We can see here that the information starting off as an instant pulse at t0 = 0, r = 0 arrives at locations r as a time-extended (exponentially decaying) pulse following the instant t1 = 2/3 r/v ab . Note that the conductance velocity of the long-wavelength model is in fact v ab = 3/2 v ab . This is another consequence of the careless denition of the approximation (6.1.8). One has to re-dene the spatial scale as well, ab = 3/2 ab , to obtain a correct approximation to (6.1.5). It has a connectivity (or Greens) function
L Gab (r, t) =

w0 ab 3

evab t/ab t2

r2 /v2 ab

H t

r v ab

The connectivity for the Bessel approximation is calculated in the same way. Let us re-set (6.1.10) for clarity of notation as B(z) = 4/3 [K0 (z) K0 (z/)], = 1/2. We obtain v ab t/ab 2w0 r B ab e Gab (r, t) = H t 3 v ab t2 r2 /v2 ab 136 evab t/ab t2 r2 /2 v2 ab r . v ab

H t

In Figure 6.1, right, we plot the time-course after a distance r = 1 of an innite


B pulse (0). In the case of Gab there appears a second negative pulse trailing be-

hind at time 2t1 . For small r it could be as signicant as the positive pulse. This rather unphysiological feature that was pointed out to us by Bojak and Liley, is what prevents us from proposing the Bessel model as the better approximation to use for representing 2D neural elds with axonal delay as PDEs. On the other hand, note that this is the only approximation that we found to give an exact match with the instability portrait of the original model (see Section 6.2). It is possible to interpret the spread-out of a pulse also in terms of a distancedependent velocity prole. When the synaptic activity propagates with a distribution of axonal velocities q ab (v), we can write the generalised connectivity (6.1.5) as

Gab (r, t) = wab (r )

q(v) t

r v

dv = wab (r )

v2 q(v) r

.
v=r/t

(6.1.14)

Rearranging (6.1.14), it becomes clear that in the general case the velocities are dependent on the axonal length, q(v, r ) =
r rGab r, t = v . v2 wab (r )

In the original model the velocity distribution is simply q ab (v) = (v v ab ).

For the long-wavelength model this gives q(v, r ) = and for the Bessel model, q(v, r ) = e(r/ab )/(v/vab ) e(r/ab )/(v/vab ) H (v ab v) v 1 v2 /v2 ab K0 (r/ab ) K0 (r/ab ) . e(r/ab )/(v/vab ) H (v ab v) v 1 v2 /v2 K0 (r/ab ) ab ,

Note that in the second case we obtain a velocity distribution starting at the correct velocity v ab . Fig. 6.2 shows plots of q(v, r ) for both the long-wavelength and the Bessel approximations of the model. Both have two peaks in the velocity distribution: one at v = 1 (v = 1) as expected, and one for small r and v. This second peak however is strongly localised and hence merely introduces an insignicant overall delay to a travelling pulse. 137

q(v,r)
0.2

0.15

0.1

0.05

0 4 3 2 1 0 0 0.5

4 3 2 1 0 0 0.5

Figure 6.2: The space-dependent velocity distribution q(v, r) for the long-wavelength (left)
and the Bessel approximation (right), plotted for v ab = 1. For the original integral model it is just q(v) = (v v ab ). [Image due to Liley and Bojak.]

The optimal approximation We felt that since the Bessel approximation introduces a trailing negative pulse of activity, its application to modelling cortical activity is compromised. Here we skip over a number of unsuccessful attempts to construct ad hoc substitutions to ez that would give us both physiologically sound properties of the model and improved match of the linear instabilities with those of the original model. Finally, we decided to work with an expression including free parameters which we would be able to t for obtaining the best linear instability match. Following our experience up to now, we choose O (r ) = 1 (K0 (r/1 ) K0 (r/2 )) . 2 2 1 2

2 2 The prefactor 1/(1 2 ) ensures common normalisation with er . The ex-

pression O(z) is nite, including when 1 = 2 . If the parameters satisfy the

Figure 6.1, left, where we put 1 = 1 and 2 = 0.937).

2 2 condition 1 2 = ln 1 ln 2 we also have that O(0) = 1 (as in the plot in

An important point to make is that now we replace only the spatial part of the exponential e(1/ab +i/vab )r . In this way the positive and negative pulse in the Greens function arrive at the same time although they may have different 138

decay rates (resulting in a shape similar to the alpha function). The substitution is er/ab eir/vab with the re-dened
2 1

1 [K0 ( A ab;1 ( )r ) K0 ( A ab;2 ( )r )], 2 2 A ab;i ( ) =

(6.1.15)

1 i + . i ab v ab The Hankel transform of the connectivity is


O Gab (k, )

2 1

w0 ab

2 2

( A2 1 ( ) + k 2 ) ab;

( A2 2 ( ) + k 2 ) ab;

(6.1.16)

O and the Greens function (the connectivity Gab (r, t)) is O Gab (r, t) =

2 2 (1

w0 ab

2 2 )

evab t/1 ab evab t/2 ab t2 r2 /v2 ab

H t

r v ab

Clearly, in this case the positive and negative pulses propagate together (superimposed on each other), independently of the choice of 1 and 2 . Setting (1 , 2 ) = ( 3/2, 0) and compensating with ab = 2/3 ab recovers the long-wavelength connectivity. However, rather than picking values of the parameters beforehand, one can apply an optimisation procedure that gives the best approximation of the original model by some criteria. The criteria could be a t of the connectivity Gab (r, t), or of its transform G (k, ) in the full complex space, etc. Doing this is rather difcult and we choose to look for a good match of the linear instabilities in the models parameter space. In any case, the PDE model that would replace the integral equation (6.1.6) is

(A ab;1 2 )(A ab;2 2 )ab = w0 B ab b , ab


with the operators t 1 + , i ab v ab 1
2 2 2 ab 1 2

(6.1.17)

A ab;i =

B ab =

1+

21 2 ab t . (1 + 2 ) v ab

6.2

Linear instability analysis

Here we explore the stability of the homogeneous steady state of the integral model (6.1.1),(6.1.5),(6.1.6) and its various approximations proposed in Section 6.1. 139

Let h a (r, t) = hss denote the homogeneous steady state. Substituting it into a (6.1.6) we obtain u ab = ab Gab f b (hss ) = Wab f b (hss ), b b with the coefcients Wab =
R

wab (r) dr. Therefore we have the closed form

hss = a

Wab f b (hss ) + h0. a b


b

Linearising around this solution and considering perturbations of the form system of eigenvalue equations ha = h a (r, t) hss = h a et eikr , in a manner similar to that in Section 4.3, gives the a

ab ()Gab (k, i)b hb .


b

(6.2.1)

Here the bifurcation parameters are a = f a (hss ). Demanding non-trivial soa

lutions yields an equation for the continuous spectrum = (k ) in the form

E (k, ) = 0, where

E (k, ) = det(D(k, ) I )

(6.2.2)

with elements [D(k, )] ab = ab () Gab (k, i)b . A Turing instability occurs at the smallest values of the bifurcation parameters b for which there exist some non-zero kc satisfying Re (kc ) = 0 (if kc = 0 the emerging solution is also homogeneous). Generically, in 2D one expects to see the emergence of doubly periodic solutions that tessellate the plane, namely travelling waves with hexagonal, square or rhombic structure (cref. Section 3.5.3 and Figure 3.14). In the model with delays (6.1.5), similarly to Chapter 4, we expect to nd dynamic instabilities with Im (kc ) = c = 0. The spatially periodic patterns would oscillate with temporal frequency c or move coherently with speed c = c /k c (where k c = |kc |). Here we will consider two populations, one excitatory and one inhibitory. We
0 will use the labels a, b { E, I } and set w0 EE,IE > 0 and w I I,EI < 0. In this case

the determinant (6.2.2) is

E (k, ) = [1 I I G I I I ][1 EE GEE E ] EI IE GEI G IE I E ,

(6.2.3)

where Gab = Gab (k, i) and ab = ab (). In cortex the extent of excitatory 140

connections is broader than that of inhibitory connections (inverted Mexican

20

20 15

Turing-Hopf
10 10

Hopf

static shift

0
static Turing

Hopf

-4

-3

M1

-2

-1

0 0

10

12

Figure 6.3: Critical curves for the integral model (6.1.1),(6.1.2) with excitatory and inhibitory
population (see parameter identications in the text). Left: Dependence on the relative inhibitory strength w0 , for a = 1, E = 2 and v = 1. Right: Dependence on the axonal delay, for I a = 1, E = 2 and w0 = 4. I

hat connectivity, Chapter 4) and so we take aE > aI . For simplicity we shall set aI = 1 and aE = , aI = I = 1 and aE = E , and w0 = w0 = 1 aE E (i.e. output from the excitatory neurons stimulates equally the inhibitory and excitatory populations) . We focus on only a single axonal conduction velocity and set v ab = v. The dispersion relation (6.2.3) is solved in a manner similar to the 1D case, the codes are given in Appendix 9.6. An example plot of the linear stabilities of the integral model (6.1.2) for as the ratio between the excitation and inhibition strengths is varied, is shown in Figure 6.3, left. By extensive parameter investigations we found that dynamic Turing instabilities in a model with two populause a single ring rate function f a = f . Combined with the earlier parameter properties independently of the target population. The plot in Figure 6.3, left, shows that when E = 2 (the excitatory connections are on average reaching two times farther than the inhibitory), one can times stronger than the excitatory). By substituting k = 0, = 0 in (6.2.3) one nds that the critical curve giving a shift to another homogeneous steady state 141 observe a TuringHopf instability for w0 4 (inhibitory connections are four I tions are possible when E I and w0 w0I , therefore we set w0 = w0 and EI I aI I identications, we obtain a model where the population output has the same

2 2 is = 1/(w0 + E ). In general we should have w I around or smaller than E I

when looking for more interesting types of instability. Henceforth, in all following examples we will keep E = 2 and w I = 4. To compare different approxic on the axonal speed v. The instability portrait of the integral model is shown

mations to the original integral model we will consider only the dependence of

on Figure 6.3, right. For large and medium axonal delays there is a Hopf bifurcation (global oscillations), with a transition to Turing-Hopf bifurcation (travelling wavetrains) for large delays. Thus, there is no marked change from the linear instability portrait the 1D neural eld with axonal delays (see Figure 4.4). This is due to the radial symmetry of the chosen connectivity G (r, t) = G (r, t), in Section 6.3 we will see that anisotropic connectivity leads to more interesting dynamics. While Figure 6.3 shows results for the integral model (6.1.2), in Fig. 6.4 is given a comparison of the instability protraits of the various PDE approximations. We note that there are no qualitative differences between the models in the sense that, at the linear level all models support Hopf and Turing-Hopf instabilities, with a switch from one to the other with increasing v. However, quantitatively, the models rank from worst to best match to the integral model, in the order: the long-wavelength approximation (6.1.9), the optimal model (6.1.17) with 1 = 2 = 0.6, and the Bessel approximation (6.1.12). The Bessel approximation practically gives an exact match. However, due to the considerations discussed in Section 6.1.2 it might not be a valid approximation from a modelling point of view. For this reason, we suggest the use of the optimal model with some suitable t of the parameters 1 , 2 . Here, we have chosen to t for the point of switch from Hopf to TuringHopf instability in the (v, ) plane. We found that it lies closest to the corresponding point of switch in the integral model when 1 0.6 and 2 0.6 (see Appendix 9.6.2). To test the predictions of our linear stability analysis and to compare the nonlinear behaviour of the models we resort to direct numerical simulations. Note that we have carried out these only for the approximate PDE equations. Their dynamics is found to be in excellent agreement with the linear stability analysis. The numerical solution was implemented in collaboration with Carlo Laing (see Appendix 9.7). Figure 6.5 shows dynamic Turing pattern seen in the longwavelength model and the optimal model. For all models, parallel moving 142

20

15

10

Turing-Hopf
5

Hopf
0 0 2 4 6

10

12

Figure 6.4: Critical curves showing the linear instability thresholds in the (v, ) plane, for
the integral model (black), the long-wavelength PDE approximation (thin), the Bessel PDE approximation (dark grey), and the optimal PDE approximation (light grey). For each model there are three branches shown, on the left side of the diagram from bottom to top they are: with k c = 0, c = 0 (Hopf bifurcation, for the integral, long-wavelength and Bessel models these coincide), k c = 0, c = 0 (TuringHopf instability, if rst), and a fake instability (dashed) with k c = 0 where the dispersion surface has a minimum rather than a maximum. Although no eigenvalues become positive at this curve, it is useful for formally comparing the dispersion relations. The parameters are a = 1, E = 2 and w0 = 4, as in Figure 6.3, right. I

stripes are very commonly seen beyond the Turing-Hopf bifurcation, particularly for small domains, but a variety of other patterns such as those shown here are also possible, i.e. both systems have multiple attractors. Therefore, based upon numerics alone it is not possible to make the statement that there is a qualitative difference between the types of possible patterns in the two models. While the linear instability portraits might be quite similar to that in the 1D case (Figure 4.4), the fully nonlinear dynamics is much richer. This makes undertaking a weakly nonlinear analysis on the lines of Chapter 5 a very ambitious task which we do not attempt here. While the algorithm for deriving a normal form 143

2.02 2.02

2.01 2.01

2.00

1.99

1.99

1.98 1.98

Figure 6.5: Snapshots of periodic patterns, each 1/4 of a period later than the previous one
(clockwise). Shown is the excitatory synaptic input to the excitatory population, u EE . Domain Figure 6.4. Left: The long-wavelength model, at v = 12, = 20. Right: The optimal model, 1 = 2 = 0.6, at v = 12, = 15 (a similar distance from the bifurcation point c as on the left). is 30 30, simulated with the sigmoidal nonlinearity (3.3.4). Parameters as in Figure 6.3 and

of the instabilities is not more complicated, there is a variety of possible 2D patterns and planforms whose stability has to be tested against each other (as opposed to just travelling and standing waves in 1D, Section 5.2). We decided to do instead linear analysis of a model with spatially anisotropic connectivity, expecting that this would reveal properties particular to a 2D model. In the next Section 6.3 we show how to extend the approach for approximating the integral model with a PDE system to a neural eld with periodically modulated patchy connectivity.

6.3

Spatially modulated connectivity

It is now known that the neocortex has a crystalline micro-structure at the millimeter length scale, so that the assumption of isotropic connectivity has to be revised (Section 3.5.2, for a recent discussion see [234]). For example, in visual cortex it has been shown that long range horizontal connections (extending several millimeters) tend to link neurons having common functional properties (as dened by their feature maps). Since the feature maps (for orientation preference, spatial frequency preference and ocular dominance) are approximately periodic this leads to patchy connections that break continuous rotation symme-

144

try but not necessarily continuous translation symmetry. With this in mind we introduce a periodically modulated spatial kernel of the form wP (r, r ) = wab (|r r |) Jab (r r ), ab (6.3.1)

denition is for the case when the modulation dominates over the intrinsic Mexican hat connectivity. The derivation for the case when modulation is of similar magnitude (6.3.6) goes along the same lines, a brief discussion is given at the end of the Section. Above, note that the patchy kernel wP is homogeab neous, but not isotropic. Following recent work of Robinson [245] on PDE systems mimicking a neural eld with patchy connectivity we show how to obtain an equivalent PDE model for an integral neural eld equation with a spatial kernel given by (6.3.1). First, we exploit the periodicity of Jab (r) on the lattice and represent it with a Fourier series: Jab (r) =

where Jab (r) varies periodically with respect to a regular planar lattice L. This

Jab eiqr.
q

(6.3.2)

The vectors q are orthonormal to the generator vectors of the lattice L . The Fourier coefcients are Jab =
q
q

1 2

R2 q

eiqr Jab (r) dr,

with Jab the complex-conjugate of Jab . Then the model can be written as
P P ab (r, t) = Gab b =

wab Jab eiqr (t r/vab ) b = Jab eiqr Gab b ,


q q q

where Gab (r, t) is the unmodulated isotropic connectivity (6.1.5). We may write
P ab (r, t) = q Jab ab (r, t), where ab (r, t) = eiqr ab (r, t). The Fourier transform P P is again ab (k, ) = Gab (k, )b (k, ), however with P Gab (k, ) = q q

Jab Gab (|k q|, ),


q

(6.3.3)

and Gab (k, ) given by (6.1.7). In other words, the transform of a connectivP ity Gab (k, ) modulated on a periodic planar lattice is expressed as the sum of

shifts of the transform of the original isotropic connectivity Gab (k, ) by the dual lattice vectors q. Replacing Gab (k, ) with the optimal approximation (6.1.16) and applying inverse Fourier transform we see that ab (r, t) satises
q

(A ab;1 2 )(A ab;2 2 )ab = w0 B ab b , q q ab


145

(6.3.4)

amplitudes ab indexed by the dual lattice vectors q. Assuming that there is a natural cut-off in q, then we need only evolve a nite subset of these PDEs to see the effects of patchy connections on solution behavior. Note also that the Turing instability analysis for the patchy model is identical to that of the
P isotropic model under the replacement of Gab by Gab in (6.2.2), so that now

where q = iq. Hence, we have an innite set of PDEs for the complex
q

depends on the direction as well as the magnitude of k:

[D(k, )] ab = ab () Jab Gab (|k q|, i)b .


q

(6.3.5)

Due to the symmetry, all modes that are obtained by discrete rotations of the dual lattice cross the bifurcation point simultaneously. We consider spatial modulation which dominates over the isotropic connectivmagnitude to the base connectivity. It would be set up as ity wab (|r r |). Another important case is when the modulation is of similar wP (r, r ) = wab (|r r |) + Jab (r r ), ab original kernel (6.1.6) plus the Fourier decomposition of the modulation:
P ab (r, t) = ab (r, t) + Jab ab (r, t), q q q

(6.3.6)

with Jab (r) doublyperiodic. In that case, the model to solve would contain the

ab (r, t) = eiqr (t r/v ab ) b .


q

One would solve differential equations for ab coupled with the equation for unmodulated connectivity (6.1.17). The Fourier transforms ab are the Hankel transforms
q

H eir/vab (|k q|) = w0 ab

ir/v ab

[(ir/v ab )2 + |k q|2 ]

3/2

We cannot apply the optimal approximation here, because the spatial part of (6.1.15) is missing. We would suggest resorting to the long-wavelength approximation.

6.3.1 Example with modulation on a square lattice


Consider a square lattice with length-scale d, it is dened by the vectors l1 =
(d, 0) and l2 = (0, d). The generators of the dual lattice are then k1 =
2 d (1, 0)

146

0.2

0.2

6 4 6 2 0 0 2 4

6 4 2 0 0 2 4

a.

b.

0.2
Re
0

(kx,ky)

2
0 5

8
2 4

6 4 6 2 4 0 0 2

0 0

c.

d. -1

Figure 6.6: The dispersion surfaces Re (k) of the spatially modulated model for xed parameters: a. d = (unmodulated model), v = 4, = 5; b. d = 4, v = 4, = 15; c. d = 2, v = 4, = 20; d. d = 2, v = 10, = 50. The values of are chosen just above c .
The peaks in the surfaces are pinned by the lattice wavevectors k1,2 , |k1,2 | = 2/d.

and k2 =

2 d (0, 1)

(for introduction to the terminology of lattices and doubly-

periodic functions refer back to Section 3.5.3). We have q {k1 , k2 }. Now

we dene a doubly-periodic modulation on that lattice, Jab (r) =

1 [cos(k1 r) + cos(k2 r)]. 2

Its Fourier coefcients are Jab =


q

1 [(q k1 ) + (q + k1 ) + (q k2 ) + (q + k2 )], 4

and for strong modulation we need only consider two coupled complex PDEs
(6.3.4) indexed by k1,2 . Their numerical solution is also due to Carlo Laing. The

dispersion relation is solved in Appendix 9.6.3. In Figure 6.6 we plot the dispersion surfaces Re(k), k = (k x , k y ), for different modulation length-scales d. The parameters are selected at the instability, so that for each surface the maxima touch the zeroplane. In the limit d 147

&
?

0.5

$ " 

0.4

0.3

0.2

0.1

 

"

&

0 0

0.5

1.5

2.0

2.5

Figure 6.7: Left: Linear instabilities in the (v, ) plane for the optimal model (6.1.17), with a
periodically modulated kernel. In this example the underlying lattice is square, with spacing d. Parameters are as in Figure 6.4 with d = 1. Right: Speed (c = c /k c ) of a travelling wave at the Turing-Hopf bifurcation shown on the left, v = 1. The speed of the wave is seen to increase almost linearly with the lattice spacing d. Other parameters are as in Figure 6.4. The circles denote measurements taken from direct numerical simulations.

(k = 0, Figure 6.6a) we recover the unmodulated model. For nite d (Fig ure 6.6b,c,d) we nd that each lattice wavevector k1,2 introduces a shifted

copy of the peak of the dispersion surface from the unmodulated case. When these peaks are widely separated (for lattice spacing d tween them is weak (Figure 6.6c) and the linear instability portrait is expected

3) the interaction be-

to be analogous to that of the unmodulated model shown in Figure 6.4 (at least up to a factor of 4 coming from the particular choice of Jab above). We plot it in Figure 6.7, left. Compared to the unmodulated case the Hopf bifurcation is transformed to a Turing-Hopf bifurcation with critical wavevectors kc coinciding with those of the lattice. They are independent of the axonal velocity v. This from below. With increasing v the dominant bifurcation is also of TuringHopf
is associated with the central peaks at k1,2 in Figure 6.6c crossing through zero
q

go unstable rst, as in Figure 6.6d. Those rings correspond to the TuringHopf instability in the unmodulated model. Indeed, when d 3 the instability portraits for the modulated and unmod-

type. However, in this case it is a ring of wavevectors surrounding k1,2 that

ulated models coincide quantitatively (with a re-scaling of by 4). A major difference is however that in the anisotropic model the wavevectors of both TuringHopf instabilities are axed to those of the modulation lattice. This is 148

x 10 6

4
0.05

K--

2
0

0
2 2 1.5 1

1.5

1 0.5 0.5 0

Figure 6.8: Examples of patterns beyond the TuringHopf bifurcations in Figure 6.7, left. Both
are pinned by the lattice wavevectors. u EE is shown. Left: At v = 1, = 10, d = 1. The speed is period later than the previous one (clockwise). The domain is 7 7. c 0.182, in the x direction. Right: At v = 10, = 50, d = 1. Snapshots, each 1/4 of the pattern

not surprising result since we set up (6.3.1) such that the modulation effects are dominant in the model. In Figure 6.7, right, we plot the speed of a travelling wave at the Turing-Hopf bifurcation at v = 1. The speed of the wave is seen to increase almost linearly with the spacing of the square lattice d, since Such a lattice-directed travelling wave created in Turing-Hopf bifurcation associated with the central peak (at v = 1) is shown in Figure 6.8, left, while on the right is shown a more complicated pattern associated with the surrounding ring (at v = 10). In the regime 3 d 6 the interaction between overlapping peaks leads to more complicated shape of the dispersion surface (Figure 6.6b). Typically, four wavevectors become unstable with |k x | = |k y |. We have not shown the critical curves generated by those. For d 6 the system effectively has the instability portrait of the unmodulated case (Figure 6.4).
the emergent frequency c is independent of d, while k c coincides with |k1,2 |.

6.4

Chapter summary

The focus of this Chapter is on transforming the integral neural eld model to an equivalent PDE brainwave model. The difculty comes from the nonrational structure of the Fourier transform of the 2D neural eld equation with 149

delay. We explore several substitutions with a rational expression in the Fourier space. We clarify the guidelines that should direct the process of choosing a substitution that approximates best the original system. The guidelines that we use are: better approximation of the Laplace transform of the connectivity kernel (not the full Fourier transform); physiologically sound properties of the aproximate connectivity obtained by inverse transform; serching for the best match of the linear instability diagram with that of the original model. After discussing the shortcomings of traditionally used substitutions we propose one that copes with all of the arising difculties. We call it the optimal approximation because we included two free parameters whose values could be set by an optimal t for the linear instability diagram of the original system. We can nd the latter without resorting to approximations. We nd that linear instability diagram for the 2D model does not differ substantially from that for the 1D model. However, we note here that the pattern types that could arise beyond the instability are much more numerous in two dimensions. Further, we show that our method can be used to construc PDE formulation also for models with anisotropic activity. We present an example where the radially symmetric Mexican hat connectivity is strongly modulated on a doubly periodic lattice. As a result the travelling wave patterns are pinned to the scale of the modulation. The structure of the dispersion relation surfaces giving a Hopf and TuringHopf instabilities in the unmodulated model, is preserved, however the surfaces are shifted by translation with the dual lattice vectors.

150

C HAPTER 7

Localised solutions in a model with dynamic threshold


As a nal contribution to the theory of pattern formation in neural eld models we will review and extend some results about the local properties of solution inhomogeneities (see Section 3.4.2 for an introduction). Our purpose is to complement the discussion about emergence of globally periodic solutions which has been the focus throughout the Thesis by illustrating also methods to study the local dynamics of the interface between active and inactive neural regions. This is the so called Amari-style analysis, and it is one of the early results about 1D neural elds [151]. The aim of work in this Chapter is to set the scene for dealing with localised solutions in 2D models. The transition to a 2D system however requires the extension to a much more powerful mathematical apparatus known as interface dynamics [150]. We leave this for a future stand-alone project (see Chapter 8).

7.1

Comparison of Evans and Amari analysis

In Section 3.4.2 we reproduced the analysis from the seminal paper of Amari [151]. It is a convenient technique to establish the existence and stability of localised solutions and it can be extended to more advanced neural eld equations. Pinto and Ermentrout [231, 246] applied it to a neural eld with spike frequency adaptation (SFA): ut ( x, t) = u( x, t) + (w a a wa ) H (u h). 151 (7.1.1)

Here, is the sign for spatial convolution,

w f (u) =

w( x y) f (u(y)) dy.

We considered a similar model in Section 5.3, see there for an introduction to SFA. For mathematical simplicity Pinto and Ermentrout have picked a Heavino conductance delays and a simple excitatory connectivity, w( x ) = e| x|/ /2. The adaptation footprint wa ( x ) is described by the same prole with a range a . Coombes [247] has suggested a different approach for determining the linear stability of localised solutions, utilising an Evans function (see below). However, there was some disagreement in the results obtained for equation (7.1.1) by the two approaches [212]. The disagreement has been blamed on the approximate nature of the Amari-style analysis. The aim of the present Section is to show that the two approaches lead to identical results, and to clarify the scope of accuracy of the Amari argument. side ring rate (i.e. 1 in (3.3.4)), an exponential synaptic dynamics (3.1.8),

7.1.1 Evans approach


Here we give an exposition of the results obtained by Coombes in [212]. The Evans function is a powerful tool for the stability analysis of nonlinear waves on unbounded domains. The point spectrum of the operator obtained after the linearizing of a system about its travelling wave solution is associated with the zeroes of that complex analytic function. Details and references about the theory of Evans function, as well as applications to nonlocal models can be found in [247]. The steady state bump for equation (7.1.1) is u ( x ) = wb H ( u ( y ) h ) =
x x1 x x2

wb (y) dy,

where xi are the crossing points of u with the threshold h (see Figure 3.9). If Wb ( x ) is the primitive of the convolution kernel wb ( x ) = w a a wa , the steady state can be written simply as u( x ) = Wb ( x x1 ) Wb ( x x2 ). 152 (7.1.2)

The primitive is Wb ( x ) = W ( x ) a Wa ( x ) = (7.1.1) and formally linearize, u( x ) = () (w a a ()wa ) H (u(y) h) u(y). Note that the derivative of the Heaviside function has a legitimate interpretation as it appears always within an integral. To express it in terms of the Dirac function we need to dene the integral over the appropriate variable rst, which is u. Since the shape of the steady solution u is a xed function of x, we can make the change of variables du = |u (y)| dy. Thus we have dy = du/|u (u 1 (u))|, and in any interval where u( x ) crosses h once: H (u( x ) h) = (u(y) h) . |u (y)| (y x j ) u ( y ). |u x ( x j )| j=1,2 1 [1 e|x|/ a (1 e|x|/a )] sign( x ). 2

To determine the stability of u substitute u( x, t) = u( x ) + u( x )et in the model

(7.1.3)

Summing the intervals, for the linearised system we obtain u( x ) = () (w a a ()wa )

(7.1.4)

The formal series expansion can be interpreted along the following lines. The perturbed function u + u is assumed to have threshold crossing points x j + j . The convolution integral then has the explicit form wb H ( u ( x ) + u ( x ) h ) =
x x1 1 x x2 2

wb (y) dy = Wb ( x x1 1 ) Wb ( x x2 2 ) = wb H (u h) + wb

Wb ( x x1 ) Wb ( x x2 ) + wb ( x x1 )1 wb ( x x2 )2 + O(2 ) =

j=1,2

( x x j ) j + O ( 2 ).

Comparing (7.1.4) with this explicit expansion shows that they are equivalent with i = u( xi )/|u ( xi )|. Equation (7.1.4) simplies to

(1) j+1 w ( x x j )u( x j ) u( x ) = () ux (x j ) b j=1,2


153

bump u x ( x1 ) > 0, u x ( x2 ) < 0. Substituting the crossing points x = xi we obtain a linear algebraic system for u( xi ). Demanding that it has nontrivial solutions leads to the problem
w b (0) (1) u x ( x1 ) wb ( ) u x ( x1 ) w uxb((x ) )
2

where wb ( x x j ) = w( x x j ) a a ()wa ( x x j ). We used here that for the

w uxb((x0)) 2

1 ()

= 0,

= x2 x1 .

(7.1.5)

The determinant on the left-hand side can be identied with the Evans function. Finding its roots would give us the eigenvalues whose real part determines the stability of the bump steady state. We track in XPP/AUTO the parameters for which the static bump loses stability (Appendix 9.8), the plot is shown in Figure 7.1, left. Although there exist two bumps with different widths (they are shown on the right of the Figure), only the wider one becomes stable in the shown region. At the region boundary the eigenvalues are imaginary, suggesting that there is a Hopf bifurcation to an oscillating bump, a breather.

7.1.2 Amari approach


By the Evans approach one can accurately determine when the static bump looses its linear stability. However, an advantage of the Amari approach is that one constructs a reduced dynamical system describing the motion of the bumps crossing points only. One can then solve this simpler ODE system or evolve it numerically to learn more about the bump dynamics beyond the instability. Our aim in this Section is to show that the steady state of the ODE system has the same stability properties as the original equation. We will obtain an algebraic equation for the eigenvalues that is identical to (7.1.5). It is convenient to start with the differential form of the model (7.1.1). Setting the adaptation variable as z( x, t) = wa a(y, t), it can be written as ut = u + w H (u h) a z, zt = z + w a H ( u h ). (7.1.6)

Let u( x, t) be a solution similar to the stationary bump solution (7.1.2), u( x, t) = u( x ) + u( x, t). It will cross the threshold at nearby points xi (t) = xi + xi (t) 154

which satisfy u( xi (t), t) = h. Since this equation holds for every t, we can differentiate it and get u x ( xi (t), t) xi (t) + ut ( xi (t), t) = 0. We can substitute the derivative ut with the rst equation in (7.1.6): u x ( x i , t ) x i ( t ) = h w H ( u h ) + a z ( x i , t ). A similar procedure to involve the equation for z is less clear, because the value of z( x, t) at the points xi (t) is not determined like (7.1.7). However, one cannot solve for z as an independent unknown as [212] and [231] do, or one would overlook the double dependence of z( xi (t), t) on the time. This is the subtle point that has led to disagreement with the results obtained by Evans function. Instead, we construct equations for the values of the adaptation z at the crossing points. Dene the unknown as i (t) = z( xi (t), t). By differentiating this expression, we get i = z ( x i ( t ), t ) = z x ( x i , t ) x i + z t ( x i , t ), t and from (7.1.6): i x = z x ( x i , t ) i i + w a H ( u h ). Note that the gradients u x ( xi (t), t) and z x ( xi (t), t) are unknown. Closed equations for them cannot be added to the system because if one denes the gradient at the crossing point as a function of time, i (t) = u x ( xi (t), t), then the latters derivative would involve further unknowns i (t) = u xx ( xi (t), t) xi i + ... x (7.1.7)

The approximation that Amari and later authors do at this step, is to take u x ( xi (t), t) = u x ( xi (t)). It becomes clear that an Amari system cannot be used to evolve transient dynamics in an accurate way. To do that, one needs some information about the dynamics away from the crossing points so that one is able to construct u x and z x . On the other hand, the approximation does not affect the linear stability properties of steady states because at the linear order the system remains the same, as we will see below. 155

1.2

50 3

J
1

25

Stable bump
1 0.8

0 0 0.04

0.08

0.12

0 2 3 0 0.04

0.08

0.12

Figure 7.1: Left: The parameter region (h, ) in which the bump steady state (7.1.2) is stable,
for a = 1 and = 2. At the border a pair of eigenvalues becomes imaginary i.e. there is a Hopf bifurcation. Centre: Evolution of the right crossing point x2 (t) beyond the Hopf bifurcation Right: A bifurcation diagram of the ODE system (7.1.8) describing the dynamics of the crossing bumps (dashed line) but only the wide one may become stable (thick line). The Hopf bifurcations are clearly visible, giving rise to an unstable orbit (hollow circles). (with x2 x1 = 0) for = 1.142 and h = 0.07. The periodic orbit is in fact unstable (dotted line). point x2 , with parameters as in plots on the left. There are a wide and a narrow steady state

With the gradient approximation, the ODE system for the four variables ( x1 , x2 , 1 , 2 ) is, xi 1 = h W ((t)) + a i , u x ( xi ) i z x ( xi ) = h W ((t)) + a i u x ( xi )

i = 1, 2.

(7.1.8)

i + Wa ((t)),

We have implemented (7.1.8) in XPP and tracked its steady states (Appendix 9.8). Example dynamics of the right crossing point x2 (t) (for a bump centred at x = 0) are shown in Figure 7.1, centre. The ODEs bifurcation plot is shown on the right. The plot coincides with the results from Section 7.1.1. If we track the Hopf bifurcation points by a second parameter we will obtain exactly the curve in Figure 7.1, left. Unlike the Evans method, in this case we can also see the trajectories of the Hopf orbit beyond the bifurcation. We see that the Hopf bifurcations are subcritical, and the breather trajectories are unstable. Let us nd manually the eigenvalue problem for the linear stability of (7.1.8). One can see that this system has the same steady state as the original model (7.1.1). The perturbed and linearised ODEs, with xi (t) = xi + et xi and i (t) =

156

z( xi ) + et i , are u x ( xi ) xi = w()( x2 x1 ) + a i , x i = z x ( xi ) i i + wa ()( x2 x1 ).

(7.1.9)

Importantly, linearization of the system with time-dependent gradients gives the same result, since u x ( xi , t) xi = u x ( xi , t) + u xx ( xi , t)et xi xi + O ( x 3 ),

and also u x ( xi , t) = u x ( xi ) + u x ( xi , t) + O( x 2 ). Therefore the steady states of (7.1.8) have the same stability properties as those of (7.1.1). We will now check this. From (7.1.9) we have i = a () z x xi + wa ()( x2 x1 ) ,

substituting it in the equation for xi we get ux zx a a xi = wb ()( x2 x1 ),

where we have used the same denition of wb ( x, ) as in (7.1.5). Clearly the non-diagonal elements of the determinant (7.1.5) are recovered. Now consider transform it we will express the gradient of the steady state by differentiating (7.1.2) and z( x ) = Wa ( x x1 ) Wa ( x x2 ): u x ( x 1 ) = w (0) w ( ) a z x ( x 1 ) = u x ( x 2 ), z x ( x 1 ) = w a (0) w a ( ) = z x ( x 2 ). the upper left element, which in this case is E = wb ()
ux

ga zx . To

From these I can construct the substitution w b ( ) = w ( ) a a w a ( ) = w (0) a z x u x a a ( w a (0) z x ) to get E = w (0) a z x u x a a w a (0) + a a z x w (0) a a w a (0) u x 1 + ux zx + a a = 1+ a z x = w b (0 ) u x 1 ,

+ a a z x
157

which is exactly the upper left element of (7.1.5). To recapitulate, in this Section we have shown that despite the few steps of approximation necessitated by the Amari method, if performed carefully it gives the exact same steady state and stability properties as the Evans function. In addition the Amari results give us some information about the transient dynamics, however it is that information which is approximate (the breathers width may not be the one shown by the Hopf trajectory in Figure 7.1, centre, right). We hope that the present discussion would reinstate condence in the use of Amari approach for analysing complicated neural elds with several variables.

7.2

Weakly interacting bumps

First, we describe a novel type of neural eld model that was introduced by Coombes and Owen [248, 249]. The adaptive properties of neurons are modelled in a different way here. Rather than introducing an ad hoc negative feedback term as with the spike frequency adaptation model (Section 5.3), we consider that following a long period of ring the neuron becomes exhausted and its ring threshold effectively rises. This is the so called threshold accommodation [230, 250]. Thus, the ring rate threshold h in the model is dened as a variable equipped with its own dynamics. Coombes and Owen showed that this model can exhibit very interesting localised solutions both in 1D and 2D. One simple example is the interaction between two bumps of activity which might behave like two particles, colliding or bumping off each other. The analytic understanding of this dynamics is yet to be pursued, but here we develop an apparatus for understanding the interactions between these bumps when they are at sufciently large distances. It is, again, an adaptation of methods developed for nonlinear PDEs [251]. One obtains at the end a reduced system of equations for motion of the centres of the bumps.

158

F G

D N

Figure 7.2: Left: A solitary travelling bump in the threshold accommodation model (7.2.1).
Right: An sketch of a multibump solution. The nth bump is localised within the interval n in the sense that outside qn and pn are O().

7.2.1 A model with threshold accommodation


The system describing a neural eld with threshold accommodation is 1 u( x, t) = u( x, t) + w f (u, h) t h( x, t) = h( x, t) + h0 + g(u). t The functions f and g are nonlinear. The threshold dynamics is inuenced by the intensity of local activity through the term g(u). Again, we assume a ring rate in the form of a Heaviside function, f (u, h) = H (u h). We also assume that accommodation itself is a threshold process and set g = H (u ). When

relaxes toward a value h0 R, otherwise toward a value h0 + R. It makes work with, is 1 u = u + w H (u h) t h = h + h0 + H (u ). t

the local activity is below a reference threshold R, the dynamic threshold

sense to take the parameters such that h0 < < h0 + . The system we will

(7.2.1)

The stationary bump steady state and its stability is investigated in [248]. Here we review how to construct a travelling bump with speed c as a steady state of the system. This calculation is simpler, as well as, more relevant to the problem of interacting bumps, as those are less likely to remain stationary. We introduce the coordinate = x ct and seek functions u(, t) = u( x ct, t) and h(, t) = h( x ct, t) that satisfy equations (7.2.1). It is easier to work with the integral

form of the equations, obtained with the Greens functions (t) = et H (t)

159

and h (t) = et H (t). In the (, t) coordinates the system is


u(, t) =

(s)
0

w(y) H [u( y + cs, t s) h( y + cs, t s)] dy ds,

h(, t) =h0 +

h (s) H [u( + cs, t s) ] ds.

Let us denote the stationary solution as (u(, t), h(, t)) = (q( ), p( )). It satises

q( ) = ( ) =

(s)( + cs) ds, w(y) H [q( y) p( y)] dy,


0

p ( ) = h0 +

h (s) H [q( + cs) ] ds.

Let the crossing points of the bump with the threshold be the points 1 and 3 , and with the dynamic threshold p( ), be the points 2 and 4 . For a rightmoving bump (c > 0) we would have 1 < 2 < 3 < 4 . It is illustrated in Figure 7.2, left. Since we have that q( ) > in the interval ( 1 )/c s

Similarly, we have ( ) = q( ) = 1 c
0

( 3 )/c, we obtain the explicit form ( 1 e 3 c 1 ) e 1c p ( ) = h 0 + 1 e 3c 0


4 2

< 1, 1 3, 3 < . (7.2.2)

w(y) dy, therefore (7.2.3)

ey/c [W ( + y 4 ) W ( + y 2 )] dy.

The primitive of the standard wizard hat function is W ( x ) = xe| x| and it is straightforward to obtain q( ). The speed c and three of the unknowns 1 , . . . , 4 are determined by the simultaneous solution of the threshold crossing conditions, q( 1 ) = , q ( 2 ) = p ( 2 ), q( 3 ) = , q ( 4 ) = p ( 4 ).

One of the crossing points remains undetermined due to the translation invariance of the bump solution. It has to be set by hand. We will not review the stability analysis of the travelling bump. Coombes and Owen [248] used the Evans function approach to show that there are parameter regions in which it is stable, delimited by a loss of stability to travelling 160

breathers. Assuming that the system is in the stable regime, we will now investigate how two or more bumps may interact with each other.

7.2.2 Derivation of equations of motion for bumps


Here we follow the method proposed by Elphick et al. [251]. It was previously adapted to neural elds by Bressloff [252]. We construct a superposition of bumps decaying exponentially with a rate in a model with connectivity decaying with a rate 1/ (for the wizard hat function (3.3.5) that we use, = 1). If two or more of these bumps are placed on the real line at a distance between each two at least d, such that ed min{ , 1/,1/c} = 1, then the interactions between these bumps would be of order O() or weaker. In the following we will attempt a perturbation analysis about the small parameter . A superposition of bumps pn , qn , n = 1, . . . , N at initial positions xn R (with

| xn xm | > d), travelling with intrinsic speeds cn O(1), can be written as


u h

n =1

qn ( x/cn t xn /cn n ( )/cn ) pn 0 ( N 1) h0

n =1

Rn ( x/cn t, /cn ) . (7.2.4) Sn

The shapes (qn , pn ) are those of the solitary bumps as solutions to the system (7.2.1) (solutions for the respective travelling speeds cn ), with an incorporated scaling of cn . If we picked all cn = c, they would be translated copies of the same structure (q, p). Since each of the pn shapes adds h0 throughout the definition domain R, we subtract N 1 of these. A superposition even of widely spaced bumps cannot be an exact solution of the nonlinear system (7.2.1), there = t. These phases indicate the phenomenon we are interested in: the relative motion of the bumps with regards to each other. In the following we endeavour to extract reduced ODE equations for them.

fore we include the remainder terms ( Rn , Sn ) and slowly drifting phases n ( ),

161

A system for the remainders n Rn , n Sn We substitute the multiple-bump solution (7.2.4) in the governing system (7.2.1). In the rst equation we obtain

n =1

n cn

q + x + n

cn

Rn =

n =1

(qn + Rn ) + w H

n =1

(qn pn + Rn Sn )

Since for a solitary travelling bump as a steady state we have

p ( x/c t) = p( x/c t) + h0 + H (q ),
replacing n (q qn ) above we have n

q ( x/c t) = q( x/c t) + w H (q p),

(7.2.5)

n =1

n qn + x + + 1 Rn = cn cn wH

n =1

(qn pn + Rn Sn )

n =1

H (qn pn ) .

integral gives
N

A formal Taylor expansion with respect to ( Rn Sn ) inside the convolution

wH

n =1

(qn pn + Rn Sn )
n =1

w H

(qn pn )

+ H

n =1

( q n p n ) ( R n Sn ) + O ( 2 )
n =1 i cn ( x xn ) , |q p | n n

Similarly to Section 7.1.1 we can interpret H (qn pn ) = (qn pn ) =

i where xn are the crossing points where qn = pn . For well separated bumps

there are N pairs of crossing points. After a similar calculation with the equation for h( x, t) in (7.2.1), grouping the

162

terms of order O(), we obtain a system for the remainders (n Rn , n Sn ), 1 x w (n (qn pn )) w (n (qn pn )) n Rn n Sn

(n qn ) +

1 x

n q n cn p n

w [ H (n (qn pn )) n H (qn pn )] [ H (n qn ) n H (qn ))]

+ O(2 ). (7.2.6)

We can simplify the right-hand side and show that it is indeed of order O(), as follows. Partition the real line into non-overlapping domains n each containing one of the bumps so that ( pn , qn ) O() outside n . The bump separation time about a small parameter (qn , pn ) O(): wH d is sufciently large to permit this. Then make another Taylor expansion, this

(qn pn )
n

=
m

w( x y) H

qn (y/cn ) pn (y/cn ) qn (y/cn ) pn (y/cn )

dy = dy = dy.

w( x y) H w( x y)

qm (y/cm ) pm (y/cm ) +

n =m

H (qm pm ) + (qm pm )

n = m 1

( q n p n ) + O ( 2 )

Only nearest neighbour bumps contribute to the integral. Because H (qm Thus we have w H

pm ) = 0 outside of m , we can extend the integral boundaries back to (, ).

(qn pn )
n n

H (qn pn ) =
n

w (qn pn )(qn1 pn1 + qn+1 pn+1 ) O().


For the second row of (7.2.6) one obtains x
1 1 1 [ xn n , xn ) 3 3 3 ( xn , xn + n ],

H ( qn ) H (qn )
n n

1,3 Here xn are the left and right crossing points of the solitary bump steady state 1,3 where i are the crossing points described in Section 7.2.1), and n are per1,3 i qn ([ xn xn n ( )]/cn t) = (that is, we have xn = i cn + cn t + xn + ( ),

otherwise.

turbations introduced by the superposition. The above quantity has L2 norm of 163

order O(). Later it is used only within a scalar product, where we can legitimately write it out as

n =1

1 3 cn ( x xn ) cn ( x xn ) + |q ( 1 )| |q ( 3 )| n n

The system (7.2.6) becomes 1 x w (n (qn pn )) w (n (qn pn )) R S

(n qn )

1 x

=
n

n q n + cn p n

w n ( q n p n ) ( q n 1 p n 1 + q n +1 p n +1 ) n ( q n ) ( q n 1 + q n +1 )

+ O(), (7.2.7)

with R = n Rn , S = n Sn . We have added the vectors (q0 , p0 ) = (q N +1 , p N +1 ) = 0 to streamline the notation. We will write the above system in short as an operator equation L R S

= M.

We will obtain equations for the coordinates ( ) participating in M from the solvability condition (Section 5.1.2) for this system. Namely, the kernel of the adjoint of L has to be orthogonal to the right-hand side M. If L is the adjoint operator and ( R , S ) ker L , mathematically this is expressed as

<

R R R R , M> = < , L > = < L , S S S S

R > = 0. S

(7.2.8)

Therefore our next steps are to nd the adjoint operator of L, the basis vectors of its kernel, and project M onto them. The null space of L With a denition for the scalar product

<
we get L =

R , S

R >= S

R ( x ) R( x ) + S ( x )S( x ) dx,

1 + x (n (qn pn )) w (n qn ) (n (qn pn )) w 1 + x

(7.2.9)

164

Let us rst nd the kernel of a simpler operator, L = n 1 + x (qn pn ) w (qn ) (qn pn ) w 1 + x

This is an operator corresponding to only a single bump (qn , pn ). Later, we will


P n then for the system of all bumps is true < L ( R ), (Q)> = O(). S
n n show that for any bounded functions Q, P, if < L ( R ), (Q)> = 0, n = 1 . . . N, n S P

Now we solve L n R S

= 1 + x

(qn pn )w (qn ) R S (qn pn )w 0

R S

= 0. (7.2.10)

Let the points of threshold crossing for a travelling bump, qn ( ) = pn ( ) and qn ( ) = , be respectively 2,4 and 1,3 , with 1 < 2 < 3 < 4 , as discussed in the preceding Section 7.2.1. In the common coordinate system of the superi position (7.2.4), these points are located at xn = i cn + cn t + xn + n ( ). The

Greens function of the operator 1 + x is e x , however remember that x desecond row in (7.2.10) gives S x t = cn

notes differentiation with respect to the variable x/cn t in (7.2.4). Thus, the
4 2 ( x xn cn x ) ( x xn cn x ) + |q ( 2 ) p ( 2 )| |qn ( 4 ) p ( 4 )| 0 n n n x t dx dx = w( x cn x x ) R cn

e x cn

i =2,4

H(x

i xn )

cn e( x xn )/cn |q ( i ) p ( i )| n n

i w( xn x ) R

x t dx . cn

The rst row can be written as R x x t = S t + cn cn 1 3 ( x xn cn x ) ( x xn cn x ) e x cn + |q ( 1 )| |q ( 3 )| 0 n n

x t x dx = cn
i

( x xn )/cn xi x i cn e S n t . t + H ( x xn ) cn |q ( i )| cn n i =1,3
4

Therefore the kernel vectors have the form, R S x t = cn x t = cn

j =1 4

rj e(xxn )/cn H (x xn ),
j

(7.2.11) (7.2.12)

j =1

sj e(xxn )/cn H (x xn ),
165

with s1 = s3 = 0, s2 = r2 and s4 = r4 . For i = 1, 3 the expressions which we set to ri are calculated as ri = cn xi S n t = |q ( i )| cn n

cn ( )| |qn i

j =1

i sj e(xn xn )/cn H (xn xn ) = |q ( i )| n j

cn

rj eij (7.2.13)

j=2,4

in the expressions eij ). For i = 2, 4 we have ri = cn ( ) p ( )| |qn i n i


i w( xn x ) R 4

i (for now we will not pay attention to the value of H ( xn xn ) and lump them

x t dx = cn
x j )/c n n

cn ( ) p ( )| |qn i n i

i w( xn x ) e( x j =1

H ( x xn ) dx =

cn ( ) p ( )| |qn i n i

rj
j =1

i w( x xn + xn )e x /cn dx =

cn |q ( i ) p ( i )| n n

rj Wij .
j =1

(7.2.14)

This is a homogeneous system of four equations for four unknowns ri . From the form of the solitary bump we know that q ( 1,2 ) > 0 and q ( 3,4 ) < 0 and n n also that the differences q ( i ) p ( i ) will have opposite sign for i = 1, 2 and n n i = 3, 4. Therefore the determinant of the system is

1
cn W21 q ( 1 ) n

ne q ( c)12 ( p n 2 n

2)

0
c qn W23 ( )
n 3

cn e14 q ( 4 ) p ( 4 ) n n
24 q ( cn Wp ( ) n 4 n

1 +
n

cn W22 q ( 2 ) p ( 2 ) n n

4)

0
cn W41 q ( 1 ) n

ne q ( c)32 ( p 2 n

4)

1
c qn W43 ( )
n 3

cn e34 q ( 4 ) p ( 4 ) n n

cn W42 q ( 2 ) p ( 2 ) n n

cn W44 q ( 4 ) p ( 4 ) n n

Differentiating the explicit form of the solitary bump (7.2.3), (7.2.2) we learn namely: that the relationships q ( i ) = cn (W2i W4i ) and p ( i ) = cn (e1i e3i ) hold, n n q ( ) = n p ( ) = n
0 0

e w( + 4 ) w( + 2 ) dcn , e cn

( + 1 ) ( + 3 ) |q ( 1 )| |q ( 3 )| n n 166

d .

With their help the determinant reduces to zero, therefore there is a solution so that eij = 0 due to the involved Heaviside functions. After some algebra we obtain a solution to the system, which is r1 = 0, r2 = (W23 W43 )W24 ,

i (r1 , . . . , r4 ) multiplied by an arbitrary constant. For i < j we have xn xn < 0

r3 = e32 W24 ,

r4 = W42 + e32 (W23 1).

n Note that unlike in [252], here ( R ) for different n are not necessarily translations S n

n Combined with (7.2.11) this gives explicitly ( R ) ker L for each n = 1, . . . , N. n S

of each other since the bumps have different base speeds cn . The solvability condition and equations of motion We have found how the null vectors of the partial operators L look like. Now n we show that they give an approximation to rst order of the null vectors of the operator that we need, L . Because the bumps are localised, the threshold
i i crossing points of the superposition (7.2.4), xn + n , are distributed in separate

intervals n . For any function Q( x ) we have

Q( x )

(qn pn )
n

dx =

Q( x ) (qn pn ) dx + O().
n

Thus, the operator L (7.2.9) can be broken up into L = L + m

n =m

(qn pn )w (qn )
(qn pn )w 0

+ O ( ),

for any m = 1, . . . , N. Applying it to the mth null vector, for any bounded Q, P we obtain R m , Sm R P m , > = < L m Q Sm (qn pn )w 0 P >+ Q R m , Sm P > + O ( ) = O ( ). Q

< L

<
n =m

(qn pn )w (qn )

We look for a weak solution only, otherwise the -functions will not be properly dened. The rst scalar product is zero, the second is O() because of the bump localisation again:

<(qn pn )w R , Q> = m
i Q( xn ) |q ( )| i =2,4 n i

Q( x )(qn pn )

w( x x ) R ( x ) dx dx = m n = m,

i w( xn x ) R ( x ) dx O(), m

167

i because under the integral we have that w( xn x ) is O() outside n , and

R ( x ) is O() outside m . m

It follows that the solvability condition is R m < , M > = O() Sm instead of (7.2.8). Thus, we will obtain equations for the bump movement ( ) correct to rst order only. Again, due to the bump localisation the projection of M onto the mth null vector simplies to R m m < , cm Sm 1 R m < , Sm q m >+ p m w ( q m p m ) ( q m 1 p m 1 + q m +1 p m +1 ) (qm ) (qm1 + qm+1 )

> = O ( ).

Using (7.2.11), the rst scalar product is calculated as R m m = < , Sm rj q m q m >= p m

e(xxm )/cm H (x xm )
j =1

x xm m x xm m t + sj p t m cm cm 1 cm
j =1 0

dx =

ey rj q (y + j ) + sj p (y + j ) dy. m m

Note that = y + j is the local coordinate system for each bump shape as used in Section 7.2.1. We have j = ( x xm )/cm for every m = 1 . . . N and j = 1 . . . 4 (note that j = j (c) and thus they may be different for the different bumps). The point = 0 is the bump centre, in the superposition coordinates
0 i this is xm = xm cm i = xn + cn t + n ( ). The numbers xn are the initial values j

of the bump centres, at t = 0.

168

The second scalar product gives

< R , w ((qm pm )(qm1 pm1 + qm+1 pm+1 ))> = m


j =1 0

rj ey

cm qn (
4

w(cm y + xm x )

n = m 1

x xn n x xn n t) pn ( t) dx d = cn cn
0 i ey w(cm y + xm xm ) dy j

i cm ( x xm ) |q ( i ) p ( i )| m i =2,4 m

i =2,4

rj |q ( i ) p ( i )| m j =1 m

n = m 1

i 0 i 0 xm xn xm xn qn ( ) pn ( ) cn cn


i =2,4 n=m1

ri q n (

0 0 0 x0 xn cm xm xn cm i ) pn ( m i ) . cn cn cn cn

The multipliers ri pop up because of the equality (7.2.14). The arguments of qn , pn show that bump interaction depend mainly on the distance between the
0 0 bumps, as measured by xm xn . The complicated speed factors come in be-

cause distances are seen through the moving local coordinate system of each bump. The second part of the scalar product gives, with (7.2.13),
<Sm , w ((qm )(qm1 + qm+1 ))> =


i =1,3 n=m1

si qn (

0 0 cm xm xn i ). cn cn

Putting all this together, we get a system of equations for the bump centres
0 xm (t). Namely,

1 0 1 xm = 1 cm m

i =1

[ri (qm1 + qm+1 ) + si ( pm1 + pm+1 )] ,

m = 1, . . . , N, (7.2.15)

with the arguments of q and p as in the above calculations. All of the par0 ticipating functions and parameters other than xm are xed and can be found

beforehand as steady states to the solitary bump problem (Section 7.2.1). This is indeed a system of nonlinear ODEs. Note that although we have omitted the index m, the coefcients ri , si are different in each equation. This is due to the
i participating differences xn xn = [ i j ](cn ) in eij and Wij . j

169

Trains of travelling bumps The structure of (7.2.15) becomes more apparent if we take an example where they have identical bump shapes qn ( ) = q( ), pn ( ) = p( ), correspondingly the same crossing points within the local coordinate system and ri , si do not change across the equations. We can set the function f (x) =
4

all bumps move with the same intrinsic speed, c1 = = cn = c. In that case

i =1

ri q x/c i + si p x/c i

(7.2.16)

Equation (7.2.15) then is written simply as 1 1 0 0 0 0 0 xm = 1 f ( x m x m 1 ) + f ( x m x m +1 ) , c with =


y/c f (y) dy. 0 e

(7.2.17)

0 0 Since the bumps lie far apart (xm xm1 > d), equation (7.2.16) conrms that

the interaction between them depends on the asymptotic behaviour of the bump shape. From Section 7.2.1 we can see that it is an exponential decay with a rate 1/c for p( ) (in the direction of motion, p is zero in the other), and /c + 1/

for q( ). We can set f L ( x ) = f ( x )/ for x > 0, and f R ( x ) = f ( x )/ for

x < 0. The functions f L,R ( x ) could be thought of as the proles of forces that are

exponentially decaying with distance. Essentially, in (7.2.17) the mth bump is pulled or repelled by its left and its right neighbour. The sign of the interaction depends on the asymptotics of the connectivity function. A simple steady state solution would imply that all bumps are travelling at the same speed, in a train. This speed might deviate from the solitary bump speed c due to the superposition; let us denote it c(1 + ). The distances m =
0 0 xm xm1 would stay xed. Then (7.2.17) becomes

= f R ( m +1 ) + f L ( m ). Knowing the speed and the spacing between two bumps, from this map we can determine all others. One can test the stability of the train conguration in the usual way. Maps like the above are dynamic systems which posses rich repertoire of solutions (each would determine one patterning of R with bumps). Equations such as (7.2.17) are known as lattice equations and also are extensively studied. Here we do not have the space to go into this. References and more information can be found in [251] and [252]. 170

7.3

Chapter summary

Here we collected several of our results about bumps in 1D models. This is to serve as an introduction to a different set of methods which are used for studying localised solutions. It is a complement to the discussion of periodic pattern formation due to symmetry breaking, which is the focus of most of our work. In Section 7.1 we show two methods for determining the stability of a bump, Evans and Amari method, and compare their results due to existing doubts on the validity of the Amari method. Despite the few steps of approximation involved in it, we show that it gives the same stability conditions as the Evans method. While determining stability of localised structures by utilising Evans function has a solid theoretical base (see references in [247]), the Amari method might be sometimes advantageous in the practice. For a comlicated system, the Evans function might be too difcult to solve, and especially, to track its zeroes with regards to a varying parameter. On the other hand, the Amari method constructs a system of rst order ODEs. One can determine the latters bifurcation diagram in a standard way, or directly use tracking software such as XPP/AUTO, as we do (Appendix 9.8). Further, we use the concepts introduced by the Amari method (namely, that one could thing of the bump as a stable shape and thus consider ODE equations for the movement of only several of its points) to apply a much more advanced method of mathematical physics to a neural eld model. Interactions of the bumps through the neural eld, that are weak enough and do not alter their shapes, can be seen as kinematic forces of attraction and repulsion between the bumps depending only on the mutual distance. This is apparent intuitively when one looks at the numerical simulations with travelling bump patterns in [248]. In Section 7.2 we conrm this intuition by mathematics: we construct ODE equations governing the motion of the centres of the interacting bumps. Analysis of these equations can become a project of its own and we have left it out.

171

C HAPTER 8

Discussion
In Chapter 3 we gave an extensive exposition of why neural eld models are of importance. They occupy their own place in a large and connected spectrum of modelling effort that comprises theoretical neuroscience. At one end are the biophysical models of subneuronal molecular mechanisms, at the other the abstract models of cognitive psychology. Neural elds are often called upon to provide a link between observables in cognitive tasks and a neuronal substrate [253, 254]. Since they are positioned near the top of the modelling hierarchy, there are, unfortunately, few experimental indicators about what the ne details of the model equations should look like. Numerous authors have included one group of neuronal features and excluded another, nding the model that produces dynamics desirable in the particular study. This has led to a plethora of neural eld models with varying differences and commonalities. One of the aims of the present work is to bring some systematisation of our knowledge about how the dynamics depends on some of the common model features. We have focused on the class of 1D neural elds with delays. Our results in Chapters 4 and 5 show that the model behaviour remains similar across different delay mechanisms (conduction, diffusive, and in [140], xed discrete delay). For small and intermediate values of the delay the system can undergo a TuringHopf bifurcation to travelling wavetrains. For larger values of the delay neurons synchronise across space leading to a Hopf bifurcation. Two temporal scales in the model (the delay mechanism plus the delay inherent in synaptic processing, Section 3.1.3) are necessary to obtain patterns that are dynamic in time, just as interaction at two different spatial scales is necessary for spatially inhomogeneous patterns (Turings mechanism). However, 172

while local excitationdistal inhibition connectivity gives rise to static patterns as expected for a Turing bifurcation, it is local inhibitiondistal excitation that permits dynamic ones when combined with the delay. This rule is broken only by introducing more scales in the problem, for example, by adding an adaptive eld (Section 5.3). A number of natural extensions of the models presented here are possible. For example, a treatment of distributed axonal conduction speeds has been suggested by Hutt and Atay [255, 256]. Axonal speeds in cortex are not uniform across neurons; there is experimental evidence for a gamma distribution of velocities [215, 257]. Heterogeneous and non-translationally invariant connectivity is another feature that will make the models more realistic [258]. A problem which becomes feasible with the formulation of generalised connectivity (Section 4.2), is to use neural elds for modelling networks with simple learning and memory abilities [233]. In Chapter 5 we scrupulously derived and investigated the normal form of the steady states TuringHopf bifurcation (up to the cubic order). We found that in neural eld systems (with delay) the normal form is the non-local mean-eld GinzburgLandau equations. The dominant dynamics beyond the bifurcation is travelling wavetrains. Standing waves can be obtained only by the ne tuning of many parameters, which renders them an oddity to observe in any application. We checked the sideband stability of these solutions, but further work may be needed to learn about the dynamics that evolves beyond it. We failed to numerically nd chaos where predicted by the theory. On the other hand, our analysis suggests we shouldnt expect any novel phenomena in integral equations, not previously found in partial differential equations (PDEs). The properties of 1D neural elds are by now relatively well explored. It is not so in the domain of 2D models due to the technical difculties, related on one hand to the great variety of pattern types that could emerge, and on the other, to the challenges of numerical simulation of the models. In Chapter 6 we suggested a method to brave the latter problem for a broad class of equations. It allows the construction of PDE system with dynamics that is similar (in terms of steady states and their stability) to that of the integral equation. We tested and illustrated this method on a neural eld with space-dependent conduction delay and isotropic radially symmetric connectivity. The next challenge is to 173

tackle, both numerically and analytically, models that are not symmetric. We have made a rst step with the modulated patchy connectivity in Section 6.3. This is just a prototype model that could be further elaborated to reect cortical connectivity in the detail that neuroscientists understand today (Section 3.5.2). Work in the same direction has been initiated by [234]. The most interesting phenomena whose investigation is still at its inception are connected with localised solutions. In Chapter 7 we presented several methods for studying localised solutions of 1D neural elds. However the richness of dynamics parallels that of some reactiondiffusion systems such as the Gray Scott model [259] and three-component reaction-diffusion equations [260]. The bumps we constructed are in effect analogs of dissipative solitons (autosolitons) in PDEs [261]. They can behave like quasi-particles, in the sense that two bumps can cross, bounce off, or annihilate each other, or entrain together in a bound state (a 2-bump), depending on the system parameters [248]. To understand these processes in reactiondiffusion systems, Nishiura et al. [262, 263] have suggested the concept of a scattor, a hidden unstable solution that inuences the collision outcome. Adapting this technique to neural elds is a possible avenue of research. One can observe also self-replication (splitting) of bumps similar to [264]. There is a need for a substantial effort in the study of localised solutions of neural elds. The progress that has been made so far has relied on the choice of a Heaviside ring rate function. It is important to move beyond that and be able to tackle models with smooth ring rates like the ones used in the other parts of this Thesis. Also the effect of the delay mechanism on localised solution has not been explored as of yet. A major challenge is to extend the above mentioned methods to problems in 2D domains. All of the described phenomena have their 2D analogs as shown by recent numerical work [248, 265]. In the plane there are also novel types of instabilities such as granulation of activity rings, growth of labyrinthine patterns, rotating bound states, etc. Again, these are reminiscent of dynamics known in reactiondiffusion systems [259, 260, 266, 267]. As a rst step, one could probe the interaction of bumps that lie far apart. It can be reduced to equations of motion for their centres, as we did for the onedimensional case in Section 7.2. If the eld connectivity leads to repulsive dy174

namics at shorter distances it is natural to look at the problem of bump scattering through this mechanism. The extension of the derivation in Section 7.2.15 to a 2D problem should not cause much trouble. In PDE systems, [260, 268, 269] and others have found that scattering of bumps in this way can have complicated outcome depending on their mutual angles (moving bumps are asymmetric) and speeds. More intricate is to extend the Amari analysis of Section 7.1 to 2D. This means composing equations for the movement of a curved front (the points at which the activity crosses the nonlinearity threshold). In this case the front dynamics is strongly inuenced by its nonuniform curvature. One tries to construct equations for the normal velocity and local curvature along the front. This method is necessary for the study of labyrinth growth, or spiral-wave nucleation (spiral waves are also observed in neural elds [270]). The technique in reaction diffusion systems is known as interface dynamics [150, 271, 272]. The work in Chapters 4 and 5 is published in [273]. The work in Chapter 6 is published in [274].

175

C HAPTER 9

Appendix
Here we give details on the numerics used in this thesis and attach some of the codes. Due to the multifaceted nature of our problem, we have used three different packages as the need arises. MatLab [275] was used for the simulations of the PDE and integral systems, and as a versatile programming package for calculating and plotting various functions and transformations of the data or parameters. XPP [276] was used to solve numerically the algebraic equations, mostly the models dispersion relations, and to follow solutions as parameters are varied. Using this software we could easily track bifurcation points and plot critical curves separating regions of different system dynamics in parameter space (XPPAUT incorporates the AUTO numerical continuation software). To work out heavy algebraic expressions, most often in order to prepare them in form suitable for coding in the mentioned packages, we have used the symbolprocessing software Maple [275]. We have already described in the main body of the thesis our approach of matching the theoretical predictions (usually also calculated numerically and possibly containing bugs) with the results from brute force simulations of the full models. Often, as intermediate steps of control we have calculated the same results or generated listings for the same equations in two different ways. Some of these cross-checks will touched upon below. Quite a few of the equation listings take many pages in length, we have omitted most of these, only giving a representative code for solving the simplest version of the respective models. The interested reader can obtain the working scripts by contacting the author or visiting his website http://www.umnaglava.org.


9.1

The dispersion relation for 1D models

To find the points of Turing instability in Chapter 4 we have to solve (4.3.4),
\[
\mathcal L = 1-\gamma_1\,\tilde\eta(\lambda)\,\tilde K(k,\lambda)=0,
\]
together with $\nu=\partial\nu/\partial k=0$. In Section 4.5, by setting $G=\operatorname{Re}\mathcal L$, $H=\operatorname{Im}\mathcal L$, we put these equations in a form more suitable for calculation (4.5.1), (4.5.2), (4.5.4):
\[
G(\nu,\omega,k,\gamma_1)=0,\qquad H(\nu,\omega,k,\gamma_1)=0,\qquad \nu=0,\qquad
\partial_k G\,\partial_\omega H-\partial_k H\,\partial_\omega G=0.
\]
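The last condition is just the tangency requirement $d\nu/dk=0$ written out explicitly; for completeness we record the standard argument behind it. Differentiating $G(\nu(k),\omega(k),k)=0$ and $H(\nu(k),\omega(k),k)=0$ implicitly along the branch of roots and solving for $d\nu/dk$ by Cramer's rule gives
\[
\frac{d\nu}{dk}=\frac{\partial_\omega G\,\partial_k H-\partial_k G\,\partial_\omega H}{\partial_\nu G\,\partial_\omega H-\partial_\omega G\,\partial_\nu H},
\]
so that, provided the denominator does not vanish, $d\nu/dk=0$ is equivalent to the determinant condition above.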

To plot the roots of this algebraic system as various parameters are varied, we used the Newton method and the excellent continuation capabilities implemented in the ODE solver XPP. We set up the equations as a system of first-order ODEs whose dynamic variables are the quantities we want to determine. The code for the above equations looks like this:
Code listing 9.1.1, in XPP format:
g=dGom*dHk-dHom*dGk
nu=eG
om=eH
k=nu

When we ask XPP to find the stationary points of this system, it will equate the right-hand sides to zero (solving the equations we are interested in) and fix constant values for the variables on the left-hand side. The particular pairing of variables with equations is of no significance. To plot the critical curves of instability shown in Figures 4.4 and 4.5, one simply tracks the stationary states of this system in XPP/AUTO with respect to a model parameter. The XPP code for some of the models we studied is listed in the following Sections. Note that we transfer the sign of the wizard hat $w_0$ to the sign of $\gamma_1$, which realistically can only be positive. Now points with $\gamma_c<0$ become valid and denote an inverted wizard hat with a gain of the nonlinearity equal to $|\gamma_c|$.

9.1.1 Axonal delay


In Section 4.5.1 we found the following expressions for the integral transforms,
\[
\tilde\eta(\lambda)=\frac{1}{(1+\lambda/\alpha)^2}\equiv\frac{1}{Q},\qquad
\tilde K(k,\lambda)=2\left[\frac{1+\lambda/v}{(1+\lambda/v)^2+k^2}-\frac{(1+\lambda/v)^2-k^2}{\bigl[(1+\lambda/v)^2+k^2\bigr]^2}\right]\equiv 2\,\frac{V_D}{P_D}.
\]

The dispersion relation can be rearranged to $Q\,P_D+2\gamma_1 V_D=0$, which is a polynomial in $\lambda$. We used Maple to expand it into powers of $\lambda$ and to extract the coefficients $a_i(\alpha,v,\gamma_1,k)$. Separately we worked out the real and imaginary parts of each power $\lambda^n=(\nu+i\omega)^n$, together with their derivatives with respect to $\omega$. The resulting expressions are put together in an .ode script for XPP:
Code listing 9.1.2, in XPP format:
#coefficients of the polynomials G and H
a6=1
a5=4*v+2*alpha
a4=2*(k^2+3)*v^2+8*alpha*v+alpha^2
a3=4*(k^2+1)*v^3+4*alpha*(3+k^2)*v^2+2*alpha^2*(g+2)*v
a2=(2*k^2+1+k^4)*v^4+8*alpha*(1+k^2)*v^3+2*alpha^2*(3*g+k^2+3-g*r)*v^2
a1=2*alpha*(2*k^2+k^4+1)*v^4+2*alpha^2*(g*k^2+2+3*g+2*k^2-2*g*r)*v^3
a0=alpha^2*(k^4+2*(g+g*r+1)*k^2+1+2*g-2*g*r)*v^4
#derivatives by k
da6=0
da5=0
da4=4*k*v^2
da3=(alpha+v)*8*v^2*k
da2=(v^2+4*alpha*v+alpha^2)*v^2*k*4+4*k^3*v^4
da1=(g*alpha+2*v+2*alpha)*k*4*alpha*v^3+8*alpha*k^3*v^4
da0=4*alpha^2*k^3*v^4+(1+g+g*r)*k*4*v^4*alpha^2
#real and imaginary parts of the lambda^n powers, and their derivatives
re1=nu
im1=om
dre1=0
dim1=1
re2=nu^2-om^2
im2=2*nu*om
dre2=-2*om
dim2=2*nu
re3=nu^3-3*nu*om^2
im3=3*nu^2*om-om^3
dre3=-6*nu*om
dim3=3*nu^2-3*om^2
re4=nu^4-6*nu^2*om^2+om^4
im4=4*nu^3*om-4*nu*om^3
dre4=-12*nu^2*om+4*om^3
dim4=4*nu^3-12*nu*om^2
re5=nu^5-10*nu^3*om^2+5*nu*om^4
im5=5*nu^4*om-10*nu^2*om^3+om^5
dre5=-20*nu^3*om+20*nu*om^3
dim5=5*nu^4-30*nu^2*om^2+5*om^4


re6=nu^6-15*nu^4*om^2+15*nu^2*om^4-om^6
im6=6*nu^5*om-20*nu^3*om^3+6*nu*om^5
dre6=-30*nu^4*om+60*nu^2*om^3-6*om^5
dim6=6*nu^5-60*nu^3*om^2+30*nu*om^4
#putting together the polynomials and coefficients
eG=a6*re6+a5*re5+a4*re4+a3*re3+a2*re2+a1*re1+a0
eH=a6*im6+a5*im5+a4*im4+a3*im3+a2*im2+a1*im1
dGom=a6*dre6+a5*dre5+a4*dre4+a3*dre3+a2*dre2+a1*dre1
dHom=a6*dim6+a5*dim5+a4*dim4+a3*dim3+a2*dim2+a1*dim1
dGk=da6*re6+da5*re5+da4*re4+da3*re3+da2*re2+da1*re1+da0
dHk=da6*im6+da5*im5+da4*im4+da3*im3+da2*im2+da1*im1
#the Turing conditions (XPP equates to zero the right-hand sides)
g=dGom*dHk-dHom*dGk
nu=eG
om=eH
k=nu
##xpp options
@ nmesh=80, meth=2rb, parmax=5, dsmax=0.1
##for access to the wavespeed
aux c=om/k
##various initial conditions for xpp
#the minimum instability
#par v=1, r=1, alpha=1
#init nu=0, om=1.73205, k=2, g=8
init nu=-1, om=0.5
#the 2nd eigen for small v
#par v=0.0728988, r=1, alpha=1
#init nu=0, om=1.5537, k=17.129, g=13.653
#the 2nd eigen near minimum
#par v=0.50940578848, r=1, alpha=1
#init nu=0, om=1.794, k=0, g=8.03
done

Using polynomial expansion and crude Maple algorithms results in a much less compact and more computationally intensive (for XPP) form of the problem, but it has the advantage that it is easy to generate new code for versions and extensions of the model, as well as being more secure against human errors.
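Purely as an illustration of this workflow (our own sketch, not part of the original Maple/XPP toolchain), the same expansion can be reproduced in MatLab's Symbolic Math Toolbox; here g stands for the gain parameter and r is the connectivity parameter appearing in the coefficients of Listing 9.1.2:

syms lam k v alpha g r
a   = 1 + lam/v;                               % a = 1 + lambda/v for the axonal-delay model
Q   = (1 + lam/alpha)^2;                       % synaptic factor, eta-tilde = 1/Q
PD  = (a^2 + k^2)^2;
VD  = (a - r)*a^2 + k^2*(a + r);               % connectivity numerator (the form quoted in the text is r = 1)
pol = expand(alpha^2*v^4*(Q*PD + 2*g*VD));     % the dispersion relation as a polynomial in lambda
a5  = simplify(subs(diff(pol,lam,5),lam,0)/factorial(5))   % equals 4*v + 2*alpha, the a5 of Listing 9.1.2
a0  = simplify(subs(pol,lam,0))                % equals alpha^2*(k^4+2*(g+g*r+1)*k^2+1+2*g-2*g*r)*v^4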


9.1.2 Axo-dendritic delay


The model with both axonal and dendritic space-dependent delays (Section 4.5.2) has a dispersion relation
\[
1=\tilde\eta(\lambda)\,E(z_0,\lambda)\,\frac{2\gamma_1 V_D(k,\lambda)}{P_D(k,\lambda)},
\]
specified as follows (these awkward names have been chosen to correspond to the mnemonic names used in the code below),
\[
\tilde\eta(\lambda)=\frac{1}{(1+\lambda/\alpha)^2}\equiv\frac{1}{Q},\qquad
\mathrm{sq}=\sqrt{\iota+\lambda/D},\qquad
E(z_0,\lambda)=\frac{e^{-z_0\,\mathrm{sq}}}{D\,\mathrm{sq}},\qquad
a=1+\frac{\lambda}{v}+\varphi\,\mathrm{sq},
\]
\[
V_D=(a-1)\,a^2+k^2(a+1),\qquad P_D=(a^2+k^2)^2.
\]

Unlike Appendix 9.1.1, in this case the dispersion relation is not polynomial, and we have to write out the real and imaginary parts manually. We did this for each subexpression, combining them in a nested sequence. In the code below the equation is represented in the form $Q\,\mathrm{sq}\,E_1\,P_D+g\,V_D=0$, with $E_1=e^{i z_0\operatorname{Im}\mathrm{sq}}$ and $g=2\gamma_1/(D\,e^{z_0\operatorname{Re}\mathrm{sq}})$. In this arrangement $g$ is a real coefficient and $Q\,\mathrm{sq}\,E_1$ is the part that contains $\lambda$ but no $k$, referred to as the lambda part in the code. The axonal delay is represented as $b=1/v$, so that it can easily be turned off to study the model with dendritic delay only. If, further, one sets $\varphi=0$, one obtains the model with fixed synaptic position.
Code listing 9.1.3, in XPP format:
theta=atan(om/(cD*iota+nu)) modz2=sqrt((iota+nu/cD)^2+om^2/cD^2) resq=sqrt(modz2)*cos(theta/2) imsq=sqrt(modz2)*sin(theta/2) reoverz=resq/modz2 imoverz=-imsq/modz2 reQ=(1+nu/alpha)^2-om^2/alpha^2 imQ=2*om/alpha*(1+nu/alpha) reE=cos(imsq*z0) imE=sin(imsq*z0) #this is for a=1+lambda/v+phi*sqrt(iota+lambda/cD) and a^2 rea=1+nu*b+phi*resq ima=om*b+phi*imsq rea2=(1+nu*b)^2-om^2*b^2+(iota+nu/cD)*phi^2+2*phi*((1+nu*b)*resq-om*b*imsq) ima2=2*om*b*(1+nu*b)+om/cD*phi^2+2*phi*((1+nu*b)*imsq+om*b*resq)


rePD=(rea2+k^2)^2-ima2^2 imPD=2*(rea2+k^2)*ima2 reVD=k^2-rea2+rea*(rea2+k^2)-ima*ima2 imVD=-ima2+ima*(rea2+k^2)+rea*ima2 relampart=reQ*resq*reE-reQ*imsq*imE-imQ*resq*imE-imQ*imsq*reE imlampart=-imQ*imsq*imE+imQ*resq*reE+reQ*imsq*reE+reQ*resq*imE #derivatives by k^2 of above expressions redPD=2*(rea2+k^2) imdPD=2*ima2 reVDdk=1+rea imVDdk=ima #a 2*k falls in both, i put it on the g1 equation below #for the derivatives d_dom = d_dlam * i,

#it is applied at the dGom, dHom lines below and only there reda=1*b+phi*reoverz/(2*cD) imda=phi*imoverz/(2*cD) rePDdlam=2*(rea*redPD*reda-rea*imdPD*imda-ima*redPD*imda-ima*imdPD*reda) imPDdlam=2*(ima*redPD*reda+rea*imdPD*reda+rea*redPD*imda-ima*imdPD*imda) reVDdlam=reda*(-2*rea+3*rea2+k^2)-imda*(-2*ima+3*ima2) imVDdlam=imda*(-2*rea+3*rea2-k^2)+reda*(-2*ima+3*ima2) #derivative of the lambda part relamdlam=exp(resq*z0)*(reE*(reQ/2+reQ*reoverz/2-imQ*imoverz/2+2/alpha*(resq-om/alpha*imsq))imE*(imQ/2+imQ*reoverz/2+reQ*imoverz/2+2/alpha*(imsq+om/alpha*resq))) imlamdlam=exp(resq*z0)*(imE*(reQ/2+reQ*reoverz/2-imQ*imoverz/2+2/alpha*(resq-om/alpha*imsq))+ reE*(imQ/2+imQ*reoverz/2+reQ*imoverz/2+2/alpha*(imsq+om/alpha*resq))) g=2*g1/(cD*exp(resq*z0)) eG=rePD*relampart-imPD*imlampart+g*reVD eH=rePD*imlampart+imPD*relampart+g*imVD dGk=redPD*relampart-imdPD*imlampart+g*reVDdk dHk=redPD*imlampart+imdPD*relampart+g*imVDdk dHom=rePDdlam*relampart-imPDdlam*imlampart+rePD*relamdlam-imPD*imlamdlam+2*g1/cD*reVDdlam dGom=-(rePDdlam*imlampart+imPDdlam*relampart+imPD*relamdlam+rePD*imlamdlam+2*g1/cD*imVDdlam) g1=(dGom*dHk-dHom*dGk)*2*k nu=eG om=eH k=nu @ nmesh=80, meth=2rb, dsmax=0.1, parmax=8 #par k=1, g1=15.1763 par z0=1, phi=0, cD=1, iota=1, alpha=1 par b=0 init om=0, nu=0 done


9.1.3 Models with adaptation


Each of the above models could be extended with adaptation dynamics, according to (5.3.1). The Fourier-Laplace transform of the nonlocal linear adaptation term ($f_a(u)=u$, Section 5.3) is
\[
\tilde\eta_a\,\tilde w_a=\frac{1}{(1+\lambda/\alpha_a)(1+\sigma^2k^2)}\equiv\frac{1}{Q_a\,P_{Da}}.
\]
For example, for the model with axonal delay the dispersion relation looks like
\[
Q\,Q_a\,P_D\,P_{Da}+\gamma_a P_D+2\gamma_1 V_D\,Q_a\,P_{Da}=0.
\]
In the case of nonlinear adaptation with $f_a\equiv f$ one simply replaces $\gamma_a$ with $\gamma_a\gamma_1$ (do this also in the code below). Setting $\gamma_a=0$ we get the model without adaptation; setting $\sigma=0$ we get $P_{Da}\to1$ and a model with local adaptation. Similarly to Appendix 9.1.1 we expand the dispersion relation as a polynomial of $\lambda$. We have to replace the polynomial coefficients in Listing 9.1.2 with the following (for a linear adaptation term):
Code listing 9.1.4, in XPP format:
a7=1+sigma^2*k^2 a6=(4+4*sigma^2*k^2)*v+3*alpha+3*alpha*sigma^2*k^2 a5=(2*sigma^2*k^4+6*sigma^2*k^2+6+2*k^2)*v^2+(12*alpha+12*alpha*sigma^2*k^2)*v+ 3*alpha^2+3*alpha^2*sigma^2*k^2 a4=(4*sigma^2*k^4+4*k^2+4*sigma^2*k^2+4)*v^3+(18*alpha+18*alpha*sigma^2*k^2+ 6*alpha*sigma^2*k^4+6*alpha*k^2)*v^2+(2*alpha^2*g1*sigma^2*k^2+12*alpha^2+ 12*alpha^2*sigma^2*k^2+2*alpha^2*g1)*v+alpha^3*sigma^2*k^2+alpha^3+alpha^3*g1a a3=(1+k^4+sigma^2*k^2+2*sigma^2*k^4+2*k^2+sigma^2*k^6)*v^4+(12*alpha*sigma^2*k^4+12*alpha+ 12*alpha*sigma^2*k^2+12*alpha*k^2)*v^3+(18*alpha^2+4*alpha^2*g1+ 18*alpha^2*sigma^2*k^2+4*alpha^2*g1*sigma^2*k^2+6*alpha^2*k^2+ 6*alpha^2*sigma^2*k^4)*v^2+(2*alpha^3*g1+4*alpha^3*sigma^2*k^2+4*alpha^3*g1a+ 2*alpha^3*g1*sigma^2*k^2+4*alpha^3)*v a2=(3*alpha+3*alpha*k^4+6*alpha*k^2+6*alpha*sigma^2*k^4+3*alpha*sigma^2*k^2+ 3*alpha*sigma^2*k^6)*v^4+(12*alpha^2*sigma^2*k^4+12*alpha^2*k^2+ 12*alpha^2*sigma^2*k^2+2*alpha^2*g1*sigma^2*k^4+12*alpha^2+2*alpha^2*g1+ 2*alpha^2*g1*k^2+2*alpha^2*g1*sigma^2*k^2)*v^3+(6*alpha^3*g1a+4*alpha^3*g1+ 6*alpha^3*sigma^2*k^2+6*alpha^3+2*alpha^3*sigma^2*k^4+2*alpha^3*k^2+ 4*alpha^3*g1*sigma^2*k^2+2*alpha^3*g1a*k^2)*v^2 a1=(3*alpha^2*k^4+4*alpha^2*g1*k^2+3*alpha^2*sigma^2*k^6+6*alpha^2*k^2+ 6*alpha^2*sigma^2*k^4+3*alpha^2*sigma^2*k^2+4*alpha^2*g1*sigma^2*k^4+ 3*alpha^2)*v^4+(2*alpha^3*g1+2*alpha^3*g1*k^2+4*alpha^3*sigma^2*k^4+ 2*alpha^3*g1*sigma^2*k^2+4*alpha^3*sigma^2*k^2+4*alpha^3*g1a*k^2+4*alpha^3*k^2+ 4*alpha^3*g1a+4*alpha^3+2*alpha^3*g1*sigma^2*k^4)*v^3 a0=(alpha^3+2*alpha^3*g1a*k^2+alpha^3*g1a*k^4+alpha^3*k^4+alpha^3*g1a+2*alpha^3*sigma^2*k^4+ 4*alpha^3*g1*k^2+alpha^3*sigma^2*k^6+alpha^3*sigma^2*k^2+ 4*alpha^3*g1*sigma^2*k^4+2*alpha^3*k^2)*v^4



da7=2*sigma^2*k da6=8*sigma^2*k*v+6*alpha*sigma^2*k da5=(8*sigma^2*k^3+12*sigma^2*k+4*k)*v^2+24*alpha*sigma^2*k*v+6*alpha^2*sigma^2*k da4=(16*sigma^2*k^3+8*k+8*sigma^2*k)*v^3+(36*alpha*sigma^2*k+24*alpha*sigma^2*k^3+ 12*alpha*k)*v^2+(4*alpha^2*g1*sigma^2*k+24*alpha^2*sigma^2*k)*v+2*alpha^3*sigma^2*k da3=(4*k^3+2*sigma^2*k+8*sigma^2*k^3+4*k+6*sigma^2*k^5)*v^4+ (48*alpha*sigma^2*k^3+24*alpha*sigma^2*k+24*alpha*k)*v^3+(36*alpha^2*sigma^2*k+ 8*alpha^2*g1*sigma^2*k+12*alpha^2*k+24*alpha^2*sigma^2*k^3)*v^2+ (8*alpha^3*sigma^2*k+4*alpha^3*g1*sigma^2*k)*v da2=(12*alpha*k^3+12*alpha*k+24*alpha*sigma^2*k^3+6*alpha*sigma^2*k+ 18*alpha*sigma^2*k^5)*v^4+(48*alpha^2*sigma^2*k^3+24*alpha^2*k+ 24*alpha^2*sigma^2*k+8*alpha^2*g1*sigma^2*k^3+4*alpha^2*g1*k+ 4*alpha^2*g1*sigma^2*k)*v^3+(12*alpha^3*sigma^2*k+8*alpha^3*sigma^2*k^3+ 4*alpha^3*k+8*alpha^3*g1*sigma^2*k+4*alpha^3*g1a*k)*v^2 da1=(12*alpha^2*k^3+8*alpha^2*g1*k+18*alpha^2*sigma^2*k^5+12*alpha^2*k+24*alpha^2*sigma^2*k^3+ 6*alpha^2*sigma^2*k+16*alpha^2*g1*sigma^2*k^3)*v^4+(4*alpha^3*g1*k+ 16*alpha^3*sigma^2*k^3+4*alpha^3*g1*sigma^2*k+8*alpha^3*sigma^2*k+8*alpha^3*g1a*k+ 8*alpha^3*k+8*alpha^3*g1*sigma^2*k^3)*v^3 da0=(4*alpha^3*g1a*k+4*alpha^3*g1a*k^3+4*alpha^3*k^3+8*alpha^3*sigma^2*k^3+8*alpha^3*g1*k+ 6*alpha^3*sigma^2*k^5+2*alpha^3*sigma^2*k+16*alpha^3*g1*sigma^2*k^3+4*alpha^3*k)*v^4

We have put $\alpha=1$ beforehand in order to shorten the code. Note that here we have to add to the polynomial expressions a seventh power of $\nu+i\omega$.

9.2

1D neural field numerics in integral framework

Here we show how to evolve numerically the dynamics of the full nonlinear neural field model with axonal delays of Chapter 4, in its original integral formulation (4.2.1). In fact, we will solve an integro-differential equation, because we prefer the synaptic dynamics to be represented as a differential operator $Q=(1+\partial_t/\alpha)^2$ rather than as its Green's function $\eta(t)=\alpha^2 t\,e^{-\alpha t}H(t)$ (see Section 3.1.3). This spares us calculating one more integral over the history, whilst allowing us to take advantage of the efficient MatLab algorithms for solving differential equations. Generally, an equation of the form $Qu(x,t)=\psi$, where the right-hand side contains the integral, will be solved (for $\alpha=1$) as the first-order ODE system (with $u(x,t)=u_0$)
\[
\dot u_0=u_1,\qquad \dot u_1=-2u_1-u_0+\psi(x,t,u_0).\qquad(9.2.1)
\]
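For completeness, the equivalence of the operator and Green's-function forms is simply the statement that $\eta$ is the impulse response of $Q$: taking the Laplace transform,
\[
\tilde\eta(\lambda)=\int_0^\infty\alpha^2\,t\,e^{-\alpha t}e^{-\lambda t}\,dt=\frac{1}{(1+\lambda/\alpha)^2},
\]
so $Qu=\psi$ and $u=\eta\otimes\psi$ describe the same dynamics, and for $\alpha=1$ writing $Q=(1+\partial_t)^2$ out in components gives exactly the first-order system (9.2.1).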


The integral is
\[
\psi=\int_{\mathbb R}w(x-y)\,f\bigl(u(y,\,t-|x-y|/v)\bigr)\,dy=\int_{\mathbb R}w(z)\,f\bigl(u(x-z,\,t-|z|/v)\bigr)\,dz,
\]
where $z=x-y$ is the distance between two communicating neurons. Since the wizard hat $w(z)$ falls off exponentially to zero with distance, we need only integrate over an interval $z\in[-L,L]$ for some large $L$. While we have not attempted to estimate the method error, in practice we found that taking $L=12$ we can get the instability points within $0.01$ of the theoretically predicted values for $\gamma_c$. To use the symmetry of $w(z)$ we will calculate the integral as
\[
\psi_L=\int_0^L w(z)\bigl[f\bigl(u(x-z,\,t-z/v)\bigr)+f\bigl(u(x+z,\,t-z/v)\bigr)\bigr]\,dz.
\]

We will refer to the bracketed part of this product as $G(z)$ (in the code Gz or Gl + Gr). Introducing a grid $z_n=n\Delta z$, $n=0,\dots,N$, with spacing $\Delta z$ such that $L=N\Delta z$, we can split the integral into
\[
\psi_L=\sum_{n=0}^{N-1}\int_{z_n}^{z_{n+1}}w(z)\,G(z)\,dz.\qquad(9.2.2)
\]

The same spatial grid serves both coordinates x and z. The difficulty comes from the diagonal relationship between them, i.e. between the data stored as $G(x,t)$ and as $G(x,z)$: if we were to represent the history of the dynamics at a given moment in an array $G(x,t)$, the input to a neuron $x$ consists of the left- and right-going diagonals $G(x\mp n\Delta z,\,t-n\Delta z/v)$, $n=0,\dots,N$, in this array, as shown in Fig. 9.1. We present three different ways to code up this problem, which we used as cross-checks and to hunt out bugs. In all cases, we solve the equation on a domain with periodic boundary conditions, with initial conditions assumed to have been constant in time until the starting point of the simulation. The codes below expect that the user has preloaded in the MatLab environment the model parameters v, $\gamma_1$, $\gamma_2$, $\gamma_3$, k, $\beta$, h, etc., as well as the desired grid parameters (good starting values are given as comments in Listing 9.2.1). The value of k is only used to make sure that the simulation domain is the correct size to fit periodic solutions with that wavenumber. For visualisation of the solution one can use the commands from Listing 9.3.4.


[Figure 9.1 appears here.]

Figure 9.1: A sketch of the data transformation in the algorithms for an integral model with space-dependent delays, z = x/v. The actual integration is contained in the matrix multiplication with w(z).
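To make the index bookkeeping of Fig. 9.1 concrete, the following schematic loop (our own illustration, not one of the production codes; it assumes U is the NH-by-(Nl+1) history array whose column n+1 holds the field n delay steps ago, wz already contains the quadrature weights as in Listing 9.2.4, and f is the firing rate function) computes the same input $\psi$ that the vectorised spdiags codes below produce:

psi = zeros(NH,1);
for i = 1:NH
    s = 0;
    for n = 0:Nl                          % n*dz is the distance, n*dz/v the corresponding delay
        il = mod(i-1-n, NH) + 1;          % neuron a distance n*dz to the left (periodic domain)
        ir = mod(i-1+n, NH) + 1;          % ... and to the right
        s  = s + wz(n+1)*( f(U(il,n+1)) + f(U(ir,n+1)) );
    end
    psi(i) = s;                           % input to neuron i; cost O(NH*Nl), for illustration only
end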

Manual time-stepping scheme. First we introduce a simpler algorithmic scheme, which does not make use of the MatLab differential solvers; we use an explicit Euler method instead. The integrals of (9.2.2) are approximated by the trapezoidal rule, which we have incorporated within the multiplication with the wizard hat wz. Having our own time-step loop allows us to update and manipulate the data history $G(x,t)$ manually (in practice there are two histories: Gl represents the input from neurons to the left, $f(u(x-z,t-z/v))$, and Gr that from neurons to the right, $f(u(x+z,t-z/v))$), and to present more straightforward code to the reader. The sideways shifting of older data is taken care of once at every time step, unlike in the later algorithms, where we have to rediagonalise the same data every time. Despite this, the code is not efficient, due to the necessity of very small grid spacings to keep the numerical error at acceptable levels. The spatial and time grids are tied together rather inflexibly through $\Delta t=\Delta z$ and $\Delta x=\Delta t/v$, and the sizes of the data arrays can become prohibitively large. One has to worry also about the numerical stability of the explicit differentiation scheme.
Code listing 9.2.1, in MatLab format:
%example parameters % v=1; k=2; % g1=8.1; g2s=0; g3s=-0.2; % NH=100; % L=10; % tfinal=50; pt=200; period=4*2*pi/k; dx=period/NH; %the wavenumber k is only used to determine the simulation domain size %for a cubic nonlinearity just beyond turing-hopf bifurc %No of spatial points %spatial cutoff %length of simulation and number of time points %the domain size - in this case - to fit four instability waves


x=dx*(0:NH-1); dt=dx*v; z=[0:dt:L]; Nl=size(z,2);

%the spatial grid %to make symmetrical grid around zero from -L to L %all grid is 2*Nl+1 points %right half of wiz hat %for the trapezoidal rule

wz=(abs(z)-1).*exp(-abs(z)); wz([1 end])=wz([1 end])/2; tspan=dt:dt:tfinal; %%various initial conditions %bas=rand(NH,1)*2-0.5; %uinit=zeros(NH*2,1);

% bas=exp(-2*(x-8).^2);%+exp(-0.2*(x-18).^2)/2; bas=cos(k*x); %bas=[zeros(NH/4,1); ones(NH/4,1); zeros(NH/2,1)]; %we set up initial conditions as constant over remembered history %we need to roll them over one by one with each time step back Gl=bas; Gr=bas; %z=x-y %z=x+y %shifting in space previous states to get diagonalization

for ti=1:Nl-1 Gr=[Gr(:,2:end) Gr(:,1)]; Gl=[Gl(:,end) Gl(:,1:end-1)]; Gr=[bas; Gr]; Gl=[bas; Gl]; end Gl=A*Gl; Gr=A*Gr; %comment next two lines to continue a previous solution u0=A*bas; u1=zeros(NH,1); tic %the time-stepping for the ODEs u=u0; for ti=tspan fu=(g1*u0+g2s*u0.^2+g3s*u0.^3); Gr=[Gr(:,2:end) Gr(:,1)]; Gl=[Gl(:,end) Gl(:,1:end-1)]; Gr=[fu; Gr(1:end-1,:)]; Gl=[fu; Gl(1:end-1,:)]; psi=wz*(Gr+Gl)*dt; nu0=u0+dt*u1; u1=u1+dt*(-2*u1-u0+psi); u0=nu0; u=[u u0]; end toc t=[0 tspan]; %trapezoidal rule on half integrals %explicit Euler %storing current state and loosing the farthest ago %a cubic nonlinearity, g1 is the bifurcation param gamma_1 %shifting in space previous states to get diagonalization


Hutt et al. scheme. Here we follow the more sophisticated approach described in [201]; however, we implement their formulas with a MatLab differential solver. Since in the right-hand side of the ODEs (9.2.1) one needs to access the values of the dynamic variable $u_0$ at previous points in time, we have to use a solver for delayed differential equations, dde23. It expects as a parameter a list of fixed delay intervals which, inside every call of the function specifying the ODE right-hand side, is used to generate an array of solution snapshots at the correct time moments. We pass on as such a list the grid $z_n/v$. This allows us to keep the temporal and spatial discretisation largely independent. The main file is
Code listing 9.2.2, in MatLab format:
%ae=6; ai=5; re=0.5; ri=1; muP=2.5; %ae=ae/(2*re); %ai=ai/(2*ri); Nl=ceil(L*NH/period); dx=period/NH; x=dx*(0:NH-1); dz=dx; z=dz/v*(1:Nl); %the time delay is z/v %z does not include the current moment (z = 0) %the last point may overshoot the value of L Nl=Nl+1; %since in RHS the variables will include also the point z=0 %the number of points in wizhat, so that they have the same density as in x %if using the Mex hat connectivity of Hutts paper

%various initial conditions %bas=rand(NH,1)*2-0.5; %uinit=zeros(NH*2,1); bas=exp(-2*(x-8).^2)+exp(-0.2*(x-18).^2)/2; % bas=cos(k*x); %bas=[zeros(NH/4,1); ones(NH/4,1); zeros(NH/2,1)]; uinit=[bas zeros(NH,1)]; uinit=A*uinit; %A is the amplitude of initial condition

%using MatLab solver for delayed ODEs options = ddeset(RelTol,1e-6,AbsTol,1e-6); %,InitialY,uinit tic sol = dde23(Huttrhs, z, uinit, 0:tfinal/pt:tfinal, options, NH, Nl, dz, g1, g2s,g3s); %... ae, ai, re, ri); toc u=sol.y(1:NH,:); t=sol.x; %z picks the list of delay intervals % Extract u variable from solution array %we can omit that for a const history

The right-hand side (RHS) of the differential system is specified by the function in Listing 9.2.3. Here the array Z passed on by the solver is two-dimensional, each column being a snapshot $u(x,\,t-z_n/v)$, with the first column containing the most recent time moment. A row, thus, represents the history of a single spatial point $x_n$. To find the input to point $x_n$ at the current moment we need to find the diagonals (upwards and downwards) starting from that point. Below, we may have to manipulate the shape of the array to prepare it for automatic extraction of the diagonals by the MatLab function spdiags. If the time history is longer than the width of the domain, the diagonals have to wrap around the edges of the periodic domain. To do this we chop up the array into square parts and use spdiags on each. The square shape ensures that when we join back the results, the diagonals will match up correctly. Unfortunately, since Z is provided automatically, the diagonalisation procedure has to be repeated every time (at every call of the RHS function) on the entire data. The formula used for the actual integration of $\psi_L$ is derived after the code.
Code listing 9.2.3, in MatLab format:
function dudt=HuttRHS(t,u,Z,N,Nl,dz,g,g2,g3) %Z - the solution in past moments %Z: x increases downwards, t increases (longer ago) to the right u=reshape(u,[],2); u0=u(:,1); u1=u(:,2); Z=[u0 Z(1:N,:)]; % beta=4*g; h=0; %the nonlinearity at all those points %we dont need the derivatives in the past, we add z = 0 %separate u_0 and the derivative u_1 %...ae,ai,re,ri) %the array u conains the solution at the current moment t


%fu=1./(1+exp(-beta*(Z-h))); fu=(g*Z+g2*Z.^2+g3*Z.^3); %effectively, we work with G_n = G(z_n) %in the following loop we cut up the array into square blocks %if it has longer horizontal (t) than vertical side (x) beg=0; eGd=zeros(1,N); eGu=eGd; while beg<Nl Gp=fu(:,beg+1:min(beg+N,Nl)); if size(Gp,2)>1 Gdown=spdiags(Gp,[0:-1:-N+1])+[zeros(size(Gp,2),1) spdiags(Gp,[N-1:-1:1])]; %now in each column we have the past history of inputs to that neuron (from left) else Gdown=Gp; end Gp=Gp(N:-1:1,:); if size(Gp,2)>1 Gup=spdiags(Gp,[0:-1:-N+1])+[zeros(size(Gp,2),1) spdiags(Gp,[N-1:-1:1])]; else Gup=Gp; %turn it around and repeat the same to get the inputs from right side %first block %extracted diagonals will be stored here %we work simultaneously with upwards and downwards diagonals


end beg=beg+N; eGd=[eGd; Gdown]; eGu=[eGu; Gup]; end eGd=eGd(2:Nl+1,:); eGu=eGu(2:Nl+1,:); eGu=eGu(:,N:-1:1); G=eGd+eGu; G=[G; G(1,:)]; Gn=G(1:Nl-1,:); Gnp1=G(2:Nl,:); Gnp2=G(3:Nl+1,:); z=dz*(0:Nl-1); L=z(end); %to make L correspond to the last point exactly %for the mex hat connectivity in Hutt %exp1=ae*exp(-re*dz*(1:Nl-1))./re^2; %exp2=-ai*exp(-ri*dz*(1:Nl-1))./ri^2; %wz=ndgrid(exp1+exp2,ones(N,1)); z=z(2:end); bl=ndgrid(exp(-z).*(1+z),ones(N,1)); psi=sum(bl.*(Gnp2-2*Gnp1+Gn)); % %the integration for mex hat connectivity: %psi=psi+(ae/re-ai/ri).*dz.*G(1,:)+(ae/re^2-ai/ri^2).*(G(2,:)-G(1,:))-... % % (ae*exp(-re*L)/re-ai*exp(-ri*L)/ri).*dz.*G(Nl,:)+... (ae*exp(-re*L)/re^2-ai*exp(-ri*L)/ri^2).*(G(Nl+1,:)-G(Nl,:)); %G(L) is G(Nl+1,:) %G(0) is G(1,:) %for our wiz hat connectivity %the integration %end terms of the sum %this is NOT w(z) %remember we had the order reversed %now x increases to the right, z - downwards %get rid of initializing row of zeros %preparing for next block %re-join the square matrices of the results

%here we prepare for Hutts integration formula:

psi=psi+G(2,:)-G(1,:)-exp(-L).*(G(Nl,:).*L.*dz+(1+L).*(G(Nl+1,:)-G(Nl,:)));

psi=psi./dz; %finally, the ODE system: dudt=[u1; -2*u1-u0+psi];

In the code above we have not yet explained the expressions we used for approximating the integrals (9.2.2). First, we followed the complicated scheme offered by Hutt et al. [201]. It takes advantage of the explicit form of $w(z)$, involving exponentials, to partially integrate the expression for $\psi_L$. Within each interval $\Delta z$ one makes a linear approximation of $G(z)$ as follows,
\[
G(z)\approx G(z_n)+\frac{G(z_{n+1})-G(z_n)}{\Delta z}\,(z-z_n).
\]
It is now easy to integrate
\[
\psi_L=\frac{1}{\Delta z}\sum_{n=0}^{N-1}\int_{z_n}^{z_{n+1}}(z-1)e^{-z}\bigl[G_n(\Delta z+z_n)-G_{n+1}z_n+(G_{n+1}-G_n)z\bigr]\,dz
\]


(we set $G(z_n)\equiv G_n$). One obtains the sum
\[
\begin{aligned}
\psi_L={}&\frac{1}{\Delta z}\sum_{n=0}^{N-1}e^{-z_n}\bigl[(G_n z_{n+1}-G_{n+1}z_n)z_n+(G_{n+1}-G_n)(z_n^2+z_n+1)\bigr]\\
&-\frac{1}{\Delta z}\sum_{n=0}^{N-1}e^{-z_{n+1}}\bigl[(G_n z_{n+1}-G_{n+1}z_n)z_{n+1}+(G_{n+1}-G_n)(z_{n+1}^2+z_{n+1}+1)\bigr].
\end{aligned}
\]

Rearranging the terms, the formula used in Listing 9.2.3 is
\[
\psi_L=\frac{1}{\Delta z}\Bigl[\sum_{n=0}^{N-2}e^{-z_{n+1}}(G_{n+2}-2G_{n+1}+G_n)(1+z_{n+1})
+G(\Delta z)-G(0)-e^{-L}\bigl(G(L)\,L\,\Delta z+(1+L)\bigl(G(L)-G(L-\Delta z)\bigr)\bigr)\Bigr].
\]

The formula given in [201] is slightly different. It can be obtained from this one if one assumes $G(L+\Delta z)=G(\Delta z)$. However, in our view this is incorrect: the domain of the equation is periodic in $x$, but $G(z)$ includes also the time $t$, which is not periodic. Although the scheme in Listings 9.2.2, 9.2.3 takes advantage of manual pre-integration of the connectivity function, the resulting complicated formula turned out to be more computationally expensive, at least in the MatLab implementation that we use. Another disadvantage is that for a different connectivity one has to recalculate all the integrals. We had to do this when we wanted to run our code, for comparison, on the specific model used by Hutt et al. Their connectivity is a Mexican hat defined by a difference of exponentials (see Section 3.3.1), $a_e e^{-|x| r_e}-a_i e^{-|x| r_i}$. The new expressions for the integrals are given as comments in the code. In the last part of this Appendix we set about simplifying this scheme.

Efficient scheme. We keep the ideas of using a delayed ODE solver and diagonalisation of the history array. However, we use a trapezoid rule to approximate the general expression (9.2.2) directly. Similarly to Listing 9.2.1, we incorporate the rule into the definition of the wizard hat wz:
Code listing 9.2.4, in MatLab format:
Nl=ceil(L*NH/period); %points in L so that the distance between points in Nl and x is the same %Nl is the number of points spanning the connectivity function (cutoff to L)


dx=period/NH; x=dx*(0:NH-1); dz=dx; z=dz/v*(1:Nl); %distance between past times is z/v so that it corresponds to the x-grid %z does not include current moment %the last point z(Nl) may lie beyond L wz=exp(-z).*(z-1)*dz; wz(end)=wz(end)/2; wz=[-dz/2 wz]; uinit=[bas zeros(NH,1)]; uinit=A*uinit; options = ddeset(RelTol,1e-6,AbsTol,1e-6); tic sol = dde23(integralRHS, z, uinit, 0:tfinal/pt:tfinal, options, NH, h, wz, g1, g2s,g3s); toc u=sol.y(1:NH,:); t=sol.x; % Extract u variable from solution array %the wizard hat %to include w(z)*dz w(0)=-1, the current moment %for the composite trapezoidal rule

Additionally, we came up with a simpler setup of the history array which makes extraction of the diagonals independent of its shape. The specification of the RHS becomes just
Code listing 9.2.5, in MatLab format:
function dudt=integralRHS(t,u,Z,N,dz,wz,g1,g2,g3) %in matrix Z of past times: x is downwards, t to the right u=reshape(u,[],2); u0=u(:,1); u1=u(:,2); Z=[u0 Z(1:N,:)]; %we dont need the derivatives of the past, adds the current moment %cubic nonlinearity %separate u and u-dot

fu=(g1*Z+g2*Z.^2+g3*Z.^3); fu=[fu; fu];

%ensures that the diagonals for points x near the edges are correct

%this is for L<period. if L>period, the history traces will wrap around the periodic domain %simply add more copies of fu as necessary furev=fu(end:-1:1,:); %for the upward diagonals psi=wz*(spdiags(fu,-[0:N-1])+spdiags(furev,-[N-1:-1:0])); %the result of spdiags - x is to the right, z downwards %the integral sum is accomplished by the matrix multiplcation with the connectivity %finally, the ODE: dudt=[u1; -2*u1-u0+psi];

This code in practice is about one third faster than that of Hutt et al., whilst results match precisely.


9.3

Simulation of the PDE form

Simulating the integral neural field equation in its original form is quite demanding on computational power. However, for some choices of the connectivity function there is an equivalent PDE system. We discuss in Section 4.6 how to obtain the PDE forms for the example models we work with in Chapter 4. In the following we describe the implementation of the numerical schemes for those PDE equations. Since the patterns we are looking at are periodic in space, we used periodic boundary conditions to approximate the infinite domain of the problem. Care has to be taken to choose the simulation domain width (period) such that an integer number of wavelengths of the desired pattern fit in it, or else it would be suppressed by the boundary conditions. We solve the PDEs using the spectral collocation method [277]. It is based on the same ideas as the finite differences method; however, it interpolates the sought solution globally in the entire domain. Similarly to finite differences, the operation of differentiation by the spatial variables is replaced by matrix multiplication. The matrices in this case are not sparse, but the convergence rates of the method are greatly improved: for smooth solutions they can be exponential. To generate the differentiation matrices we use the MatLab package created by Weideman and Reddy [278] (it can be downloaded from the MatLab Central), namely the function fourdif for a periodic interval. The remaining system of coupled ODEs is evolved with the basic MatLab solver ode45.
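As a stand-alone illustration of this simulation pattern (our own toy example, using the heat equation $u_t=u_{xx}$ in place of a neural field; it assumes fourdif from the Weideman-Reddy suite is on the MatLab path):

N = 64;
[x,D2] = fourdif(N,2);            % second-derivative matrix on the periodic interval [0,2*pi)
u0  = cos(x);                     % initial condition
rhs = @(t,u) D2*u;                % the neural field RHS functions below play this role
[t,u] = ode45(rhs, [0 1], u0);
plot(x, u(end,:));                % approximately cos(x)*exp(-1), to spectral accuracy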

9.3.1 Models with rational Fourier transform


Here we present the code for solving the basic neural field with axonal delays (4.2.1) and a number of extensions suggested in Chapter 5. Since the Fourier-Laplace transforms of their generalised connectivity kernels $K$ are rational, we can transform these models to PDEs with polynomial differential operators. In fact, we can take advantage of the expanded polynomial form of their dispersion relations that we obtained in Appendix 9.1. The terms $\lambda$ and $k$ are replaced with the derivatives $\partial_t$ and $-i\partial_x$, whilst the parameter $\gamma_1$ marks the positions of derivatives of the nonlinearity $f(u)$. Then the equation is rearranged into a first-order system of ODEs by the substitutions $u_0=u(x,t)$, $u_1=\partial_t u$, ..., $u_6=\partial_t^6 u$, etc. The equations are clearly laid out in the specification of the ODE right-hand side for the MatLab solver (we assume $\alpha=1$ for tidiness):
Code listing 9.3.1, in MatLab format:
function dudt=fullPDErhs(t,u,flag,numder,N,D1,D2,D4,Doper,r,v,g1,g1a,al,h,beta,g2,g3,g4,g5) %to avoid dividing by v, set b=1/v; u0=u(:,1); u1=u(:,2); u2=u(:,3); u3=u(:,4); u4=u(:,5); u5=u(:,6); % u6=u(:,7); % u7=u(:,8); %sigmoid f(u) and its derivatives % ex=exp(-beta.*(u0-h)); % % ex=exp(-beta*u0)*al; %if we use al instead of h % f0=1./(1+ex); % f1=beta.*f0.*(1-f0); % f2=beta.*f1.*(1-2*f0); % f3=beta.*(f2.*(1-2*f0)-2*f1.^2); % f4=beta.*(f3.*(1-2*f0)-6*f1.*f2); % f5=beta.*(f4.*(1-2*f0)-8*f1.*f3-6*f2.^2); % f6=beta.*(f5.*(1-2*f0)-10*f1.*f4-20*f2.*f3); %cubic f(u) and its derivatives f0=g1*u0+g2*u0.^2+g3*u0.^3+g4*u0.^4+g5*u0.^5; f1=g1+2*g2*u0+3*g3*u0.^2+4*g4*u0.^3+5*g5*u0.^4; f2=2*g2+6*g3*u0+12*g4*u0.^2+20*g5*u0.^3; f3=6*g3+24*g4*u0+60*g5*u0.^2; % f4=24*g4+120*g5*u0; % f5=120*g5; % f6=0; %for a standard wizard hat (w_0 = -1) % f0=-f0; f1=-f1; f2=-f2; f4=-f4; u=reshape(u,[],numder); v*t --> t and b=alpha/v %we assume alpha is 1 %numder is the number of derivatives in the PDE model %extracting u and derivatives

%deriatives of the f(u(t)) composition f=f0; ft=f1.*u1; ftt=f2.*u1.^2+f1.*u2; fttt=f3.*u1.^3+3*f2.*u1.*u2+f1.*u3; % ftttt=f4.*u1.^4+6*f3.*u1.^2.*u2+f2.*(3*u2.^2+4*u1.*u3)+f1.*u4; % fttttt=f5.*u1.^5+10*f4.*u1.^3.*u2+5*f3.*(3*u1.*u2.^2+2*u1.^2.*u3)+5*f2.*(2*u2.*u3+u1.*u4)+f1.*u5; % ftttttt=f6.*u1.^6+15*f5.*u1.^4.*u2+5*f4.*(9*u1.^2.*u2.^2+4*u1.^3.*u3)+... % 15*f3.*(u2.^3+4*u1.*u2.*u3+u1.^2.*u4)+f2.*(10*u3.^2+15*u2.^u4+6*u1.*u5)+f1.*u6; (no adaptation)

% %basic model with axonal delays


u6=-D4*(u2+b^2*u0+2*u1*b)... +D2*(2*b^2*ft+(2*b^2*r+2*b^2)*f+2*u4+(4*b+4)*u3+(8*b+2+2*b^2)*u2+(4*b+4*b^2)*u1+2*b^2*u0)... -((4+2*b)*u5+(8*b+b^2+6)*u4+(12*b+4+4*b^2)*u3+(1+6*b^2+8*b)*u2+(2*b+4*b^2)*u1+b^2*u0)... -(2*b^2*fttt+(-2*b^2*r+6*b^2)*ftt+(-4*b^2*r+6*b^2)*ft+(-2*b^2*r+2*b^2)*f); % %for zeta=k^2 % u6=-D4*(u2+b^2*u0+2*u1*b)... % % % % +D2*(2*u4+(4*b+4)*u3+(8*b+2+2*b^2)*u2+(4*b+4*b^2)*u1+2*b^2*u0)... -((4+2*b)*u5+(8*b+b^2+6)*u4+(12*b+4+4*b^2)*u3+(1+6*b^2+8*b)*u2+(2*b+4*b^2)*u1+b^2*u0)... -Doper\(-D2*(2*b^2*ft+(2*b^2*r+2*b^2)*f)+... 2*b^2*fttt+(-2*b^2*r+6*b^2)*ftt+(-4*b^2*r+6*b^2)*ft+(-2*b^2*r+2*b^2)*f);

% % %operatora na 1+g1a*zeta deli nelineinata tchast na uravnenieto (VD) %for the model with linear adaptation - zeta = eta*etaa, etaa=1/(1+lambda/beta), beta=1 % u7=-D4*((b^3+b^3*g1a)*u0+3*u2*b+u3+3*u1*b^2)... % % % % % % % +D2*(2*b^2*ftt+(4*b^2+2*b^3)*ft+4*b^3*f... +2*u5+(4+6*b)*u4+(2+6*b^2+12*b)*u3+(2*b^3+2*b^3*g1a+12*b^2+6*b)*u2... +(6*b^2+4*b^3+4*b^3*g1a)*u1+(2*b^3+2*b^3*g1a)*u0)... -(2*b^2*ftttt+(4*b^2+2*b^3)*fttt+(2*b^2+4*b^3)*ftt+2*b^3*ft... +(4+3*b)*u6+(6+3*b^2+12*b)*u5+(4+12*b^2+18*b+b^3*g1a+b^3)*u4... +(12*b+1+4*b^3*g1a+18*b^2+4*b^3)*u3+(3*b+12*b^2+6*b^3*g1a+6*b^3)*u2... +(3*b^2+4*b^3+4*b^3*g1a)*u1+(b^3+b^3*g1a)*u0);

%nonlocal linear adaptation %replaced g1a in the linear adapt above by g1a/(1+simga^2 k^2) % u7=Doper\(-D4*(b^3*u0)+D2*(2*b^3*u2+4*b^3*u1+2*b^3*u0)... % % % % % % % -(u4*b^3+4*u3*b^3+6*u2*b^3+4*b^3*u1+b^3*u0)).*g1a... -D4*(u3+3*u2*b+3*b^2*u1+b^3*u0)... +D2*(2*b^2*ftt+(4*b^2+2*b^3)*ft+4*b^3*f... +2*u5+(6*b+4)*u4+(6*b^2+12*b+2)*u3+(12*b^2+6*b+2*b^3)*u2+(4*b^3+6*b^2)*u1+2*b^3*u0)... -(2*b^2*ftttt+(4*b^2+2*b^3)*fttt+(2*b^2+4*b^3)*ftt+2*b^3*ft... +(3*b+4)*u6+(3*b^2+12*b+6)*u5+(b^3+12*b^2+4+18*b)*u4+(18*b^2+12*b+1+4*b^3)*u3... +(12*b^2+6*b^3+3*b)*u2+(4*b^3+3*b^2)*u1+b^3*u0);

%nonlinear nonlocal adaptation %replaced u0 - u4 in front of g1a by f - ftttt, in the nonlocal linear adapt above % u7=Doper\(-D4*(b^3*f)+D2*(2*b^3*ftt+4*b^3*ft+2*b^3*f)-... % % % % % % % (b^3*ftttt+4*b^3*fttt+6*b^3*ftt+4*b^3*ft+b^3*f)).*g1a... -D4*(b^3*u0+3*b^2*u1+u3+3*b*u2)... +D2*(2*b^2*ftt+(4*b^2+2*b^3)*ft+4*b^3*f... +2*u5+(6*b+4)*u4+(2+6*b^2+12*b)*u3+(6*b+2*b^3+12*b^2)*u2+(6*b^2+4*b^3)*u1+2*b^3*u0)... -(2*b^2*ftttt+(4*b^2+2*b^3)*fttt+(4*b^3+2*b^2)*ftt+2*b^3*ft... +(3*b+4)*u6+(3*b^2+12*b+6)*u5+(b^3+4+18*b+12*b^2)*u4+(18*b^2+1+12*b+4*b^3)*u3... +(12*b^2+3*b+6*b^3)*u2+(3*b^2+4*b^3)*u1+b^3*u0);

%v* to get back the original time t dudt=v*[u1; u2; u3; u4; u5; u6]; %u7];
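For reference, the composite time derivatives of $f(u(t))$ appearing above (ft, ftt, fttt, ...) are nothing more than the chain rule applied repeatedly:
\[
\partial_t f(u)=f'(u)\,u_t,\qquad
\partial_t^2 f(u)=f''(u)\,u_t^2+f'(u)\,u_{tt},\qquad
\partial_t^3 f(u)=f'''(u)\,u_t^3+3f''(u)\,u_t u_{tt}+f'(u)\,u_{ttt},
\]
and so on for the higher orders used by the adaptation models.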

The variables D2, D4, Doper are the spatial differentiation matrices. They are defined in Listing 9.3.2. A left division by such a matrix in fact performs an integration, and corresponds to a denominator in the dispersion relation (containing only $k$ terms). Since $v$ always comes within the expression $\partial_t/v$, to save on division operations we have incorporated it into the time variable. We have set $b=\alpha/v$, so that for the synaptic dynamics $Q=(1+\partial_t/\alpha)^2=(1+\partial_\tau/b)^2$ in the rescaled time $\tau=vt$, and we have multiplied all equations by $b^2$. We have provided for the use of two different firing rate functions, the cubic $f(u)=\gamma_1 u+\gamma_2 u^2+\gamma_3 u^3$ and the sigmoid $f(u)=(1+\exp(-\beta(u-h)))^{-1}$. A comparison and discussion of the issues is given in Section 5.2.3. Here we mention only that instead of $h$ we could specify the sigmoid through the parameter al, with $h=-\log(\mathrm{al})/\beta$ (see also Listing 9.3.3). For easy comparison with the linear stability analysis we prefer to work with $\gamma_1=f'(u)$ as a parameter and express $\beta$ and $h$ in terms of it. Each model (with the exception of the diffusive one) is a special case of the one below it, the model with nonlinear nonlocal adaptation (5.3.1) being the most general one. However, we kept all the special cases in the script, to be able to use more efficient code when we are simulating a simpler model. It is also the historical order in which we built the models and the corresponding machinery to investigate them. For convenience the script setting up the problem is split into two parts. The part fullPDEperiodic has to be run only once, when the user has decided the parameters of the spatial grid. It sets up the grid and the differentiation matrices:
Code listing 9.3.2, in MatLab format:
%N=200; %period=2*pi; %sc=2*pi/period; sc=k/4; %number of points in the spatial grid %domain size %fourdif generates a 2*pi domain by default, with sc we will resize it %k/n - we want the domain to fit n periods with wavenumber k %generating the differentiation matrices, and a spatial grid x


[x,D1]=fourdif(N,1); [x,D2]=fourdif(N,2); [x,D4]=fourdif(N,4); %[x,D6]=fourdif(N,6); x=x/sc; D1=D1*sc; D2=D2*sc^2; D4=D4*sc^4; %Doper=eye(N)-g1a.*D2; %diff operator (1-g1a*Dxx) for the model with zeta = k^2 Doper=eye(N)-sigma^2.*D2; %for the model with nonlocal nonlinear adaptation %resizing the grid to desired domain period

The parameter k is specified by the user, as the wavenumber of the periodic solution we are looking for. It could be supplied, for example, by the codes presented in Appendix 9.5.

The script in Listing 9.3.3, fullPDEmodel, defines the initial conditions, calls the ODE solver and visualises the solution in various ways. It expects to find most model parameters in the MatLab environment (we have, however, given some example values in commented form in the script). Here the variable numder, specifying the highest derivative by time, needs to be updated according to the model we want to simulate. Beyond a Turing-Hopf instability (Chapter 4), the equation dynamics will evolve from any nonzero initial condition towards a spatiotemporal oscillatory steady state, a travelling wave (TW) or a standing wave (SW). However, since the transients sometimes can be quite long-lasting, we have provided code (commented) that constructs these steady states as initial conditions. It uses data arrays for the exact model parameters, wavenumbers and eigenvalues at the instability, which are provided by other routines; see Appendix 9.5 and Chapter 5. The full list of variables is vdat, kdat, omdat, gdat, Atw, Asw, ar, ai, br, bi, cr, ci, q, ups, kap1. These are used in combination with the user-defined parameters. For example, the expression g1-gdat(ind) tells how far beyond the bifurcation point the simulation is to be run. The variable ind is chosen by the user according to the desired value of the parameter v (or another parameter). With vdat(ind) close to v, the other data arrays will yield the instability properties at that point. The variable ind2 similarly indexes the nonlinearity parameter $\gamma_2$. For more details see Appendix 9.5.2.
Code listing 9.3.3, in MatLab format:
numder=6; %example parameters %tfinal=50; %pt=200; r=1; %v=1; %g1=8.1; %h=0; beta=4*g1; %al=1; %g2s=0; g3s=-0.2; g4=0; g5=0; %g1a=0; %sigma=0; %A=0.01; %time of simulation %time grid for the results %wizhat parameter A, %axonal speed v %gamma_1 - the first derivative of f(0) %threshold and gain for a sigmoidal f(u) %this is the case when h=0 and beta=4*g1 %parameters for the cubic f(u) %for models utilising higher derivatives of f %adaptation strength %spatial spread of adaptation %amplitude of the initial condition 1 gives balanced connectivity %the highest time derivative in the current model


%various initial conditions u(x,0) bas=rand(N,1).*2-1; % bas=ones(N,1); % bas=[zeros(3*N/8,1); ones(N/8,1); zeros(N/2,1)];


% bas=exp(-2*(x-8).^2)+exp(-0.2*(x-18).^2)/2; % bas=cos(k*x)+0.1*cos((k-2*kap1)*x); bas=cos(k*x); % bas=bas+0.05*(rand(N,1).*2-1); uinit=[bas; zeros(N*(numder-1),1)]; %to add a bit of noise %all derivatives u_t are put to zero

% %the following loads in a travelling wave which is a steady state beyond Turing-Hopf % % ind=200; % %this is ind=(vdat==v) % kx=k*x; %(k+(g1-gdat(ind))*q)*x; om=abs(omdat(ind)); % % % ind2=g2==g2s; % %a TW shape: % ome=om+(g1-gdat(ind))^2*(ai-ar*bi(ind2)/br(ind2)); % % % % % %a SW shape: % % ome=om; +(g1-gdat(ind))^2*(ai-ar*(bi(ind2)+ci(ind2))/(br(ind2)+cr(ind2))); % % % %the latter part for ome is 0 for gamma_2 = 0 cos(kx)*ome^4; zeros(N,1)]; % -cos(kx)*ome^6; zeros(N,1)]; % SW forma v t=0 %random init cond %to load a previously evolved TW %forecast for the amplitude by twswcubic %applying the amplitude parameter % % uinit=[ cos(kx); zeros(N,1); -cos(kx)*ome^2; zeros(N,1);... %-(g1-gdat(ind)).*ups(ind).*q uinit=[ cos(kx); -sin(kx)*ome; -cos(kx)*ome^2; sin(kx)*ome^3;... cos(kx)*ome^4; -sin(kx)*ome^5; -cos(kx)*ome^6];% sin(kx)*ome^7];

%uinit=0.3*(rand(N*numder,1).*2-1); %uinit=twinit; %A=Atw(ind2)*sqrt(g1-gdat(ind)) uinit=A*uinit;

%parameters for a sigmoidal f when supplying al - to be at instability % beta=gdat(ind)*(1+al)^2/al; % h=-log(al)./beta; % beta=beta+0.1; %uses the end of a previous run as init conditions for this one % uinit=up(end,:,:); uinit=uinit(:); disp(continuing previous solution...); disp(continuing previous solution with perturb); % uinit=uinit+(rand(numder*N,1).*2-1)*0.02;

% uinit=uinit+[rand(2*N,1).*2-1; zeros((numder-2)*N,1)]*0.1; tspan tic [t,up] = ode45(fullPDErhs, tspan, uinit, options, numder,N,D1,D2,D4,Doper,... r,v,g1,g1a,al,h,beta,g2s,g3s,g4,g5); toc u=reshape(up,[],N,numder); u=u(:,:,1); %normalising and recording the end result for an initial condition % twinit=up(end,:,:); twinit=twinit(:); twinit=twinit/max(u(end,:)); %separating u and derivatives = [0:tfinal/pt:tfinal]; %the time grid

options = odeset(RelTol,1e-6,AbsTol,1e-6);

We have used the following commands to visualise the solutions:


Code listing 9.3.4, in MatLab format:
%space-time plot


figure(1) pcolor(x,t,u); shading interp; xlabel(space); ylabel(time); %the solution maximum with time figure(3) umax=max(u,[],2); plot(t,umax,b); %modes in Fourier space figure(2) uf=fft(u,4096); ks=0:2*pi/4096/x(2):pi; uf=uf(:,1:size(ks,2)); pcolor(ks,t,abs(uf)); shading interp; %Fourier spectrum at the end of simulation figure(6) last2osc=abs(uf(end-floor(4*pi/abs(om)*pt/tfinal):end,:)); plot(ks,max(last2osc));

9.3.2 Dendritic delay with fixed synapses


To solve the system (4.6.2),
\[
\Bigl(\partial_t+\frac{1}{\tau_D}-D\,\partial_{zz}\Bigr)v(x,z,t)=I(x,t)\,\delta(z-z_0),
\]
\[
(1+\partial_t)^2\,(1-\partial_{xx})^2\,I(x,t)=4w_0\,\partial_{xx}\,f\bigl(v(x,0,t)\bigr),
\]

we need to set up two spatial grids. The domain is infinite in $x$; we can approximate it with a finite interval with periodic boundary conditions and use the spectral collocation methods we described earlier. In $z$ it is semi-infinite; we cut it off to a finite interval at some large value cl, with the boundary condition $\partial_z v=0$ both there and at the origin. We opt to use here the simpler finite difference method to calculate a differentiation matrix that satisfies these boundary conditions (it will become clear in Appendix 9.3.4 why we choose this over a spectral collocation method with Hermite polynomials, which is suitable for semi-infinite domains). Here follows the script dendriteperiodic; for more detailed commentary refer to the similar Listing 9.3.2.
Code listing 9.3.5, in MatLab format:
%N=100; M=60; %example size of the grids, by x and by z


%cl=5; scx=k/2; [x,Dxx]=fourdif(N,2); x=x/scx; Dxx=Dxx*scx^2; Doper=(eye(N)-Dxx)^2;

%length of the cable %x domain size: k/n - n waves with wavenumber k

%finite differences for solving the cable equation dz=cl/(M-1); z=0:dz:cl; Dzz=spdiags(ones(M,1)*[1 -2 1],-1:1,M,M); Dzz(1,2)=2; Dzz(end,end-1)=2; Dzz=Dzz./dz^2; %boundary conditions dv/dz=0 at both ends
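The boundary rows of Dzz implement the no-flux conditions by the usual ghost-point argument: with $\partial_z v=0$ at $z=0$ the fictitious value $v_{-1}$ equals $v_1$, so
\[
\partial_{zz}v\big|_{z=0}\approx\frac{v_{-1}-2v_0+v_1}{\Delta z^2}=\frac{2(v_1-v_0)}{\Delta z^2},
\]
which is why the entries Dzz(1,2) and Dzz(end,end-1) are set to 2.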

The essential part of dendritemodel is also self-explanatory. The synapse positions are represented by the vector z0vec; the input magnitude is scaled by the grid density 1/dz. The synapse is automatically placed at the grid point closest to z0. This, however, might decrease the accuracy of the simulation, since it will run at a slightly different parameter value.
Code listing 9.3.6, in MatLab format:
[ans,z0pos]=min(abs(z-z0)); z0a=z(z0pos) z0vec=zeros(1,M); z0vec(z0pos)=1/dz; %this is delta(z-z_0) %there might not be a grid point at z0 %the actual z0 we will work with

%setting up the initial conditions v=bas*[zeros(1,10) ones(1,5) zeros(1,M-15)]; uinit=[v(:); zeros(2*N,1)]; %uinit=[zeros(N*M,1); bas; zeros(N,1)]; uinit=A*uinit; % uinit=vp(end,:,:); uinit=uinit(:); %continue previous run %as the input current I %as some voltage in the cable

tspan tic

= [0:tfinal/pt:tfinal];

options = odeset(RelTol,1e-6,AbsTol,1e-6); [t,vp] = ode45(@dendriteRHS, tspan, uinit, options, N, M, Dzz, Dxx, Doper,... g1,z0vec,cD,iota,g2s,g3s); toc v=reshape(vp,[],N,M+2); u=v(:,:,1); I=v(:,:,M+1);

Finally, the system's right-hand side is


Code listing 9.3.7, in MatLab format:
function dvdt=dendriteRHS(t,v,N,M,Dzz,Dxx,Doper,g1,z0vec,cD,iota,g2,g3) v=reshape(v,N,M+2); I=v(:,M+1);


dI=v(:,M+2); v(:,[M+1 M+2])=; u=v(:,1); fu=g1*u+g2*u.^2+g3*u.^3; dvdt=-iota*cD*v+cD*(Dzz*v)+I*z0vec; dIdt=[dI; -(2*dI+I)+Doper\Dxx*fu*4]; dvdt=[dvdt(:); dIdt]; %the cable eq, (cD is D, iota = 1 / cD tau_D) %the soma activity

%dynamics of the input

9.3.3 Axo-dendritic delays


The system (4.6.3),
\[
\Bigl(\partial_t+\frac{1}{\tau_D}-D\,\partial_{zz}\Bigr)v(x,z,t)=I(x,t)\,\delta(z-z_0),
\]
\[
(1+\partial_t)^2\Bigl[\Bigl(1+\frac{\partial_t}{v}\Bigr)^2-\partial_{xx}\Bigr]^2 I(x,t)
=2w_0\Bigl[\frac{\partial_t}{v}\Bigl(1+\frac{\partial_t}{v}\Bigr)^2-\partial_{xx}\Bigl(2+\frac{\partial_t}{v}\Bigr)\Bigr]f\bigl(v(x,0,t)\bigr),
\]

represents both a dendrite with a fixed synapse and an axonal conduction delay. The dynamics for the synaptic input $I(x,t)$ are the same as in the model without dendrites, so we can combine the code from Listings 9.3.7 and 9.3.1:
Code listing 9.3.8, in MatLab format:
function dvdt=axodendriteRHS(t,v,N,M,Dzz,D2,D4,g1,v,z0vec,cD,iota,g2,g3) v=reshape(v,N,M+6); I0=v(:,M+1); I1=v(:,M+2); I2=v(:,M+3); I3=v(:,M+4); I4=v(:,M+5); I5=v(:,M+6); v(:,M+1:M+6)=; dvdt=-iota*cD*v+cD*(Dzz*v)+I0*z0vec; dvdtt=-iota*cD*dvdt+cD*(Dzz*dvdt)+I1*z0vec; dvdttt=-iota*cD*dvdtt+cD*(Dzz*dvdtt)+I2*z0vec; u0=v(:,1); u1=dvdt(:,1); u2=dvdtt(:,1); u3=dvdttt(:,1); f0=g1*u0+g2*u0.^2+g3*u0.^3; f1=g1+2*g2*u0+3*g3*u0.^2; f2=2*g2+6*g3*u0; %cubic nonlinearity and derivatives %the soma activity, u = v(x,0,t) %an easy way to get the time derivatives of u %is to differentiate the cable eq %v(x,z) is NxM and six derivatives of I(x), Nx1


f3=6*g3; f=f0; ft=f1.*u1; ftt=f2.*u1.^2+f1.*u2; fttt=f3.*u1.^3+3*f2.*u1.*u2+f1.*u3; b=1/v; r=1; I6=-D4*(I2+b^2*I0+2*I1*b)... +D2*(2*b^2*ft+(2*b^2*r+2*b^2)*f+2*I4+(4*b+4)*I3+(8*b+2+2*b^2)*I2+(4*b+4*b^2)*I1+2*b^2*I0)... -((4+2*b)*I5+(8*b+b^2+6)*I4+(12*b+4+4*b^2)*I3+(1+6*b^2+8*b)*I2+(2*b+4*b^2)*I1+b^2*I0)... -(2*b^2*fttt+(-2*b^2*r+6*b^2)*ftt+(-4*b^2*r+6*b^2)*ft+(-2*b^2*r+2*b^2)*f); dIdt=[I1; I2; I3; I4; I5; I6]/b; dvdt=[dvdt(:); dIdt]; %ODEs for the input %add the cable eq %differentiating the composite function f(u(t))

This routine can be invoked by the scripts in Appendix 9.3.2 after minimal alterations.

9.3.4 Dendritic delay with correlated synapses


In this case on each neuron's dendrite there is a spread of synapses, from a distance $z_0$ from the soma to some point where the axonal connectivity $w(z-z_0)$ becomes negligible. The synaptic input $I(x,z,t)$ can be seen as defined only on this synaptic portion of the dendrite, as it is zero elsewhere. The system we are solving, (4.6.4), can be written
\[
\Bigl(\partial_t+\frac{1}{\tau_D}-D\,\partial_{zz}\Bigr)v(x,z,t)=I(x,z,t),
\]
\[
(1+\partial_t)^2\,I(x,\,z_0+\varphi y,\,t)=w(y)\bigl[f\bigl(v(x-y,0,t)\bigr)+f\bigl(v(x+y,0,t)\bigr)\bigr],
\]

for $y\in\mathbb R^+$. The relationship $z=z_0+\varphi y$ necessitates an algorithm similar to the diagonalisation of the solution history in Appendix 9.3.1, and a fixed dependence between the discretisations of the $x$ and $z$ domains (this is the reason we chose to use the finite differences method for the cable equation: both it and the spectral collocation method with Fourier interpolants generate regularly-spaced grids). The code taking care of this is:
Code listing 9.3.9, in MatLab format:
scx=k/2; [x,Dxx]=fourdif(N,2); x=x/scx; Dxx=Dxx*scx^2; %finite differences for solving the cable equation dx=x(2) %x domain size: k/n - n waves with wavenumber k %generates a grid for x with N points


period=x(end) dz=phi*dx; z=0:dz:cl; M=size(z,2) Dzz(1,2)=2; Dzz(end,end-1)=2; Dzz=Dzz./dz^2;

%to display the actual domain size %to avoid interpolations of data, I fix dx = dz/phi z0+phi*L! %cl is the dendrte cut-off. it has to be larger than

%the number of points in z this time depends on N and cl %boundary conditions dv/dz=0 at both ends

Dzz=spdiags(ones(M,1)*[1 -2 1],-1:1,M,M);

The user has to make sure manually that the now finite dendrite of length cl does include the entire synaptic portion, as well as some free length beyond that, to allow the voltage to decay towards the tip. We set up a subgrid zact only for the synaptic (active) portion of the dendrite. Naturally, the same grid is used for the cut-off axonal connectivity $w(x)$, although in reality their grid spacings differ by a factor of $\varphi$; this has to be taken care of when data is transferred between the two interpretations. At the end of the following script we have preserved the code we used to visualise the solutions in various ways, including sometimes following the dynamics step-by-step through time.
Code listing 9.3.10, in MatLab format:
[ans,z0pos]=min(abs(z-z0)); z0a=z(z0pos) minumum_cl=z0+phi*L wx=0:dx:L+dx; zact=z0a+phi*wx; wx=(wx-1).*exp(-wx)/phi; wx(1)=wx(1)/2; Nl=size(wx,2); if (Nl>N) you might have to extend the diagonalisation array in RHS end wx=wx*ones(1,N); %a stack of vertical hats %there might not be a grid point at z0 %so the actual z0 we will work with %prints the outer boundary of the active portion with current params %it is the minimum length for the cut-off dendrite %a grid for the cut-off wizhat (subgrid of x) %the image of the wizhat onto the dendrite i.e. its synaptic portion %wizard hat for x>0, phi = dz/dx cares about the grid scaling %because later on w(0) will be added two times %number of grid points covering the cut-off wizhat/the synaptic dendrite

%setting up the initial conditions v=bas*[zeros(1,10) ones(1,5) zeros(1,M-15)]; uinit=[v(:); zeros(N*2*Nl,1)]; uinit=A*uinit; % uinit=vp(end,:,:); uinit=uinit(:); %continue previous run tspan tic [t,vp] = ode45(@corrdendriteRHS, tspan, uinit, options, N, M, Nl, x,zact,wx,Dzz,g1,... z0pos,phi,cD,iota,g2s,g3s); toc v=reshape(vp,[],N,M+2*Nl); u=v(:,:,1); v1=squeeze(v(:,1,1:M)); %activity at the somas %the dendrite of neuron #1 = [0:tfinal/pt:tfinal]; %as some voltage in the cable

options = odeset(RelTol,1e-6,AbsTol,1e-6);


I1=squeeze(v(:,1,M+1:M+Nl)); %%%visualisation commands figure(1) pcolor(x,t,u); shading interp; xlabel(space - axonal); ylabel(time); figure(3) umax=max(u,[],2); plot(t,umax); xlabel(maximum of u); figure(2) pcolor(z,t,v1); shading interp;

%the synaptic input to neuron #1

%spacetime plot of the soma activity u(x,t)

%maximum of the soma activities u_max(t)

%spacetime plot of the voltage in cable #1 - v(0,z,t)

xlabel(space - voltage in first cable); ylabel(time); % figure(6) %spacetime of the synaptic input to cable #1 - I(0,z,t) % pcolor(zact,t,I1); shading interp; % xlabel(space - input current to first cable); % ylabel(time); % %track all cables (voltage) through time % figure(7) % for i=1:size(t,1) % % % vm=squeeze(v(i,:,1:M)); pcolor(z,x,vm); shading interp; pause;

% end; % %to track the voltage (blue) and the input current (red) along three cables, with time % [ans,n2]=max(bas); % n2=n2(1); % [ans,n1]=max(wx); % v2=squeeze(v(:,n2,1:M)); % v3=squeeze(v(:,n2+n1(1),1:M)); % v1=squeeze(v(:,n2-n1(1),1:M)); % I2=squeeze(v(:,n2,M+1:M+Nl)); % I3=squeeze(v(:,n2+n1(1),M+1:M+Nl)); % I1=squeeze(v(:,n2-n1(1),M+1:M+Nl)); % ax=[0 cl -0.02 0.02] % figure(4) % for i=1:2:size(t,1) % % % % % % % % % % subplot(3,1,1); plot(z,v1(i,:),LineWidth,2); hold on plot(zact,I1(i,:),r); hold off axis(ax); xlabel([the cable at x= num2str(x(n2-n1(1)))]); plot(z,v2(i,:),LineWidth,2); plot(zact,I2(i,:),r); hold off axis(ax); xlabel([the cable at x= num2str(x(n2))]); plot(z,v3(i,:),LineWidth,2); hold on hold on %step through time %fix some axis amplitude %and the synaptic inputs %find points where wiz hat will project n2 maximally %get the cables of those three neurons %find the point where init condition is max (e.g. for a bump)

% subplot(3,1,2);

% subplot(3,1,3);


% % % %

plot(zact,I3(i,:),r); hold off axis(ax); xlabel([the cable at x= num2str(x(n2+n1(1)))]); pause;

% end; % %to track the content of presynaptic input f(u) to the three neurons through time % %this code comes from the function for RHS % figure(7) % for i=1:size(t,1) % us=u(i,:); % fu=g1*us+g2s*us.^2+g3s*us.^3; % fu=fu*ones(1,Nl); % fu=[fu; fu]; % furev=fu(end:-1:1,:); % fu=wx.*(spdiags(fu,-[0:N-1])+spdiags(furev,-[N-1:-1:0])); % plot(zact,fu(:,[n2-n1(1) n2 n2+n1(1)])) % axis([0 cl -0.01 0.01]) % pause % end;

The function corrdendriteRHS defining the system right-hand side follows. For an extensive commentary on the diagonalisation procedure see Appendix 9.2, where a similar operation was necessary for the space-dependent delays.
Code listing 9.3.11, in MatLab format:
function dvdt=corrdendriteRHS(t,v,N,M,Nl,x,z,wx,Dzz,g1,z0pos,phi,cD,iota,g2,g3) %in the variables - x is downwards, z to the right v=reshape(v,N,M+2*Nl); I=v(:,M+1:M+Nl); dI=v(:,M+Nl+1:M+2*Nl); v(:,M+1:end)=; u=v(:,1); %link of the cable and soma %output from the somas %replicating input over the entire synaptic portion %if L>domain size you have to add more fus here %for the upward diagonals %w*fu(x-z)+w*fu(x+z) %so that the diagonals will wrap around the periodic domain furev=fu(end:-1:1,:); %now z is downwards, x to the right fu=fu; %fixed that fu=wx.*(spdiags(fu,-[0:N-1])+spdiags(furev,-[N-1:-1:0])); fu=g1*u+g2*u.^2+g3*u.^3; fu=fu*ones(1,Nl); fu=[fu; fu]; %needs the input dynamics only on the synaptic portion of dendrite (Nl<M)

%I(x,z) is of size NxNl, only a part of NxM. add zeroes up to z0, and then behind until cable end dvdt=-iota*cD*v+cD*(Dzz*v)+[zeros(N,z0pos-1) I zeros(N,M-z0pos+1-Nl)]; dI=dI(:); I=I(:); dvdt=[dvdt(:); dIdt(:)]; dIdt=[dI; -(2*dI+I)+fu(:)]; %combining variables in one string %cable eq + input %each point of I(x,z,t) evolves independently - no spatial coupling in eq


9.4

The adjoint operator

In Section 5.1.1 we defined the operator
\[
\mathcal L u=\bigl[I-\gamma_c\,\eta\otimes K\otimes\bigr]u
=u(x,t,\cdot)-\gamma_c\int_{\mathbb R^3}\eta(t-r)\,K(x-y,\,r-s)\,u(y,s,\cdot)\,dy\,ds\,dr,\qquad(9.4.1)
\]
where
\[
\int_{\mathbb R}|\eta(t)|\,dt<\infty,\qquad\int_{\mathbb R^2}|K(y,s)|\,dy\,ds<\infty.
\]

For any $u$ continuous and bounded, $\mathcal L u$ is finite. The solutions $u_i$ are periodic on the $O(1)$ scales $x$ and $t$, so that we can restrict $\mathcal L$ to the subspace of bounded functions of period $2\pi/k_c$ in $x$ and $2\pi/\omega_c$ in $t$. Let $\Omega=[0,2\pi/k_c]\times[0,2\pi/\omega_c]$ be the periodicity domain. We will define the scalar product in the space of functions defined over $\Omega$ as
\[
\langle u,v\rangle=\frac{k_c\,\omega_c}{4\pi^2}\int_\Omega \overline{u(x,t,\cdot)}\,v(x,t,\cdot)\,dx\,dt,\qquad(9.4.2)
\]

where the bar denotes complex conjugation. The validity of this definition follows from a theorem: suppose $v$ has period $a$ and is $L^1(0,a)$ with Fourier coefficients $\{d_n\}$, and that $K\in L^1(-\infty,\infty)$ has continuous Fourier transform $\tilde K$. Then we may define a convolution $k(x)$ by
\[
k(x)=\int_{-\infty}^{\infty}K(y)\,v(x-y)\,dy,
\]
and $k$ will be $L^1_{\mathrm{loc}}$ and of period $a$, with Fourier coefficients $k_n$ given by $k_n=d_n\tilde K(2\pi n/a)$, $n=0,\pm1,\pm2,\dots$ [279] (Theorem 15.25).

We will use the second part of this theorem to prove that $\langle\mathcal L u,v\rangle=\langle u,\mathcal L^\dagger v\rangle$ for $\mathcal L^\dagger=I-\gamma_c\,\eta^\star\otimes K^\star\otimes$, with $\eta^\star(t)=\eta(-t)$ and $K^\star(x,t)=K(x,-t)$ (the functions $\eta$ and $K$ are reflected around the point $t=0$). We cannot use integration by parts for that purpose, because while the scalar product is defined on the periodic domain $\Omega$, the convolutions in $\mathcal L$ are defined on the entire real line. For $\langle u,\mathcal L v\rangle$ to be correctly defined we have to prove that $\mathcal L v$ also belongs to the space of functions periodic on $\Omega$. By a change of variables in the integral and by the periodicity of $v$:

\[
\begin{aligned}
(\eta\otimes K\otimes v)\Bigl(x,\,t+\tfrac{2\pi}{\omega_c},\,\cdot\Bigr)
&=\int_{\mathbb R^3}\eta\Bigl(t+\tfrac{2\pi}{\omega_c}-r\Bigr)K(x-y,\,r-s)\,v(y,s,\cdot)\,dy\,ds\,dr\\
&=\int_{\mathbb R^3}\eta(t-r)\,K\Bigl(x-y,\,r+\tfrac{2\pi}{\omega_c}-s\Bigr)v(y,s,\cdot)\,dy\,ds\,dr\\
&=\int_{\mathbb R^3}\eta(t-r)\,K(x-y,\,r-s)\,v\Bigl(y,\,s+\tfrac{2\pi}{\omega_c},\,\cdot\Bigr)\,dy\,ds\,dr\\
&=\int_{\mathbb R^3}\eta(t-r)\,K(x-y,\,r-s)\,v(y,s,\cdot)\,dy\,ds\,dr=(\eta\otimes K\otimes v)(x,t,\cdot).
\end{aligned}
\]

The periodicity along $x$ is proved similarly. Next, we express the solutions $u,v\in L^1(\Omega)$ by their Fourier series
\[
u(x,t,\cdot)=\sum_{m,n=-\infty}^{+\infty}c_{mn}\,e^{i(m\omega_c t+nk_c x)},\qquad
v(x,t,\cdot)=\sum_{m,n=-\infty}^{+\infty}d_{mn}\,e^{i(m\omega_c t+nk_c x)},
\]
where the coefficients $c_{mn}$, $d_{mn}$ depend on the slow variables only. Since $c_{mn},d_{mn}=O\bigl(1/(m^2+n^2)\bigr)$ as $m,n\to\infty$, we can exchange the order of integration and summation:

\[
\begin{aligned}
(\eta\otimes K\otimes v)(x,t,\cdot)
&=\sum_{m,n}\int_{\mathbb R^3}\eta(r)K(y,s)\,d_{mn}\,e^{im\omega_c(t-r-s)+ink_c(x-y)}\,dy\,ds\,dr\\
&=\sum_{m,n}d_{mn}\,e^{i(m\omega_c t+nk_c x)}\int_{\mathbb R}\eta(r)e^{-im\omega_c r}\,dr\int_{\mathbb R^2}K(y,s)\,e^{-i(m\omega_c s+nk_c y)}\,dy\,ds\\
&=\sum_{m,n}d_{mn}\,e^{i(m\omega_c t+nk_c x)}\,\tilde\eta(im\omega_c)\,\tilde K(nk_c,im\omega_c),
\end{aligned}
\]

and similarly, but using that $\eta$, $K$ are real and also that $K$ is an even function of $x$:
\[
\begin{aligned}
(\eta^\star\otimes K^\star\otimes u)(x,t,\cdot)
&=\sum_{m,n}\int_{\mathbb R^3}\eta(r)K(y,s)\,c_{mn}\,e^{im\omega_c(t-r-s)-ink_c(x-y)}\,dy\,ds\,dr\\
&=\sum_{m,n}c_{mn}\,e^{i(m\omega_c t-nk_c x)}\int_{\mathbb R}\eta(r)e^{-im\omega_c r}\,dr\int_{\mathbb R^2}K(y,s)\,e^{-i(m\omega_c s+nk_c y)}\,dy\,ds\\
&=\sum_{m,n}c_{mn}\,e^{i(m\omega_c t-nk_c x)}\,\tilde\eta(im\omega_c)\,\tilde K(nk_c,im\omega_c).
\end{aligned}
\]


Finally, we can write out the scalar products
\[
\begin{aligned}
\langle u,\,\eta\otimes K\otimes v\rangle
&=\frac{k_c\omega_c}{4\pi^2}\int_\Omega\Bigl[\sum_{m,n}\bar c_{mn}\,e^{-i(m\omega_c t+nk_c x)}\Bigr]
\Bigl[\sum_{o,p}d_{op}\,e^{i(o\omega_c t+pk_c x)}\,\tilde\eta(io\omega_c)\,\tilde K(pk_c,io\omega_c)\Bigr]dx\,dt\\
&=\sum_{m,n,o,p}\bar c_{mn}d_{op}\,\tilde\eta(io\omega_c)\,\tilde K(pk_c,io\omega_c)\,\frac{k_c\omega_c}{4\pi^2}\int_\Omega e^{i(o-m)\omega_c t+i(p-n)k_c x}\,dx\,dt\\
&=\sum_{m,n,o,p}\bar c_{mn}d_{op}\,\tilde\eta(io\omega_c)\,\tilde K(pk_c,io\omega_c)\,\delta_{mo}\delta_{np}
=\sum_{m,n}\bar c_{mn}d_{mn}\,\tilde\eta(im\omega_c)\,\tilde K(nk_c,im\omega_c),
\end{aligned}
\]

and along the same lines
\[
\langle\eta^\star\otimes K^\star\otimes u,\,v\rangle
=\frac{k_c\omega_c}{4\pi^2}\int_\Omega\overline{\Bigl[\sum_{m,n}c_{mn}\,e^{i(m\omega_c t-nk_c x)}\,\tilde\eta(im\omega_c)\,\tilde K(nk_c,im\omega_c)\Bigr]}
\Bigl[\sum_{o,p}d_{op}\,e^{i(o\omega_c t+pk_c x)}\Bigr]dx\,dt
=\sum_{m,n}\bar c_{mn}d_{mn}\,\tilde\eta(im\omega_c)\,\tilde K(nk_c,im\omega_c),
\]

and see that $\mathcal L^\dagger=I-\gamma_c\,\eta^\star\otimes K^\star\otimes$ is indeed the adjoint of $\mathcal L=I-\gamma_c\,\eta\otimes K\otimes$.

9.5

TW-SW selection

In Chapter 5 we derive amplitude equations for the neural field model (4.2.6) and use them to differentiate several types of dynamics possible in the neighbourhood of a Turing-Hopf bifurcation. More specifically, Section 5.2 derives conditions for the selection between a travelling wave (TW) and a standing wave (SW), as well as for their sideband stability. Here we give the details of calculating these conditions in terms of the original model parameters. We list the codes that we used to plot the parameter space of the various models we investigated. For flexibility, we have divided the calculation into three steps, executed by separate modules of code. The first step is to obtain values for those variables fixed by the requirement that the system is at the bifurcation point. These are found by linear stability analysis, using the machinery described at length in Appendix 9.1. Once we have obtained a critical curve (see Section 4.5.1) in XPP/AUTO, using the command "All info" we export all instability data along the curve as a list of numbers.

The second step, presented in the following Appendix 9.5.1, determines all quantities (5.1.11) participating in the amplitude equations, apart from the nonlinear parameters γ_2 and γ_3. This makes it independent of the particular choice of firing rate function f(u). Still, we have enough information to determine the model parameter values for which the system outlook is either a TW-generator or a TW/SW-generator (refer to Section 5.2.2). We can also plot the relative sizes of the regions in parameter space for the different types of solution. This is often enough for characterising the behaviour expected of the model. We need to take the third step only if we require precise data on the selection region boundaries or on the amplitudes of the emerging patterns, or if we work with a complicated firing rate function. In Appendix 9.5.2 we discuss how to plot selection regions in the (γ_2, γ_3)-plane, as well as how to map this data to the parameter space of a non-cubic firing rate. Finally, in Appendix 9.5.3 we present codes that allow us to conveniently summarise, in a two-dimensional parameter space (which does not include the bifurcation parameter), the types of instability observed in a given model. There the concept of generators is instrumental.

9.5.1 Data at the linear order


Application of the conditions for selection of TWs and SWs (5.2.3) to a particular model involves calculating the expressions (5.2.4). In Section 5.2.2 we introduced the notion of a generator as a convenience that tells us whether it is possible to observe a SW for given model parameters or not. In particular, it allows us to describe the system dynamics without resorting to the nonlinear parameters of the model. Rather than calculating a_r, b_r and c_r, we need to find the values only of the second group of quantities in (5.2.4),
$$e + i e_i = D, \qquad f + i f_i = D\,(2C_{00} + C_{22}), \qquad g = \mathrm{Re}\big[D\,(C_{00} + C_{02} + C_{20})\big].$$
This is what we set out to do in the following scripts. First, we load the data describing the linear instability along a critical curve we are interested in. For each point of the curve we need the values of ω_c, k_c, γ_c. Assume these are stored in a file allinfo.dat generated earlier by XPP. The attached CD contains such data files for all models we investigate in this dissertation. The script we use to load and process such a file in MatLab is:
Code listing 9.5.1, in MatLab format:
allinfo=load('../linear/allinfo.dat');  %load data from an XPP export file
pdat=allinfo(:,3);   %the critical curve parameter
gdat=allinfo(:,6);   %gamma_c, the bifurcation parameter
omdat=allinfo(:,8);  %omega_c, critical frequency
kdat=allinfo(:,9);   %k_c, critical wavenumber
%set according to the model investigated:
w0=1;   %inverted wiz hat connectivity ( w0 = 1 ), standard wiz hat ( w0 = -1 )
%inverting the connectivity is the same as inverting the nonlinearity
gdat=w0*gdat;

[Dr,Di,brp,bip,crp,cip,drp,dip,ups]=efg4(pdat,gdat,kdat,omdat,w0); %[Dr,Di,brp,crp,bip,cip,drp,dip]=efgdendr(pdat,gdat,kdat,omdat);

We have devised two separate MatLab functions for calculating e, f and g: one for models with axonal delay only, efg4, and another for models including a dendritic delay, efgdendr. The functions are built so that minor edits of those files can service various model extensions. The function efg4 is set up for the model with axonal delay and nonlocal linear adaptation (Section 5.3.2),
$$u = \eta \otimes K \otimes f(u) - \gamma_a\, \eta_a \otimes w_a \otimes u, \qquad \eta_a(t) = \beta\, e^{-\beta t}, \qquad w_a(x) = \frac{1}{2\sigma}\, e^{-|x|/\sigma}.$$
Setting σ = 0 or γ_a = 0 we get the model with local adaptation, respectively with no adaptation. The integral kernels participate in the amplitude equations by their Fourier–Laplace transforms. We calculate the expressions
$$\eta^m K^m_n = \hat\eta(i m\omega_c)\,\hat K(n k_c, i m\omega_c) \qquad \text{and} \qquad \eta^m_a w^a_n = \hat\eta(i m\omega_c)\,\hat\eta_a(i m\omega_c)\,\hat w_a(n k_c),$$
and their derivatives to second order. Note that these expressions can be written equivalently using the corresponding differential operators (Section 4.6):
$$\hat\eta \hat K = -\frac{2\, V_D}{Q\, P_D}, \qquad \hat\eta\,\hat\eta_a \hat w_a = \frac{1}{Q\, Q_a\, P_{D_a}}.$$

In the context of Section 5.3.1, the strength of adaptive feedback γ_a forms a linear firing rate f_a(u) = γ_a u. However, since we do not perturb along this parameter, it contributes no terms at the higher orders of the expansion. The code is quickly modified also for a nonlinear adaptation term, with f_a ≡ f and γ_a replaced by γ_1 γ_a. According to the results in Section 5.3.1, the parameters are defined as in (5.3.4),
$$b = D\,\big[3\, dr^{11}_3 + 2\, dr^{11}_2\,(2C_{00} + C_{22})\big], \qquad c = 2 D\,\big[3\, dr^{11}_3 + 2\, dr^{11}_2\,(C_{00} + C_{02} + C_{20})\big],$$
$$C_{mn} = \frac{dr^{mn}_2}{1 - dr^{mn}_c} = \frac{\gamma_2\,\eta^m K^m_n + \gamma_a a_2\,\eta^m_a w^a_n}{1 - \gamma_c\,\eta^m K^m_n + \gamma_a a_1\,\eta^m_a w^a_n},$$
where a_2 = 0, a_1 = 1 for the linear, and a_2 = γ_2, a_1 = γ_c for the nonlinear adaptation case. In order to keep the form of (5.2.4) and the definitions of f and g the same as before, we have to redefine D and C_mn. For a linear or no adaptation term we have
$$D = -\frac{\eta K}{\left.\mathrm{dr}'\right|_{\lambda = i\omega_c}},$$
and for nonlinear adaptation
$$D = -\frac{\eta K - \gamma_a\, \eta_a w_a}{\left.\mathrm{dr}'\right|_{\lambda = i\omega_c}}.$$
The definition of C_mn loses the γ_2's in the numerator.

All these are the expressions seen in the code that follows.
Code listing 9.5.2, in MatLab format:
function [Dr,Di,brp,bip,crp,cip,drp,dip,ups]=efg4(param,g1,k,om,w0) %setting model parameters %these have to be the parameter values used in XPP to generate the instability data alpha=1; r=1; v=1; ga=0; sigma=0; beta=1; v=param; %synaptic time-constant %wizhat excitation/inhibition ratio %axonal speed %strength of adaptation %spatial spread of adaptive feedback %time-constant of the adaptation term %here is the parameter which we vary, usually v or ga %this assings the grid of points used by XPP (r=1 - balanced kernel)

%calculating the tranforms eta(i*omega_c)K(k_c,i*omega_c) = -2 VD/(Q PD) and derivatives lam=i*om; nk=k; Q=(1+lam/alpha).^2; Qdlam=2*(1+lam/alpha)/alpha; Qdlam2=2/alpha^2; PD=((1+lam./v).^2+nk.^2).^2; PDdlam=4*((1+lam./v).^2+nk.^2).*(1+lam./v)./v; PDdk=4*((1+lam./v).^2+nk.^2).*nk; PDdlam2=4*(3*(1+lam./v).^2+nk.^2)./v.^2; PDdklam=8*(1+lam./v).*nk./v; PDdk2=4*((1+lam./v).^2+3*nk.^2); VD=(1+lam./v-r).*(1+lam./v).^2+nk.^2.*(1+lam./v+r); VDdlam=((1+lam./v).*(3*(1+lam./v)-2*r)+nk.^2)./v; VDdk=2*nk.*(1+lam./v+r); VDdlam2=(6*(1+lam./v)-2*r)./v.^2; VDdklam=2*nk./v;


VDdk2=2*(1+lam./v+r); %to invert the connectivity if it is a standard wizhat (w0 = -1) VD=w0*VD; VDdlam=w0*VDdlam; VDdk=w0*VDdk; VDdlam2=w0*VDdlam2; VDdklam=w0*VDdklam; VDdk2=w0*VDdk2; %the adaptation term and derivatives PDa=1+sigma.^2.*nk.^2; PDadlam=0; PDadk=2*sigma.^2.*nk; PDadlam2=0; PDadklam=0; PDadk2=2*sigma.^2; Qa=1+lam/beta; Qadlam=1/beta; Qadlam2=0;

%putting everything together etaK=-2*VD./(Q.*PD); etaKdlam=(VDdlam./VD-Qdlam./Q-PDdlam./PD).*etaK; etaKdk=(VDdk./VD-PDdk./PD).*etaK; etaKdk2=(VDdk./VD-PDdk./PD).*etaKdk+etaK.*(VDdk2./VD-PDdk2./PD-(VDdk.^2./VD.^2-PDdk.^2./PD.^2)); etaKdlam2=(VDdlam./VD-Qdlam./Q-PDdlam./PD).*etaKdlam+etaK.*(VDdlam2./VD-Qdlam2./Q... -PDdlam2./PD-(VDdlam.^2./VD.^2-Qdlam.^2./Q.^2-PDdlam.^2./PD.^2)); etaKdklam=(VDdk./VD-PDdk./PD).*etaKdlam... +etaK.*(VDdklam./VD-PDdklam./PD-(VDdlam.*VDdk./VD.^2-PDdlam.*PDdk./PD.^2)); %zeta is the adaptation term zeta=1./(Q.*Qa.*PDa); zetadlam=-(Qdlam./Q+Qadlam./Qa+PDadlam./PDa).*zeta; zetadk=-PDadk./PDa.*zeta; zetadlam2=-(Qdlam./Q+Qadlam./Qa+PDadlam./PDa).*zetadlam+zeta.*(-(Qdlam2./Q+Qadlam2./Qa... +PDadlam2./PDa)+Qdlam.^2./Q.^2+Qadlam.^2./Qa.^2+PDdlam.^2./PD.^2); zetadk2=-PDadk./PDa.*zetadk+zeta.*(-PDadk2./PDa+PDadk.^2./PDa.^2); zetadklam=-PDadk./PDa.*zetadlam+zeta.*(-PDadklam./PDa+PDadk.*PDadlam./PDa.^2); %for the case of nonlinear adaptation with f_a = f % ga=g1.*ga;

%the linear dispersion relation is the equation dr = 1
dr=g1.*etaK-ga.*zeta;
drdlam=g1.*etaKdlam-ga.*zetadlam;
drdik=-i*(g1.*etaKdk-ga.*zetadk);
drdlam2=g1.*etaKdlam2-ga.*zetadlam2;
drdik2=-(g1.*etaKdk2-ga.*zetadk2);
drdiklam=-i*(g1.*etaKdklam-ga.*zetadklam);

%this plot confirms that the calculations are correct %(use this to check that the manually given parameters above correspond with the XPP file)


gc=real((1+ga.*zeta)./etaK); figure(3) plot(param,gc,-); hold on; plot(param,g1,r--); hold off;

%gamma_c from the calculations here

%gamma_c from the data file - should coincide

xlabel(The linear instability. The blue and red curve must coincide!); ups=drdik./drdlam; drp=real(d); dip=imag(d); %for the case of linear or no adaptation D=-etaK./drdlam; %for the case of nonlinear adaptation with f_a = f % ga=ga./g1; % D=-(etaK-ga.*zeta)./drdlam; Dr=real(D); Di=imag(D); %another check: a_r must be always positive beyond the bifurcation (gamma_1 > gamma_c) figure(2) plot(param,Dr,b); xlabel(D_r - must be always positive!); %revert to the parameter ga %the group speed v_g

d=-1./(2*drdlam).*(drdik2-2*ups.*drdiklam+ups.^2.*drdlam2);

%calculating the C_mn expressions for various arguments C02=Cmn(2*k,0,v,g1,r,ga,sigma,w0); C00=Cmn(0,0,v,g1,r,ga,sigma,w0); C20=Cmn(0,2*i*om,v,g1,r,ga,sigma,w0); C20r=real(C20); C20i=imag(C20); C22=Cmn(2*k,2*i*om,v,g1,r,ga,sigma,w0); C22r=real(C22); C22i=imag(C22); %these are f and g brp=Dr.*(2*C00+C22r)-Di.*C22i; bip=Di.*(2*C00+C22r)+Dr.*C22i; crp=Dr.*(C00+C02+C20r)-Di.*C20i; cip=Di.*(C00+C02+C20r)+Dr.*C20i;

The last four lines are the expressions f, fi , g and gi . To calculate Cmn for different multiples of the unstable mode, we call the function
Code listing 9.5.3, in MatLab format:
function C=Cmn(nk,lam,v,g1,r,ga,sigma,kappa,w0); alpha=1; %some of the rarely used parameters beta=1; Q=(1+lam/alpha).^2; VD=(1+lam./v-r).*(1+lam./v).^2+nk.^2.*(1+lam./v+r); PD=((1+lam./v).^2+nk.^2).^2; PDa=1+sigma.^2.*nk.^2;


Qa=1+lam./beta; zeta=1./(Q.*Qa.*PDa); etaK=-w0*2*VD./(Q.*PD); %linear adaptation C=etaK./(1-g1.*etaK+ga.*zeta); %nonlinear adaptation with f_a = f % C=(etaK-ga.*zeta)./(1-g1.*(etaK-ga.*zeta));

In the case of nonlinear adaptation we have a_{2,3} = γ_{2,3}. In both cases, as mentioned earlier, we have taken out γ_2 from the expression to put it in (5.2.4). At this point we can plot the relative proportions in which the TW–SW selection regions divide the (γ_2, γ_3)-parameter space. One way to summarise data along the critical curve (see Fig. 4.4) is by plotting the curvatures of the regions' parabolic boundaries. The second derivative κ = γ_3''(γ_2) is a constant that gives us all the information about a parabola. From (5.2.4), these are, respectively,
$$
b_r = 0:\ \kappa = -\frac{4 f}{3 e}, \qquad
b_r + c_r = 0:\ \kappa = -\frac{4(f + 2g)}{9 e}, \qquad
b_r - c_r = 0:\ \kappa = \frac{4(f - 2g)}{3 e}, \qquad
b_r d_r + b_i d_i = 0:\ \kappa = -\frac{4}{3}\,\frac{d_r f + d_i f_i}{d_r e + d_i e_i},
\qquad (9.5.1)
$$

for the TW existence, SW existence, stability and Benjamin–Feir parabolas. In the last line we used the definitions e_i = Im D, f_i = Im[D(2C_{00} + C_{22})]. We would like to have a purely visual measure telling us relatively how much of the parameter space is taken up by each type of dynamics. However, as a parabola is made steeper, delimiting a thinner region around the γ_3-axis, the curvature κ tends to infinity. If there are two regions of similar size (what this means is not mathematically defined), limited by parabolic borders with curvatures κ_1, κ_2 and κ_3, κ_4, and one of the regions is close to the γ_2-axis (shallow parabolas), the other to the γ_3-axis (steep parabolas), then we would have κ_3 κ_4 >> κ_1 κ_2. Thus the curvatures are not a good (intuitive) indicator for the size of the region. Instead, we will use as a measure the vertical coordinate at which the parabola crosses the unit circle. Solving


γ_3 = κγ_2²/2 together with γ_2² + γ_3² = ρ², we obtain
$$\gamma_3 = \frac{1}{\kappa}\Big(-1 + \sqrt{1 + \kappa^2 \rho^2}\,\Big).$$

It is these quantities that we plot in Figs. 5.3, 5.4, 5.7, 5.8, with κ replaced by the second derivatives we found in (9.5.1). They are, of course, not well defined as measures for regions that extend to infinity. We argue that they suffice for the aims of the investigator, because one would like to simulate the system not too far from the origin γ_2 = γ_3 = 0. As one moves away from the origin various system variables become large and the numerical accuracy suffers. In this sense, plotting the proportions of the unit circle (ρ = 1) in which it is divided by the TW–SW selection regions is good enough. The following visualisation code can be added to the function efg4 in Listing 9.5.2:
Code listing 9.5.4, in MatLab format:
%parabola curvatures kappa tw=-(4*brp./(3*Dr)); sw=-(4*(brp+2*crp)./(9*Dr)); bj=-(4/3*(drp.*brp+dip.*bip)./(drp.*Dr+dip.*Di)); stb=(4*(brp-2*crp)./(3*Dr)); %beta isolines for the sigmoidal function [b,g1]=meshgrid([40 120],g1); b(b<4*g1)=NaN; %b(:,1)=4*gc; %array of beta values %removes beta points falling under the minimum for instability %for preimage of the minimal betas %alpha %TW existence %SW existence %Benjamin-Feir instability %stability line, TW-SW

a=(b./g1-2+sqrt((b./g1-2).^2-4))/2; kap=atan(4/3*(a.^2-4*a+1)./(a-1).^2./g1); figure(6) %ratio() is defined below

plot(param,ratio(tw),-.y,param,ratio(sw),-.g,param,ratio(stb),k,LineWidth,2); hold on plot(param,ratio(tw),k,param,ratio(sw),k,param,ratio(bj),r,param,zero,:r); plot(param,ratio(4/3./g1),--k); plot(param,ratio(kap),-.); hold off xlabel([the curvature of TW (yellow), SW (green), BF (thin red) and Stability (thick black) ... parabolas \newline dashed - isolines for \beta: blue - \beta=10, green - \beta=100, ... black - \beta\rightarrow\infty]); %the crossing point of parabola with circle of radius circ function rr=ratio(kappa) circ=1; rr=(-1+sqrt(1+kappa.^2*circ^2))./(kappa*circ);
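As a standalone sanity check of the ratio() measure defined at the end of the listing above, one can verify that the returned ordinate indeed lies on both the parabola and the unit circle; the curvature value below is arbitrary:

kappa=2.7; circ=1;                            %arbitrary curvature, unit circle (rho = 1)
g3=(-1+sqrt(1+kappa^2*circ^2))/(kappa*circ);  %as returned by ratio(kappa)
g2=sqrt(2*g3/kappa);                          %abscissa from gamma_3 = kappa*gamma_2^2/2
disp([g3-kappa*g2^2/2, g2^2+g3^2-circ^2])     %both residuals should be zero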

Some of the code above is to help us see which of the plotted region boundaries would fall into a fixed range of β's when mapped by the sigmoidal firing rate (see Sections 5.2.3 and 5.2.5). It plots on the same graph the steepest parabolas that could be region boundaries whose image would cross a given constant β. Solving γ_3 = κ̄γ_2²/2 together with the projections found in (5.2.5), for the parabola curvature we get
$$\bar\kappa = \frac{4(\bar\alpha^2 - 4\bar\alpha + 1)}{3\gamma_c\,(\bar\alpha - 1)^2}.$$
To put it in terms of β we use the expression for ᾱ given in Section 5.2.3:
$$\bar\alpha = \Big(\frac{\beta}{2\gamma_c} - 1\Big) + \sqrt{\Big(\frac{\beta}{2\gamma_c} - 1\Big)^2 - 1}.$$
For the limit β → ∞, we have lim κ̄ = 4/(3γ_c). Region portions lying above that line would not be observable in the model with sigmoidal firing rate.
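A quick numerical illustration of this limit (a standalone sketch; the value of γ_c and the β grid below are arbitrary):

gc=1; beta=[10 100 1000 10000];                   %illustrative gamma_c and beta values
abar=(beta/(2*gc)-1)+sqrt((beta/(2*gc)-1).^2-1);  %alpha-bar from Section 5.2.3
kapbar=4*(abar.^2-4*abar+1)./(3*gc*(abar-1).^2);  %steepest admissible curvature
disp([beta; kapbar; 4/(3*gc)*ones(size(beta))])   %kapbar -> 4/(3*gc) as beta grows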

The organisation of the function efgdendr differs from efg4, but the type of expressions calculated for the model with dendritic delay does not. The new layout was guided by a desire to minimise the input of formulas (and bugs) at the expense of code efficiency. The subroutine Cmndendr this time returns values for the entire dispersion relation dr, as defined in Section 5.3.1.
Code listing 9.5.5, in MatLab format:
function [Dr,Di,brp,crp,bip,cip,drp,dip,tw,sw,bj]=efgdendr(param,g1,k,om) [etaK,dr,D,d,zetag]=Cmndendr(k,i*om,param,g1); Dr=real(D); Di=imag(D); drp=real(d); dip=imag(d); %the check of correct linear instability gc=(zetag+g1.*etaK)./etaK; figure(3) plot(param,gc,param,g1,r--); figure(1) plot(param,Dr); %D_r should be positive for all gamma_1 > gamma_c %the two curves should coincide

%only etaK and dr are used from the output of the following calls [etaK,dr,D,d,zetag]=Cmndendr(0,0,param,g1); C00=etaK./(1-dr); [etaK,dr,D,d,zetag]=Cmndendr(2*k,0,param,g1); C02=etaK./(1-dr); [etaK,dr,D,d,zetag]=Cmndendr(0,2*i*om,param,g1); C20r=real(etaK./(1-dr)); C20i=imag(etaK./(1-dr)); [etaK,dr,D,d,zetag]=Cmndendr(2*k,2*i*om,param,g1); C22r=real(etaK./(1-dr));


C22i=imag(etaK./(1-dr)); %f and g brp=Dr.*(2*C00+C22r)-Di.*C22i; bip=Di.*(2*C00+C22r)+Dr.*C22i; crp=Dr.*(C00+C02+C20r)-Di.*C20i; cip=Di.*(C00+C02+C20r)+Dr.*C20i;

The parameters and formulas are situated inside Cmndendr. The Fourier–Laplace transforms that we need are
$$\hat\eta \hat K = -\frac{2\, V_D}{c_D\, Q\, P_D\, \sqrt{\iota + \lambda/c_D}\; e^{z_0\sqrt{\iota + \lambda/c_D}}},$$
given in Appendix 9.1.2. There is also a linear adaptation term, which is the same as that defined for the axonal model.
Code listing 9.5.6, in MatLab format:
function [etaK,dr,D,d,zetag]=Cmndendr(k,lam,param,g1); %setting model parameters %these have to be the parameter values used in XPP to generate the instability data alpha=1; iota=1; cD=1; z0=0; phi=0; b=1; g1a=0; beta=1; sigma=0; phi=param; %synaptic time constant %iota = 1 / D*cD, D is the cable diffusion constant %cable time constant tau_D %position of first synapse %synaptic spread %the axonal delay b = 1/v %in this way the delay can be removed by setting b = 0 %strength of adaptation %time-constant of the adaptation term %spatial spread of adaptive feedback %here is the parameter which we vary, usually z0, phi or b %this assings the grid of points used by XPP Q=(1+lam/alpha).^2; Qdlam=2*(1+lam/alpha)/alpha; Qdlam2=2/alpha^2; sq=sqrt(iota+lam./cD); a=1+lam.*b+phi.*sq; da=b+phi./(2*sq.*cD); dda=-phi./(4*sq.^3.*cD.^2); PD=(a.^2+k.^2).^2; PDdlam=4*(a.^2+k.^2).*a.*da; PDdk=4*(a.^2+k.^2).*k; PDdlam2=4*(3*a.^2+k.^2).*da.^2+4*(a.^2+k.^2).*a.*dda; PDdklam=8*k.*a.*da; PDdk2=4*(a.^2+3*k.^2); VD=k.^2-a.^2+a.*(a.^2+k.^2);


VDdlam=(-2*a+3*a.^2+k.^2).*da; VDdk=2*k.*(1+a); VDdlam2=(6*a-2).*da.^2+(-2*a+3*a.^2+k.^2).*dda; VDdklam=2*k.*da; VDdk2=2*(1+a); etaK=-2*VD./(PD.*cD.*Q.*sq.*exp(sq.*z0)); etaKdlam=etaK.*(VDdlam./VD-PDdlam./PD-Qdlam./Q-1./(2*sq.^2)-z0./(2*sq)); etaKdk=(VDdk./VD-PDdk./PD).*etaK; etaKdk2=(VDdk./VD-PDdk./PD).*etaKdk+etaK.*(VDdk2./VD-PDdk2./PD-(VDdk.^2./VD.^2-PDdk.^2./PD.^2)); etaKdlam2=(VDdlam./VD-Qdlam./Q-PDdlam./PD-1/2./(iota+lam)-z0./(2*sq)).*etaKdlam... +etaK.*(VDdlam2./VD-Qdlam2./Q-PDdlam2./PD+1./(4*sq.^4)-z0.*(1./sq+z0)./(4*sq.^2)... -(VDdlam.^2./VD.^2-Qdlam.^2./Q.^2-PDdlam.^2./PD.^2-1./(2*sq.^2).^2-z0.^2./(2*sq).^2)); etaKdklam=(VDdk./VD-PDdk./PD).*etaKdlam+etaK.*(VDdklam./VD-PDdklam./PD... -(VDdlam.*VDdk./VD.^2-PDdlam.*PDdk./PD.^2)); %the adaptation term Qa=1+lam/beta; Qadlam=1/beta; Qadlam2=0; PDa=1+sigma.^2.*k.^2; PDadlam=0; PDadk=2*sigma.^2.*k; PDadlam2=0; PDadklam=0; PDadk2=2*sigma.^2; zeta=1./(Q.*Qa.*PDa); zetadlam=-(Qdlam./Q+Qadlam./Qa+PDadlam./PDa).*zeta; zetadk=-PDadk./PDa.*zeta; zetadlam2=-(Qdlam./Q+Qadlam./Qa+PDadlam./PDa).*zetadlam... +zeta.*(-(Qdlam2./Q+Qadlam2./Qa+PDadlam2./PDa)... +Qdlam.^2./Q.^2+Qadlam.^2./Qa.^2+PDdlam.^2./PD.^2); zetadk2=-PDadk./PDa.*zetadk+zeta.*(-PDadk2./PDa+PDadk.^2./PDa.^2); zetadklam=-PDadk./PDa.*zetadlam+zeta.*(-PDadklam./PDa+PDadk.*PDadlam./PDa.^2); %the dispersion relation dr=g1.*etaK-g1a.*zeta; drdlam=g1.*etaKdlam-g1a.*zetadlam; drdik=-i*(g1.*etaKdk-g1a.*zetadk); drdlam2=g1.*etaKdlam2-g1a.*zetadlam2; drdik2=-(g1.*etaKdk2-g1a.*zetadk2); drdiklam=-i*(g1.*etaKdklam-g1a.*zetadklam); zetag=g1a.*zeta; %this is only used for checking the instability in efgdendr

%these are only used when the function is called with arguments n = 1, m = 1 D=-etaK./drdlam; ups=drdik./drdlam; %ipsilon d=-1./(2*drdlam).*(drdik2-2*ups.*drdiklam+ups.^2.*drdlam2);


9.5.2 The nonlinear step


Model-specific information is all contained in the expressions e, f, g, which are found by the script in the preceding Appendix 9.5.1. Therefore the last piece of code, which plots the nonlinear parameter portrait, does not have to be altered for each model. For the cubic firing rate it simply calculates (5.2.4),
$$a_r = \delta\, e, \qquad b_r = 3\gamma_3\, e + 2\gamma_2^2\, f, \qquad c_r = 6\gamma_3\, e + 4\gamma_2^2\, g,$$
and plots where b_r, b_r + c_r, b_r − c_r and b_r d_r + b_i d_i are negative. These correspond respectively to TW existence, SW existence, TW stability (and SW instability), and Benjamin–Feir stability. We have to pick a single point from the instability curve. This is done manually by assigning its index in the data array to the variable ind (the value of the parameter is seen by printing pdat(ind)).

Code listing 9.5.7, in MatLab format:


warning off MATLAB:divideByZero %ind=200; %ind=551; %setting up the (gamma_2,gamma_3)-plane in desired coordinates % g2=-250:10:1000; % g3=-1000:10:500; % % g2=-2.5:0.001:0; g3=-1:0.05:1;

%g3=-0.2*ones(size(g2)); g2=-2.5:0.05:2.5; g3=-0.5:0.01:0.5; [g2,g3]=meshgrid(g2,g3); %taking the data for only one bifurcation point brp1=brp(ind); bip1=bip(ind); crp1=crp(ind); cip1=cip(ind); Dr1=Dr(ind); Di1=Di(ind); drp1=drp(ind); dip1=dip(ind); %the amplitude equation parameters b and c br=3*g3*Dr1+2*g2.^2*brp1; bi=3*g3*Di1+2*g2.^2*bip1; cr=6*g3*Dr1+4*g2.^2*crp1; ci=6*g3*Di1+4*g2.^2*cip1; benj=br.*drp1+bi.*dip1; q=sqrt(Dr1.*br.*benj./(2*drp1.*(br.^2+bi.^2)+br.*drp1.*benj)); %benjamin-feir %benjamin-feir-eckhaus


figure(5) xlabel(g2); ylabel(g3);

%regions in the (gamma_2,gamma_3)-plane

[C,ha,CF]=contourf(g2,g3,-br,[0 0]); set(ha,FaceColor,y) hold on [C,ha,CF]=contourf(g2,g3,-(br+cr),[0 0]); set(ha,FaceColor,g,FaceAlpha,0.5)

% yellow - TW exists

% green -

SW exists

% thick black - stability line, pink tint shows TW stability [C,ha,CF]=contourf(g2,g3,(br-cr),[0 0]); set(ha,LineWidth,3,EdgeColor,k,FaceColor,r,FaceAlpha,0.2); % thick red - Benjamin-Feir line, red tint shows sideband stability [C,ha,CF]=contourf(g2,g3,-benj,[0 0]); set(ha,LineWidth,3,EdgeColor,r,FaceColor,m,FaceAlpha,0.3); hold off xlabel([g2 - g3 plot for fixed param = num2str(pdat(ind))... \newline TW exists - yellow, SW exists - green]);
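Before producing the full contour plots, the selection conditions can be checked at a single point of the (γ_2, γ_3)-plane. The numbers below are purely illustrative stand-ins for the linear-order data e, e_i, f, f_i, g, d_r, d_i produced by efg4:

e=1; ei=0; f=-0.3; fi=-0.1; g=0.5; dr=0.2; di=0.1;   %illustrative linear-order data
g2=0.5; g3=-0.2;                                     %a sample nonlinear parameter point
br=3*g3*e+2*g2^2*f;  bi=3*g3*ei+2*g2^2*fi;
cr=6*g3*e+4*g2^2*g;
benj=br*dr+bi*di;
fprintf('TW exists: %d, SW exists: %d, TW stable: %d, sideband stable: %d\n', ...
        br<0, br+cr<0, br-cr<0, benj<0);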

For the case of sigmoidal firing rate, we can use as an independent parameter either h or ᾱ = e^{−β(u_ss − h)} (Section 5.2.3); at the bifurcation they are related through β = γ_c(1 + ᾱ)²/ᾱ. In the code below we have used ᾱ as a basic parameter, but we can plot the selection regions by any of the variables simply by remapping the MatLab plot axes (the variable vert). In that case, one has to pay attention to set up the range of ᾱ so that the map is not multivalued. Fig. 5.2, left, comes in handy for that purpose. We use the trick with the axes to implement the sigmoidal mapping. The formulas (5.2.5),
$$\gamma_2 = \frac{1}{2}\,\beta^2\, \frac{\bar\alpha(\bar\alpha - 1)}{(1 + \bar\alpha)^3}, \qquad \gamma_3 = \frac{1}{6}\,\beta^3\, \frac{\bar\alpha(\bar\alpha^2 - 4\bar\alpha + 1)}{(1 + \bar\alpha)^4},$$
essentially are the inverse map. We use it to obtain the specification of the amplitude equation parameters b and c in the same cubic form (5.2.4) as in Listing 9.5.7. Then we plot the TW/SW conditions back into the sigmoidal coordinate system.
Code listing 9.5.8, in MatLab format:
warning off MATLAB:divideByZero %a is the parameter alpha bar %a=0.21:0.01:4.69; %a=0.01:0.01:1; a=1:0.1:10; evec=ones(size(a)); %setting up the vertical plot parameter % vert=(1./gdat)*(a.*log(a)./(1+a).^2); %to plot by h %lower branch of beta(h), h moves from -0.22285 to 0.22385 %positive branch of h(beta) (h > 0) %negative branch of h(beta), it will give the same picture


% vert=ones(size(pdat))*a; vert=gdat*((1+a).^2./a); % vert=ones(size(gdat))*(a.*log(a)./(1+a).^2); g2=gdat.^2*((1+a).*(a-1)./a/2); g3=gdat.^3*((1+a).^2.*(a.^2-4*a+1)./a.^2)/6; br=3*g3.*(Dr*evec)+2*g2.^2.*(brp*evec); cr=6*g3.*(Dr*evec)+4*g2.^2.*(crp*evec); bi=3*g3.*(Di*evec)+2*g2.^2.*(bip*evec); benj=br.*(drp*evec)+bi.*(dip*evec);

%to plot by alpha bar %to plot by beta %h without modification by g_c %inverse of the sigmoidal map %b,c in the (gamma_2,gamma_3)-plane

%Benjamin-Feir instability

%solving the conditions br<0, etc., in the (pdat,vert)-plane figure(4) [C,ha,CF]=contourf((pdat*evec),vert,-br,[0 0]); set(ha,FaceColor,y) xlabel(v); hold on [C,ha,CF]=contourf((pdat*evec),vert,-(br+cr),[0 0]); set(ha,FaceColor,g,FaceAlpha,0.5) [C,ha,CF]=contourf((pdat*evec),vert,(br-cr),[0 0]); set(ha,LineWidth,3,EdgeColor,k,ha,FaceAlpha,0,FaceColor,r) [C,ha,CF]=contourf((pdat*evec),vert,-benj,[0 0]); set(ha,LineWidth,3,EdgeColor,r,FaceAlpha,0.3,FaceColor,m); hold off ylabel(\beta);
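The projection (5.2.5) can also be sanity-checked in isolation by comparing it against numerical derivatives of the sigmoid at the point where f′ = γ_c (a standalone sketch; γ_c, β, the threshold h = 0 and the step size below are arbitrary):

gc=1; beta=10;                                    %illustrative values (beta > 4*gamma_c)
abar=(beta/(2*gc)-1)+sqrt((beta/(2*gc)-1)^2-1);   %alpha-bar such that f'(u0) = gamma_c
g2=beta^2/2*abar*(abar-1)/(1+abar)^3;             %gamma_2 from (5.2.5)
g3=beta^3/6*abar*(abar^2-4*abar+1)/(1+abar)^4;    %gamma_3 from (5.2.5)
f=@(u) 1./(1+exp(-beta*u)); u0=-log(abar)/beta; du=1e-3;
d2=(f(u0+du)-2*f(u0)+f(u0-du))/du^2;                        %~ f''(u0) = 2*gamma_2
d3=(f(u0+2*du)-2*f(u0+du)+2*f(u0-du)-f(u0-2*du))/(2*du^3);  %~ f'''(u0) = 6*gamma_3
disp([g2 d2/2; g3 d3/6])                          %the two columns should agree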

At this point we can also calculate some important quantities, such as the amplitudes of the TW and SW solutions. They are the main tool to cross-check the correctness of the amplitude equations derivation and computer implementation, against the full model simulation code in Appendix 9.3. From Section 5.2.1,

$$|A_{TW}(\gamma_2, \gamma_3)| = 2\sqrt{-\frac{a_r}{b_r}}, \qquad |A_{SW}(\gamma_2, \gamma_3)| = 4\sqrt{-\frac{a_r}{b_r + c_r}}.$$
For the definition of δ (within a_r) we use δ = γ_1 − γ_c, where γ_1 is the parameter value at which we run the simulation. The amplitudes are necessary also to set up initial conditions for simulation of the model that are close to the stable patterns. In fullPDEmodel (Listing 9.3.3), we may specify the following initial dynamics:
Code listing 9.5.9, in MatLab format:
g2s=g2(ind3,ind2), g3s=g3(ind3,ind2) delta=g1-gdat(ind) ar=Dr1*sqrt(delta); ai=Di1*sqrt(delta); br=br(ind3,ind2); %%%TW form ome=abs(omdat(ind))+delta^2*(ai-ar*bi(ind3,ind2)/br(ind3,ind2))-delta.*ups(ind).*q(ind3,ind2) uinit=[ cos(kx); -sin(kx)*ome; -cos(kx)*ome^2; sin(kx)*ome^3; cos(kx)*ome^4;... -sin(kx)*ome^5; -cos(kx)*ome^6];% sin(kx)*ome^7]; %u, u, u, etc. at t = 0 bi=bi(ind3,ind2); %D is a constant by gamma_2,3


%%%SW form % ome=abs(omdat(ind))+delta^2*(ai-ar*(bi(ind2)+ci(ind2))/(br(ind2)+cr(ind2))); % uinit=[ cos(kx); zeros(N,1); -cos(kx)*ome^2; zeros(N,1); cos(kx)*ome^4; zeros(N,1)]; % -cos(kx)*ome^6; zeros(N,1)];

The variables ind2 and ind3 are again manually specified. They fix the values of γ_2 and γ_3, respectively, in Listing 9.5.10 below. They have to be coordinated with the way we have set up the arrays g2 and g3 at the beginning of Listing 9.5.7. In the case of Benjamin–Feir instability (Section 5.2.4) we also calculate the best perturbation wavenumber κ_1, in the sense that perturbations with that wavenumber will grow to O(1) for the shortest simulation time. The latter is also printed, as growthtime; it is the minimum time for which we have to evolve the model in order to notice the effect of the Benjamin–Feir instability. The perturbation wavenumber and growthrate are given by the formulas (5.2.9) and (5.2.10),
$$\kappa_1 = \frac{1}{|d_i|}\sqrt{-\frac{(b_r d_r + b_i d_i)\, a_r\, b_r^2}{|b|^2\,(b_r + c_r)}}, \qquad
\sigma_2(\kappa) = -\frac{\kappa^2}{b_r}\left[\, b_r d_r + b_i d_i + \frac{d_i^2\, |b|^2\, (b_r + c_r)}{2\, a_r\, b_r^2}\,\kappa^2 \right].$$
Here c_r should be deleted from the expression if we are looking at the stability of a TW rather than a SW. In the script we plot k_c/κ_1 in dependence of the nonlinear parameters, because we would like to find a κ_1(γ_2, γ_3) which fits an integer number of times into the initial solution's wavenumber. If, for example, looking at the plot we pick parameters such that 11κ_1 ≈ k_c, we can start the simulation with bas=cos(11*kap1*x)+0.1*cos((10*kap1)*x); in Listing 9.3.3, while setting up the domain with sc=kap1 in Listing 9.3.2 (fullPDEperiodic). In this case the domain will fit eleven wavelengths of the primary pattern (TW) and ten or twelve wavelengths of the periodic sideband perturbation. The following lines are added to Listing 9.5.7 (they require some modification for a sigmoidal firing rate, which we omit):
Code listing 9.5.10, in MatLab format:
%the pattern amplitudes ar=abs(Dr1); %.*(g-gdat(ind)); ai=abs(Di1); %.*(g-gdat(ind)); %abs because delta, which also has a sign, was taken outside of the expression Atw=real(2*sqrt(-ar./br)); Asw=real(4*sqrt(-ar./(br+cr)));



stab=ar./(br-cr); %picking a point in the parameter plane by (ind2, ind3) %it has to be coordinated with the way the array is setup above % ind3=-0.2*100+101; ind2=-0.26*100+201; % ind3=41; ind2=51; % ind3=0.2*100+51; ind2=-1.22*100+251; %a horizontal section of parameter space g21=g2(ind3,:); Atw=Atw(ind3,:); Asw=Asw(ind3,:); stab=stab(ind3,:); figure(4) plot(g21,Atw,b,g21,Asw,g,g21,stab,k,g21,benj(ind3,:)*1000,r); xlabel([the amplitudes along g2, for g3 = num2str(g31)]) %information for sideband instabilities figure(7) kap=0:0.01:1; br2=br(ind3,ind2); bi2=bi(ind3,ind2); benj2=benj(ind3,ind2); ar2=Dr1*sqrt(0.1); %g1-gdat(ind)); %for a SW c_r participates in the formula, for a TW set it to zero %cr2SW=0; cr2SW=cr(ind3,ind2); %growthrates of sideband modes plot(kap,sig2) %print the most unstable perturbation: kap1=sqrt(-benj2.*ar2.*br2.^2./(br2.^2+bi2.^2)./dip1.^2./(br2+cr2SW)) maxsig=-kap1.^2./br2.*(benj2+dip1.^2.*(br2+cr2SW)./(2*ar2.*br2.^2).*(br2.^2+bi2.^2).*kap1.^2) growthtime=1./maxsig %print fastest time for which perturbation becomes O(1) sigma_2(kappa) sig2=-kap.^2./br2.*(benj2+dip1.^2.*(br2+cr2SW)./(2*ar2.*br2.^2).*(br2.^2+bi2.^2).*kap.^2); %kappa - the perturbation of central mode k_c %g3 = -0.2, g2 = -0.26 %most often using g2=0, g3=-0.2 %a BjF-unstable point in lin adapt model, v=10, g1a=3.6

%maximal perturbation kappa_1 depending on gamma_2 figure(4) br3=br(ind3,:); bi3=bi(ind3,:); benj3=benj(ind3,:); ar3=Dr1*sqrt(0.1); %g1-gdat(ind)); %cr3SW=0; cr3SW=cr(ind3,:); kap1=sqrt(-benj3.*ar3.*br3.^2./(br3.^2+bi3.^2)./dip1.^2./(br3+cr3SW)); plot(g2(ind3,:),kdat(ind)./kap1) hold on plot(g2(ind3,:),imag(kap1),r) hold off %to check that weve got the correct bands of kappa xlabel( the max destabilizing perturb (blue) and whether it is real (red must be at 0)) %we want to find a kappa_1 that is a divisor of k_c


9.5.3 Two-parameter plots


The diagrams in Figs. 5.5, 5.6, 5.10 show the type of the bifurcation when γ_1 = γ_c at the respective parameter points. The possibilities are: a Hopf bifurcation (when k_c = 0 and ω_c ≠ 0), beyond which the entire system domain oscillates synchronously; a Turing bifurcation (ω_c = 0, k_c ≠ 0), leading to static patches of activity; or a Turing–Hopf bifurcation (k_c, ω_c ≠ 0), giving rise to a travelling or standing wavetrain. Weakly nonlinear analysis can differentiate two types of Turing–Hopf instability: in the first case, which we call a TW-generator, only travelling waves (TWs) could emerge as stable solutions, while in the second, a TW/SW-generator, one could also obtain a standing wave (SW) by choosing the parameters of the model nonlinearity appropriately (Section 5.2.2). A diagram showing only the bifurcation type allows us to summarise information about the expected patterns with regard to two model parameters. Here we describe how to obtain these diagrams. The bifurcation type (Hopf, Turing or Turing–Hopf) is determined by the linear stability analysis of the model. The system switches from one type to another if the two lowest critical curves cross (see examples in Figs. 4.4, 4.5, 5.8), since the instability is always given by the curve with smallest γ_c. Thus, by tracking the codimension-two points where two critical curves cross, i.e. two different sets of eigenvalues have zero real part for the same bifurcation parameter γ_1, we will determine the borders of the dynamics types. To do this, for any of the linear stability codes in Appendix 9.1 we duplicate all the lines, replacing the variables k, om with a second set k0, om0. We attach zero to the name of each duplicated expression to avoid confusion. The variables g and nu participate in both sets of lines, since the value of γ_c and ν = 0 should be equal for the two eigenmodes. We extend the system of Listing 9.1.1 as follows:
Code listing 9.5.11, in XPP format:
g1=dGom*dHk-dHom*dGk
nu=eG
om=eH
k=nu
om0=eG0/(k^2-k0^2)
k0=eH0
v=dGom0*dHk0-dHom0*dGk0
#v=k0

The first four lines find a set of destabilising eigenvalues, as do the next three lines. The latter, however, fix a value for an additional model parameter (in this case the axonal conduction velocity v). Dividing one of the equations by k_c² − k_c0² ensures that XPP will capture two different eigenmodes (we could also use ω_c² − ω_c0²). If we expect one of the instabilities to be Hopf, we can replace the last line simply with the commented v=k0. Finally, as in Appendix 9.1, we track the stationary points of this ODE system in XPP/AUTO with respect to another parameter. Of the 1D models that we investigate in this thesis, only a few are interesting enough to warrant two-parameter plots. One is the model with both axonal and dendritic delays, while the others combine a delay with an adaptation mechanism.

Another possibility is that a single eigenmode changes its character as some parameter is varied. For example, the critical wavenumber of a Turing–Hopf bifurcation might reach zero (k_c → 0), making it a Hopf bifurcation. Or, a pair of critical eigenvalues that are imaginary (complex conjugated, λ = ±iω) might pass through ω = 0 to become two real eigenvalues. In this case, a Turing–Hopf instability transforms into a Turing instability. This is the so-called Takens–Bogdanov bifurcation (Section 5.3.2). These are codimension-two points, that is, we need to tune two system parameters to find them: one to find where the eigenvalue character changes, the other to ensure that this eigenvalue has zero real part. XPP/AUTO can recognise and track this type of point as branch points (BP) within the linear stability codes from Appendix 9.1. A point where k_c becomes zero is also found by the code below, because there f = g = 0.

To find the border between a TW- and a TW/SW-generator in the case of Turing–Hopf instability, we have to determine the points where f = g, with f, g defined by (5.2.4) (see also Appendix 9.5.1). The codes below use a mixture between manual separation of the real and imaginary parts and automated expansion by Maple where possible. The Listings' contents should be merged into the appropriate linear stability code in Appendix 9.1, which ensures that the system is at the instability point. The following is the code for the axonal delay model with an adaptation term:
Code listing 9.5.12, in XPP format:
#calculating the dr_i^mn expressions, real and imaginary parts #with arguments (i*m*omega_c, n*k_c) Qr(momv)=1-v^2*momv^2/alpha^2 Qi(momv)=2*v*momv/alpha PDr(nk,momv)=momv^4+(-2*nk^2-6)*momv^2+1+nk^4+2*nk^2




PDi(nk,momv)=-4*momv^3+(4*nk^2+4)*momv VDr(nk,momv)=(-3+r)*momv^2+1-r+nk^2*(1+r) VDi(nk,momv)=-momv^3+(3-2*r+nk^2)*momv QPDr(nk,momv)=Qr(momv)*PDr(nk,momv)-Qi(momv)*PDi(nk,momv) QPDi(nk,momv)=Qi(momv)*PDr(nk,momv)+Qr(momv)*PDi(nk,momv) #the derivative for D is needed only at m = 1, n = 1 QPDdlamr=2*((10*v+5*alpha)*om^4+(-6*k^2*v^2*alpha-6*v*alpha^2-6*k^2*v^3-6*v^318*v^2*alpha)*om^2+v^4*alpha+k^4*v^4*alpha+2*k^2*v^4*alpha+ 2*alpha^2*k^2*v^3+2*alpha^2*v^3)/(alpha^2*v^3)/v QPDdlami=2*om*(3*om^4+(-2*alpha^2-12*v^2-16*alpha*v-4*k^2*v^2)*om^2+8*k^2*v^3*alpha+2*k^2*v^4+ 8*alpha*v^3+v^4+k^4*v^4+2*alpha^2*k^2*v^2+6*alpha^2*v^2)/(alpha^2*v^3)/v VDdlamr=-(-3*v^2+3*om^2+2*r*v^2-k^2*v^2)/v^3 VDdlami=-2*om*(-3+r)/v^2 #function for real and imaginary parts of a division operation factr(wr,wi,zr,zi)=(wr*zr+wi*zi)/(zr^2+zi^2) facti(wr,wi,zr,zi)=(wi*zr-wr*zi)/(zr^2+zi^2) #the adaptation term PDa(nk)=1+sigma^2*nk^2 QQar(momv)=Qr(momv)-Qi(momv)*v*momv/beta QQai(momv)=Qi(momv)+Qr(momv)*v*momv/beta zetar(nk,momv)=factr(1,0,QQar(momv),QQai(momv))/PDa(nk) zetai(nk,momv)=facti(1,0,QQar(momv),QQai(momv))/PDa(nk) #derivative by i*omega of 1/Q*Q_a dzetar=alpha^2*beta*(-2*alpha^3*beta^3+2*alpha^3*beta*om^2+6*alpha*beta^3*om^2+ 10*alpha*beta*om^4+6*om^2*alpha^2*beta^2+6*om^4*alpha^2-om^4*beta^2-3*om^6beta^2*alpha^4+om^2*alpha^4)/((alpha^2+om^2)^3*(beta^2+om^2)^2)/PDa(k) dzetai=2*alpha^2*beta*om*(2*alpha^3*beta^2-2*alpha*om^2*beta^2-4*alpha*om^4+3*beta^3*alpha^2+ 3*beta*alpha^2*om^2-beta^3*om^2-2*beta*om^4+beta*alpha^4)/ ((alpha^2+om^2)^3*(beta^2+om^2)^2)/PDa(k) #eta*K = VD / Q*PD etaKr(nk,momv)=-2*factr(VDr(nk,momv),VDi(nk,momv),QPDr(nk,momv),QPDi(nk,momv)) etaKi(nk,momv)=-2*facti(VDr(nk,momv),VDi(nk,momv),QPDr(nk,momv),QPDi(nk,momv)) #its derivative, missing a multiplication by etaK (it is below) etaKdlamr=factr(VDdlamr,VDdlami,VDr(k,om/v),VDi(k,om/v))factr(QPDdlamr,QPDdlami,QPDr(k,om/v),QPDi(k,om/v)) etaKdlami=facti(VDdlamr,VDdlami,VDr(k,om/v),VDi(k,om/v))facti(QPDdlamr,QPDdlami,QPDr(k,om/v),QPDi(k,om/v)) #replace the lines below with these for linear adaptation #ag=abs(g) #denomr=etaKdlamr-factr(dzetar,dzetai,1/ga+zetar(k,om/v),zetai(k,om/v)) #denomi=etaKdlami-facti(dzetar,dzetai,1/ga+zetar(k,om/v),zetai(k,om/v)) #Dr=factr(-1/ag,0,denomr,denomi) #Di=facti(-1/ag,0,denomr,denomi) #Cmnr(nk,momv)=factr(etaKr(nk,momv),etaKi(nk,momv),1-ag*etaKr(nk,momv)+ga*zetar(nk,momv), -ag*etaKi(nk,momv)+g1a*zetai(nk,momv)) #Cmni(nk,momv)=facti(etaKr(nk,momv),etaKi(nk,momv),1-ag*etaKr(nk,momv)+ga*zetar(nk,momv), -ag*etaKi(nk,momv)+g1a*zetai(nk,momv)) #for nonlinear adaptation (zeta) w / z


drr(nk,momv)=etaKr(nk,momv)-ga*zetar(nk,momv)/PDa(nk) dri(nk,momv)=etaKi(nk,momv)-ga*zetai(nk,momv)/PDa(nk) drdlamr=etaKr(k,om/v)*etaKdlamr-etaKi(k,om/v)*etaKdlami-ga*dzetar drdlami=etaKi(k,om/v)*etaKdlamr+etaKr(k,om/v)*etaKdlami-ga*dzetai Dr=factr(-1/g1^2,0,drdlamr,drdlami) Di=facti(-1/g1^2,0,drdlamr,drdlami) Cmnr(nk,momv)=factr(drr(nk,momv),dri(nk,momv),1-g1*drr(nk,momv),-g1*dri(nk,momv)) Cmni(nk,momv)=facti(drr(nk,momv),dri(nk,momv),1-g1*drr(nk,momv),-g1*dri(nk,momv)) f=Dr*(2*Cmnr(0,0)+Cmnr(2*k,2*om/v))-Di*Cmni(2*k,2*om/v) g=Dr*(Cmnr(0,0)+Cmnr(2*k,0)+Cmnr(0,2*om/v))-Di*Cmni(0,2*om/v) #this finds places where f,g blow up #v=dGom^2+dHom^2 v=f-g g1=dGom*dHk-dHom*dGk nu=eG om=eH k=nu @ nmesh=80, meth=2rb, dsmax=0.1, parmax=15 #par v=10 par g1a=1 par sigma=0, r=1, alpha=1, beta=1 init nu=0, om=2.8079, k=1.0274, g1=18.8462 done

The following code should be merged with Listing 9.1.3, for the model with axo-dendritic delays (no adaptation included):
Code listing 9.5.13, in XPP format:
#real and imaginary part of division w / z factr(wr,wi,zr,zi)=(wr*zr+wi*zi)/(zr^2+zi^2) facti(wr,wi,zr,zi)=(wi*zr-wr*zi)/(zr^2+zi^2) #these had to be retyped to include an argument Qr(mom)=1-mom^2/alpha^2 Qi(mom)=2*mom/alpha sqr(mom)=sqrt(sqrt(iota^2+mom^2/cD^2))*cos(atan(mom/cD/iota)/2) sqi(mom)=sqrt(sqrt(iota^2+mom^2/cD^2))*sin(atan(mom/cD/iota)/2) Er(mom)=cos(sqi(mom)*z0) Ei(mom)=sin(sqi(mom)*z0) ar(mom)=1+phi*sqr(mom) ai(mom)=mom*b+phi*sqi(mom) ar2(mom)=1-mom^2*b^2+iota*phi^2+2*phi*(sqr(mom)-mom*b*sqi(mom)) ai2(mom)=2*mom*b+mom/cD*phi^2+2*phi*(sqi(mom)+mom*b*sqr(mom)) PDr(nk,mom)=(ar2(mom)+nk^2)^2-ai2(mom)^2 PDi(nk,mom)=2*(ar2(mom)+nk^2)*ai2(mom) VDr(nk,mom)=nk^2-ar2(mom)+ar(mom)*(ar2(mom)+nk^2)-ai(mom)*ai2(mom) VDi(nk,mom)=-ai2(mom)+ai(mom)*(ar2(mom)+nk^2)+ar(mom)*ai2(mom)


Kr(nk,mom)=factr(VDr(nk,mom),VDi(nk,mom),PDr(nk,mom),PDi(nk,mom)) Ki(nk,mom)=facti(VDr(nk,mom),VDi(nk,mom),PDr(nk,mom),PDi(nk,mom)) lampartr(mom)=Qr(mom)*sqr(mom)*Er(mom)-Qr(mom)*sqi(mom)*Ei(mom)-Qi(mom)*sqr(mom)*Ei(mom)Qi(mom)*sqi(mom)*Er(mom) lamparti(mom)=-Qi(mom)*sqi(mom)*Ei(mom)+Qi(mom)*sqr(mom)*Er(mom)+Qr(mom)*sqi(mom)*Er(mom)+ Qr(mom)*sqr(mom)*Ei(mom) etaKr(nk,mom)=-2*factr(Kr(nk,mom),Ki(nk,mom),lampartr(mom),lamparti(mom))/(cD*exp(sqr(mom)*z0)) etaKi(nk,mom)=-2*facti(Kr(nk,mom),Ki(nk,mom),lampartr(mom),lamparti(mom))/(cD*exp(sqr(mom)*z0)) #the following is for etaKdlam which is evaluated only at m=1, n=1 so we reuse code #from the unpublished half of the script (gamma-axodendrites.ode) derivsumr=factr(reVDdlam,imVDdlam,reVD,imVD)-factr(rePDdlam,imPDdlam,rePD,imPD)factr(relamdlam,imlamdlam,relampart,imlampart) derivsumi=facti(reVDdlam,imVDdlam,reVD,imVD)-facti(rePDdlam,imPDdlam,rePD,imPD)facti(relamdlam,imlamdlam,relampart,imlampart) etaKdlamr=etaKr(k,om)*derivsumr-etaKi(k,om)*derivsumi etaKdlami=etaKi(k,om)*derivsumr+etaKr(k,om)*derivsumi Dr=factr(-1/g1^2,0,etaKdlamr,etaKdlami) Di=facti(-1/g1^2,0,etaKdlamr,etaKdlami) Cmnr(nk,mom)=factr(etaKr(nk,mom),etaKi(nk,mom),1-g1*etaKr(nk,mom),-g1*etaKi(nk,mom)) Cmni(nk,mom)=facti(etaKr(nk,mom),etaKi(nk,mom),1-g1*etaKr(nk,mom),-g1*etaKi(nk,mom)) f=Dr*(2*Cmnr(0,0)+Cmnr(2*k,2*om))-Di*Cmni(2*k,2*om) g=Dr*(Cmnr(0,0)+Cmnr(2*k,0)+Cmnr(0,2*om))-Di*Cmni(0,2*om) #b=1/v phi=f-g g1=(dGom*dHk-dHom*dGk)*2*k nu=eG om=eH k=nu @ nmesh=80, meth=2rb, dsmax=0.1, parmax=8 #par phi=0.15 par b=0 par z0=1, cD=1, iota=1, alpha=1 done

9.6

Linear analysis of neural fields in 2D

In Chapter 6 we study neural fields defined on the plane R². The linear analysis generally leads to a dispersion relation connecting the bifurcation parameter γ = f′(h_ss), the wavevector k = (k_x, k_y), and the eigenvalue λ = ν + iω. Solving the dispersion relation E = 0 at the instability point would give us a critical value of the bifurcation parameter γ_c and a dispersion surface ν(k_x, k_y) which touches the plane ν = 0 from below with its maximum when γ = γ_c. The point of contact gives us the linear properties of the instability, fixing the critical wavevector k_c = (k_c^x, k_c^y) and the critical frequency ω_c.

For a model of two populations (one excitatory and one inhibitory, denoted by the indices E and I) we have to solve the dispersion relation (6.2.3),
$$E(\mathbf{k}, \lambda; \gamma) = \big(1 - \gamma\,\eta_I \hat G_I\big)\big(1 - \gamma\,\eta_E \hat G_E\big) - \gamma^2\, \eta_E \hat G_E\, \eta_I \hat G_I = 0.$$
Here we have done the identification of parameters used in Section 6.2, which led to η_ab = η_b, G_ab = G_b and γ_a = γ, for a, b ∈ {E, I}. The expressions that participate in E(k, λ; γ) are the Laplace transform η_ab = η_ab(λ) of the synaptic filter (3.1.9) and the 3D Fourier transform Ĝ_ab = Ĝ_ab(k, iω) of the generalised synaptic connectivity (6.1.5). They are given below for each of the models we look at.

9.6.1 Isotropic neural field models


In Section 6.2 we study a model with rotationally symmetric connectivity function G(|r|, t), i.e. the model is isotropic. This leads to a dispersion relation where only the absolute value of the wavevector, k = |k|, participates. In this case E = 0 is a one-dimensional complex algebraic equation for λ(k), which is solved just as described in Section 4.5 and Appendix 9.1 for 1D neural fields. (For a discussion of an anisotropic model refer below to Appendix 9.6.3.) We have again the XPP system given in Listing 9.1.1; however, the determinant (6.2.3) is more complicated to set up:
Code listing 9.6.1, in XPP format:
gI=gE wII=wEI #calculating the equation (Qi-gi*Gi)*(Qe-ge*Ge)-ge*gi*Ge*Gi=0 eqEr=Qr(alphaE)-gE*Gr(sigmaE) eqEi=Qi(alphaE)-gE*Gai(sigmaE) eqIr=Qr(1)-gI*wII*Gr(1) eqIi=Qi(1)-gI*wII*Gai(1) eG=eqIr*eqEr-eqIi*eqEi-gE*gI*wEI*(Gr(sigmaE)*Gr(1)-Gai(sigmaE)*Gai(1)) eH=eqIi*eqEr+eqIr*eqEi-gE*gI*wEI*(Gai(sigmaE)*Gr(1)+Gr(sigmaE)*Gai(1)) #and its derivatives (Qi-gi*Gi)*(Qe-ge*Ge)+(Qi-gi*Gi)*(Qe-ge*Ge)-ge*gi*(Gi*Ge+Gi*Ge)=0 #by omega and by k dGom=(Qdomr(1)-gI*wII*Gdomr(1))*eqEr-(Qdomi(1)-gI*wII*Gdomi(1))*eqEi +eqIr*(Qdomr(alphaE)-gE*Gdomr(sigmaE))-eqIi*(Qdomi(alphaE)-gE*Gdomi(sigmaE)) -gE*gI*wEI*(Gdomr(sigmaE)*Gr(1)-Gdomi(sigmaE)*Gai(1)+Gr(sigmaE)*Gdomr(1)-Gai(sigmaE)*Gdomi(1)) dHom=(Qdomi(1)-gI*wII*Gdomi(1))*eqEr+(Qdomr(1)-gI*wII*Gdomr(1))*eqEi +eqIi*(Qdomr(alphaE)-gE*Gdomr(sigmaE))+eqIr*(Qdomi(alphaE)-gE*Gdomi(sigmaE)) -gE*gI*wEI*(Gdomi(sigmaE)*Gr(1)+Gdomr(sigmaE)*Gai(1)+Gai(sigmaE)*Gdomr(1)+Gr(sigmaE)*Gdomi(1))



dGk=-gI*wII*(Gdkr(1)*eqEr-Gdki(1)*eqEi)-gE*(eqIr*Gdkr(sigmaE)-eqIi*Gdki(sigmaE)) -gE*gI*wEI*(Gdkr(sigmaE)*Gr(1)-Gdki(sigmaE)*Gai(1)+Gr(sigmaE)*Gdkr(1)-Gai(sigmaE)*Gdki(1)) dHk=-gI*wII*(Gdki(1)*eqEr+Gdkr(1)*eqEi)-gE*(eqIi*Gdkr(sigmaE)+eqIr*Gdki(sigmaE)) -gE*gI*wEI*(Gdki(sigmaE)*Gr(1)+Gdkr(sigmaE)*Gai(1)+Gai(sigmaE)*Gdkr(1)+Gr(sigmaE)*Gdki(1)) #the formal ODE equations gE=dGom*dHk-dHom*dGk nu=eG om=eH k=nu @ nmesh=80, meth=2rb, dsmax=0.1, parmax=12 par v=1, wEI=-4, sigmaE=2, alphaE=1 done

Here we have pre-set w_EE = w_IE = w_E = 1, α_I = 1 and σ_I = 1. The functions used above are re-defined for each isotropic model. The function Q(a) calculates the inverse synaptic filter η̂_a^{-1}, which in all models is
$$Q_a(\lambda) = \Big(1 + \frac{\lambda}{a}\Big)^2, \qquad a \in \{\alpha_E, 1\}.$$

The Fourier transform of the connectivity for the integral model is (6.1.7),
$$\hat G_a(k, \lambda) = \frac{A_a(\lambda)}{\big(A_a(\lambda)^2 + k^2\big)^{3/2}}.$$
For the long-wavelength model it is (6.1.8), namely,
$$\hat G^L_a(k, \lambda) = \frac{1}{A_a(\lambda)^2 + \tfrac{3}{2} k^2}.$$
For the Bessel model, (6.1.11),
$$\hat G^B_a(k, \lambda) = \frac{4}{3}\left[\frac{1}{A_a(\lambda)^2 + k^2} - \frac{1}{4 A_a(\lambda)^2 + k^2}\right].$$
Everywhere, the expression A(a) stands for
$$A_a(\lambda) = \frac{1}{a} + \frac{\lambda}{v}, \qquad a \in \{\sigma_E, 1\}.$$
Here we will list the code only for the integral model (6.1.2):
Code listing 9.6.2, in XPP format:
#Q=(1+lam/a)^2 #A^2=(1/a+lam/v)^2 Qr(a)=(1+nu/a)^2-om^2/a^2


Qi(a)=2*(1+nu/a)*om/a A2r(a)=(1/a+nu/v)^2-om^2/v^2 A2i(a)=2*(1/a+nu/v)*om/v #R - numerator, P - denumenator of the propagators transform - Ghat= R/P #R=A, P=(A^2+k^2)*sqrt(A^2+k^2) Rr(a)=1/a+nu/v Ri(a)=om/v factr(wr,wi,zr,zi)=(wr*zr+wi*zi)/(zr^2+zi^2) facti(wr,wi,zr,zi)=(wi*zr-wr*zi)/(zr^2+zi^2) modz(a)=sqrt((A2r(a)+k^2)^2+A2i(a)^2) argz(a)=sign(A2i(a))*acos((A2r(a)+k^2)/modz(a)) sqr(a)=sqrt(modz(a))*cos(argz(a)/2) sqi(a)=sqrt(modz(a))*sin(argz(a)/2) Pr(a)=(A2r(a)+k^2)*sqr(a)-A2i(a)*sqi(a) Pai(a)=(A2r(a)+k^2)*sqi(a)+A2i(a)*sqr(a) Gr(a)=factr(Rr(a),Ri(a),Pr(a),Pai(a)) Gai(a)=facti(Rr(a),Ri(a),Pr(a),Pai(a)) #derivatives by omega #Q=2*i*(1+lam/a)/a #(A^2)=2*i*(1/a+lam/v)/v Qdomi(a)=2/a*(1+nu/a) Qdomr(a)=-2/a*om/a A2domi(a)=2/v*(1/a+nu/v) A2domr(a)=-2/v*om/v #R=A, P=3/2*sqrt(A^2+k^2)*(A^2) Rdomr(a)=0 Rdomi(a)=1/v Pdomr(a)=3/2*(A2domr(a)*sqr(a)-A2domi(a)*sqi(a)) Pdomi(a)=3/2*(A2domr(a)*sqi(a)+A2domi(a)*sqr(a)) #G=G*(R/R-P/P) RdomRr(a)=factr(Rdomr(a),Rdomi(a),Rr(a),Ri(a)) RdomRi(a)=facti(Rdomr(a),Rdomi(a),Rr(a),Ri(a)) PdomPr(a)=factr(Pdomr(a),Pdomi(a),Pr(a),Pai(a)) PdomPai(a)=facti(Pdomr(a),Pdomi(a),Pr(a),Pai(a)) Gdomr(a)=Gr(a)*(RdomRr(a)-PdomPr(a))-Gai(a)*(RdomRi(a)-PdomPai(a)) Gdomi(a)=Gai(a)*(RdomRr(a)-PdomPr(a))+Gr(a)*(RdomRi(a)-PdomPai(a)) #derivatives by k #(1/P)= -3/2 * (A^2+k^2)^(-5/2)*2*k = -3*k / [ (A^2+k^2)^2 * sqrt(A^2+k^2) ] #G=R*(1/P) squarer(a)=(A2r(a)+k^2)^2-A2i(a)^2 squarei(a)=2*(A2r(a)+k^2)*A2i(a) PdkP2r(a)=-factr(3*k,0,squarer(a)*sqr(a)-squarei(a)*sqi(a),squarer(a)*sqi(a)+squarei(a)*sqr(a)) PdkP2i(a)=-facti(3*k,0,squarer(a)*sqr(a)-squarei(a)*sqi(a),squarer(a)*sqi(a)+squarei(a)*sqr(a)) Gdkr(a)=Rr(a)*PdkP2r(a)-Ri(a)*PdkP2i(a) Gdki(a)=Ri(a)*PdkP2r(a)+Rr(a)*PdkP2i(a)


This code is combined with Listing 9.6.1 to complete the .ode file for finding instabilities of the integral model. It was used to generate the plots in Figures 6.3 and 6.4. Here we discussed the manual separation of the real and imaginary parts of E = 0. We have validated these scripts by comparing the results also with Maple-expanded scripts along the lines of Appendix 9.1.1.

9.6.2 The optimal model


We left the optimal model (6.1.17) for a separate discussion, although it fits perfectly in the framework discussed above. The transform of its connectivity is (6.1.16),
$$\hat G^O_a(k, \lambda) = \frac{1}{\varepsilon_1^2 \varepsilon_2^2 a^2}\;\frac{1 + \dfrac{2\varepsilon_1\varepsilon_2 a}{\varepsilon_1 + \varepsilon_2}\,\dfrac{\lambda}{v}}{\big(A_{a;\varepsilon_1}(\lambda)^2 + k^2\big)\big(A_{a;\varepsilon_2}(\lambda)^2 + k^2\big)},$$
with A_{a;ε_i}(λ) = 1/(ε_i a) + λ/v, a ∈ {σ_E, 1}. In this case we would like to have some additional code which allows us to work with ε_{1,2} as free parameters, to adjust the critical curves to some fixed values (given by the critical curves of the integral model). Since we have only two new parameters, we could fit only

two points of the critical curves. One possibility we explored was to pick two speeds, for example v_1 = 6 and v_2 = 10, and require that the Turing–Hopf critical curve passes through the bifurcation values taken from the integral model, γ_c(v_1) ≈ 7.26 and γ_c(v_2) ≈ 10.68. This is coded as follows:

Code listing 9.6.3, in XPP format:

eps1=(gE-10.68)*(gE-7.26)
v=(v-10)*(v-6)
gE=dGom(k,om)*dHk(k,om)-dHom(k,om)*dGk(k,om)
k=nu
nu=eG(k,om)
om=eH(k,om)
@ nmesh=80, meth=2rb, dsmax=0.1, parmax=4
par eps2=0.6, sigmaE=2, wEI=-4, alphaE=1
done

Tracking fixed points of this system with XPP/AUTO gives isolines in the (ε_1, ε_2)-plane where one of the equalities γ_c(v_1) = 7.26 or γ_c(v_2) = 10.68 is satisfied. (Care has to be taken to manually exclude the cross-pairs, as well as crossings of different bifurcation branches; those would also satisfy the above conditions.) In the ideal case two isolines would cross at some point (ε_1, ε_2), signifying a critical curve passing through both desired points. Since the shapes of the critical curves are not arbitrary, two matching points would probably lead to an excellent match between the models. Unfortunately, we could not find values of ε_{1,2} or v_{1,2} where this happens.

Having to resort only to a match by one point, we decided to pursue the point γ_c(v*) = γ_c* where the Hopf and the Turing–Hopf curves cross. That is, to have both v* and γ_c* match for the two models. From the integral model we have (v*, γ_c*) ≈ (6.658, 8.446). The code we used to find a nearby point in the optimal model is:
model is:

Code listing 9.6.4, in XPP format:


eps1=dGom(k0,om0)*dHk(k0,om0)-dHom(k0,om0)*dGk(k0,om0)
gE=dGom(k,om)*dHk(k,om)-dHom(k,om)*dGk(k,om)
k=nu
nu=eG(k,om)
om=eH(k,om)
k0=eG(k0,om0)
om0=eH(k0,om0)/abs(k-k0)
@ nmesh=80, meth=2rb, dsmax=0.1, parmax=4
par eps2=0.6, v=6.658, sigma=2, wEI=-4, alpha=1
#init gE=8.446, nu=0, om=3.223, k=0.984, om0=4.462, k0=0, eps1=0.545

Here we find two instability points with different k and k0 but with the same bifurcation parameter γ_c; this is achieved by dividing by |k_c − k_c0| in the code. This would be the exchange point between the Hopf and Turing–Hopf instabilities (similar points we find also in Appendix 9.5.3). Having v as a parameter set to 6.658 fixes one of the ε's. Then we can track the value of γ_c with the other and find where it lies closest to the required 8.446. It turns out there is a unique pair (ε_1, ε_2) ≈ (0.545, 0.6) where the point is closest. We used the pair (0.6, 0.6) for the plot in Figure 6.4. Here is the code that completes both Listings 9.6.3 and 9.6.4:
Code listing 9.6.5, in XPP format:
wII=wEI gI=gE #eps2=eps1 factr(wr,wi,zr,zi)=(wr*zr+wi*zi)/(zr^2+zi^2) facti(wr,wi,zr,zi)=(wi*zr-wr*zi)/(zr^2+zi^2) # temporal part: Q=(1+lam/a,k,om)^2


#A^2=(1/a/eps+lam/v)^2


Qr(a,k,om)=(1+nu/a)^2-om^2/a^2 Qi(a,k,om)=2*(1+nu/a)*om/a A2r(a,k,om,e)=(1/a/e+nu/v)^2-om^2/v^2 A2i(a,k,om,e)=2*(1/a/e+nu/v)*om/v #R - numerator, P - denumenator of the propagators transform - Ghat= R/P #R=1/eps1^2*eps2^2*sigma^2 *( 1+2*eps1*eps2*sigma/(eps1+eps2) *lam/v ), #P=(Aeps1^2+k^2)*(Aeps2^2+k^2) Rr(a,k,om)=1/(eps1*eps2*a)^2*(1+2*eps1*eps2*a/(eps1+eps2)*nu/v) Ri(a,k,om)=2/(eps1*eps2*a)^2*eps1*eps2*a/(eps1+eps2)*om/v Pr(a,k,om)=(A2r(a,k,om,eps1)+k^2)*(A2r(a,k,om,eps2)+k^2)-A2i(a,k,om,eps1)*A2i(a,k,om,eps2) Pai(a,k,om)=(A2r(a,k,om,eps1)+k^2)*A2i(a,k,om,eps2)+A2i(a,k,om,eps1)*(A2r(a,k,om,eps2)+k^2) Gr(a,k,om)=factr(Rr(a,k,om),Ri(a,k,om),Pr(a,k,om),Pai(a,k,om)) Gai(a,k,om)=facti(Rr(a,k,om),Ri(a,k,om),Pr(a,k,om),Pai(a,k,om)) #derivatives by omega #Q=2*i*(1+lam/a,k,om)/a #(A^2)=2*i*(1/a+lam/v)/v Qdomi(a,k,om)=2/a*(1+nu/a) Qdomr(a,k,om)=-2/a*om/a A2domi(a,k,om,e)=2/v*(1/a/e+nu/v) A2domr(a,k,om,e)=-2/v*om/v #R=1/eps1^2*eps2^2*sigma^2 * 2*eps1*eps2*sigma/(eps1+eps2)/v ), #P=(Aeps1^2)*(Aeps2^2+k^2)+(Aeps1^2+k^2)*(Aeps2^2) Rdomr(a,k,om)=0 Rdomi(a,k,om)=2/(eps1*eps2*a)^2*eps1*eps2*a/(eps1+eps2)/v Pdomr(a,k,om)=A2domr(a,k,om,eps1)*(A2r(a,k,om,eps2)+k^2)-A2domi(a,k,om,eps1)*A2i(a,k,om,eps2) +(A2r(a,k,om,eps1)+k^2)*A2domr(a,k,om,eps2)-A2i(a,k,om,eps1)*A2domi(a,k,om,eps2) Pdomi(a,k,om)=A2domi(a,k,om,eps1)*(A2r(a,k,om,eps2)+k^2)+A2domr(a,k,om,eps1)*A2i(a,k,om,eps2) +(A2r(a,k,om,eps1)+k^2)*A2domi(a,k,om,eps2)+A2i(a,k,om,eps1)*A2domr(a,k,om,eps2) #G=G*(R/R-P/P) RdomRr(a,k,om)=factr(Rdomr(a,k,om),Rdomi(a,k,om),Rr(a,k,om),Ri(a,k,om)) RdomRi(a,k,om)=facti(Rdomr(a,k,om),Rdomi(a,k,om),Rr(a,k,om),Ri(a,k,om)) PdomPr(a,k,om)=factr(Pdomr(a,k,om),Pdomi(a,k,om),Pr(a,k,om),Pai(a,k,om)) PdomPai(a,k,om)=facti(Pdomr(a,k,om),Pdomi(a,k,om),Pr(a,k,om),Pai(a,k,om)) Gdomr(a,k,om)=Gr(a,k,om)*(RdomRr(a,k,om)-PdomPr(a,k,om))-Gai(a,k,om)*(RdomRi(a,k,om)-PdomPai(a,k,om)) Gdomi(a,k,om)=Gai(a,k,om)*(RdomRr(a,k,om)-PdomPr(a,k,om))+Gr(a,k,om)*(RdomRi(a,k,om)-PdomPai(a,k,om)) #derivatives by k #(1/P)= -2*k* (Aeps1^2+Aeps2^2+2*k^2) / P^2 #G=R*(1/P) PdkP2r(a,k,om)=-2*k*factr(A2r(a,k,om,eps1)+A2r(a,k,om,eps2)+2*k^2,A2i(a,k,om,eps1)+A2i(a,k,om,eps2), Pr(a,k,om)^2-Pai(a,k,om)^2,2*Pr(a,k,om)*Pai(a,k,om)) PdkP2i(a,k,om)=-2*k*facti(A2r(a,k,om,eps1)+A2r(a,k,om,eps2)+2*k^2,A2i(a,k,om,eps1)+A2i(a,k,om,eps2), Pr(a,k,om)^2-Pai(a,k,om)^2,2*Pr(a,k,om)*Pai(a,k,om)) Gdkr(a,k,om)=Rr(a,k,om)*PdkP2r(a,k,om)-Ri(a,k,om)*PdkP2i(a,k,om) Gdki(a,k,om)=Ri(a,k,om)*PdkP2r(a,k,om)+Rr(a,k,om)*PdkP2i(a,k,om) #calculating the equation (Qii-gi*Gii)*(Qee-ge*Gee)-ge*gi*Rei*Rie=0

eqEr(k,om)=Qr(alpha,k,om)-gE*Gr(sigma,k,om) eqEi(k,om)=Qi(alpha,k,om)-gE*Gai(sigma,k,om)


eqIr(k,om)=Qr(1,k,om)-gI*wII*Gr(1,k,om) eqIi(k,om)=Qi(1,k,om)-gI*wII*Gai(1,k,om) eG(k,om)=eqIr(k,om)*eqEr(k,om)-eqIi(k,om)*eqEi(k,om)-gE*gI*wEI*(Gr(sigma,k,om)*Gr(1,k,om) -Gai(sigma,k,om)*Gai(1,k,om)) eH(k,om)=eqIi(k,om)*eqEr(k,om)+eqIr(k,om)*eqEi(k,om)-gE*gI*wEI*(Gai(sigma,k,om)*Gr(1,k,om) +Gr(sigma,k,om)*Gai(1,k,om)) #and its derivatives by omega and k #(Qii-gi*Gii)*(Qee-ge*Gee)+(Qii-gi*Gii)*(Qee-ge*Gee)-ge*gi*(Rei*Rie+Rei*Rie)=0 dGom(k,om)=(Qdomr(1,k,om)-gI*wII*Gdomr(1,k,om))*eqEr(k,om)-(Qdomi(1,k,om) -gI*wII*Gdomi(1,k,om))*eqEi(k,om)+eqIr(k,om)*(Qdomr(alpha,k,om)-gE*Gdomr(sigma,k,om)) -eqIi(k,om)*(Qdomi(alpha,k,om)-gE*Gdomi(sigma,k,om))-gE*gI*wEI*(Gdomr(sigma,k,om)*Gr(1,k,om) -Gdomi(sigma,k,om)*Gai(1,k,om)+Gr(sigma,k,om)*Gdomr(1,k,om)-Gai(sigma,k,om)*Gdomi(1,k,om)) dHom(k,om)=(Qdomi(1,k,om)-gI*wII*Gdomi(1,k,om))*eqEr(k,om)+(Qdomr(1,k,om) -gI*wII*Gdomr(1,k,om))*eqEi(k,om)+eqIi(k,om)*(Qdomr(alpha,k,om)-gE*Gdomr(sigma,k,om)) +eqIr(k,om)*(Qdomi(alpha,k,om)-gE*Gdomi(sigma,k,om))-gE*gI*wEI*(Gdomi(sigma,k,om)*Gr(1,k,om) +Gdomr(sigma,k,om)*Gai(1,k,om)+Gai(sigma,k,om)*Gdomr(1,k,om)+Gr(sigma,k,om)*Gdomi(1,k,om)) dGk(k,om)=-gI*wII*(Gdkr(1,k,om)*eqEr(k,om)-Gdki(1,k,om)*eqEi(k,om))-gE*(eqIr(k,om)*Gdkr(sigma,k,om) -eqIi(k,om)*Gdki(sigma,k,om))-gE*gI*wEI*(Gdkr(sigma,k,om)*Gr(1,k,om) -Gdki(sigma,k,om)*Gai(1,k,om)+Gr(sigma,k,om)*Gdkr(1,k,om)-Gai(sigma,k,om)*Gdki(1,k,om)) dHk(k,om)=-gI*wII*(Gdki(1,k,om)*eqEr(k,om)+Gdkr(1,k,om)*eqEi(k,om))-gE*(eqIi(k,om)*Gdkr(sigma,k,om) +eqIr(k,om)*Gdki(sigma,k,om))-gE*gI*wEI*(Gdki(sigma,k,om)*Gr(1,k,om) +Gdkr(sigma,k,om)*Gai(1,k,om)+Gai(sigma,k,om)*Gdkr(1,k,om)+Gr(sigma,k,om)*Gdki(1,k,om))

9.6.3 An anisotropic neural field


In Section 6.3.1 we define a 2D neural field with a spatially anisotropic modulation of the connectivity. The modulation is a doubly-periodic function set up on a square lattice with wavevectors (generators of the dual lattice) k_1 = (2π/d)(1, 0) and k_2 = (2π/d)(0, 1). One obtains the PDE model as a system of four equations given by (6.3.4). The dispersion relation is the determinant (6.3.5) of Fourier transforms of these equations, which for a square lattice gives
$$[\mathcal{D}(\mathbf{k}, \lambda)]_{ab} = \delta_{ab} - \gamma_b\,\hat\eta_{ab}(\lambda) \int_{\mathbb{R}^2} \hat J_{ab}(\mathbf{q})\, \hat G_{ab}(|\mathbf{k}-\mathbf{q}|, i\omega)\, \mathrm{d}\mathbf{q}
= \delta_{ab} - \frac{\gamma_b\,\hat\eta_{ab}(\lambda)}{4} \sum_{j=1}^{4} \hat G_{ab}(|\mathbf{k}-\mathbf{k}_j|, i\omega), \qquad \mathbf{k}_{3,4} = -\mathbf{k}_{1,2}.$$
Here we used that the Fourier coefficients Ĵ_ab of the modulation are a sum of delta functions (Section 6.3.1). The first part of the XPP code is the same as in Listing 9.6.5, because above we obtained a sum of the unmodulated Ĝ_ab's (and we pick the optimal approximation in this case). We only need to restructure the expressions so that the square k² = |k|² is passed as a function argument. The determinant is then calculated by making four calls, adding or subtracting kgen = |k_j| = 2π/d suitably.

Code listing 9.6.6, in XPP format:

wII=wEI
gI=gE
eps2=eps1
factr(wr,wi,zr,zi)=(wr*zr+wi*zi)/(zr^2+zi^2)
facti(wr,wi,zr,zi)=(wi*zr-wr*zi)/(zr^2+zi^2)
# temporal part: Q=(1+lam/a)^2



#A^2=(1/a/eps+lam/v)^2 Qr(a)=(1+nu/a)^2-om^2/a^2 Qi(a)=2*(1+nu/a)*om/a A2r(a,e)=(1/a/e+nu/v)^2-om^2/v^2 A2i(a,e)=2*(1/a/e+nu/v)*om/v #R - numerator, P - denumenator of the propagators transform - Ghat= R/P #R=1/eps1^2*eps2^2*sigma^2 *( 1+2*eps1*eps2*sigma/(eps1+eps2) *lam/v ), #P=(Aeps1^2+k^2)*(Aeps2^2+k^2) Rr(a)=1/(eps1*eps2*a)^2*(1+2*eps1*eps2*a/(eps1+eps2)*nu/v) Ri(a)=2/(eps1*eps2*a)^2*eps1*eps2*a/(eps1+eps2)*om/v Pr(a,k2)=(A2r(a,eps1)+k2)*(A2r(a,eps2)+k2)-A2i(a,eps1)*A2i(a,eps2) Pai(a,k2)=(A2r(a,eps1)+k2)*A2i(a,eps2)+A2i(a,eps1)*(A2r(a,eps2)+k2) Gr(a,k2)=factr(Rr(a),Ri(a),Pr(a,k2),Pai(a,k2)) Gai(a,k2)=facti(Rr(a),Ri(a),Pr(a,k2),Pai(a,k2)) #derivatives by omega #Q=2*i*(1+lam/a)/a #(A^2)=2*i*(1/a+lam/v)/v Qdomi(a)=2/a*(1+nu/a) Qdomr(a)=-2/a*om/a A2domi(a,e)=2/v*(1/a/e+nu/v) A2domr(a,e)=-2/v*om/v #R=1/eps1^2*eps2^2*sigma^2 * 2*eps1*eps2*sigma/(eps1+eps2)/v ), #P=(Aeps1^2)*(Aeps2^2+k^2)+(Aeps1^2+k^2)*(Aeps2^2) Rdomr(a)=0 Rdomi(a)=2/(eps1*eps2*a)^2*eps1*eps2*a/(eps1+eps2)/v Pdomr(a,k2)=A2domr(a,eps1)*(A2r(a,eps2)+k2)-A2domi(a,eps1)*A2i(a,eps2) +(A2r(a,eps1)+k2)*A2domr(a,eps2)-A2i(a,eps1)*A2domi(a,eps2) Pdomi(a,k2)=A2domi(a,eps1)*(A2r(a,eps2)+k2)+A2domr(a,eps1)*A2i(a,eps2) +(A2r(a,eps1)+k2)*A2domi(a,eps2)+A2i(a,eps1)*A2domr(a,eps2) RdomRr(a)=factr(Rdomr(a),Rdomi(a),Rr(a),Ri(a)) RdomRi(a)=facti(Rdomr(a),Rdomi(a),Rr(a),Ri(a)) PdomPr(a,k2)=factr(Pdomr(a,k2),Pdomi(a,k2),Pr(a,k2),Pai(a,k2)) PdomPai(a,k2)=facti(Pdomr(a,k2),Pdomi(a,k2),Pr(a,k2),Pai(a,k2)) Gdomr(a,k2)=Gr(a,k2)*(RdomRr(a)-PdomPr(a,k2))-Gai(a,k2)*(RdomRi(a)-PdomPai(a,k2)) Gdomi(a,k2)=Gai(a,k2)*(RdomRr(a)-PdomPr(a,k2))+Gr(a,k2)*(RdomRi(a)-PdomPai(a,k2))


#derivatives by k^2 #(1/P)= - (Aeps1^2+Aeps2^2+2*k^2) / P^2 #G=R*(1/P) PdkP2r(a,k2)=-factr(A2r(a,eps1)+A2r(a,eps2)+2*k2,A2i(a,eps1)+A2i(a,eps2), Pr(a,k2)^2-Pai(a,k2)^2,2*Pr(a,k2)*Pai(a,k2)) PdkP2i(a,k2)=-facti(A2r(a,eps1)+A2r(a,eps2)+2*k2,A2i(a,eps1)+A2i(a,eps2), Pr(a,k2)^2-Pai(a,k2)^2,2*Pr(a,k2)*Pai(a,k2)) Gdkr(a,k2)=Rr(a)*PdkP2r(a,k2)-Ri(a)*PdkP2i(a,k2) Gdki(a,k2)=Ri(a)*PdkP2r(a,k2)+Rr(a)*PdkP2i(a,k2)

In the remainder of the code each of the components of the wavevector k = (k_x, k_y) is an unknown variable, since the model is not isotropic. The dispersion surface ν(k_x, k_y) is not necessarily rotationally symmetric as it was in Appendix 9.6.1. The conditions for Turing instability remain the same: ν(k_c^x, k_c^y) = 0 and that point being a maximum on the surface. To determine properly the local maximum of a two-dimensional surface one should use the Sylvester criterion [280]; however, it involves the use of second derivatives. Since for us the function ν(k, γ) is given implicitly, it would be difficult to express these (remember the trouble we had with the first derivatives, Section 4.5). Instead, we prefer to calculate the gradients along the two easiest directions, the x- and y-axes, and use this system to find candidate points for local maxima. Then we check if they are true maxima manually, by plotting the surface ν(k). The system is
Code listing 9.6.7, in XPP format:
#the determinant sumr(a)=(Gr(a,(kx-kgen)^2+ky^2)+Gr(a,(kx+kgen)^2+ky^2)+Gr(a,(ky-kgen)^2+kx^2) +Gr(a,(ky+kgen)^2+kx^2))/4 sumi(a)=(Gai(a,(kx-kgen)^2+ky^2)+Gai(a,(kx+kgen)^2+ky^2)+Gai(a,(ky-kgen)^2+kx^2) +Gai(a,(ky+kgen)^2+kx^2))/4 sumdomr(a)=(Gdomr(a,(kx-kgen)^2+ky^2)+Gdomr(a,(kx+kgen)^2+ky^2)+Gdomr(a,(ky-kgen)^2+kx^2) +Gdomr(a,(ky+kgen)^2+kx^2))/4 sumdomi(a)=(Gdomi(a,(kx-kgen)^2+ky^2)+Gdomi(a,(kx+kgen)^2+ky^2)+Gdomi(a,(ky-kgen)^2+kx^2) +Gdomi(a,(ky+kgen)^2+kx^2))/4 sumdkxr(a)=(Gdkr(a,(kx-kgen)^2+ky^2)*(kx-kgen)+Gdkr(a,(kx+kgen)^2+ky^2)*(kx+kgen) +Gdkr(a,(ky-kgen)^2+kx^2)*kx+Gdkr(a,(ky+kgen)^2+kx^2)*kx)/2 sumdkxi(a)=(Gdki(a,(kx-kgen)^2+ky^2)*(kx-kgen)+Gdki(a,(kx+kgen)^2+ky^2)*(kx+kgen) +Gdki(a,(ky-kgen)^2+kx^2)*kx+Gdki(a,(ky+kgen)^2+kx^2)*kx)/2 sumdkyr(a)=(Gdkr(a,(kx-kgen)^2+ky^2)*ky+Gdkr(a,(kx+kgen)^2+ky^2)*ky+Gdkr(a,(ky-kgen)^2+kx^2)*(ky-kgen) +Gdkr(a,(ky+kgen)^2+kx^2)*(ky+kgen))/2 sumdkyi(a)=(Gdki(a,(kx-kgen)^2+ky^2)*ky+Gdki(a,(kx+kgen)^2+ky^2)*ky+Gdki(a,(ky-kgen)^2+kx^2)*(ky-kgen) +Gdki(a,(ky+kgen)^2+kx^2)*(ky+kgen))/2 eqEr=Qr(alpha)-gE*sumr(sigma) eqEi=Qi(alpha)-gE*sumi(sigma) eqIr=Qr(1)-gI*wII*sumr(1) eqIi=Qi(1)-gI*wII*sumi(1) #the equations


eG=eqIr*eqEr-eqIi*eqEi-gE*gI*wEI*(sumr(sigma)*sumr(1)-sumi(sigma)*sumi(1))
eH=eqIi*eqEr+eqIr*eqEi-gE*gI*wEI*(sumi(sigma)*sumr(1)+sumr(sigma)*sumi(1))
dGom=(Qdomr(1)-gI*wII*sumdomr(1))*eqEr-(Qdomi(1)-gI*wII*sumdomi(1))*eqEi+eqIr*(Qdomr(alpha)-gE*sumdomr(sigma))-eqIi*(Qdomi(alpha)-gE*sumdomi(sigma))-gE*gI*wEI*(sumdomr(sigma)*sumr(1)-sumdomi(sigma)*sumi(1)+sumr(sigma)*sumdomr(1)-sumi(sigma)*sumdomi(1))
dHom=(Qdomi(1)-gI*wII*sumdomi(1))*eqEr+(Qdomr(1)-gI*wII*sumdomr(1))*eqEi+eqIi*(Qdomr(alpha)-gE*sumdomr(sigma))+eqIr*(Qdomi(alpha)-gE*sumdomi(sigma))-gE*gI*wEI*(sumdomi(sigma)*sumr(1)+sumdomr(sigma)*sumi(1)+sumi(sigma)*sumdomr(1)+sumr(sigma)*sumdomi(1))
dGkx=-gI*wII*(sumdkxr(1)*eqEr-sumdkxi(1)*eqEi)-gE*(eqIr*sumdkxr(sigma)-eqIi*sumdkxi(sigma))-gE*gI*wEI*(sumdkxr(sigma)*sumr(1)-sumdkxi(sigma)*sumi(1)+sumr(sigma)*sumdkxr(1)-sumi(sigma)*sumdkxi(1))
dHkx=-gI*wII*(sumdkxi(1)*eqEr+sumdkxr(1)*eqEi)-gE*(eqIi*sumdkxr(sigma)+eqIr*sumdkxi(sigma))-gE*gI*wEI*(sumdkxi(sigma)*sumr(1)+sumdkxr(sigma)*sumi(1)+sumi(sigma)*sumdkxr(1)+sumr(sigma)*sumdkxi(1))
dGky=-gI*wII*(sumdkyr(1)*eqEr-sumdkyi(1)*eqEi)-gE*(eqIr*sumdkyr(sigma)-eqIi*sumdkyi(sigma))-gE*gI*wEI*(sumdkyr(sigma)*sumr(1)-sumdkyi(sigma)*sumi(1)+sumr(sigma)*sumdkyr(1)-sumi(sigma)*sumdkyi(1))
dHky=-gI*wII*(sumdkyi(1)*eqEr+sumdkyr(1)*eqEi)-gE*(eqIi*sumdkyr(sigma)+eqIr*sumdkyi(sigma))-gE*gI*wEI*(sumdkyi(sigma)*sumr(1)+sumdkyr(sigma)*sumi(1)+sumi(sigma)*sumdkyr(1)+sumr(sigma)*sumdkyi(1))
#kgen=2*pi/d
gE'=nu
nu'=eG
om'=eH
kx'=dGom*dHkx-dHom*dGkx
ky'=dGom*dHky-dHom*dGky
#c'=c-om/(kx^2+ky^2)
#attention: c equation would kill Hopf recognition
#ky'=kx^2+ky^2-k
#however |k| doesn't seem to work much as a parameterisation
#kd'=kd-(kgen-ky)
#to find radius of second mode around the lattice-directed wavenumber
@ nmesh=80, meth=2rb, dsmax=0.5, parmax=12
#par ky=1, gE=1
par v=5, kgen=6.28, sigma=2, wEI=-4, alpha=1, eps1=0.6
#par d=2
init gE=20.96101, om=3.455549, kx=0, ky=6.281968
done

Using this code we have plotted Figure 6.7. To construct the surfaces in Figure 6.6 we generated a series of 1D sections $\lambda(k_x)$ in XPP/AUTO (removing two equations from the system in Listing 9.6.7 and setting ky and gE as parameters) and combined them in MATLAB.
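Purely as an illustration of the final manual check described above (this is not part of the thesis tool-chain, and the surface below is a synthetic stand-in for the AUTO output), one can confirm that a candidate critical wavevector is a genuine local maximum by applying the Sylvester criterion with finite differences on the sampled surface. A minimal Python sketch:

import numpy as np

# Synthetic stand-in for a sampled dispersion surface lambda(kx, ky); on real
# data the 1D sections lambda(kx) exported for each ky would be stacked here.
kx = np.linspace(0.0, 12.0, 241)
ky = np.linspace(0.0, 12.0, 241)
KX, KY = np.meshgrid(kx, ky, indexing="ij")
LAM = -((KX - 6.3)**2 + 0.5*(KY - 6.3)**2) * np.exp(-0.01*(KX**2 + KY**2))

# candidate: the grid point with the largest value
i, j = np.unravel_index(np.argmax(LAM), LAM.shape)
dk = kx[1] - kx[0]
# second-derivative (Sylvester) test via central differences
f_xx = (LAM[i+1, j] - 2*LAM[i, j] + LAM[i-1, j]) / dk**2
f_yy = (LAM[i, j+1] - 2*LAM[i, j] + LAM[i, j-1]) / dk**2
f_xy = (LAM[i+1, j+1] - LAM[i+1, j-1] - LAM[i-1, j+1] + LAM[i-1, j-1]) / (4*dk**2)
is_max = (f_xx < 0) and (f_xx*f_yy - f_xy**2 > 0)
print(f"candidate (kx, ky) = ({kx[i]:.2f}, {ky[j]:.2f}), local maximum: {is_max}")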


9.7

2D neural field numerics in PDE framework

We derived several PDE systems in Section 6.1 that approximate the two-dimensional integral neural field with axonal delay (6.1.2). Their numerical solution is due to our co-author Dr Carlo Laing. In principle, it is possible to extend the code based on the spectral collocation method that we used in Appendix 9.3 to the 2D case, but the differentiation matrices become too large to work with on a PC. Instead, finite elements were used to approximate the spatial derivatives. Again, the equations had to be expanded in the time derivatives to give a normal (first-order) system of ODEs. The PDE systems which we simulated are the long-wavelength model (6.1.9),
$$\left(A_{a;1}^2 - \tfrac{3}{2}\nabla^2\right) Q_a u_a = w_0^a\,\psi_a,$$
the Bessel model (6.1.12),
$$\left(A_{a;1}^2 - \nabla^2\right)\left(4A_{a;1}^2 - \nabla^2\right) Q_a u_a = 4\,w_0^a A_{a;1}\,\psi_a,$$
and the optimal model (6.1.17),
$$\left(A_{a;1}^2 - \nabla^2\right)\left(A_{a;2}^2 - \nabla^2\right) Q_a u_a = w_0^a B_a\,\psi_a.$$
The operators are
$$Q_a = \left(1 + \frac{1}{\alpha_a}\frac{\partial}{\partial t}\right)^2, \qquad A_{a;\varepsilon} = \frac{1}{\sigma_a\varepsilon} + \frac{1}{v}\frac{\partial}{\partial t}, \qquad B_a = \frac{1}{\sigma_a^2\varepsilon_1^2\varepsilon_2^2}\left(1 + \frac{2\varepsilon_1\varepsilon_2\sigma_a}{(\varepsilon_1+\varepsilon_2)\,v}\frac{\partial}{\partial t}\right),$$
while $\psi_a$ is the nonlinearity with a sigmoid firing rate function (3.3.4): $\psi_b = f(u_b - h_b)$, with
$$f(u) = \frac{1}{1+\mathrm{e}^{-\beta u}}.$$
Here we have chosen two populations $a \in \{E, I\}$ and population output properties that are independent of the target population (see Section 6.2). Details on the implementation in MATLAB can be obtained from Carlo Laing.
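The production simulations are Carlo Laing's 2D finite-element MATLAB code and are not reproduced here. Purely to illustrate what "expanding in the time derivatives" means, the following minimal Python sketch treats a single population of the long-wavelength model on a 1D periodic grid; all parameter values, the grid, and the explicit Euler stepping are arbitrary choices for the illustration and are not taken from the thesis. The quartic temporal operator $A_{1}^2 Q$ is written out and the equation is stepped as a first-order system in $(u, u_t, u_{tt}, u_{ttt})$.

import numpy as np

# (A^2 - (3/2) d_xx) Q u = w0 f(u),  A = 1/sigma + (1/v) d_t,  Q = (1 + d_t/alpha)^2
sigma, v, alpha, w0, beta = 1.0, 5.0, 1.0, 2.0, 10.0   # illustrative values only
L, N = 20.0, 64
dx = L / N

def f(u):                       # sigmoid firing rate
    return 1.0 / (1.0 + np.exp(-beta * u))

def lap(u):                     # periodic second difference
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

# coefficients of the quartic polynomial (1/sigma + d_t/v)^2 (1 + d_t/alpha)^2
a, b, d = 1.0 / sigma, 1.0 / v, 1.0 / alpha
c0 = a**2
c1 = 2*a**2*d + 2*a*b
c2 = a**2*d**2 + 4*a*b*d + b**2
c3 = 2*a*b*d**2 + 2*b**2*d
c4 = b**2 * d**2

def rhs(y):
    """y = (u, u_t, u_tt, u_ttt); the PDE solved for the highest time derivative."""
    u, u1, u2, u3 = y
    q_lap = lap(u) + (2.0/alpha)*lap(u1) + lap(u2)/alpha**2   # d_xx acting on Q u
    u4 = (w0*f(u) + 1.5*q_lap - (c0*u + c1*u1 + c2*u2 + c3*u3)) / c4
    return np.array([u1, u2, u3, u4])

# a few explicit Euler steps from a small random perturbation, just to show the
# expanded system integrates like any other first-order ODE system
rng = np.random.default_rng(0)
y = np.array([0.01*rng.standard_normal(N), np.zeros(N), np.zeros(N), np.zeros(N)])
dt = 1e-3
for _ in range(1000):
    y = y + dt*rhs(y)
print("max |u| after t = 1:", float(np.abs(y[0]).max()))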

9.8

Methods for localised solutions

In this Appendix we give the implementation of the Evans and the Amari methods for establishing the stability of a bump steady state in a neural field with linear spike frequency adaptation. The steady state of (7.1.1) is (7.1.2),
$$u(x) = W_b(x - x_1) - W_b(x - x_2),$$
with
$$W_b(x) = W(x) - \gamma_a W_a(x) = \tfrac{1}{2}\left[1 - \mathrm{e}^{-|x|/\sigma} - \gamma_a\left(1 - \mathrm{e}^{-|x|/\sigma_a}\right)\right]\mathrm{sign}(x).$$
By differentiating the steady state $u(x)$ at the crossing points $x_i$ we have $u_x(x_1) = -u_x(x_2)$. The Evans function (7.1.5) simplifies to
$$1 + \frac{\lambda}{\alpha} = \frac{1}{c}\left[w_b(0) \pm w_b(\Delta)\right].$$
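In Listing 9.8.1 the two sign branches are handled together by working with the equivalent product form (writing $\lambda = \nu + i\omega$ and $\Delta = x_2 - x_1$):
$$\mathcal{E}(\lambda) = \left[w_b(0) - c\left(1 + \frac{\lambda}{\alpha}\right)\right]^2 - w_b(\Delta)^2, \qquad w_b(x) = w(x) - \frac{\gamma_a\,w_a(x)}{1+\lambda/\beta},$$
whose real and imaginary parts are the quantities ReEvans and ImEvans computed in the code, with $c = u_x(x_1)$ (the auxiliary ux1 in the listing).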

The zeroes of this equation give the stability eigenvalues. This is implemented in the following code.
Code listing 9.8.1, in XPP format:
#exponential connectivity
w(x)=exp(-abs(x)/sigma)/2/sigma
Wint(x)=(1-exp(-x/sigma))/2
#wizard hat
#w(x)=-w0*(1-abs(x))*exp(-abs(x))
#Wint(x)=-w0*x*exp(-abs(x))
#SFA exponential spread
wa(x)=exp(-abs(x)/sigmaa)/2/sigmaa
Waint(x)=(1-exp(-x/sigmaa))/2
factr(wr,wi,zr,zi)=(wr*zr+wi*zi)/(zr^2+zi^2)
facti(wr,wi,zr,zi)=(wi*zr-wr*zi)/(zr^2+zi^2)
wbr(x)=w(x)-ga*wa(x)*factr(1,0,1+nu/beta,om/beta)
wbi(x)=-ga*wa(x)*facti(1,0,1+nu/beta,om/beta)
Wbint(x)=Wint(x)-Waint(x)
c=w(0)-w(D)-ga*(wa(0)-wa(D))
ReEvans=-wbr(D)^2+wbi(D)^2+(wbr(0)-c*(1+nu/alpha))^2-(wbi(0)-c*om/alpha)^2
ImEvans=-2*wbr(D)*wbi(D)+2*(wbr(0)-c*(1+nu/alpha))*(wbi(0)-c*om/alpha)
alpha'=nu
D'=h-Wbint(D)
nu'=ReEvans
om'=ImEvans
aux ux1=c

#@ parmax=0.124
#par D=0.5
par h=0.04
par ga=1, sigma=1, sigmaa=2, beta=1, w0=-1
#par alpha=1.142
done

Asking XPP for the steady states of this system will reveal the parameter points at which an eigenvalue of the model has zero real part. Tracking these one can delimit the region in parameter space where the bump is stable. We used this script to generate Figure 7.1, left. Alternatively, the ODE system (7.1.8),
$$\dot{x}_i = \frac{\alpha}{u_x(x_i)}\left[h - W(\Delta(t)) + \gamma_a \zeta_i\right],$$
$$\dot{\zeta}_i = \frac{z_x(x_i)}{u_x(x_i)}\,\alpha\left[h - W(\Delta(t)) + \gamma_a \zeta_i\right] + \beta\left[-\zeta_i + W_a(\Delta(t))\right], \qquad i = 1, 2,$$
is implemented as
Code listing 9.8.2, in XPP format:
#centering the bump around zero
x1=-x2
#exponential connectivity
w(x)=exp(-abs(x)/sigma)/2/sigma
Wint(x)=(1-exp(-x/sigma))/2
#wizard hat
#w(x)=-w0*(1-abs(x))*exp(-abs(x))
#Wint(x)=-w0*x*exp(-abs(x))
#SFA exponential spread
wa(x)=exp(-abs(x)/sigmaa)/2/sigmaa
Waint(x)=(1-exp(-x/sigmaa))/2
ubx(x)=w(x-x1)-w(x-x2)-ga*zbx(x)
zbx(x)=wa(x-x1)-wa(x-x2)
xdot[1..2]=alpha/ubx(x[j])*(h-Wint(x2-x1)+ga*zeta[j])
x2'=xdot2
zeta[1..2]'=zbx(x[j])*xdot[j]+beta*(-zeta[j]+Waint(x2-x1))
aux ux1=ubx(x1)
@ nmesh=80, meth=2rb, dsmin=0.0001, dsmax=0.1, ds=0.001
par h=0.04, alpha=1, ga=1, sigma=1, sigmaa=2, beta=1, w0=-1
done



We have centered the bump around zero with x1=-x2, otherwise AUTO has trouble following the bifurcation branches due to the translational invariance. Here, XPP finds the actual steady states of the bump crossing points (Figure 7.1, centre). One can construct the full bifurcation portrait with AUTO (Figure 7.1, right). A breather (oscillating bump) is found as a Hopf orbit. By tracking the point of Hopf bifurcation with respect to a second parameter one recovers the output of Listing 9.8.1.
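As an independent numerical cross-check of Listing 9.8.1 (outside the XPP/AUTO workflow), note that for the exponential kernels above each sign branch of the simplified Evans condition can be cleared of the $1/(1+\lambda/\beta)$ factor, leaving a quadratic in $\lambda$. The Python sketch below solves the bump-width condition $W_b(\Delta) = h$ for the wider bump and then inspects the eigenvalues; parameter values are the defaults from the listings. The branch with the minus sign always contains the translation mode $\lambda = 0$.

import numpy as np
from scipy.optimize import brentq

# defaults from Listings 9.8.1 / 9.8.2
h, alpha, beta, ga, sigma, sigmaa = 0.04, 1.0, 1.0, 1.0, 1.0, 2.0

w  = lambda x: np.exp(-abs(x)/sigma) / (2*sigma)     # exponential connectivity
wa = lambda x: np.exp(-abs(x)/sigmaa) / (2*sigmaa)   # adaptation spread
Wb = lambda x: 0.5*((1 - np.exp(-x/sigma)) - ga*(1 - np.exp(-x/sigmaa)))

# bump width: W_b(Delta) = h (take the wider of the two solutions)
Delta = brentq(lambda d: Wb(d) - h, 0.5, 20.0)
c = w(0) - w(Delta) - ga*(wa(0) - wa(Delta))         # c = u_x(x_1)

# branch sign s = +/-1:  c(1+lam/alpha) = [w(0)+s*w(Delta)] - ga*[wa(0)+s*wa(Delta)]/(1+lam/beta);
# multiplying by (1+lam/beta) gives a quadratic in lam
for s in (+1, -1):
    P = w(0) + s*w(Delta)
    Q = wa(0) + s*wa(Delta)
    coeffs = [c/(alpha*beta), c*(1/alpha + 1/beta) - P/beta, c - P + ga*Q]
    roots = np.roots(coeffs)
    print(f"branch {s:+d}: Delta = {Delta:.3f}, eigenvalues = {np.round(roots, 4)}")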


References
[1] J G Daugman. Brain metaphor and brain theory. In E L Schwartz, editor, Computational Neuroscience, pages 918. The MIT Press, 1990. [2] M A Arbib. The Metaphorical Brain 2: Neural Networks and Beyond. John Wiley & Sons, 1989. [3] R C Paton. Towards a metaphorical biology. Biology and Philosophy, 7: 279294, 1992. [4] S Ochs. A History of Nerve Functions: from Animal Spirits to Molecular Mechanisms. Cambridge University Press, 2004. [5] A Vartanian. Man-machine from the Greeks to the Computer. In

P P Wiener, editor, Dictionary of the History of Ideas, volume 3, pages 132–146. The Gale Group, 2003. http://etext.virginia.edu/cgi-local/DHI/dhicontrib2.cgi?id=dv3-17. [6] T Kuhn. The Structure of Scientific Revolutions. University of Chicago Press, 1962. [7] W McCulloch and W Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biology, 5:115–133, 1943. [8] A M Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42:230–265, 1936. [9] D A Medler. A brief history of connectionism. Neural Computing Surveys, 1:61–101, 1998. [10] D O Hebb. The Organization of Behavior. Wiley, 1949.

[11] A M Uttley. Imitation of pattern recognition and trial-and-error learning in a conditional probability computer. Reviews of Modern Physics, 31:546 548, 1959. [12] A M Uttley. Information transmission in the nervous system. Academic Press, 1979. [13] R A Rescorla and A R Wagner. A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and non-reinforcement. In A H Black and W F Prockasy, editors, Classical Conditioning II. Current Research and Theory. [14] F Rosenblatt. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan Books, 1962. [15] J D Cowan. History of concepts and techniques. Intelligent Systems, 3: 375400, 2004. [16] G Reeke and G M Edelman. Daedalus, 117:143173, 1988. [17] Z Pylyshyn. Computation and Cognition: Toward a Foundation for Cognitive Science. The MIT Press, 1984. [18] D H Perkel. Computational neuroscience: Scope and structure. In E L Schwartz, editor, Computational Neuroscience, pages 3845. The MIT Press, 1990. [19] M A Arbib, editor. The Handbook of Brain Theory and Neural Networks. The MIT Press, 1995. [20] D Marr. Vision. Freeman, 1983. [21] P S Churchland, C Koch, and T J Sejnowski. What is computational neuroscience. In E L Schwartz, editor, Computational Neuroscience, pages 46 55. The MIT Press, 1990. [22] D E Rumelhart, J L McClelland, and the PDP Research Group. Parallel distributed processing: explorations in the microstructure of cognition. The MIT Press, 1986. 242 Real brains and articial intelligence.

[23] G Nicolis and I Prigogine. Self-Organization in Nonequilibrium Systems. Wiley and Sons, 1977. [24] H Haken. Introduction to Synergetics. Nonequilibrium Phase Transitions and Self-Organization in Physics, Chemistry and Biology. Springer-Verlag, 1977. [25] P L Nunez and R Srinivasan. Electroencephalogram. Scholarpedia, 2:1348, 2007. http://www.scholarpedia.org/article/EEG. [26] M Hmlinen, R Hari, R J Ilmoniemi, J Knuutila, and O V Lounasmaa. Magnetoencephalography - theory, instrumentation, and applications to noninvasive studies of the working human brain. Reviews of Modern Physics, 65:413497, 1993. [27] P Grobstein. Strategies for analyzing complex organization in the nervous system: I. Lesion experiments. In E L Schwartz, editor, Computational Neuroscience, pages 1937. The MIT Press, 1990. [28] M Stetter. Exploration of cortical function. Kluwer Academic Publishers, 2002. [29] G Buzski. Large-scale recording of neuronal ensembles. Nature Neuroscience, 7:446451, 2004. [30] J M Ollinger and J A Fessler. Positron-emission tomography. Signal Processing Magazine, IEEE, 14:4355, 1997. [31] D J Heeger and D Ress. What does fMRI tell us about neuronal activity? Nature Reviews Neuroscience, 3:142151, 2002. [32] E Formisano and R Goebel. Tracking cognitive processes with functional MRI mental chronometry. Current opinion in Neurobiology, 2:174181, 2003. [33] D Le Bihan, J-F Mangin, C Poupon, C A Clark, S Pappata, N Molko, and H Chabriat. Diffusion tensor imaging: Concepts and applications. Journal of Magnetic Resonance Imaging, 13:534546, 2001. [34] J R Fetcho and D H Bhatt. Genes and photons: new avenues into the neuronal basis of behavior. Current Opinion in Neurobiology, 14:707714, 2004. 243

[35] M S Cohen and S Y Bookheimer. Localization of brain function using magnetic resonance imaging. Trends in Neurosciences, 17:268277, 1994. [36] D H Hubel and T N wiesel. Functional architecture of macaque monkey visual cortex. Proceedings of the Royal Society of London, Series B, Biological Sciences, 198:159, 1977. [37] P-A Salin and J Bullier. Corticocortical connections in the visual system: structure and fucntion. Physiological Reviews, 75:107154, 1995. [38] R B Tootell, S L Hamilton, M S Silverman, and E Switkes. Functional anatomy of macaque striate cortex. I. Ocular dominance, binocular interactions, and baseline conditions. The Journal of Neuroscience, 8, 1988. [39] R B Tootell, E Switkes, M S Silverman, and S L Hamilton. Functional anatomy of macaque striate cortex. II. Retinotopic organization. The Journal of Neuroscience, 8:15311568, 1988. [40] R B Tootell, M S Silverman, S L Hamilton, R L De Valois, and E Switkes. Functional anatomy of macaque striate cortex. III. Color. The Journal of Neuroscience, 1988. [41] R B Tootell, S L Hamilton, and E Switkes. Functional anatomy of macaque striate cortex. IV. Contrast and magno- parvo streams. The Journal of Neuroscience, 8:15941609, 1988. [42] R B Tootell, M S Silverman, S L Hamilton, E Switkes, and R L De Valois. Functional anatomy of macaque striate cortex. V. Spatial frequency. The Journal of Neuroscience, 8:16101624, 1988. [43] J S Lund, A Angelucci, and P C Bressloff. Anatomical substrates for functional columns in macaque monkey primary visual cortex. Cerebral Cortex, 13:1524, 2003. [44] L C Sincich and G G Blasdel. Oriented axon projections in primary visual cortex of the monkey. The Journal of Neuroscience, 21:44164426, 2001. [45] P Erdi. Towards a noncomputational cognitive neuroscience. Brain and Mind, 1:119145, 2000.


[46] H T Siegelman and S Fishman. Analog computation with dynamical systems. Physica D, 120:214235, 1998. [47] U Helmke and J B Moore. Optimization and Dynamical Systems. Springer, 1994. [48] R W Brockett. Dynamical systems that sort lists, diagonalize matrices and solve linear programming problems. Linear Algebra Applications, 146: 7991, 1991. [49] R R Poznanski. Mathematical reduction techniques for modeling biophysical neural networks. In R R Poznanski, editor, Introduction to Integrative Neuroscience, pages 121. Mary Ann Liebert, Inc., 2001. [50] G G Globus. Towards a noncomputational cognitive neuroscience. Journal of Cognitive Neuroscience, 4:299310, 1992. [51] W J Freeman. How brains make up their minds. Columbia University Press, 2001. [52] G M Edelman. Neural Darwinism: The Theory of Neuronal Group Selection. Basic Books, 1987. [53] P Erdi. Neurodynamic system theory: Scope and limits. Theoretical Medicine and Bioethics, 14:137152, 1993. [54] G A Cheuvet. On the mathematical integration of the nervous tissue based on the S-propagator formalism I: Theory. Journal of Integrative Neuroscience, 1:3168, 2002. [55] G Piccinini. Computational explanation in neuroscience. Synthese, 153: 343353, 2006. [56] M F Bear, B W Connors, and M A Paradiso. Neuroscience: exploring the brain. Lippincott Williams and Wilkins, 2001. [57] L F Costa and T J Velte. Automatic characterization and classication of ganglion cells from the salamander retina. The Journal of Comparative Neurology, 404:3351, 1999.


[58] A Agmon and B W Connors. Correlation between intrinsic ring patterns and thalamocortical synaptic responses of neurons in mouse barrel cortex. Journal of Neuroscience, 12:319329, 1992. [59] I Segev. Dendritic processing. In M A Arbib, editor, The Handbook of Brain Theory and Neural Networks, pages 282289. The MIT Press, 1995. [60] P Dayan and L F Abbott. Theoretical neuroscience: computational and mathematical modeling of neural systems. The MIT Press, 2001. [61] B Hille. Ionic Channels of Excitable Membranes. Sinauer Associates, 1992. [62] G J Stuart and B Sakmann. Active propagation of somatic action potentials into neocortical pyramidal cell dendrites. Nature, 367:6972, 1994. [63] M Migliore and G M Shepherd. Emerging rules for the distributions of active dendritic conductances. Nature Reviews Neuroscience, 3:362370, 2002. [64] R R Llins. The intrinsic electrophysiological properties of mammalian neurons: insights into central nervous system function. Science, 242: 16541664, 1988. [65] M Husser, N Spruston, and G J Stuart. Diversity and dynamics of dendritic signaling. Science, 290:739744, 2000. [66] H Markram, J Lbke, M Frotscher, and B Sakmann. Regulation of synaptic efcacy by coincidence of postsynaptic APs and EPSPs. Science, 275: 213215, 1997. [67] W Gerstner and W Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002. [68] L F Abbott and S B Nelson. Synaptic plasticity: taming the beast. Nature Neuroscience, 3:11781183, 2000. [69] Z F Mainen and T J Sejnowski. Inuence of dendritic structure on ring pattern in model neocortical neurons. Nature, 382:363366, 1996. [70] A van Ooyen, J Duijnhouwer, M W H Remme, and J van Pelt. The effect of dendritic topology on ring patterns in model neurons. Network: Computation in Neural Systems, 13:311325, 2002. 246

[71] R Yuste and D W Tank. Dendritic integration in mammalian neurons, a century after Cajal. Neuron, 16:701716, 1996. [72] D Johnston, J C Magee, C M Colbert, and B R Christie. Active properties of neuronal dendrites. Annual Review of Neuroscience, 19:165186, 1996. [73] H Agmon-Snir, C E Carr, and J Rinzel. The role of dendrites in auditory coincidence detection. Nature, 393:268272, 1998. [74] M E Larkum, J J Zhu, and B Sakmann. A new cellular mechanism for coupling inputs arriving at different cortical layers. 398:338341, 1999. [75] W Softky. Sub-millisecond coincidence detection in active dendritic trees. Neuroscience, 58:1341, 1994. [76] B W Mel, D L Ruderman, and K A Archie. Translation-invariant orientation tuning in visual complex cells could derive from intradendritic computations. The Journal of Neuroscience, 18:43254334, 1998. [77] K A Archie and B W Mel. A model for intradendritic computation of binocular disparity. Nature Neuroscience, 3:5463, 2000. [78] J C Callaway, N Lasser-Ross, and W N Ross. Ipsps strongly inhibit climbing ber-activated [ca2+]i increases in the dendrites of cerebellar purkinje neurons. Journal of Neuroscience, 115:27772787, 1995. [79] H G Kim, M Beierlein, and B W Connors. Inhibitory control of excitable dendrites in neocortex. Journal of Neurophysiology, 174:18101814, 1995. [80] M London and M Husser. Dendritic computation. Annual Review of Neuroscience, 25:503532, 2005. [81] I Segev and M London. Untangling dendrites with quantitative models. Science, 290:744750, 2000. [82] J T Buchanan. Contributions of identiable neurons and neuron classes to lamprey vertebrate neurobiology. Progress in Neurobiology, 63:441466, 2001.


[83] A H Cohen, G B Ermentrout, T Kiemel, N Kopell, S A Sigvardt, and T L Williams. Modelling of intersegmental coordination in the lamprey central pattern generator for locomotion. Trends in Neurosciences, 15:434438, 1992. [84] K G Pearson. Common principles of motor control in vertebrates and invertebrates. Annual Review of Neuroscience, 16:265297, 1993. [85] S Grillner. Control of locomotion in bipeds, tetrapods, and sh. In V B Brooks, editor, The Handbook of Physiology. Section 1: The Nervous System. Vol. 2: Motor Control, pages 11791236. Bethesda. American Physiological Society, 1981. [86] M Golubitsky, I Stewart, P L Buono, and J J Collins. Symmetry in locomotor central pattern generators and animal gaits. Nature, 401:693695, 1999. [87] S Grillner, H Markram, G Silberberg E De Schutter, and F E N LeBeaud. Microcircuits in action - from CPGs to neocortex. Trends in Neurosciences, 28:525533, 2005. [88] E Marder, D Bucher, D J Schulz, and A L Taylor. Invertebrate central pattern generation moves along. Current Biology, 15:685699, 2005. [89] R S Zucker. Craysh escape behavior and central synapses. Journal of Neurophysiology, 35:599651, 1972. [90] A O Willows, D A Dorsett, and G Hoyle. The neuronal basis of behavior in Tritonia. Journal of Neurobiology, 4:207300, 1973. [91] I Kupfermann, T J Carew, and E R Kandel. Local, reex, and central commands controlling gill and siphon movements in Aplysia. Journal of Neurophysiology, 37:9961019, 1974. [92] A Schtz. Neuroanatomy in a computational perspective. In M A Arbib, editor, The Handbook of Brain Theory and Neural Networks, pages 622630. The MIT Press, 1995.


[93] J Szentgothai. The neuron network of the cerebral cortex: a functional interpretation. Proceedings of the Royal Society of London, Series B, Biological Sciences, 201:219248, 1978. [94] C Hansel, D J Linden, and E DAngelo. Beyond parallel ber LTD: the diversity of synaptic and non-synaptic plasticity in the cerebellum. Nature Neuroscience, 4:467475, 2001. [95] T P McNamara and A L Shelton. Cognitive maps and the hippocampus. Trends in Cognitive Sciences, 7:333335, 2003. [96] V B Mountcastle. Brain science at the centurys Ebb. Daedalus, 127:136, 1998. [97] P Rakic. A small step for the cell, a giant leap for mankind: a hypothesis of neocortical expansion during evolution. Trends in Neurosciences, 18: 383388, 1995. [98] R J Douglas and K A C Martin. Neuronal circuits of the neocortex. Annual Review of Neuroscience, 27:419451, 2004. [99] V B Mountcastle. The columnar organization of the neocortex. Brain, 120: 701722, 1997. [100] A L Hodgkin and A F Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117:500544, 1952. [101] H Lecar. Morris-Lecar model. Scholarpedia, 2:1333, 2007.

http://www.scholarpedia.org/article/Morris-Lecar_Model. [102] J Rinzel. Excitation dynamics: insights from simplified membrane models. Federation Proceedings, 44:2944–2946, 1985. [103] C Morris and H Lecar. Voltage oscillations in the barnacle giant muscle fiber. Biophysical Journal, 35:193–213, 1981. [104] Y A Kuznetsov. Saddle-node bifurcation. Scholarpedia, 1:1859, 2006. http://www.scholarpedia.org/article/Saddle-Node#Saddle-Node_Homoclinic_Bifurcation.

[105] R Jolivet, T J Lewis, and W Gerstner. Generalized integrate-and-re models of neuronal activity approximate spike trains of a detailed model to a high degree of accuracy. Journal of Neurophysiology, 92:959976, 2004. [106] A Rauch, G La Camera, H R Luscher, W Senn, and S Fusi. Neocortical pyramidal cells respond as integrate-and-re neurons to in vivo-like input currents. Journal of Neurophysiology, 90:15981612, 2003. [107] J Keat, P Reinagel, R C Reid, and M Meister. Predicting every spike: a model for the responses of visual neurons. Neuron, 30:803817, 2001. [108] R Brette and W Gerstner. Adaptive exponential integrate-and-re model as an effective description of neuronal activity. Journal of Neurophysiology, 94:36373642, 2005. [109] P C Bressloff and S Coombes. Mathematical reduction techniques for modeling biophysical neural networks. In R R Poznanski, editor, Biophysical neural networks: foundations of integrative neuroscience. [110] W Rall and H Agmon-Snir. Cable theory for dendritic neurons. In C Koch and I Segev, editors, Methods in neuronal modeling: from ions to networks, pages 2792. The MIT Press, Cambridge, MA, 1998. [111] H C Tuckwell. Introduction to Theoretical Neurobiology. Cambridge University Press, 1988. [112] M Rapp, I Segev, and Y Yarom. Physiology, morphology and detailed passive models of guinea-pig cerebellar Purkinje cells. The Journal of Physiology, 474:101118, 1994. [113] G Major, A U Larkman, P Jonas, B Sakmann, and J J Jack. Detailed passive cable models of whole-cell recorded CA3 pyramidal neurons in rat hippocampal slices. Journal of Neuroscience, 14:46134638, 1994. [114] I Segev and R E Burke. Compartmental models of complex neurons. In C Koch and I Segev, editors, Methods in neuronal modeling: from ions to networks, pages 93136. The MIT Press, Cambridge, MA, 1998.


[115] S Coombes, Y Timofeeva, C-M Svensson, G J Lord, K Josic, S J Cox, and C M Colbert. Branching dendrites with resonant membrane: A sumover-trips approach. Biological Cybernetics, 97:137149, 2007. [116] P C Bressloff and S Coombes. Dynamics of strongly coupled spiking neurons. Neural Computation, 19:91129, 2000. [117] H T Kyriazi and D J Simons. Thalamocortical response transformations in simulated whisker barrels. Journal of Neurophysiology, 13:16011615, 1993. [118] D McLaughlin, R Shapley, M Shelley, and D J Wielaard. A neuronal network model of macaque primary visual cortex (v1): Orientation selectivity and dynamics in the input layer 4C. Proceedings of the National Academy of Sciences, 97:80878092, 2000. [119] L Tao, M Shelley, D McLaughlin, and R Shapley. An egalitarian network model for the emergence of simple and complex cells in visual cortex. Proceedings of the National Academy of Sciences, 101:366371, 2004. [120] A Compte, N Brunel, P S Goldman-Rakic, and X J Wang. Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cerebral Cortex, 10:910923, 2000. [121] W Gerstner, J L van Hemmen, and J D Cowan. What matters in neuronal locking? Neural Computation, 8:16531676, 1996. [122] D Hansel and G Mato. Asynchronous states and the emergence of synchrony in large networks of interacting excitatory and inhibitory neurons. Neural Computation, 15:156, 2003. [123] P H Chu, J G Milton, and J D Cowan. Connectivity and the dynamics of integrate-and-re neural networks. International Journal of Bifurcation and Chaos, 4:237243, 1994. [124] D Horn and I Opher. Solitary waves of integrate-and-re neural elds. Neural Computation, 9:16771690, 1997. [125] C Fohlmeister, W Gerstner, R Ritz, and J L van Hemmen. Spontaneous excitations in the visual cortex: stripes, spirals, rings, and collective bursts. Neural Computation, 7:905914, 1995. 251

[126] C R Laing and C C Chow. Stationary bumps in networks of spiking neurons. Neural Computation, 13:14731494, 2001. [127] D J Amit and M V Tsodyks. Quantitative study of attractor neural network retrieving at low spike rates. i, substratespikes, rates and neuronal gain. Neural Computation, 2:259273, 1991. [128] B D Ermentrout. Reduction of conductance-based models with slow synapses to neural nets. Neural Computation, 6:679695, 1994. [129] H R Wilson and J D Cowan. Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal, 12:124, 1972. [130] H R Wilson and J D Cowan. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik, 13:5580, 1973. [131] O Shriki, D Hansel, and H Sompolinsky. Rate models for conductancebased cortical neuronal networks. 2003. [132] W Gerstner. Time structure of the activity in neural network models. Physical Review E, 51:738758, 1995. [133] W Gerstner. Population dynamics of spiking neurons: Fast transients, asynchronous states, and locking. Neural Computation, 12:4389, 2000. [134] D R Cox. Renewal theory. Methuen London, 1970. [135] B W Knight. Dynamics of encoding in a population of neurons. The Journal of General Physiology, 59:734766, 1972. [136] S I Amari. Characteristics of random nets of analog neuron-like elements. IEEE Transactions on Systems, Man and Cybernetics, 2:643657, 1972. [137] B W Knight. Dynamics of encoding in neuron populations: Some general mathematical features. Neural Computation, 12:473518, 2000. [138] N Brunel and V Hakim. Fast global oscillations in networks of integrateand-re neurons with low ring rates. Neural Computation, 11:16211671, 1999. 252 Neural Computation, 15:18091841,

[139] D Q Nykamp and D Tranchina. A population density approach that facilitates large-scale modeling of neural networks: Analysis and an application to orientation tuning. Journal of Computational Neuroscience, 8:1950, 2000. [140] A Roxin, N Brunel, and D Hansel. Role of delays in shaping spatiotemporal dynamics of neuronal activity in large networks. Physical Review Letters, 28:357376, 2005. [141] P Tass. Cortical pattern formation during visual hallucinations. Journal of Biological Physics, 21:177210, 1995. [142] P C Bressloff and J D Cowan. Spontaneous pattern formation in primary visual cortex. In A. Champneys S. J. Hogan and B. Krauskopf, editors, Nonlinear dynamics: where do we go from here? Institute of Physics, Bristol, 2002. [143] J Wyller, P Blomquist, and G Einevoll. Turing instability and pattern formation in a two-population neuronal network model. Physica D, 75:7593, 2007. [144] M C Cross and P C Hohenberg. Pattern formation outside of equilibrium. Reviews of Modern Physics, 65(3):8511112, 1993. [145] D Walgraef. Spatio-temporal pattern formation. Springer-Verlag, 1997. [146] J D Murray. Mathematical biology. Springer-Verlag, 2002. [147] D B Duncan, M Grinfeld, and I Stoleriu. matics, 11:561572, 2000. [148] R Hoyle. Pattern formation: An introduction to methods. Cambridge University Press, 2006. [149] A M Turing. The chemical basis of morphogenesis. 1953. Bulletin of Mathematical Biology, 52:153197, 1990. [150] R E Goldstein. Nonlinear dynamics of pattern formation in physics and biology. In H F Nijhout, L Nadel, and D L Stein, editors, Pattern formation 253 Coarsening in an integro-

differential model of phase transitions. European Journal of Applied Mathe-

in the physical and biological sciences, pages 6592. Addison-Wesley, Santa Fe Institute, 1997. [151] S Amari. Dynamics of pattern formation in lateral-inhibition type neural elds. Biological Cybernetics, 27:7787, 1977. [152] J OKusky and M Colonnier. A laminar analysis of the number of neurons, glia, and synapses in the visual cortex (area 17) of adult macaque monkeys. The Journal of Comparative Neurology, 210:278290, 1982. [153] M A L Nicolelis, E E Fanselow, and A A Ghazanfar. Hebbs dream: The resurgence of cell assemblies. Neuron, 19:219221, 1997. [154] M J Gutnick, B W Connors, and D A Prince. Mechanisms of neocortical epileptogenesis in vitro. Journal of Neurophysiology, 48:13211335, 1982. [155] R D Chervin, P A Pierce, and B W Connors. Periodicity and directionality in the propagation of epileptiform discharges across neocortex. Journal of Neurophysiology, 60:16951713, 1988. [156] C Yamamoto. Activation of hippocampal neurons by mossy ber stimulation in thin brain sections in vitro. Experimental Brain Research, 14: 423435, 1972. [157] D J Pinto, S L Patrick, W C Huang, and B W Connors. Initiation, propagation, and termination of epileptiform activity in rodent neocortex in vitro involve distinct mechanisms. Journal of Neuroscience, 24:81318140, 2005. [158] J Y Wu, L Guan, and Y Tsau. Propagating activation during oscillations and evoked responses in neocortical slices. Journal of Neuroscience, 19: 50055015, 1999. [159] W Bao and J Y Wu. Propagating wave and irregular dynamics: Spatiotemporal patterns of cholinergic theta oscillations in neocortex in vitro. Journal of Neuroscience, 90:333341, 2003. [160] X Huang, W C Troy, Q Yang, H Ma, C R Laing, S J Schiff, and J Wu. Spiral waves in disinhibited mammalian neocortex. The Journal of Neuroscience, 24:98979902, 2004.


[161] D Golomb and Y Amitai. Propagating neuronal discharges in neocortical slices: Computational and experimental study. Journal of Neurophysiology, 78:11991211, 1997. [162] D J Pinto, J C Brumberg, D J Simons, G B Ermentrout, and R Traub. A quantitative population model of whisker barrels: Re-examining the Wilson-Cowan equations. Journal of Computational Neuroscience, 3:247 264, 1996. [163] D J Pinto and G B Ermentrout. Spatially structured activity in synaptically coupled neuronal networks: I. Travelling fronts and pulses. SIAM Journal on Applied Mathematics, 62:206225, 2001. [164] K A Richardson, S J Schiff, and B J Gluckman. Control of traveling waves in the mammalian cortex. Physical Review Letters, 94:028103, 2005. [165] J H Kaas. Topographic maps are fundamental to sensory processing. Brain Research Bulletin, 44:107112, 1997. [166] E L Schwartz. Computational studies of the spatial architecture of primate visual cortex: Columns, maps, and protomaps. In Alan Peters and Kathleen Rockland, editors, Primary Visual Cortex in Primates, volume 10 of Cerebral Cortex. Plenum Press, 1994. [167] S A Engel, G H Glover, and B A Wandell. Retinotopic organization in human visual cortex and the spatial precision of functional MRI. Cerebral Cortex, 7:181192, 1997. [168] P C Bressloff. Les Houches Lectures in Neurophysics, chapter Pattern formation in visual cortex, pages 477574. Elsevier. [169] R B Tootell, M S Silverman, E Switkes, and R L De Valois. Deoxyglucose analysis of retinotopic organization in primate striate cortex. Science, 218: 902904, 1982. [170] D Hubel and T Wiesel. Receptive elds, binocular interaction and functional architecture in the cats visual cortex. The Journal of Physiology, 106: 106154, 1962.


[171] M Hubener, D Shoham, A Grinvald, and T Bonhoeffer. Spatial relationships among three columnar systems in cat area 17. The Journal of Neuroscience, 17:92709284, 1997. [172] D L Adams, L C Sincich, and J C Horton. Complete pattern of ocular dominance columns in human primary visual cortex. Journal of Neuro science, 27:10391U10403, 2007. [173] N P Issa, C Trepel, and M P Stryker. Spatial frequency maps in cat visual cortex. The Journal of Neuroscience, 20:85048514, 2000. [174] C E Landisman and D Y Tso. Color processing in macaque striate cortex: Relationships to ocular dominance, cytochrome oxidase, and orientation. The Journal of Neurophysiology, 87:31263137, 2002. [175] D-S Kim, Y Matsuda, K Ohki, A Ajima, and S Tanaka. Geometrical and topological relationships between multiple functional maps in cat primary visual cortex. Neuroreport, 10:25152522, 1999. [176] P C Bressloff and J D Cowan. A spherical model for orientation and spatial-frequency tuning in a cortical hypercolumn. Philosophical Transactions of the Royal Society: Biological Sciences, 358:16431667, 2003. [177] R Ben-Yishai, R L Bar-Or, and H Sompolinsky. Theory of orientation tuning in visual cortex. Proceedings of the National Academy of Sciences, 92: 38443848, 1995. [178] V K Jirsa and H Haken. A derivation of a macroscopic eld theory of the brain from the quasi-microscopic neural dynamics. Physica D, 99:503526, 1997. [179] B C Skottun, A Bradley, G Sclar, I Ohzawa, and R D Freeman. The effects of contrast on visual orientation and spatial frequency discrimination: a comparison of single cells and behavior. Journal of Neurophysiology, 57: 773786, 1987. [180] B Roerig and B Chen. Relationships of local inhibitory and excitatory circuits to orientation preference maps in ferret visual cortex. Cerebral Cortex, 12:187198, 2002. 256

[181] L Muckli, A Kohler, N Kriegeskorte, and W Singer. Primary visual cortex activity along the apparent-motion trace reects illusory perception. Public Library of Science: Biology, 3, 2005. doi: http://dx.doi.org/10.1371/ journal.pbio.0030265. [182] G B Ermentrout and J D Cowan. A mathematical theory of visual hallucination patterns. Biological Cybernetics, 34:137150, 1979. [183] P C Bressloff, J D Cowan, M Golubitsky, P J Thomas, and M C Wiener. Geometric visual hallucinations, Euclidean symmetry and the functional architecture of striate cortex. Philosophical Transactions of the Royal Society: Biological Sciences, 356:299330, 2001. [184] P C Bressloff, J D Cowan, M Golubitsky, P J Thomas, and M C Wiener. What geometric visual hallucinations tell us about the visual cortex. Neural Computation, 14:473491, 2002. [185] H Kluver. Mescal: The Divine Plant and its Psychological Effects. Trench, Trubner and Co., 1928. [186] H Kluver. Mescal, and Mechanisms of Hallucinations. University of Chicago Press, 1966. [187] R K Siegel and M E Jarvik. Drug-induced hallucination in animals and man. In R K Siegel and L J West, editors, Hallucinations: Behavior, Experience and Theory. John Wiley and Sons, 1975. [188] C Zaleski. Otherworld Journeys: Accounts of Near-Death Experience in Medieval and Modern Times. Oxford University Press, 1987. [189] H Ellis. Mescal: A new articial paradise. The Contemporary Review, 1898. http://druglibrary.net/schaffer/heroin/history/mescal.htm. [190] K Zhang. Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory. Journal of Neuroscience, 16: 21122126, 1996. [191] X-J Wang. Synaptic reverberation underlying mnemonic persistent activity. Trends in Neurosciences, 24:455463, 2001.


[192] J M Fuster. Memory in the cerebral cortex. MIT Press, 1995. [193] B S Gutkin, C R Laing, C L Colby, C C Chow, and G B Ermentrout. Turning on and off with excitation: the role of spike-timing asynchrony and synchrony in sustained neural activity. Journal of Computational Neuroscience, 11:121134, 2001. [194] P L Nunez. The brain wave equation: a model for the EEG. Mathematical Biosciences, 21:279297, 1974. [195] P A Robinson, C J Rennie, and J J Wright. Propagation and stability of waves of electrical activity in the cerebral cortex. Physical Review E, 56: 826840, 1997. [196] B W Connors and Y Amitai. Generation of epileptiform discharges by local circuits in neocortex. In P A Schwartzkroin, editor, Epilepsy: Models, mechanisms and concepts, pages 388424. Cambridge University Press, 1993. [197] V Shusterman and W C Troy. From baseline to epileptiform activity: A path to synchronized rhythmicity in large-scale neural networks. Physical Review E, 2008. Submitted. [198] C D Gilbert and T N Wiesel. Columnar specicity of intrinsic horizontal and corticocortical connections in cat visual cortex. Journal of Neuroscience, 9:24322442, 1989. [199] B A McGuire, C D Gilbert, P K Rivlin, and T N Wiesel. Targets of horizontal connections in macaque primary visual cortex. Journal of Comparative Neurology, 305:370392, 1991. [200] C D Gilbert, A Das, M Ito, M Kapadia, and G Westheimer. Spatial integration and cortical dynamics. Journal of Comparative Neurology, 93:615622, 1996. [201] A Hutt, M Bestehorn, and T Wennekers. Pattern formation in intracortical neuronal elds. Network, 14:351368, 2003.


[202] A Hutt. Local excitation-lateral inhibition interaction yields oscillatory instabilities in nonlocally interacting systems involving nite propagation delay. Physics Letters A, pages 541546, 2008. [203] P C Bressloff. New mechanism for neural pattern formation. Physical Review Letters, 76:46444647, 1996. [204] P C Bressloff and B De Souza. Neural pattern formation in networks with dendritic structure. Physica D, 115:124144, 1998. [205] S Coombes, G J Lord, and M R Owen. Waves and bumps in neuronal networks with axo-dendritic synaptic interactions. Physica D, 178:219 241, 2003. [206] R Curtu and B Ermentrout. Pattern formation in a network of excitatory and inhibitory cells with adaptation. SIAM Journal on Applied Dynamical Systems, 3:191231, 2004. [207] D Hansel and H Sompolinsky. Modeling feature selectivity in local cortical circuits. In C Koch and I Segev, editors, Methods in neuronal modeling: from ions to networks, pages 499567. The MIT Press, Cambridge, MA, 1998. [208] P C Bressloff and S Coombes. Physics of the extended neuron. International Journal of Modern Physics B, 11:23432392, 1997. [209] G. M. Shepherd, editor. The Synaptic Organization of the Brain. Oxford University Press, 1990. [210] G B Ermentrout. Asymptotic behavior of stationary homogeneous neuronal nets. In S Amari and M A Arbib, editors, Competition and cooperation in neural nets, pages 5770. Springer, 1982. [211] G B Ermentrout. Simulating, Analyzing, and Animating Dynamical Systems: A Guide to XPPAUT for Researchers and Students. SIAM, 2002. [212] S Coombes. theories. Waves, bumps and patterns in neural 2005.

field theories. Biological Cybernetics, 93:91–108, 2005. http://eprints.nottingham.ac.uk/archive/00000151/.


[213] F H Lopes da Silva, A Hoeks, H Smits, and L H Zetterberg. Model of brain rhythmic activity. Biological Cybernetics, 15:2737, 1974. [214] P Suffczynski, S Kalitzin, and F H Lopes da Silva. Dynamics of nonconvulsive epileptic phenomena modeled by a bistable neuronal network. Neuroscience, 126:467484, 2004. [215] P L Nunez. Neocortical dynamics and human EEG rhythms. Oxford University Press, 1995. [216] C R Laing and W C Troy. PDE methods for nonlocal models. SIAM Journal on Applied Dynamical Systems, 2:487516, 2003. [217] V I Arnold. Geometrical methods in the theory of ordinary differential equations. Springer-Verlag, 1983. [218] F Verhulst. Nonlinear differential equations and dynamical systems. SpringerVerlag, 1990. [219] E Knobloch and J de Luca. Amplitude equations for travelling wave convection. Journal of Nonlinearity, 3:975980, 1990. [220] D M Winterbottom. Pattern formation with a conservation law. PhD thesis, School of Mathematical Sciences, University of Nottingham, 2005. [221] G Helmberg. Introduction to spectral theory in Hilbert space. North-Holland Publishing Company, 1969. [222] R D Pierce and C E Wayne. On the validity of mean-eld amplitude equations for counterpropagating wavetrains. Journal of Nonlinearity, 8: 769779, 1995. [223] H Riecke and L Kramer. The stability of standing waves with small group velocity. Physica D, 137:124142, 2000. [224] A C Newell. The dynamics and analysis of patterns. In H F Nijhout, L Nadel, and D L Stein, editors, Pattern formation in the physical and biological sciences, pages 201268. Addison-Wesley, Santa Fe Institute, 1997. [225] I Melbourne. Derivation of the time-dependent Ginzburg-Landau equation on the line. Journal of Nonlinear Science, 1:115, 1998. 260

[226] G Schneider. A new estimate for the Ginzburg-Landau approximation on the real axis. Journal of Nonlinear Science, 4:2334, 1994. [227] A Mielke. The complex Ginzburg-Landau equation on large and unbounded domains: sharper bounds and attractors. Nonlinearity, 10:199 222, 1996. [228] A Mielke and G Schneider. Derivation and justication of the complex Ginzburg-Landau equation as a modulation equation. In P Deift, C D Levermore, and C E Wayne, editors, Dynamical systems and probabilistic methods in partial differential equations, pages 191216. The MIT Press, Cambridge, MA, 1994. [229] B I Shraiman, A Pumir, W van Saarloos, P C Hohenberg, H Chat, and M. Holen. Spatiotemporal chaos in the one-dimensional complex Ginzburg-Landau equation. Phys. D, 57(3-4):241248, 1992. ISSN 01672789. doi: http://dx.doi.org/10.1016/0167-2789(92)90001-4. [230] Y H Liu and X J Wang. Spike-frequency adaptation of a generalized leaky integrate-and-re model neuron. Journal of Computational Neuroscience, 10: 2545, 2001. [231] D J Pinto and G B Ermentrout. Spatially structured activity in synaptically coupled neuronal networks: II. Lateral inhibition and standing pulses. SIAM Journal on Applied Mathematics, 62:226243, 2001. [232] J Guckenheimer and Y A Kuznetsov. Bogdanov-takens bifurcation. Scholarpedia, 2:1854, 2007. http://www.scholarpedia.org/article/BogdanovTakens_bifurcation. [233] H Z Shouval. Models of synaptic plasticity. Scholarpedia, 2:1605, 2007. http://www.scholarpedia.org/article/Models_of_synaptic_plasticity. [234] P C Bressloff. Spatially periodic modulation of cortical patterns by longrange horizontal connections. Physica D, 185:131157, 2003. [235] D T J Liley, P J Cadusch, and M P Dalis. A spatially continuous mean eld theory of electrocortical activity. Network, 13:67113, 2002.


[236] H Haken. What can synergetics contribute to the understanding of brain function? In C Uhl, editor, Analysis of Neurophysiological Brain Functioning, pages 740. Berlin: Springer. [237] Hankel transform. Wikipedia. http://en.wikipedia.org/wiki/Hankel_transform, revision: 16:06, 10.08.2008. [238] G A Baker and P Graves-Morris. Pade Approximants. Cambridge University Press, 1996. [239] J W Sleigh M L Steyn-Ross, D A Steyn-Ross and D T J Liley. Theoretical electroencephalogram stationary spectrum for a white-noise-driven cortex: Evidence for a general anesthetic-induced phase transition. Physical Review E, 60:72997311, 1999. [240] J J Wright H Bahramali E Gordon P A Robinson, C J Rennie and D l Rowe. Prediction of electroencephalographic spectra from neurophysiology. Physical Review E, 63:021903, 2001. [241] I Bojak and D T J Liley. Modeling the effects of anesthesia on the electroencephalogram. Physical Review E, 71:041902, 2005. [242] A Hutt. Examination of a neuronal eld equation based on a MEG-experiment in humans. PhD thesis, Institute for Theoretical Physics and Synergetics, University of Stuttgart, 1997. [243] S E Folias and P C Bressloff. Breathing pulses in an excitatory neural network. SIAM Journal on Applied Dynamical Systems, 3:378407, 2004. [244] M Abramowitz and I E Stegun, editors. Handbook of mathematical functions with formulas, graphs and mathematical tables. National Bureau of Standarts, 1963. [245] P A Robinson. Patchy propagators, brain dynamics, and the generation of spatially structured gamma oscillations. Physical Review E, 73:041904, 2006. [246] B Ermentrout. Neural networks as spatio-temporal pattern-forming systems. Reports on Progress in Physics, 61:353430, 1998.


[247] S Coombes and M R Owen. Evans functions for integral neural eld equations with Heaviside ring rate function. SIAM Journal on Applied Dynamical Systems, 34:574600, 2004. [248] S Coombes and M R Owen. Exotic dynamics in a ring rate model of neural tissue with threshold accommodation. AMS Contemporary Mathematics, 440:123144, 2007. [249] S Coombes and M R Owen. Bumps, breathers, and waves in a neural network with spike frequency adaptation. Physical Review Letters, 94:148102, 2005. [250] A V Hill. Excitation and accommodation in nerve. Proceedings of the Royal Society of London, Series B, Biological Sciences, 119:305355, 1936. [251] C Elphick, E Meron, and E A Spiegel. Patterns of propagating pulses. SIAM Journal on Applied Mathematics, 50:490503, 1990. [252] P C Bressloff. Weakly interacting pulses in synaptically coupled neural media. SIAM Journal on Applied Mathematics, 66:5781, 2005. [253] C Wilimzig, S Schneider, and G Schner. The time course of saccadic decision making: Dynamic eld theory. Neural Networks, 19:10591074, 2006. [254] M A Giese. Dynamic Neural Field Theory for Motion Perception. Kluwer Academic Publishers, 1998. [255] A Hutt and F M Atay. Effects of distributed transmission speeds on propagating activity in neural populations. Physical Review E, 73:021906, 2006. [256] F M Atay and A Hutt. Neural elds with distributed transmission speeds and long-range feedback delays. SIAM Journal on Applied Dynamical Systems, 5:670698, 2006. [257] P Girard, J M Hup, and J Bullier. Feedforward and feedback connections between areas V1 and V2 of the monkey have similar rapid conduction velocities. Journal of Neurophysiology, 85:13281331, 2001.


[258] V K Jirsa and J A S Kelso. Spatiotemporal pattern formation in neural systems with heterogeneous connection topologies. Physical Review E, 62: 84628465, 2000. [259] C B Muratov and V V Osipov. Spike autosolitons in the Gray-Scott model. Journal of Physics A, 33:88938916, 2000. [260] M Bode, A W Liehr, C P Schenk, and H G Purwins. Interaction of dissipative solitons: particle-like behaviour of localized structures in a threecomponent reaction-diffusion system. Physica D, 161:4566, 2002. [261] B S Kerner and V V Osipov. Autosolitons: A New Approach to Problems of Self-Organization and Turbulence. Fundamental Theories of Physics. Kluwer Academic Publishers. [262] T Teramoto, K-I Ueda, and D Ueyama. Phase-dependent output of scattering process for travelling breathers. Physical Review E, 69:056224, 2004. [263] Y Nishiura, T Teramoto, and K-I Ueda. Scattering and separators in dissipative systems. Physical Review E, 67:056210, 2003. [264] Y Nishiura and D Ueyama. A skeleton structure of self-replicating dynamics. Physica D, 130:73104, 1999. [265] M R Owen, C Laing, and S Coombes. nal of Physics, 9:378, 2007. [266] J E Pearson. Complex patterns in a simple system. Science, 261:189192, 1993. [267] C B Muratov and V V Osipov. Scenarios of domain pattern formation in a reaction-diffusion system. Physical Review E, 54:38603879, 1996. [268] K A Gorshkov, A S Lomov, and M I Rabinovich. Chaotic scattering of two-dimensional solitons. Nonlinearity, 5:13431353, 1992. [269] C P Schenk, P Schutz, M Bode, and H G Purwins. Interaction of selforganized quasiparticles in a two-dimensional reaction-diffusion system: The formation of molecules. Physical Review E, 57:64806486, 1998. 264 Bumps and rings in a two-

dimensional neural field: Splitting and rotational instabilities. New Journal of Physics, 9:378, 2007.

[270] C R Laing. Spiral waves in nonlocal equations. SIAM Journal on Applied Dynamical Systems, 4:588606, 2005. [271] A Hagberg and E Meron. Order parameter equations for front transitions: Nonuniformly curved fronts. Physica D, 123:460473, 1998. [272] R E Goldstein, D J Muraki, and D M Petrich. Interface proliferation and the growth of labyrinths in a reaction-diffusion system. Physical Review E, 53:39333957, 1996. [273] N A Venkov, S Coombes, and P C Matthews. Dynamic instabilities in scalar neural eld equations with space-dependent delays. Physica D, 232:115, 2007. [274] S Coombes, N A Venkov, L Shiau, I Bojak, D T J Liley, and C R Laing. Modeling electrocortical activity through improved local approximations of integral neural eld equations. Physical Review E, 76:0519018, 2007. [275] Waterloo Maple Inc. (Maplesoft). Maple 9. http://www.maplesoft.com/. [276] Bard Ermentrout. XPP/XPPAUT 5.6.

http://www.math.pitt.edu/~bard/xpp/xpp.html. [277] B Fornberg. A Practical Guide to Pseudospectral Methods. Cambridge University Press, 1998. [278] J A C Weideman and S C Reddy. A MATLAB differentiation matrix suite. ACM Transactions on Mathematical Software, 26(4):465–519, 2000. [279] D C Champeney, editor. A handbook of Fourier theorems. Cambridge University Press, 1987. [280] W Kaplan. Advanced Calculus. Addison-Wesley, 1984.

