http://www.ento.vt.edu/~sharov/PopEcol/#mark7
Lecture Handouts
6. Life-tables, k-values.
7. Model of Leslie.
9.1. Introduction
9.2. Attractors and Their Types
9.3. Equilibrium: Stable or Unstable?
9.4. Quantitative Measures of Stability
9.5. Limit Cycles and Chaos
9.6. Questions and Assignments
10.1. Introduction
10.2. Lotka-Volterra Model
10.3. Functional and Numerical Response
10.4. Predator-Prey Model with Functional Response
10.5. Host-Parasitoid Models
10.6. Host-Pathogen Model (Anderson & May)
10.7. Questions and Assignments
Labs
Statistical tables
1. t-statistics
2. Chi-square statistics
3. F-statistics, P=0.05
4. F-statistics, P=0.01
5. F-statistics, P=0.001
Lecture 1. Introduction.
Population Systems and Their Components
1.1. What is Population Ecology?
• Population ecology is the branch of ecology that studies the structure and
dynamics of populations.
• Physiology studies individual characteristics and individual processes. These are
used as a basis for predicting processes at the population level.
• Community ecology studies the structure and dynamics of animal and plant
communities. Population ecology provides modeling tools that can be used for
predicting community structure and dynamics.
• Population genetics studies gene frequencies and microevolution in
populations. Selective advantages depend on the success of organisms in their
survival, reproduction and competition. And these processes are studied in
population ecology. Population ecology and population genetics are often
considered together and called "population biology". Evolutionary ecology is
one of the major topics in population biology.
• Systems ecology is a relatively new ecological discipline that studies the
interaction of human populations with the environment. Its major concepts
include the optimization of ecosystem exploitation and sustainable ecosystem
management.
• Landscape ecology is also a relatively new area in ecology. It studies regional
large-scale ecosystems with the aid of computer-based geographic information
systems. Population dynamics can be studied at the landscape level, and this is
the link between landscape- and population ecology.
Populations can be defined at various spatial scales. Local populations can occupy very
small habitat patches like a puddle. A set of local populations connected by dispersing
individuals is called a metapopulation. Populations can be considered at a scale of
regions, islands, continents or seas. Even the entire species can be viewed as a
population.
Populations differ in their stability. Some of them are stable for thousands of years.
Other populations persist only because of continuous immigration from other areas. On
small islands, populations often go extinct, but these islands can later be re-colonized.
Finally, there are temporary populations that consist of organisms at a particular stage
in their life cycle. For example, larvae of dragonflies live in the water and form a
hemipopulation (a term of Beklemishev).
Interpretation means that model components (parameters, variables) and model behavior
can be related to components, characteristics, and behavior of real systems. If model
parameters have no interpretation, then they cannot be measured in real systems.
Most field ecologists are not good at abstraction. If they build a model, they often try to
incorporate every detail. Most mathematicians are not good at interpreting their
models. Usually they think of clean models and dirty reality. However, both abstraction
and interpretation are necessary for successful modeling. Thus, close collaboration
between ecologists and mathematicians is very important.
Models are always wrong ... but many of them are useful.
How can a wrong model give a correct answer? In the same way that old maps, which
assumed a flat earth and used wrong distance relations, were nevertheless useful
for travelers in the past.
Modeling strategy:
The term "life-system" was introduced by Clark et al. (1967). Later, Berryman (1981)
suggested another term "population system" which is definitely better. A short review of
the life-system methodology was published by Sharov (1991).
In the 1950s and 1960s there was a debate about population regulation between two
schools in population ecology. Agreement could not be reached because these
schools used different concepts of population dynamics. Andrewartha and Birch (1954)
used the factor-effect concepts whereas Nicholson (1957) used the factor-process
concept.
The factor-process concept works not only in population ecology but in any kind of
dynamic systems, e.g. in economic systems. Forrester (1961) formalized the factor-
process concept and applied it to industrial dynamics. Later this formalism became very
popular in ecology and is widely used especially in ecosystem modeling.
The model of Forrester is based on a tank-and-pipe analogy. The system is considered as
a set of tanks connected by pipes, with vents that regulate the "flow" of liquid from one
tank to another. The flow of "liquid" between tanks is considered as "material flow".
However there is also "information flow" that regulates the vents. Vents are equivalent
to processes; and the amount of liquid in a tank is a variable or a factor because it can
affect processes via information flow.
The figure above is the Forrester diagram for an insect population system with 4 stages:
eggs, larvae, pupae, and adults. Transition between these states (development) is
regulated by temperature. Influx of eggs depends on the number of adults. Mortality
occurs in all stages of development. Larval and pupal mortality is density-dependent.
Diagrams of Forrester can be easily transformed into differential equation models. Each
process becomes a term in a differential equation that determines the dynamics of the
variable. For example, the number of larvae in the figure above is affected by 3
processes: egg development ED(T), larval development LD(T), and larval mortality
LM(N). Egg and larval development rates are functions of temperature T; whereas
larval mortality is a function of larval numbers N. The equation for larval dynamics is
the following:

dL/dt = ED(T) - LD(T) - LM(N)
Here the term ED(T) is positive because egg development increases the number of
larvae ("liquid" influx). Terms LD(T) and LM(N) are negative because larval
development (molting into pupae) and mortality reduce the numbers of larvae.
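The balance equation above can be sketched numerically. Below is a minimal Euler-integration sketch; the functional forms of ED, LD, and LM (and all coefficients) are hypothetical illustrations, not from the lecture — only the structure (influx from eggs, outflux via molting and density-dependent mortality) follows the text.

```python
# Sketch of dL/dt = ED(T) - LD(T) - LM(N) for the larval stage.
# All functional forms and coefficients below are assumed for illustration.

def ED(T):
    """Influx of new larvae from egg development at temperature T (assumed form,
    with a fixed hypothetical pool of 100 eggs)."""
    return 0.01 * T * 100.0

def LD(T, L):
    """Outflux of larvae molting into pupae, temperature-dependent (assumed form)."""
    return 0.005 * T * L

def LM(L):
    """Density-dependent larval mortality (assumed quadratic in larval numbers)."""
    return 0.0005 * L * L

def simulate(days=30.0, dt=0.1, T=20.0, L0=10.0):
    """Explicit Euler integration of the larval balance equation."""
    L = L0
    for _ in range(int(days / dt)):
        L += (ED(T) - LD(T, L) - LM(L)) * dt
    return L

L_final = simulate()   # approaches the equilibrium where influx = outflux
```

With these assumed coefficients the larval numbers settle near the equilibrium where the egg influx balances molting plus mortality.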
1. Distinction between information and material flows was not clear because
information cannot be transferred without any matter. For example: egg or seed
production is not only an information flow, there is a flow of matter too.
2. Only one type of process is considered, in which "fluid" moves from one tank
to another. This is a good representation of organisms changing their state.
However, it is impossible to apply the model to the processes that involve two or
more participants. For example, when a parasite enters a host then it is
impossible to make a "pipe" that starts from two "tanks": host and parasite, and
ends in the "tank" of "parasitized host".
3. Only one level of processes is considered. In some cases it is important to
consider processes at two spatial levels, e.g., the dynamics of phytophagous
insects can be considered within a host plant and within a population of host
plants.
Other models have been developed to represent the factor-process concept. Petri nets
consider interactions of one, two, or more participants. However, there is no generic
model without limitations.
Components of the system will be called factors because they affect system dynamics.
The state of the component is the value of the factor. Factors are considered with as
many details as necessary for understanding the system's dynamics.
Examples of factors:
Any detectable change in the population system is considered as an event. Events can be
classified according to the components involved in these events. The process can be
defined as a class of identical events. The rate of a process can be measured by the
number of events that occur in the system per unit time. Also, the specific rate of
processes is often used which is the number of events per time unit per one organism
involved in the event.
Examples of processes
• The birth of an organism is an event. The birth rate is the number of births per
unit time. The specific birth rate (= reproduction rate) is the birth rate per 1
female (or per 1 parent).
• Death of an organism is an event. Mortality on a specific stage from a specific
cause (e.g., parasitism, predation, starvation) is a process. Mortality rate is the
number of deaths per unit time. Specific mortality rate is the number of deaths
per unit time per organism.
• Other processes are: growth, development, consumption of resources, dispersal,
entering diapause, etc.
The value of a factor may change due to multiple processes. For example, the number of
organisms on a specific stage changes due to development (entering and exiting this
stage), dispersal, and mortality due to predation, parasitism, and infection.
It is dangerous to judge the role of factors (e.g., biotic vs. abiotic) from life-tables.
For example, a life-table may show that 90% mortality of an insect is caused by
parasitism. This may lead to an erroneous conclusion that parasites rather than weather
are most important in changing population density. Yet it may turn out that the
synchrony between the life cycles of the host and parasite depends primarily on weather.
Life tables show various mortality processes in the population but they do not indicate
the role of factors. To analyze the role of factors, it is necessary to vary these factors
experimentally and examine how they affect various mortality processes. These
experiments may show, for example, that the rate of parasitism depends on the density
of parasites, density of hosts, and temperature. To understand population dynamics, it is
necessary to know both the effect of factors on the rate of various processes, and the
effect of processes on various ecological factors (e.g., on population density). This
information is integrated via modeling.
References
Berryman A. A. (1981) Population systems: a general introduction. New York: Plenum
Press.
Clark L. R., P. W. Geier, R. D. Hughes, and R. F. Morris (1967) The ecology of
insect populations. London: Methuen.
Sharov, A. A. 1992. Life-system approach: a system paradigm in population ecology.
Oikos 63: 485-494.
Petri Nets
A more universal factor-process model was developed by Petri (1962). He introduced a
class of nets which later were named "Petri nets". These nets have two types of
components: positions and transitions (positions = factors, transitions = processes).
For example, in the figure above, when transition t1 is fired, 1 token is removed
from position p1, 1 token is removed from position p2, and 1 token is added to position
p3. When transition t2 is fired, 1 token is removed from position p3, and 1 token is
added to position p2. Transition t1 can be interpreted as feeding and growth, and
transition t2 as reproduction.
Petri nets can be used to simulate complex ecological interactions within a population
and among populations. The figure below is an example of a population with sexual
reproduction. More examples can be found in Sharov (1991).
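The token-firing rule described above is easy to sketch in code. Below is a minimal simulator for the two-transition example from the text (t1 = feeding and growth, t2 = reproduction); the initial marking is an assumption for illustration.

```python
# Toy Petri-net simulator for the two-transition example in the text:
# t1 removes one token from p1 and one from p2 and adds one to p3;
# t2 removes one token from p3 and adds one to p2.
# A transition is enabled when every input position holds at least one token.

transitions = {
    "t1": {"inputs": ["p1", "p2"], "outputs": ["p3"]},
    "t2": {"inputs": ["p3"], "outputs": ["p2"]},
}

def enabled(marking, t):
    return all(marking[p] >= 1 for p in transitions[t]["inputs"])

def fire(marking, t):
    """Fire transition t, returning the new marking."""
    if not enabled(marking, t):
        raise ValueError(f"{t} is not enabled")
    m = dict(marking)
    for p in transitions[t]["inputs"]:
        m[p] -= 1
    for p in transitions[t]["outputs"]:
        m[p] += 1
    return m

m0 = {"p1": 5, "p2": 1, "p3": 0}   # hypothetical initial marking
m1 = fire(m0, "t1")                # feeding/growth
m2 = fire(m1, "t2")                # reproduction
```

Firing t1 and then t2 moves one token from p1 through p3 back into p2, mirroring the token bookkeeping described in the text.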
The rate of processes is not defined within a Petri net; it should be specified separately.
The rate of chemical reactions is usually defined by mass action law. Mass action law is
used in many population ecology models (exponential growth, Lotka-Volterra eqns.).
However, there are many more cases in ecology than in chemistry in which the mass action
law does not work. In these cases, the rate equations are defined in some different way.
Examples of rate equations:

V = k (the rate is constant)
V = kx (the rate is proportional to the value of factor x)
Effect of other factors (temperature, age, sex, etc.) can be expressed as variation of
coefficient k. As a result, k will become a function of several variables.
References
Sharov, A.A. 1991. Self-reproducing systems: structure, niche relations and evolution.
BioSystems, 25: 237-249.
1.2. Can you define what is a "good" model and what is a "bad" model? Be careful in
your definitions: it may happen that the class of "good" models will be empty!
1.4. What ecological characteristics of organisms change during the life cycle? Consider
several examples: a bacterium, an insect, a mammal, and a plant.
1.5. What are ecological differences between males and females?
1.6. Is it possible to find similarity in variability?
1.7. Describe elements of the temporal and spatial structures in a population system.
1.8. Why are factor-effect models not appropriate for studying population systems?
1.9. Describe a population system that you wish to study in the future (it may be a
natural or a laboratory population). Specify factors and processes and their interactions.
Draw a diagram with factors in rectangles and processes in ovals, show interactions
with arrows.
1.10. Draw a Petri net for 3-level trophic interactions: an autotrophic population (algae
or plant) consumes several mineral components from soil (or water), the second
population feeds on that autotrophic population, and the third population feeds on the
second one. All three populations should grow in numbers.
1.11. Write differential equations for the following Petri nets assuming that the rates of
all processes correspond to the mass action law:
Lecture 2. Estimation of population density and size
2.1. Censusing a whole population
This method works only if organisms can be easily observed, their numbers are not too
large, and the area is well bounded and reasonably small. Examples: trees in a small
isolated forest; all bird nests censused in New York State (see Pielou 1977).
Migrating populations can be counted using aerial photography. This method is used
when the population has seasonal migration. Examples: sandhill cranes and caribou.
Traditionally, random sampling plans were preferred over systematic sampling plans
because random sampling helped to avoid subjective selection of sample locations.
However, systematic sampling has no elements of subjectivity if sample location is
selected prior to examining the area. For example, there are no subjective decisions if we
sample every tenth potato plant and count Colorado potato beetles.
Moreover, systematic sampling has an advantage over random sampling if the number
of samples is large because of more uniform coverage of the entire sampling area. It is
especially important for making population maps. Random sampling can be used if the
objective is to estimate the mean population density and the number of samples is not
large (<100).
Preferential sampling of specific areas (e.g., high-density areas) was always considered
unacceptable. However, modern geostatistical methods and stratified sampling can take
advantage of preferential sampling. This shows that the methodology of sampling
evolves and old textbooks may give obsolete recipes.
Traditional statistical methods include estimation of the mean population density (M),
standard deviation (S.D.), and standard error (S.E.), which is the standard deviation of
the sample mean.
The equation for standard error is derived assuming that all samples are independent.
This is a very strong assumption which is unrealistic in many situations. Samples
separated by small distance are often positively correlated. Before using standard
statistics it is important to test if samples are correlated. Spatial correlations are
examined using geostatistics. The simplest geostatistical test for spatial autocorrelation
is the omnidirectional correlogram:

r(h) = Σ (z1 - Mh)(z2 - Mh) / (Nh · sh^2)
where z1 and z2 are organism numbers in two samples separated by lag distance h,
summation is performed over all pairs of samples separated by distance h; Nh is the
number of pairs of samples separated by distance h; Mh and sh are the mean and the
standard deviation of samples separated by distance h (each sample is weighted by the
number of pairs of samples in which it is included).
The range of correlogram is the lag distance h at which correlation reaches (or becomes
close to) zero. Standard statistics can be applied only if inter-sample distance exceeds
the range of the correlogram.
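For a one-dimensional transect of samples, the omnidirectional correlogram can be estimated directly from the definition above. A minimal sketch with hypothetical counts; as in the text, each sample value is weighted by the number of pairs it enters when computing Mh and sh.

```python
import math

def correlogram(z, h):
    """Omnidirectional correlogram for a 1-D transect of counts z at integer lag h.
    Mh and sh are the mean and standard deviation over all values entering pairs,
    each value counted once per pair it appears in (as in the text)."""
    pairs = [(z[i], z[i + h]) for i in range(len(z) - h)]
    nh = len(pairs)
    vals = [v for p in pairs for v in p]      # one entry per pair membership
    mh = sum(vals) / len(vals)
    sh = math.sqrt(sum((v - mh) ** 2 for v in vals) / len(vals))
    return sum((a - mh) * (b - mh) for a, b in pairs) / (nh * sh * sh)

z = [2, 3, 2, 4, 5, 4, 6, 7, 6, 8]            # hypothetical transect counts
r1 = correlogram(z, 1)                        # lag-1 autocorrelation
```

With this trending series the lag-1 correlation is strongly positive, so adjacent samples would not be independent and the standard error formula would not apply.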
Confidence interval (c.i.) is the interval where the population mean can be found with
probability of (1 - P), where P is error probability (e.g., P = 0.05). The number of
degrees of freedom d.f. = N - 1 (one d.f. goes for estimation of sample mean).
Accuracy of population estimates is measured as:

A = S.E. / M
There is an empirical rule that precision should be below 0.05 (or 0.1). However, this
rule is not universal. The only thing that matters in statistics is testing hypotheses. If the
null-hypothesis is rejected, then it does not matter whether A was above or below 0.05.
However, in each specific research area, it is useful to find a precision level which is
usually sufficient for rejecting null-hypotheses.
Example: An insect pest population should be suppressed if its density exceeds the
economic injury level (EIL). A null-hypothesis is tested that the average density M is
equal to EIL. If EIL is within the c.i. for M, then the null-hypothesis cannot be rejected
and no decision can be made. In this case, more samples should be taken. If the EIL is
outside of the c.i., then null-hypothesis is rejected, and population is suppressed if M >
EIL, or not suppressed if M < EIL.
Two-step sampling
The number of samples, N, required to achieve specific accuracy level can be estimated
from the equations for standard error (S.E.) and accuracy (A):

S.E. = S.D. / sqrt(N)
A = S.E. / M
N = [S.D. / (A · M)]^2

where M is the sample mean and S.D. is the standard deviation. Here the third equation
is derived from the first two.
Standard deviation, S.D., is usually not known before sampling. Thus, the first step is to
take N1 samples and to estimate N using the equation above. Then, at the second step,
take N2 = N - N1 additional samples.
Taking samples in two steps is possible only if population numbers don't change
between two sampling dates.
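The required sample size follows directly from the equations above. A small sketch; the first-step numbers (N1 = 30, M = 2.0, S.D. = 3.0) are assumptions for illustration, not data from the lecture.

```python
def required_samples(sd, mean, accuracy):
    """N = (S.D. / (A * M))**2, derived from S.E. = S.D./sqrt(N) and A = S.E./M."""
    return (sd / (accuracy * mean)) ** 2

# Hypothetical first-step estimates (assumed for illustration):
n1 = 30
mean, sd = 2.0, 3.0
n_total = required_samples(sd, mean, accuracy=0.1)   # total N needed
n_second_step = max(0, round(n_total) - n1)          # samples still to take
```

Here 225 samples are needed in total, so 195 more would be taken at the second step.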
Sequential sampling
The main idea of sequential sampling is to take samples until some condition (which is
easy to check) is met.
The first example is the sampling plan targeted at achieving specific accuracy. It is
based on Taylor's power law:

S.D.^2 = a · M^b
Coefficients a and b can be estimated using linear regression from several pairs of M
and S.D. estimated in different areas with different average population density.
Combining the two previous equations we get:

N = a · M^(b-2) / A^2

The mean (M) equals the total number of recorded individuals (S) in all samples divided
by the number of samples (N). Now, we substitute M by S/N and solve this equation for
S:

S = (A^2 · N^(b-1) / a)^(1/(b-2))
Stop-lines for accuracy levels of A = 0.1; 0.07; and 0.05 are plotted below:
The blue line shows the total number of captured individuals in all samples. Sampling
terminates when this line crosses the stop line for selected accuracy level.
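The stop line S(N) can be computed by combining Taylor's power law with the accuracy equation, as above. A sketch; the Taylor coefficients a and b below are hypothetical.

```python
def stop_line(n, a, b, accuracy):
    """Cumulative-count stop line S(N) derived from Taylor's power law
    S.D.^2 = a * M^b: sampling stops when the total count S over N samples
    reaches S(N) = (A^2 * N**(b-1) / a) ** (1/(b-2))."""
    return (accuracy ** 2 * n ** (b - 1) / a) ** (1.0 / (b - 2.0))

# Hypothetical Taylor coefficients (assumed for illustration):
a, b = 2.0, 1.5
s10 = stop_line(10, a, b, accuracy=0.1)    # threshold after 10 samples
s100 = stop_line(100, a, b, accuracy=0.1)  # lower threshold after 100 samples
```

With b < 2 the threshold falls as more samples accumulate, so sampling stops once the cumulative count crosses the descending stop line.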
The second example is the sequential sampling plan used for decision-making in pest
management. This method was developed by Waters (1955; Forest Sci. 1:68-79). It is
described in Southwood (1978).
Here the blue line again shows the total number of captured individuals in all samples.
While the blue line is between magenta inclined lines, sampling continues. If the blue
line crosses the upper magenta line, then sampling stops and pesticides are applied
against the pest population. If the blue line crosses the lower magenta line, then
sampling stops and pesticides are not applied.
Deriving the solution of this problem is too complicated, so we will consider only the
final result.
If the population has a negative binomial distribution (see next lecture), then stop lines
correspond to the linear equation:
where:
1. Estimation of correlogram
2. Estimation of parameters of the correlogram model
3. Estimation of the surface (=map) using point kriging, or
4. Estimation of mean values using block kriging
Detailed description of most geostatistical methods can be found in Isaaks and
Srivastava (1989). Here we will discuss only the most important elements of
geostatistics.
Estimation of Correlogram
Correlogram is a function that shows the correlation among sample points separated by
distance h. Correlation usually decreases with distance until it reaches zero.
Correlogram is estimated using the equation:

r(h) = Σ (z1 - Mh)(z2 - Mh) / (Nh · sh^2)
where z1 and z2 are organism numbers in two samples separated by lag distance h,
summation is performed over all pairs of samples separated by distance h; Nh is the
number of pairs of samples separated by distance h; Mh and sh are the mean and the
standard deviation of samples separated by distance h (each sample is weighted by the
number of pairs of samples in which it is included).
Notes:
1. Exponential model:

γ(h) = c1 · [1 - exp(-3h/a)]

2. Spherical model:

γ(h) = c1 · [1.5(h/a) - 0.5(h/a)^3] if h ≤ a;  γ(h) = c1 if h > a
where c1 is sill, and a is range. These parameters can be found using the non-linear
regression.
Estimation of the surface (=map) using point kriging (ordinary
kriging)
The value z'o at unsampled location 0 is estimated as a weighted average of sample
values zi at locations i around it:

z'o = Σ λi · zi
Weights depend on the degree of correlations among sample points and estimated point.
The sum of weights is equal to 1 (this is specific to ordinary kriging):

Σ λi = 1
Weights are estimated individually for each point in a regular spatial grid using the
system of linear equations:

Σj (λj · ρij) + μ = ρi0    (i = 1, ..., n)
Σj λj = 1

where μ is the Lagrange parameter; ρij is the correlation between points i and j, which is
estimated from the variogram model using the distance, h, between points i and j; 0 is the
estimated point; 1,...,n are sample points.
Now, weights are found, and thus, it is possible to estimate the value z'o. When these
values are estimated for all points in a regular grid, then we get a surface of population
density.
Point-to-block correlation is the average correlation between sampled point i and all
points within the block (in practice, a regular grid of points within the block is used, as
shown in the figure).
where ρ̄ is the average correlation within the block (the average correlation between all
pairs of grid nodes within the block).
Advantages of kriging:
References
Isaaks, E. H. and R. M. Srivastava. 1989. An Introduction to Applied Geostatistics.
Oxford Univ. Press, New York, Oxford. (a very good introductory textbook)
Deutsch, C. V. and A. G. Journel. 1992. GSLIB. Geostatistical software library and
user's guide. Oxford Univ. Press, Oxford. (software code written in FORTRAN; I have
translated a portion of this library into the C-language).
In a stratified sampling program, the area (volume) is subdivided into 2 or more portions
which are sampled separately. Example: pine sawflies prefer to spin their cocoons close
to the tree; thus, the area adjacent to trees (within 1 m radius) can be sampled separately
from the rest of the area.
The first two conditions are often unrealistic, and thus several modifications of this
method have been developed that loosen these conditions. Perhaps the most popular is
the Jolly-Seber method which requires capturing and marking of animals at regular time
intervals. Animals, marked and released each time, should have different marks so that
it is possible to distinguish between individuals marked on different dates. The algorithm
is given in Southwood (1978).
The Jolly-Seber method gives an estimate of population size on each specific date; the first
condition can be violated. However, the second condition is still required. It is also
possible to estimate mortality+emigration rate and birth+immigration rate on each
specific day. These rates are assumed to be constant for all individuals (including
marked individuals).
There are numerous other models for capture-recapture experiments, which are specific
for a particular population. For example, the age structure of the population may be
important, or some individuals may have a higher probability of being caught than others.
Another problem arises if the population has no boundaries. In this case, a grid of traps
can be established, and only the central portion of the grid is used for analysis (because
traps near the edges may be influenced by migration). The area covered by the grid
should be much larger than the average distance of animal dispersal.
Because the biology of different species is variable, it may be necessary to modify the
capture-recapture model.
Removal method
Removal method is based on intensive trapping of animals in an isolated area.
Migration is prevented by some kind of barriers. It is assumed that there are no births or
natural deaths of organisms. The proportion of animals captured each day, a, is the same.
Therefore, population numbers and the number of captured individuals decline
exponentially:

Ct = a · N0 · (1 - a)^(t-1)

where Ct is the number of animals captured in time interval t, and N0 is the initial
population size.
Pielou (p. 127) used a different method for estimating parameters, based on the
analysis of the first 2 time intervals only. For example, if captures in the first 2 time
intervals were 29 and 18 animals, then
a = (29-18)/29 = 0.38.
This model can be generalized assuming the recruitment of organisms (e.g., emergence
of adult insects from the soil).
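Pielou's two-interval calculation extends to a population estimate under the same constant-capture-fraction assumption: since C1 = a·N0, the initial population size is N0 = C1/a. A minimal sketch using the capture numbers from the example above:

```python
def removal_estimate(c1, c2):
    """Two-interval removal estimate (after Pielou): with a constant daily
    capture fraction a, catches decline geometrically (C2/C1 = 1 - a), so
    a = (C1 - C2)/C1, and the initial population size is N0 = C1/a."""
    a = (c1 - c2) / c1
    return a, c1 / a

a, n0 = removal_estimate(29, 18)   # captures from the example in the text
```

This gives a ≈ 0.38, matching the text, and an initial population of roughly 76 animals.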
Linear regression:
y' = a + bx
The least square method is most often used to draw the "best" line through a cloud of
points. This method adjusts the values of regression parameters (a and b) so that the
residual sum of squares (=sum of square deviations of points from the line) reaches a
minimum.
The least square method means that we find such parameter values a and b that the
residual sum of squares Q = Σ (yi - a - b·xi)^2 is minimized. It follows from calculus
that the derivatives at the minimum point are equal to zero:

∂Q/∂a = 0,  ∂Q/∂b = 0

R-square is R^2 = 1 - SSresidual / SStotal.
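Setting the two derivatives to zero yields the familiar closed-form solution, which can be sketched directly; the data points below are hypothetical.

```python
def least_squares(xs, ys):
    """Closed-form least-squares estimates for y' = a + b*x, obtained by
    setting the derivatives of Q = sum((y - a - b*x)**2) to zero."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def r_square(xs, ys, a, b):
    """R^2 = 1 - SS_residual / SS_total."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

xs = [1.0, 2.0, 3.0, 4.0, 5.0]          # hypothetical data
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = least_squares(xs, ys)
r2 = r_square(xs, ys, a, b)
```

For this nearly linear data the fit is close to y' = 0.05 + 1.99x with R^2 near 1.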
Polynomial regression
This is a non-linear function, but least square estimation leads to a system of linear
equations. Thus, this regression is analyzed by a linear method. This is NOT a non-
linear regression!
Note: use step-wise regression when you test the significance of non-linear terms in the
polynomial. The effect is significant if the increment of R-square is large enough
according to the F-statistic:

F = [(R²₂ - R²₁) / (df₂ - df₁)] / [(1 - R²₂) / (N - df₂ - 1)]

where, in the numerator, there is the difference between R-squares estimated in two
consecutive steps (R²₁ and R²₂), and df₁ and df₂ are the corresponding degrees of
freedom (d.f. = number of regression coefficients minus 1). The difference df₂ - df₁ is
equal to 1 because one term is added at a time.
Nonlinear regression
Nonlinear regression is estimated numerically. Residual sum of squares is a function of
model parameters. Thus the minimum can be found as a lowest point on the response
hyper-surface:
Danger: you can end up in a local minimum (see the figure above). To avoid it, try to
start from various initial conditions.
It is desirable that the equation represents some theoretical model of a real system. Then
regression coefficients have biological interpretation.
p0 = exp(-M)

where M is the mean number of individuals per plant. It is clear that the mean density,
M, can be estimated as the negative logarithm of the proportion of uninfested plants,
p0:

M = -ln(p0)
An alternative theoretical model can be derived from the assumption that beetles are
aggregated on host plants and that their distribution is negative binomial. The zero term
of the negative binomial distribution is:

p0 = (1 + M/k)^(-k)

where k is the aggregation parameter. To test which model is better, it is necessary to
use the non-linear regression and then to compare R-square.
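Both zero-term models are one-line computations. A sketch; the proportion of uninfested plants p0 = 0.3 and the aggregation parameter k = 0.5 below are assumed values for illustration.

```python
import math

def mean_from_zero_term(p0):
    """Poisson zero-term estimate of mean density: p0 = exp(-M) => M = -ln(p0)."""
    return -math.log(p0)

def nb_zero_term(m, k):
    """Zero term of the negative binomial distribution: p0 = (1 + M/k)**(-k)."""
    return (1.0 + m / k) ** (-k)

m = mean_from_zero_term(0.3)       # mean density implied by 30% uninfested plants
p0_nb = nb_zero_term(m, k=0.5)     # zero term at the same mean, under aggregation
```

At the same mean density, an aggregated (negative binomial) population leaves a larger proportion of plants uninfested than a random (Poisson) one.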
2.2. The weed is more abundant at the edges of the field. Weed biomass is sampled at
random locations within the field. Can we use the equation

S.E. = S.D. / sqrt(N)

to estimate the standard error of mean weed biomass per sq.m.? If not, then what
options do we have?
2.4. Density of the Colorado potato beetle should be estimated with accuracy A = 0.1
(10%). Preliminary sampling (N = 30) indicated: M = 0.5 beetles/plant, SD = 0.9
beetles/plant. How many additional samples do you need?
2.5. Solve the ordinary kriging system for two sample points:
z are variable values at sample points; estimate z-value at the estimation point.
2.6. Sampling of the European pine sawfly was performed separately within 1 m radius
around trees and outside of these circles using 0.25 sq.m. squares:
Estimate average sawfly density and its standard error in the entire area
2.7. Estimate the size of the population of perch in a pond from capture-recapture data:
284 fish were netted, marked and released; 1392 fish were caught after 2 days, and 86 of
them were found to be marked. (use Lincoln index)
Lecture 3. Spatial distribution of organisms
Simplest example: 100 people are fishing in the same lake for the same time (e.g. 3 h);
they have equal probability to catch a fish per unit time.
Question: How many fishers catch 0, 1, 2, 3 etc. fish?
Fish caught   Number of fishers   Proportion   Expected (Poisson)
0             11                  0.11         10
1             25                  0.25         23
2             21                  0.21         27
3             25                  0.25         20
4             9                   0.09         12
5             7                   0.07         5
6             2                   0.02         2
7             0                   0.00         1
Mean number of fish captured by 1 fisher, M = 2.30, and standard deviation, SD = 1.41.
1. Method of moments (m = M)
2. Non-linear regression (iterative approximation)
where n(i) is the sample distribution (e.g., the number of fishers that captured i fish),
and n'(i) is the theoretical distribution (e.g., the expected number of fishers that
captured i fish according to the Poisson distribution). In our example,
df = 7 - 2 = 5
Note. The chi-square test cannot prove that the sample distribution is the same as the
theoretical distribution! If there are no significant differences, it may mean two things:
the sample distribution is really very close to the theoretical distribution, or there may be
just not enough data to distinguish these distributions. Suggestion: use multiple hypotheses, i.e.,
compare sample distribution with several theoretical distributions.
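The chi-square computation for the fisher example can be sketched as follows. Here the mean is re-computed from the tabulated counts, and the last class is pooled ("7 or more") so that the expectations sum to the number of fishers; the resulting statistic is illustrative, not the exact value from the lecture.

```python
import math

def poisson_pmf(i, m):
    """Poisson probability of observing i events with mean m."""
    return math.exp(-m) * m ** i / math.factorial(i)

observed = [11, 25, 21, 25, 9, 7, 2, 0]              # fishers catching 0..7 fish
n = sum(observed)                                    # number of fishers
m = sum(i * o for i, o in enumerate(observed)) / n   # sample mean catch

# Expected numbers of fishers under a Poisson distribution with mean m;
# the last class is pooled as "7 or more" so expectations sum to n.
expected = [n * poisson_pmf(i, m) for i in range(7)]
expected.append(n - sum(expected))

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

The observed and expected columns are then compared class by class, and chi2 is referred to the chi-square table with the appropriate degrees of freedom.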
Thus, the proportion of samples in which i organisms were found will correspond to the
Poisson distribution.
Poisson distributions are asymmetric at low mean values, and almost symmetric at
higher mean values:
For a random distribution, CD = 1 and k is infinite. As k increases to infinity, the NBD
coincides with the Poisson distribution.
2. Mean crowding (Lloyd 1967) is equal to the mean number of "neighbors" in the
same quad:

m* = Σ x(x-1) / Σ x

Quad    Count, x    x-1    x(x-1)
1       5           4      20
2       3           2      6
3       0           -1     0
4       1           0      0
5       7           6      42
Total   16          -      68

In this example, m* = 68/16 = 4.25.
Note: mean crowding makes biological sense only if the size of each quad corresponds to
the "interaction distance" among individuals.
3. Negative binomial k. This is not a good index because usually it is not density-
invariant.
Any index of aggregation can be plotted against quad size. However, there are
specialized indexes designed for multiple quad sizes. For example, ro index (Iwao
1972) was defined as:
where subscripts 1,2,...i stand for successively increasing sizes of quads. Ro index is
used to determine characteristic distances in a spatial distribution:
In practice, it is difficult to find enough sample points which are separated by exactly
the same lag vector h. Thus, the set of all possible lag vectors is usually partitioned into
classes:
Vectors that end in the same cell are grouped into one class and correlogram value is
estimated separately for each class. The number of directions may be different (4, 8, 16,
etc.)
r(h) = Σ (z-h - M-h)(z+h - M+h) / (Nh · s-h · s+h)

where indexes -h and +h refer to sample points located at the tail and head of vector h;
z-h and z+h are organism counts in samples separated by lag vector h; summation is
performed over all pairs of samples separated by vector h; Nh is the number of pairs of
samples separated by vector h; M-h and M+h are mean values for samples located at the
tail and head of vector h; s-h and s+h are standard deviations of samples located at the
tail and head of vector h.
The correlogram, covariance function, and variogram are all related. If the population
mean and variance are constant over the sampling area (there is no trend) then:

r(h) = C(h) / C(0)
γ(h) = C(0) - C(h)

where C(0) is the covariance at zero lag = variance = squared standard deviation.
Interpretation of the nugget effect: It shows the pure random variation in population
density (white noise) or it may be associated with sampling error.
Usually, a series of thresholds is used, and variograms are estimated for all of them. If
one threshold has to be selected, then the best is to take the median threshold m which
corresponds to the 50% cumulative probability distribution.
Examples of fractals were known to mathematicians for a long time, but the notion was
formalized by Mandelbrot (1977).
It is not trivial to count the number of dimensions of a geometric figure. A geometric
figure can be defined as an infinite set of points with distances specified for each pair of
points. The question is how to count the dimensions of such a figure. Hausdorff
suggested counting the minimum number of equal spheres (circles in the picture below)
that cover the entire figure.
The number of spheres, n, depends on their radius, r, and the dimension was defined as
the limit of ln(n) / ln(1/r) as the radius r approaches zero.
"Normal" geometric figures have integer dimensions: 1 for a line, 2 for a square, 3 for a
cube. However, fractals have FRACTIONAL dimensions, as in the example below:
Here we use rather large circles, and thus, the precision is not high. For example, we got
D=2.01 for a square instead of D=2.
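The covering argument can be sketched numerically. Below is a minimal Python illustration (not from the original text) that estimates a dimension from covering counts at two scales; the counts used for the Koch curve (4^k segments coverable by circles of radius (1/3)^k) are standard values.

```python
import math

def box_dimension(n1, r1, n2, r2):
    """Hausdorff-style estimate from covering counts at two radii:
    D = ln(n2/n1) / ln(r1/r2)."""
    return math.log(n2 / n1) / math.log(r1 / r2)

# A square needs 4x as many circles when the radius is halved: D = 2.
square_D = box_dimension(100, 1.0, 400, 0.5)

# Koch curve: 4^k segments coverable by circles of radius (1/3)^k,
# so D = ln 4 / ln 3, a fractional dimension.
koch_D = box_dimension(4, 1 / 3, 16, 1 / 9)
```

With exact counts the estimates are exact; with circles counted from a picture (as above), the precision is limited, which is why the text obtained D = 2.01 for a square.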
Below is the Mandelbrot set which is also a fractal:
Fractal dimension, D, is related to the slope of the variogram plotted in log-log scale, b:
3.3. Colorado potato beetles were counted on 30 potato plants, and there was no
significant difference between the actual distribution of beetles and either the Poisson
or the negative binomial theoretical distribution. How can we decide whether their
distribution is random or aggregated? If additional sampling is required, how do we
determine the additional number of samples?
3.4. Taylor's power law for the number of fleas on rats is: ln(s²) = 0.1 + 1.5 ln(m). A
total of 17 fleas were found on 10 rats. How many rats should be examined to estimate
flea abundance with a 10% accuracy?
3.5. What is the relation between the correlogram, covariance function, and variogram?
3.6. Estimate the fractal dimension of the Sierpinski carpet:
where y is the predicted variable (e.g., population density); Xi are factors; and bi are
parameters which can be found using regression analysis.
Geostatistics (kriging, see Lecture 2) is a better method for spatial modeling than
standard regression. It is possible to use 3-dimensional kriging (2 space coordinates
and 1 time coordinate) for spatio-temporal models.
The mechanisms that cause correlations between population density and factors are not
represented in these models. Prediction is based solely on the previous behavior of the
system. If in the past the system exhibited specific behavior in particular situations, then
it will most probably behave in a similar way in the future. Statistical modeling is
based on the assumption that system behavior remains the same. Thus, these models
have very limited value for predicting new behaviors of the system. For example, if we
develop a new method for controlling pest populations or a new strategy for the
conservation of some species, then stochastic models based on the previous dynamics of
these populations may be misleading.
In some cases stochastic models may help to understand some mechanisms of
population dynamics. For example, if insect population density increases in years with
hot and dry summer, then it is possible that insects of this species have low mortality in
these years, possibly due to reduced resistance of host plants. Of course, these
hypotheses require thorough testing.
First, I will remind you of the basics of the analysis of variance (ANOVA).
Total sum of squares (SST) is the sum of squared deviations of individual
measurements from the mean. The total sum of squares is a sum of 2 portions:
(1) Regression sum of squares (SSR) which is the contribution of factors into the
variance of the dependent variable, and
(2) Error sum of squares (=residual sum of squares) (SSE) which is the stochastic
component of the variation of the dependent variable.
SSR is the sum of squared deviations of predicted values (predicted using regression)
from the mean value, and SSE is the sum of squared deviations of actual values from
predicted values.
The F-statistic is F = [SSR / df(SSR)] / [SSE / df(SSE)], where
df(SSR)= g - 1 is the number of degrees of freedom for the regression sum of squares
which is equal to the number of coefficients in the equation, g, minus 1;
df(SSE)= N - g is the number of degrees of freedom for the error sum of squares which
is equal to the number of observations, N, minus the number of coefficients, g;
df(SST) = df(SSR) + df(SSE) = N - 1 is the number of degrees of freedom for the total
sum of squares.
The null hypothesis is that the factors have no effect on the dependent variable. If this is
true, then the total sum of squares is approximately equally distributed among all
degrees of freedom. As a result, the fraction of the sum of squares per degree of
freedom is approximately the same for the regression and error terms, and the
F-statistic is approximately equal to 1.
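As a sketch of the computation (illustrative Python code with made-up data, not from the lecture), the sums of squares and the F-statistic for a simple linear regression can be obtained as follows:

```python
# ANOVA decomposition for a simple linear regression
# (hypothetical data; g = 2 coefficients: intercept and slope).
def anova_f(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((u - mx) * (v - my) for u, v in zip(x, y)) / sum((u - mx) ** 2 for u in x)
    a = my - b * mx
    pred = [a + b * u for u in x]
    ssr = sum((p - my) ** 2 for p in pred)            # regression sum of squares
    sse = sum((v - p) ** 2 for v, p in zip(y, pred))  # error (residual) sum of squares
    sst = sum((v - my) ** 2 for v in y)               # total SS = SSR + SSE
    f = (ssr / 1) / (sse / (n - 2))                   # df(SSR) = g - 1, df(SSE) = N - g
    return ssr, sse, sst, f

ssr, sse, sst, f = anova_f([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 7.8, 10.1])
```

Because the data above lie almost exactly on a line, SSR takes nearly all of SST and F is far above 1.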
Now, the question is, how much should the F-statistic deviate from 1 to reject the null
hypothesis. To answer this question we need to look at the distribution of F assuming
the null hypothesis:
If the estimated (empirical) value exceeds the threshold value (which corresponds
to the 95% cumulative probability distribution), then the effect of all factors
combined is significant. (See tables of threshold values for P = 0.05, 0.01,
and 0.001.)
Note: In some statistical textbooks you can find a two-tailed F-test (the 5% area is
partitioned into two 2.5% areas at the right and left tails of the distribution). This is a
wrong method because a small F indicates that the regression performs too well
(sometimes suspiciously well). The null hypothesis is not rejected in this case! If F is
very small, then we may suspect some cheating in the data analysis; for example, this
may happen if too many data points were removed as "outliers". However, our objective
here is not to test for cheating (we assume no cheating). Thus we use a one-tailed F-test.
The F-distribution depends on the number of degrees of freedom for the numerator
[df(SSR)] and denominator [df(SSE)].
where SSR and SSR1 are regression sum of squares for the full and reduced models,
respectively; df(SSR) and df(SSR1) are degrees of freedom for the regression sum of
squares in the full and reduced models, respectively; SSE is the error sum of squares for
the full model; and df(SSE) is the number of degrees of freedom for the error sum of
squares.
df(SSR) - df(SSR1) = 1.
The F-statistic is related to the t-statistic when the numerator has only one degree of
freedom: F = t².
Thus, the t-statistic can be used instead of F in step-wise regression.
Example of the step-wise regression:
Full model y = a + bx + cx²; SSR = 53.2, SSE = 76.3, df(SSR) = 2, df(SSE) = 53.
Reduced model y = a + b x; SSR =45.7, SSE =83.8, df(SSR1)=1, df(SSE1)=54.
F=(53.2-45.7)53 / 76.3 = 5.21; t = 2.28; P<0.05.
Thus, the quadratic term is significant (non-linearity test).
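The same partial F-test can be sketched in a few lines of Python (the sums of squares are the ones from this example):

```python
import math

def partial_F(ssr_full, ssr_reduced, df_diff, sse_full, df_sse_full):
    """F-test for added terms: F = ((SSR - SSR1)/df_diff) / (SSE/df(SSE))."""
    return ((ssr_full - ssr_reduced) / df_diff) / (sse_full / df_sse_full)

# Values from the step-wise regression example above.
F = partial_F(53.2, 45.7, 1, 76.3, 53)
t = math.sqrt(F)   # valid because the numerator has 1 degree of freedom
```

This reproduces F = 5.21 and t = 2.28 as in the text.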
The first step is to log-transform the data. When predicting population density, log-
transformation is always better than no transformation.
Predictor t-ratio P
Xt-1 0.25 0.801 NS
R-square = 0.2%.
This means that our regression does not work any better than using average log-density.
Predictor t-ratio P
Xt-1 0.35 0.728 NS
Xt-2 2.73 0.010
R-square = 16.8%.
The effect of year t-1 is not significant, but the effect of year t-2 is significant. The non-
significant effect can be ignored. Thus, we can re-estimate the regression using year t-2
as the only predictor:
Predictor t-ratio P
Xt-2 2.75 0.009
R-square = 16.6%.
It is always necessary to plot the regression to see if there are any non-linear effects. It
seems that some non-linearity may be present (blue line). Let us test for quadratic
effects of year t-2.
Predictor t-ratio P
Xt-2 1.37 0.179 NS
(Xt-2)² 1.14 0.261 NS
R-square = 19.4%.
OK, there are no quadratic effects of year t-2.
Predictor t-ratio P
Xt-2 2.88 0.007
Xt-3 0.41 0.683 NS
R-square = 18.9%.
The model did not get significantly better after adding year t-3 as a predictor. Thus, we
cannot improve the model any further.
5. Plotting the residuals.
Now we can plot the residuals ( ) versus population
counts in year t-1 to test if there is any non-linear effect of year t-1.
Predicted population counts follow the same pattern as observed values. However,
predicted values have smaller variation because any regression has a "smoothing"
effect.
In the previous graph, we predicted population counts one year ahead at a time. Let's see
what happens if we try to predict the entire time series from two initial values. In this
case, the error will propagate because we will use predicted population counts as the
base for further predictions.
Predicted population counts exhibit damped oscillations. After a few oscillations, they
approach the equilibrium level of x=5.553.
This model cannot be used to predict population numbers more than 1-3 years ahead.
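The propagation of predictions can be illustrated with a minimal Python sketch. The slope b = -0.412 and the equilibrium 5.553 are the values quoted in this lecture; the intercept is the one they imply, and the two starting values are hypothetical.

```python
b = -0.412               # slope of x(t) on x(t-2), as quoted in the text
x_star = 5.553           # equilibrium log-density, as quoted in the text
a = x_star * (1 - b)     # intercept implied by that equilibrium

traj = [6.0, 4.5]        # two hypothetical initial values
for t in range(2, 40):
    traj.append(a + b * traj[t - 2])   # each prediction feeds the next one
```

Because |b| < 1 and b is negative, the trajectory shows exactly the damped oscillations described above, converging to 5.553.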
4.4. Autocorrelation of factors and model validation
A variable is called autocorrelated if its value at a specific place and time is correlated
with its values at other places and/or times. Spatial autocorrelation is a particular case of
autocorrelation. Temporal autocorrelation is also a very common phenomenon in
ecology. For example, weather conditions are highly autocorrelated within one year due
to seasonality. A weaker correlation exists between weather variables in consecutive
years. Examples of autocorrelated biotic factors are periodicity in food supply, in
predator or prey density, etc.
However, ecologists are mostly interested in proving that some factor helps to predict
population density in all data sets within a specific class of data sets. It turns out that
models may work well with the data to which they were fitted but show no fit to other
data sets obtained at a different time or in different geographic locations. To solve this
problem, the concept of validation was developed.
Example 1. In the 1960s and 1970s it was very popular to relate population dynamics to
the cycles of solar activity. Solar activity exhibits 11-yr cycles which seemed to coincide
with the cycles of insect population outbreaks and the population dynamics of rodents.
Most analyses were done using 20- to 40-yr time series. However, two independent
cyclic processes with similar periods may coincide very well over short time intervals.
When longer time series became available, it turned out that the periods of population
oscillations were usually smaller or greater than the period of the solar cycle. As a
result, the relationship between population density and solar activity may change its
sign on a larger time scale.
Example 2. Our fox model (section 4.3.) was developed by fitting the equation to the
time series. Thus, it is not surprising that it fits these data rather well. The question is,
will this model work if tested on an independent data set which was not used for fitting
the equation. We can separate the data into two portions, one of which is used for model
fitting and the other portion is used for model validation.
In our example, we selected the 22 odd years and used them for estimating the regression:
Now we test this equation using data for even years. The equation is a predictor of
population numbers; thus estimated values from the equation can be used as the
independent variable and actual population numbers are used as a dependent variable.
Then:
R-square = 0.0001
F = 0.0002
P = 0.96
This means that the equation derived from odd years did not help to predict
population dynamics in even years.
Conclusion: the model is not valid.
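A split-sample validation of this kind can be sketched as follows (hypothetical data; the point is fitting on one portion of the series and computing R-square on the other):

```python
def fit_line(x, y):
    """Least-squares fit y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((u - mx) * (v - my) for u, v in zip(x, y)) / sum((u - mx) ** 2 for u in x)
    return my - b * mx, b

def r_squared(y, pred):
    """R-square of predictions on an independent data set."""
    my = sum(y) / len(y)
    sse = sum((v - p) ** 2 for v, p in zip(y, pred))
    sst = sum((v - my) ** 2 for v in y)
    return 1 - sse / sst

# Hypothetical 20-year series: a trend plus regular deviations.
x = list(range(20))
y = [0.5 * t + (1 if t % 3 == 0 else -1) for t in x]

a, b = fit_line(x[:10], y[:10])        # fit on the first half
pred = [a + b * t for t in x[10:]]
r2_val = r_squared(y[10:], pred)       # validate on the second half
```

A model is considered valid only if R-square on the validation portion remains substantial, as it does for this artificial trend but did not for the fox model.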
Crossvalidation
The jackknife method is similar to crossvalidation, but its advantage is that it can be
used for testing the significance of a regression. Details of this method can be found in
Sokal and Rohlf (1981, Biometry). Let us consider the same example of the colored fox.
We need to test whether the effect of population density in year t-2 is significant. The
jackknife method is applied as follows:
Step 1. Each data point is left out from the analysis in turn, and the regression is
estimated each time (as in crossvalidation). When observation i is excluded we get
the equation: xt = ai + bi·xt-2. The slope bi characterizes the effect of population
density in year t-2 on the population density in year t. The slopes are different for
different i, and different from the slope estimated from all data points (b = -0.412). The
variability of bi reflects the accuracy of the estimated slope.
Year, t xt-2 xt i bi Bi
3 6.562 5.952 1 -0.430 0.286
4 7.318 4.343 2 -0.361 -2.429
5 5.729 6.562 3 -0.435 0.446
6 6.540 7.318 4 -0.530 4.184
7 4.813 5.729 5 -0.407 -0.611
8 5.131 6.540 6 -0.412 -0.429
9 6.605 4.813 7 -0.395 -1.099
10 4.525 5.131 8 -0.427 0.172
11 5.036 6.605 9 -0.409 -0.533
12 5.578 4.525 10 -0.426 0.11
13 6.657 5.036 11 -0.398 -0.962
Step 2. We need to determine if the slope significantly differs from 0 (in our case
significantly < 0). But the variation of bi is much smaller than the accuracy of the slope
because each slope was estimated from a large number of data points. Thus, we will
estimate pseudovalues Bi using the equation:
Bi = Nb - (N - 1)bi
where b = -0.412 is the slope estimated from all data, N is the number of observations,
and bi is the slope estimated from all observations except the i-th observation.
Step 3. The last step is to estimate the mean, SD, SE, and t for the pseudovalues:
M = -0.403
SD = 1.05
SE = 0.166
t = |M/SE| = 2.43
P = 0.02.
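The three jackknife steps can be sketched as a small Python function (illustrative; the bi values in the text come from the full fox series, which is not fully reproduced here, so the usage example below is synthetic):

```python
def jackknife(b_all, b_loo):
    """Pseudovalues Bi = N*b - (N-1)*bi, then their mean, SD, SE, and t."""
    n = len(b_loo)
    B = [n * b_all - (n - 1) * bi for bi in b_loo]
    m = sum(B) / n
    sd = (sum((x - m) ** 2 for x in B) / (n - 1)) ** 0.5
    se = sd / n ** 0.5
    return m, sd, se, abs(m / se)

# Synthetic check: leave-one-out slopes scattered around b = 2.0.
m, sd, se, t = jackknife(2.0, [1.9, 2.1, 2.0, 1.95, 2.05])
```

Note how the pseudovalues inflate the small leave-one-out variation back to the scale of a single observation, which is why they can be used in an ordinary t-test.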
The jackknife procedure is not a validation method, for the same reasons as
crossvalidation. However, the jackknife is less sensitive to the shape of the distribution
than standard regression, and thus it is more reliable.
This is a new method which can be used for validating correlations between population
density and other factors. However, this method cannot be used for autoregressive
models and thus we cannot apply it for the colored fox example.
This method was developed by Clifford et al. (1989, Biometrics 45: 123-134). The null
hypothesis is that variables X and Y are independent but each of them is autocorrelated.
For example, if we consider the relationship between population density and solar
activity, both variables are autocorrelated. The autocorrelation in population density
may result from the effect of solar activity or from other factors, e.g., interaction with
predators. In our null hypothesis we consider that solar activity has no effect on the
population density, but we allow autocorrelations in the population density.
where h is the temporal or spatial lag; rX(h) and rY(h) are the correlograms for
variables X and Y; and the weights, wh, are equal to the proportion of data pairs
separated by lag h.
Thus, to test the significance of the correlation between processes X and Y we need to
estimate the standard error using the equation above. If the absolute value of the
empirical correlation is larger than SE multiplied by 2 (the t-value), then the correlation
is significant.
Example. The correlation between log area defoliated by gypsy moth in CT, NH, VT,
MA, ME, and NY and the mean number of sunspots in 1964-1994 was r = 0.451:
This correlation is significant (P = 0.011), and it seems that we can use solar activity as
a good predictor of gypsy moth outbreaks. However, both variables are autocorrelated,
and thus, we will use the correlogram product method. Correlograms (=autocorrelation
functions, ACF) for both variables are shown below:
Correlograms are periodic indicating a cyclic behavior of both variables. The cycle is
very similar and the period is 9-10 years (location of the first maximum). The weights
wh are the following: wo = 1/N, and
Now we apply the correlogram product method: multiply the two correlograms and the
weights at each lag, h, and then take the square root of their sum (here we used only
lags h < 16). The standard error for the correlation is SE = 0.337; t = r/SE =
0.451/0.337 = 1.34; P = 0.188.
Conclusion: the correlation between the area defoliated by the gypsy moth and solar
activity may be coincidental. More data are needed to check the relationship.
Another possible way of analysis is autoregression combined with external factors.
For example, we can use the model:
where Dt is the area defoliated in year t, Wt is the average number of sunspots in year t,
and bi are parameters. The error probability (P) for the effect of sunspots will be very
close to that obtained in the previous analysis. However, coefficients bi do not
necessarily have biological meaning. For example, the equation above assumes that
current defoliation depends on defoliation in previous years, but there may be no
functional relationship between areas defoliated in different years. Their correlation
may simply result from the effect of some external autocorrelated factor (e.g., solar
activity). Thus, it is necessary to use caution in the interpretation of autoregressive
models.
Note: In many cases, the effect of a factor is significant with one combination of
additional factors but becomes non-significant after more factors are added. People
often arbitrarily select a combination of factors that maximizes the statistical
significance of the factor tested. However, this is just another way of cheating. If
additional factors destroy the relationship between the variables that you want to prove,
then there is no relationship. It is necessary to use as many additional factors as
possible, including autoregressive terms.
This model can reproduce the pattern of population change better if stochastic noise is
added, which can draw simulated population counts away from the steady state. Noise
can be simulated by a random variable whose distribution is normal with zero mean.
Zero mean is important as a non-biasedness condition. The variance (or SD, which is
the square root of the variance) should be set equal to the error variance of the
regression: SSE/df(SSE).
The random variable ε has a normal distribution with zero mean and variance equal to
the error variance in the regression: Var(ε) = 0.26 (S.D. = 0.51).
The stochastic model generates quasi-periodic cycles similar to those in the real population:
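Such a stochastic simulation can be sketched as follows. The noise S.D. of 0.51 is the value quoted above; the slope b = -0.412 and the equilibrium 5.553 are again taken from the fox example, and the initial values are hypothetical.

```python
import random

random.seed(2)            # fixed seed, for a reproducible illustration
b = -0.412                # slope of x(t) on x(t-2), from the fox example
x_star = 5.553            # equilibrium log-density
a = x_star * (1 - b)      # implied intercept
noise_sd = 0.51           # S.D. of the normal noise term, as in the text

x = [6.0, 4.5]            # two hypothetical initial log-densities
for t in range(2, 60):
    # deterministic part plus normally distributed noise with zero mean
    x.append(a + b * x[t - 2] + random.gauss(0, noise_sd))
```

The deterministic part alone would damp to the equilibrium; the noise keeps pushing the trajectory away, producing the quasi-periodic cycles.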
But it is possible to modify stochastic models so that they have some biological
interpretation. Usually this modification does not reduce the fit of the model. Let us
consider the dynamics of a population with discrete generations (e.g., a monovoltine
insect). Population density Nt in year t can be estimated from population density in the
previous year t-1:
where s is survival and F is fecundity. Both survival and fecundity may depend on the
density of the population in year t-1 and on environmental conditions in that year.
Survival may also depend on the density of natural enemies (predators, parasites, and
pathogens) in year t-1, which may depend on the density of the prey (host) population
in year t-2. Thus, survival, s, in year t-1 may be a function of population density in year
t-2. Weather conditions in year t-2 may also affect survival. It is possible to imagine
even longer feed-back loops. For example, if it takes 3 years for host plants to recover
after severe defoliation caused by insect outbreak, then the quantity and/or quality of
food will cause a 3-yr delayed feedback to the insect population.
These density-dependent processes are often called "regulation". However, this term is
very ambiguous and I prefer to avoid it (see Lecture 9).
The product of survival and fecundity in the previous equation is the net rate of
population increase, R, which is a function of previous population densities and
previous weather conditions:
where wt are weather conditions in year t. For simplicity we will ignore weather
conditions and density effects with delays longer than 2 years. Then the equation
becomes
where rt = ln(Nt) - ln(Nt-1) is the rate of population increase in year t, which can be
positive (if the population grows) or negative (if the population declines). Now, the rate
of population increase can be represented as a linear function of population densities in
the current and previous years:
This equation is still a statistical model because coefficients bi are estimated using
regression analysis. However, this equation has some biological meaning. For example,
if the effect of population density on the rate of population increase is not significant
then the model becomes equivalent to the exponential model. If population growth
declines with increasing current density, b1 < 0, then some non-delayed density-
dependent processes should be present (e.g., competition or pathogen infection), and
the model is equivalent to the discrete-time logistic model. If population growth
declines with increasing density in the previous generation, b2 < 0, then some delayed
density-dependent processes are present, e.g., parasitism by specialized parasitoids.
Delayed density-dependence usually yields oscillations in population density.
where bi, h and g are parameters which can be fit using non-linear regression.
Note that population densities are not log-transformed on the right side of the equation.
However, it can be shown that log-transformation is equivalent to setting parameter h
or g close to zero. Thus, log-transformation can be considered a specific case of
power-transformation in which the power is close to zero.
This is called "response-surface methodology" which means that the shape of the
surface is more important than the equation that fits it. Different species may have
different response surfaces which can be found using this general equation. Response
surface cannot identify mechanisms of population change, but it can indicate some
characteristics of these mechanisms, e.g. immediate density-dependence, delayed
density-dependence, etc.
Example. Let us analyze the dynamics of colored fox. This time we will predict the rate
of population increase rather than population density. First we will use the linear
equation that relates the rate of population increase in year t to log-transformed
population densities in years t and (t-1):
R-square = 76.8%.
The effect of log population numbers in year t-1 on the rate of population increase in
year t is the same as the effect of log population numbers in year t-2 on the log
population numbers in year t (see our previous example). However, we got a significant
effect of population numbers in year t on the rate of population increase in the same
year, which seems to contradict our previous result that population numbers in year t
did not correlate significantly with population numbers in the previous year.
However, there is nothing wrong here. We got different answers because we asked
different questions. In our previous analysis we wanted to know whether it is possible
to improve the prediction of population density in year t using information about
population numbers in the previous year, and we got a negative answer. Then we
asked whether the rate of population increase is related to the current population density
(a test for density-dependence) and got a positive answer.
where g = 0.0025 and h = 0.824; R-square = 79.2%. Non-significant terms were
removed from the equation of Turchin and Taylor. The prediction has improved a little.
The shape of this response surface is shown below:
There is a strong non-delayed density-dependence (effect of Nt) and also some delayed
density-dependence (effect of Nt-1) which becomes weaker when density increases.
Lecture 5. Exponential and Logistic Growth
Exponential Model
The exponential model is associated with the name of Thomas Robert Malthus (1766-
1834), who first realized that any species can potentially increase in numbers according
to a geometric series. For example, if a species has non-overlapping generations (e.g.,
annual plants), and each organism produces R offspring, then population numbers N in
generations t = 0, 1, 2, ... are equal to: Nt = No×R^t.
Parameter r is called:
• Malthusian parameter
• Intrinsic rate of increase
• Instantaneous rate of natural increase
• Population growth rate
"Instantaneous rate of natural increase" and "Population growth rate" are generic terms
because they do not imply any relationship to population density. It is better to use the
term "Intrinsic rate of increase" for parameter r in the logistic model rather than in the
exponential model because in the logistic model, r equals to the population growth rate
at very low density (no environmental resistance).
Assumptions of Exponential Model:
where b is the birth rate and m is the death rate. The birth rate is the number of offspring
produced per organism in the population per unit time. The death rate is the probability
of dying per organism per unit time. The rate of population growth (r) is equal to the
birth rate (b) minus the death rate (m).
Logistic Model
The logistic model was developed by the Belgian mathematician Pierre Verhulst (1838),
who suggested that the rate of population increase may be limited, i.e., it may depend
on population density:
Population growth rate declines with population numbers, N, and reaches 0 when N =
K. Parameter K is the upper limit of population growth and it is called carrying
capacity. It is usually interpreted as the amount of resources expressed in the number of
organisms that can be supported by these resources. If population numbers exceed K,
then population growth rate becomes negative and population numbers decline. The
dynamics of the population is described by the differential equation:
The logistic model has two equilibria: N = 0 and N = K. The first equilibrium is
unstable because any small deviation from it leads to population growth. The second
equilibrium is stable because after a small disturbance the population returns to this
equilibrium state.
Logistic model combines two ecological processes: reproduction and competition. Both
processes depend on population numbers (or density). The rate of both processes
corresponds to the mass-action law with coefficients: ro for reproduction and ro/K for
competition.
Complex dynamics result from a time delay in the feedback mechanism. There are no
intermediate steps between time t and time t+1. Thus, overcompensation may occur if
the population grows or declines too fast and overshoots the equilibrium point. In the
continuous-time logistic model there is no delay because the rate of population growth
is updated continuously, so the population density cannot overshoot the equilibrium
point.
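A short Python sketch of the discrete-time logistic (Ricker) model, N(t+1) = N(t)×exp(r×(1 - N(t)/K)), illustrates this overcompensation (the parameter values follow assignment 4 below):

```python
import math

def ricker(n0, r, K, steps):
    """Discrete-time logistic (Ricker) model: N(t+1) = N(t)*exp(r*(1 - N(t)/K))."""
    n = [n0]
    for _ in range(steps):
        n.append(n[-1] * math.exp(r * (1 - n[-1] / K)))
    return n

# Small r: smooth approach to the carrying capacity K.
smooth = ricker(10, 0.5, 100, 60)
# Large r: the population repeatedly overshoots K (overcompensation).
chaotic = ricker(10, 2.2, 100, 60)
```

At r = 0.5 the trajectory settles at K = 100; at r = 2.2 it keeps jumping above and below K, which is the complex dynamics described above.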
Questions and assignments to Lecture 5
1. Population numbers of cockroaches double every month (30 d). What is their
intrinsic rate of increase (per day)?
2. What is the intrinsic rate of increase in a human population if every family has 3
children at parent's age of 30 (there are no singles, no divorces, sex ratio 1:1)?
What would be the numbers of human population after 100 years if initial
numbers are 4 billion?
3. A new lake was created after building a dam. The number of fish censused after
2, 4, 6, 8 and 10 years since that time was 1000, 2000, 3500, 5000 and 6000.
Estimate parameters of the logistic model using non-linear regression. Plot the
data and the model on one graph.
4. Use Excel to simulate population dynamics with the discrete-time logistic model
(Ricker's model) for 60 generations. Use K=100; r = 0.1, 0.5, 1.0, 1.5, 1.9, 2.2;
No = 10.
Lecture 6. Life-tables and k-values
In this lecture you will learn how to collect data for the analysis of population
processes.
Ecological processes are usually specific to organisms' age or stage. Thus, they have to
be recorded relative to the life-cycle stage. This information is usually called a "life-
table". Two types of life-tables are generally used: (1) age-dependent and (2) stage-
dependent.
Age-dependent life-tables
Age-dependent life table shows organisms' mortality (or survival) and reproduction rate
(maternal frequency) as a function of age. In nature, mortality and reproduction rate
may depend on numerous factors: temperature, population density, etc. When building a
life-table, the effect of these factors is averaged. Only age is considered as a factor that
determines mortality and reproduction.
Example. Consider a sheep population which is censused once a year immediately after
breeding season:
Only females are considered in this life-table. However, it is not difficult to include
males in the life table. Then survival rates should be specified separately for males and
females, and the sex ratio of offspring should be taken into account.
Survivorship curves
Survival probabilities lx are often plotted against age x. These graphs are called
"survivorship curves". They show, at what age death rates are high and low. The
following graphs show two survivorship curves for domestic sheep (data from Caughley
1967) and for lapwings or green plovers (Vanellus vanellus) in Britain (data from
Deevey 1947):
The survivorship curve for the lapwing is exponential (with negative growth), which
means that the survival rate is independent of age. On a log scale, this survivorship
curve becomes a straight line (see above).
Sheep mortality generally increases with age, so the slope of the survivorship curve
becomes steeper toward the end. Humans have a similarly shaped survivorship curve.
Now it is possible to estimate the approximate value of the intrinsic rate of increase r
using the following logic. We assume discrete generations with generation time T = 5.1
years and net reproductive rate R0 = 2.513. If population size at time zero was No, then
after T years the population will grow to NT = No×R0. According to the exponential
model, NT = No×exp(rT), where ln is the natural logarithm (logarithm with base
e = 2.718). We get the equation for r: r = ln(R0)/T.
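Numerically, with the values from the sheep example above:

```python
import math

R0 = 2.513    # net reproductive rate from the sheep life table
T = 5.1       # generation time, in years

# N_T = No * R0 and N_T = No * exp(r*T)  =>  r = ln(R0) / T
r = math.log(R0) / T   # approximately 0.181 per year
```

This is only an approximation, since real generations overlap rather than being discrete.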
If survival or reproduction is cyclic (e.g., seasonal), then one cycle can be taken as the
time unit. In this case the number of age intervals may drop to 2 or 3. However, this
may not be a serious problem if reproduction is limited to a short period within the
year, because then there will be little age difference among organisms born in the
same year.
If survival and reproduction are cyclic but the entire life span is less than or equal to
this cycle, then time units should be smaller than the cycle length. Age-dependent life-
tables can be built for the entire population only if the breeding period is short and
therefore organisms' development is synchronized. Otherwise, separate life-tables
should be built for subpopulations that start their development in different seasons.
If the population is stationary (i.e., population numbers and age distribution do not
change), then the number of new-born organisms x time units ago was the same as now,
and the survivors of that group of organisms are now of age x. Thus, lx = N(x)/N(0),
where N(x) is the number of organisms of age x. Here we assumed that age can be
accurately measured. In many species the number of "growth rings" in specific organs
is equal to age in years. Examples of such organs are stems of trees, scales of fishes,
horns in sheep, roots of canine teeth in bears, and otoliths in fishes. The weight of the
eye lens can be used for age measurement in some animal species. However, in many
populations, measuring age is a difficult problem.
Consider a large number of carcasses whose ages have been determined. We assume
that the probability of detecting a carcass does not depend on the age of the animal at
death. The proportion of individuals that were at age x when they died is dx. These
individuals survived to age x but did not survive to age x+1. Thus, dx = lx - lx+1.
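This relation between dx and lx can be sketched in a few lines of Python (the dx values below are hypothetical; lx is recovered as the tail sum of dx):

```python
def lx_from_dx(d):
    """In a stationary population, l_x = sum of d_y for y >= x:
    an organism alive at age x dies at some age >= x."""
    l = []
    total = 0.0
    for dx in reversed(d):
        total += dx
        l.append(total)
    return list(reversed(l))

d = [0.5, 0.3, 0.15, 0.05]   # hypothetical proportions dying at each age
l = lx_from_dx(d)            # survival probabilities; l[0] = 1
```

Going back the other way, each dx is just the drop lx - lx+1 between consecutive survival probabilities.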
Stage-dependent life-tables
Stage-dependent life tables are built in the cases when:
• The life-cycle is partitioned into distinct stages (e.g., eggs, larvae, pupae and
adults in insects)
• Survival and reproduction depend more on the organism's stage than on
calendar age
• Age distribution at particular time does not matter (e.g., there is only one
generation per year)
Stage-dependent life tables are used mainly for insects and other terrestrial
invertebrates.
Example. Gypsy moth (Lymantria dispar L.) life table in New England (modified from
Campbell 1981)
• There is no reference to calendar time. This is very convenient for the analysis
of poikilothermic organisms.
• Gypsy moth development depends on temperature, but the life table is relatively
independent of weather.
• Mortality processes can be recorded individually, and thus this kind of life table
contains more biological information than age-dependent life tables.
K-values
K-value is just another measure of mortality. The major advantage of k-values as
compared to percentages of dead organisms is that k-values are additive: the k-value of a
combination of independent mortality processes is equal to the sum of k-values for
individual processes.
Mortality percentages are not additive. For example, if predators alone can kill 50% of
the population, and diseases alone can kill 50% of the population, then the combined
effect of these processes will not result in 50+50 = 100% mortality. Instead, mortality will
be 75%!
Survival is a probability to survive, and thus we can apply the theory of probability. In
this theory, events are considered independent if the probability of the combination of
two events is equal to the product of the probabilities of each individual event. In our
case the event is survival. If two mortality processes are present, then an organism
survives only if it survives each individual process. For example, an organism survives if
it was simultaneously not infected by disease and not captured by a predator.
Assume that survival from one mortality source is s1 and survival from the second
mortality source is s2. Then survival from both processes, s12 (if they are independent),
is equal to the product of s1 and s2:

s12 = s1 × s2
Varley and Gradwell (1960) suggested measuring mortality with the k-value, which is the
negative logarithm of survival:
k = -ln(s)
We use natural logarithms (with base e=2.718) instead of logarithms with base 10 used
by Varley and Gradwell. The advantages of using natural logarithms will be shown
below.
The k-value for the entire life cycle (K) can be estimated as the sum of k-values for all
mortality processes:

K = k1 + k2 + ... + kn
In the life table of the gypsy moth (see above), the sum of all k-values (K = 3.7674) was
equal to the k-value of total mortality.
The following example shows that the k-value represents mortality better than the
percentage of dead organisms: One insecticide kills 99% of cockroaches and another
insecticide kills 99.9% of cockroaches. The difference in percentages is very small
(<1%). However the second insecticide is considerably better because the number of
survivors is 10 times smaller. This difference is represented much better by k-values
which are 4.60 and 6.91 for the first and second insecticides, respectively.
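The additivity of k-values and the insecticide comparison can be checked numerically; a minimal sketch in Python:

```python
import math

def k_value(survival):
    """k-value: negative natural log of survival."""
    return -math.log(survival)

# Two independent mortality sources, each killing 50% of the population:
s1, s2 = 0.5, 0.5
combined_mortality = 1 - s1 * s2          # 0.75, not 1.0
k_combined = k_value(s1 * s2)             # k-values are additive
assert abs(k_combined - (k_value(s1) + k_value(s2))) < 1e-12

# Insecticide example from the text: 99% vs 99.9% kill.
k_first = k_value(1 - 0.99)    # about 4.6
k_second = k_value(1 - 0.999)  # about 6.9
print(combined_mortality, round(k_first, 2), round(k_second, 2))
```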
Key-factor analysis
Varley and Gradwell (1960) developed a method for identifying the most important
factors ("key factors") in population dynamics. If k-values are estimated
for a number of years, then the dynamics of k-values over time can be compared with
the dynamics of the generation K-value. The following graph shows the dynamics of k-
values for the winter moth in Great Britain.
It is seen that the dynamics of winter disappearance (k1) most closely resembles the
dynamics of the total generation K-value. The conclusion was made that winter
disappearance determines the trend in population numbers (whether the population will
grow or decline), and thus, it can be considered as a "key factor". There were numerous
attempts to improve the method. For example, Podoler and Rogers (1975, J. Anim.
Ecol, 44(1)) suggested regressing k over K.
But, this method was criticized recently because the meaning of a "key" factor was not
explicitly defined (Royama 1996, Ecology). It is not clear what predictions can be made
from the knowledge that factor A is a key-factor. For example, the knowledge of key-
factors does not help us to develop a new strategy of pest control.
The key-factor analysis was often considered as a substitute for modeling. It seems so
easy to compare time series of k-values and to find key-factors without the hard work of
developing models of ecological processes. However, reliable predictions can be
obtained only from models.
This critique does not mean that life-tables have no value. Life-tables are very important
for gathering information about ecological processes, which is necessary for building
models. It is the key-factor analysis that makes little sense.
K-value = instantaneous mortality rate multiplied by time. A population that
experiences constant mortality during a specific stage (e.g., the larval stage of insects)
changes in numbers according to the exponential model with a negative rate r. We cannot
call r the intrinsic rate of natural increase because this term is used for the entire life
cycle, and here we discuss a particular stage in the life cycle. According to the
exponential model:

Nt = N0 exp(rt)

Population numbers decrease and thus, Nt < N0. Survival is s = Nt/N0 = exp(rt). Now we
can estimate the k-value:

k = -ln(s) = -rt

If we denote the instantaneous mortality rate by m = -r, then

k = mt
We proved that if the mortality rate is constant, then the k-value is equal to the
instantaneous mortality rate multiplied by time. This is analogous to physics: distance is equal to speed
multiplied by time. Here, instantaneous mortality rate is like speed, and k-value is like
distance. K-value shows the result of killing organisms with specific rate during a
period of time. If the period of time when mortality occurs is short then the effect of this
mortality on population is not large.
If the instantaneous mortality rate changes with time, then the k-value is equal to its
integral over time:

k = ∫ m(t) dt

In the same way, in physics, distance is the integral of instantaneous speed over time.
Example. Annual instantaneous mortality rates of oak trees due to animal-caused bark
damage are 0.08 in the first 10 years and 0.02 in the age interval of 10-20 years. We need
to estimate the total k-value (k) and total mortality (d) for the first 20 years of oak growth:

k = 0.08 × 10 + 0.02 × 10 = 1.0
d = 1 - exp(-k) = 0.63
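The oak calculation can be written out directly; a short sketch:

```python
import math

# Oak example from the text: annual instantaneous mortality rate is
# 0.08 for years 0-10 and 0.02 for years 10-20.
intervals = [(0.08, 10), (0.02, 10)]  # (rate m, duration t) pairs

# The k-value is the integral of the mortality rate over time, i.e.
# the sum of m*t over intervals where the rate is constant.
k = sum(m * t for m, t in intervals)   # 0.08*10 + 0.02*10 = 1.0
d = 1 - math.exp(-k)                   # total mortality, about 0.63
print(k, round(d, 2))
```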
Limitation of the k-value concept. All organisms are assumed to have equal dying
probabilities. In nature, dying probabilities may vary because of spatial heterogeneity
and individual variation (both inherited and non-inherited).
Estimation of k-values in natural populations. Estimation of k-values for individual
death processes is difficult because these processes often go simultaneously. The
problem is to predict what mortality could be expected if there was only one death
process. In order to separate death processes it is important to know the biology of the
species and its interactions with natural enemies. Below you can find several examples
of separation of death processes.
Example #1. Insect parasitoids oviposit on host organisms. A parasitoid larva hatches
from the egg and starts feeding on host tissue. A parasitized host can stay alive for a long
period. Finally, it dies and the parasitoid emerges from it. Insect predators usually don't
distinguish between parasitized and non-parasitized prey. If an insect was killed by a
predator, then it is usually impossible to detect whether this insect was parasitized before.
Thus, mortality due to predation is estimated as the ratio of the number of insects
destroyed by predators to the total number of insects, whereas mortality due to
parasitism is estimated as the ratio of the number of insects killed by parasitoids to the
number of insects that survived predation. In this example, predation masks the effect of
parasitism, and thus, insects killed by predators are ignored in the estimation of the rate
of parasitism. The effect is the same as if predation occurred before parasitism in the life
cycle. Thus, in the gypsy moth life table, predation was always considered before
parasitism. Diseases also mask the effect of parasitism and thus they are considered
before parasitism.
2. Partial life-table. Cocoons of the European pine sawfly, Neodiprion sertifer, were
collected at the beginning of August and dissected. Results of dissection of new (current-
year) cocoons are the following:
Life-cycle information:
Estimate mortality caused by each natural enemy, convert it into k-value. Check that the
sum of all k-values is equal to the total k-value for sawfly cocoons. Write results in the
table, putting mortality processes in the order of their operation.
Simple example:

Mortality process   Number of eggs in which this      Number of     Mortality   Survival   k-value
                    mortality process can be detected killed eggs
1. Desiccation      500                               100           0.2         0.8        0.223
2. Parasitism       400 (500-100)                     200           0.5         0.5        0.693
Total               500                               300           0.6         0.4        0.916
                          Initial larvae in 10 colonies   Larvae alive at the end
Control                   3000                            1400
Large predators excluded  3500                            2900
All predators excluded    3200                            3100
• Nx,t = number of organisms of age x at time t (age is measured in the same units
as time t). Usually, only females are considered and males are ignored because,
as a rule, the number of males does not affect population growth.
• sx = survival of organisms in the age interval from x to x+1.
• mx = average number of female offspring produced by 1 female in the age interval
from x to x+1 (mortality of parent and/or offspring organisms is included)
Nx+1,t+1 = sx Nx,t        [1]

N0,t+1 = Σx mx Nx,t        [2]
Equation [1] represents development and mortality, whereas equation [2] represents
reproduction. Equation [2] specifies the number of individuals in the first age class and
equation [1] specifies the number of individuals in all other age classes. In equation
[1], the number of individuals of age x+1 at time t+1 equals the number of
individuals of the previous age at the previous time multiplied by the age-specific survival
rate sx. In equation [2], the number of new-born organisms equals the number of
mothers (Nx,t) multiplied by the number of offspring produced (mx). The number of
offspring is summed over all ages of mothers.
When a matrix is multiplied by a vector, we take the 1st row of the matrix, multiply
each number by the corresponding number in the column vector, and then sum all
the products. This sum is the value of the 1st element in the result vector. Then we take the
2nd row of the matrix, multiply it by the same vector and the sum becomes the 2nd
element in the result vector. In the same way we can estimate all elements of the result
vector.
The first element of the result vector corresponds to the equation [2], and all other
elements correspond to the equation [1].
In this case, the sum of the matrix elements in each column equals 1 because it is
assumed that each system passes through a series of states and neither dies nor
reproduces.
The Leslie model is more complex because the sum of matrix elements in each column is
not necessarily equal to 1. This is a "branching process" because the life trajectory of a
parent branches into the life trajectories of its offspring.
Matrix models are easy to iterate in time. In the next time step we again multiply the
transition matrix A by the vector of age distribution:

Nt+1 = A Nt
These two features of the Leslie model (exponential growth and convergence to a stable
age distribution) can be seen in the graphs that show simulations of sheep population
dynamics:
The first graph shows exponential population growth (it becomes linear in a log scale)
after several initial years. The second graph shows convergence of age distribution to a
stable age distribution.
You can play with this model by changing model parameters (the matrix) and initial
age-distribution.
You can simplify the analysis of matrix models using PopTools, which is a free Excel
plugin developed by Greg Hood, CSIRO, Canberra, Australia.
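The iteration itself is also easy to code directly; a minimal sketch in Python with a hypothetical three-age-class model (the parameters are illustrative, not the sheep example from the figures):

```python
# Hypothetical Leslie model with three age classes; parameters are
# illustrative, not taken from the sheep example.
fecundity = [0.0, 1.5, 1.0]   # m_x: female offspring per female of age x
survival = [0.8, 0.5]         # s_x: survival from age x to x+1

def step(n):
    """One time step: equation [2] for newborns, equation [1] for the rest."""
    newborns = sum(m * nx for m, nx in zip(fecundity, n))
    older = [s * nx for s, nx in zip(survival, n[:-1])]
    return [newborns] + older

n = [100.0, 0.0, 0.0]         # initial age distribution
for _ in range(50):
    prev, n = n, step(n)

# After the transient, numbers in every class grow by the same factor
# per step (the dominant eigenvalue of the transition matrix).
growth_rate = n[0] / prev[0]
print(round(growth_rate, 3))
```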
Method #1
For simplicity we take years as time units. However, the same logic can be applied to
days or weeks.
Nx,t = lx N0,t-x        [3]

where

lx = s0 s1 ... sx-1        [4]
After initial damped fluctuations, the Leslie model shows exponential growth and age
distribution stabilizes. Thus, the number of organisms in any age class will grow
exponentially. In particular, the number of new-born organisms increases exponentially:

N0,t = N0,t-x exp(rx)

Then,

N0,t = Σx lx mx N0,t-x        [5]

Substituting N0,t-x = N0,t exp(-rx) into the sum and dividing both sides by N0,t, we get
the equation:

Σx lx mx exp(-rx) = 1        [6]
Equation [6] can be used to estimate r. The sum at the left side can be estimated for
different values of r, and then we can select the r-value that makes this sum equal to 1. It
makes sense to start with the r-value estimated using the approximate method discussed
in the previous chapter. Thus, we start with r=0.181 and get the sum [6] equal to 0.9033.
When r increases, the value of the sum [6] decreases because r enters as a
negative exponent. The obtained value of the sum is less than 1, and thus, we
need to try smaller values of r. Let's select r = 0.16. Then the sum equals 1.0092. The
exact value of r can be found by linear interpolation between the two trial values:

r ≈ 0.16 + (1.0092 - 1) / (1.0092 - 0.9033) × (0.181 - 0.16) ≈ 0.162
Now we check the solution: when r=0.1617 then the sum [6] is equal to 1.00014 which
is very close to 1.
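The trial-and-error search for r can be automated by bisection on equation [6]; a sketch with a hypothetical lx and mx schedule (illustrative values, not the ones from this example):

```python
import math

# Hypothetical survivorship (lx) and fecundity (mx) schedules;
# the values below are purely illustrative.
lx = [1.0, 0.8, 0.6, 0.4]
mx = [0.0, 1.0, 1.5, 1.0]

def euler_lotka_sum(r):
    """Left side of equation [6]: sum of lx * mx * exp(-r*x)."""
    return sum(l * m * math.exp(-r * x) for x, (l, m) in enumerate(zip(lx, mx)))

# The sum decreases as r grows, so we can bisect until it equals 1.
lo, hi = 0.0, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    if euler_lotka_sum(mid) > 1:
        lo = mid
    else:
        hi = mid

r = (lo + hi) / 2
print(round(r, 4))
```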
Note: In Pielou (1978), this example is estimated in a different way which is difficult to
understand. She constructed a different matrix by adjusting reproduction rates and
provided no explanation for this adjustment.
Method #2
The intrinsic rate of population increase can be estimated as the logarithm of the only real
and positive eigenvalue of the transition matrix. The theory of eigenvalues is a central
topic in linear algebra. It is used to reduce multidimensional problems to one-
dimensional problems. I recommend studying this topic to those students who plan to
be quantitative ecologists. Here we will only estimate the eigenvalue using available
software without going into the details of the algorithm. The only real and positive
eigenvalue of our matrix is equal to λ = 1.176. Then, r = ln(λ) = 0.162, which is very
close to the value estimated by method #1.
Substituting this equation into [3] we get the relationship between the number of
organisms of age x and of age 0 in a stable age distribution:

Nx,t = N0,t lx exp(-rx)        [7]
(Table: age x, lx, exp(-rx), lx exp(-rx), cx, and simulated cx; the numerical rows were
not preserved in this copy.)
2. Distributed delays. Age and time are equivalent in the original Leslie model, and
thus, all organisms develop synchronously with constant rate. However, development
rate of invertebrates and plants is not constant: it depends on temperature and may vary
among individuals. Individual variation of development rates is called "distributed
delay" because there is a distribution of time when organisms reach maturity. Transition
matrix can be modified to incorporate these features.
3. Partitioning the life cycle into stages. Many invertebrate species have a complex
life cycle that includes several stages. For example, holometabolous insects usually
have 4 stages: egg, larvae, pupae, and adult. Each of these stages may include several
age intervals. In these models, age is no longer measured in calendar time units (e.g.,
days or years). Instead, it is measured in independent units which can be interpreted as
"physiological age". The concept of physiological age will be discussed in detail in the
next chapter. It can be used to define the "rate of development" as the average increment
of physiological age per calendar time unit.
Lecture 8. Development of poikilothermous organisms,
degree-days
8.1. Rate of development
Homeothermous organisms are warm blooded (mammals, birds)
Poikilothermous organisms are cold blooded (all invertebrates, plants, fishes,
amphibians, reptiles)
The rate of development can be measured as the reciprocal of the number of time units
(e.g., days) required for completion of development. Rates of development can be
estimated for the entire ontogenesis or for a specific stage. For example, if it takes 15
days for an insect to develop from egg hatch till pupation, then the rate of larval
development is v = 1/15 = 0.0667 per day.
In the temperature range from 10 to 30 (°C), development rate changes almost linearly
with increasing temperature. At very low temperature there is no development, and at
very high temperature development is retarded.
In most cases, real development rate is indeed a linear function in the region of
moderate temperatures (15-25°) (see figure above). Deviating points at temperatures
that are too low or too high can be ignored. For example, in the figure, regression line is
plotted for points from 10 to 30°. At 5° all organisms died and there was no
development. At 35° organisms were overheated and development rate of survivors was
reduced. If we are interested in simulating organism development in moderate
temperatures, then the degree-day model will be the best choice.
Terms:
• tmin is the lower temperature limit; this is the temperature at which development
rate reaches zero.
• ET = t - tmin is effective temperature.
• S is the number of degree-days; this is the effective temperature multiplied by the
number of days required to complete development.
The lower temperature limit and degree-days can be estimated from the regression line of
development rate versus temperature. We assume that the regression equation is:

v = a + bt

The development time is T = 1/v, and the lower temperature limit is the temperature at
which v = 0, i.e., tmin = -a/b. Then:

S = (t - tmin) T = (t + a/b) / (a + bt) = 1/b

This equation shows that degree-days do not depend on temperature! This is the
principal feature of the degree-day model. In our case, S = 1/0.00142 = 704 degree-
days. The units of degree-days are degrees (centigrade) multiplied by days. Sometimes
it is better to use degree-hours if development is very fast.
Day   Temperature (°C)   Effective temperature (t - tmin, tmin = 10°)   Accumulated degree-days
1     15                 5                                              5
2     18                 8                                              13
3     25                 15                                             28
4     23                 13                                             41
5     24                 14                                             55
6     18                 8                                              63
7     17                 7                                              70
8     15                 5                                              75
9     18                 8                                              83
10    15                 5                                              88
11    22                 12                                             100
12    25                 15                                             115
Accumulated degree-days reach the value S = 100 on day 11. Thus, it takes 11 days to
complete the development.
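The degree-day bookkeeping in the table can be sketched in a few lines (tmin = 10° and S = 100 degree-days, as above):

```python
# Degree-day accumulation: tmin = 10 degrees C, and S = 100 degree-days
# are required to complete development.
tmin, S = 10.0, 100.0
daily_temps = [15, 18, 25, 23, 24, 18, 17, 15, 18, 15, 22, 25]

accumulated = 0.0
completion_day = None
for day, t in enumerate(daily_temps, start=1):
    accumulated += max(t - tmin, 0.0)   # effective temperature, never negative
    if completion_day is None and accumulated >= S:
        completion_day = day

print(completion_day)  # development is completed on day 11
```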
This model is non-linear because the graph is not straight (see the figure above). Thus,
temperature cannot be averaged! In particular, you cannot use average daily
temperature. Instead, it is necessary to use actual temperature dynamics. Accumulated
degree-days are equal to the area under the temperature curve restricted to the
temperature interval between tmin and tmax:
Another example:
The light-blue area again equals the accumulated degree-days. The daily maximum
temperature exceeds tmax; however, this excess does not count in the accumulation of
degree-days.
Progress in physiological age in one time step may depend on temperature (3 arrows at
the left side of this graph).
Also, individual variation in development rate (distributed delays) can be taken into
account (branching arrow at the right side of the graph).
At the start of simulation all organisms can be placed into the first age class. Another
option is to add variability in the starting date of development. For example, if we
simulate insect larval development, it is unrealistic to assume synchronous egg hatch in
one day. It is better to assume distributed egg hatch time. Three kinds of distributions
are used most often: normal, logistic, and Weibull.
Normal and logistic distributions are both symmetrical and are very similar. But the
Weibull distribution is asymmetrical. The actual distribution of egg hatch time is often
asymmetrical, and thus, the Weibull distribution is usually better than the normal and
logistic distributions.
(Two-column data table from the source; the column labels were not preserved in this copy.)
10.7   38.0
14.4   19.5
16.2   15.6
18.1   9.6
21.4   9.5
23.7   7.3
24.7   4.5
26.9   4.5
28.6   7.1
Gypsy moth numbers increased by several orders of magnitude. Pest outbreaks resulted
in forest defoliation in large areas.
Main Problems:
There may be several attractors in a model. In this case each attractor has a domain of
attraction. The model trajectory converges to the attractor in whose domain the initial
conditions were located.
In this example, there are two attractors: a limit cycle (at the left) and a stable
equilibrium (at the right). Domains of attraction are colored blue; they never overlap.
For different starting places (initial conditions), trajectories converge to different
attractors.
Types of attractors:
Examples:
The notion of stability can be applied to other types of attractors (limit cycle, chaos),
however, the general definition is more complex than for equilibria. Stability is
probably the most important notion in science because it refers to what we call "reality".
Everything should be stable to be observable. For example, in quantum mechanics,
energy levels are those that are stable because unstable levels cannot be observed.
In this figure, population growth rate, dN/dt, is plotted versus population density, N.
This is often called a phase-plot of population dynamics. If 0 < N < K, then dN/dt > 0
and thus, population grows (the point in the graph moves to the right). If N < 0 or N > K
(of course, N < 0 has no biological sense), then population declines (the point in the
graph moves to the left). The arrows show that the equilibrium N=0 is unstable, whereas
the equilibrium N=K is stable. From the biological point of view, this means that after
small deviation of population numbers from N=0 (e.g., immigration of a small number
of organisms), the population never returns back to this equilibrium. Instead, population
numbers increase until they reach the stable equilibrium N=K. After any deviation from
N=K the population returns back to this stable equilibrium.
The difference between stable and unstable equilibria is in the slope of the line on the
phase plot near the equilibrium point. Stable equilibria are characterized by a negative
slope (negative feedback) whereas unstable equilibria are characterized by a positive
slope (positive feedback).
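The sign argument for the logistic equilibria can be checked directly; a minimal sketch (r and K are arbitrary illustrative values):

```python
# Logistic growth rate dN/dt = r*N*(1 - N/K); the sign of dN/dt near
# each equilibrium shows its stability on the phase plot.
r, K = 0.5, 100.0

def growth_rate(N):
    return r * N * (1 - N / K)

# Near N = 0: a small positive deviation keeps growing (unstable).
assert growth_rate(1.0) > 0

# Near N = K: deviations are pushed back (stable, negative feedback).
assert growth_rate(K - 1) > 0   # below K the population grows toward K
assert growth_rate(K + 1) < 0   # above K it declines back to K
```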
The second example is the bark beetle model with two stable and two unstable
equilibria. Stable equilibria correspond to endemic and epidemic populations. Endemic
populations are regulated by the amount of susceptible trees in the forest. Epidemic
populations are limited by the total number of trees because mass attack of beetle
females may overcome the resistance of any tree.
This is the 2-variable model in a general form:

dH/dt = f(H, P)
dP/dt = g(H, P)

Here, H is the density of prey, and P is the density of predators. The first step is to find
the equilibrium densities of prey (H*) and predator (P*). We need to solve the system of
equations:

f(H*, P*) = 0
g(H*, P*) = 0

The second step is to linearize the model at the equilibrium point (H = H*, P = P*) by
estimating the Jacobian matrix:

    | ∂f/∂H  ∂f/∂P |
J = |              |
    | ∂g/∂H  ∂g/∂P |

with the partial derivatives evaluated at the equilibrium.
There are 2 types of stable equilibria in a two-dimensional space: node and focus.
There are 3 types of unstable equilibria in a two-dimensional space: node, focus, and
saddle.
If the slope is positive but less than 1, then the system approaches the equilibrium
monotonically (left). If the slope is negative and greater than -1, then the system
exhibits oscillations because of "overcompensation" (center). Overcompensation
means that the system jumps over the equilibrium point because the negative feedback
is too strong. Then it returns back and again jumps over the equilibrium.
Now we will analyze stability in the Ricker model. This model is a discrete-time
analog of the logistic model:

Nt+1 = Nt exp[r(1 - Nt/K)]

First, we need to find the equilibrium population density N* by solving the equation:

N* = N* exp[r(1 - N*/K)]

This equation is obtained by substituting Nt+1 and Nt with the equilibrium population
density N* in the initial equation. The roots are N* = 0 and N* = K. We are not
interested in the first equilibrium (N* = 0) because there is no population. Let's estimate
the slope df/dN at the second equilibrium point:

df/dN = (1 - rN/K) exp[r(1 - N/K)], which at N = K equals 1 - r

Now we can apply the condition of stability:

-1 < 1 - r < 1
0 < r < 2
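The stability condition 0 < r < 2 can be verified by simulation; a minimal sketch (K, n0, and the tested r values are illustrative):

```python
import math

def ricker_step(n, r, K):
    """Ricker model: discrete-time analog of logistic growth."""
    return n * math.exp(r * (1 - n / K))

def simulate(r, K=100.0, n0=10.0, steps=500):
    n = n0
    for _ in range(steps):
        n = ricker_step(n, r, K)
    return n

# 0 < r < 2: the equilibrium N* = K is stable.
assert abs(simulate(r=0.5) - 100.0) < 1e-6
assert abs(simulate(r=1.8) - 100.0) < 1e-6

# r > 2: trajectories no longer settle at K (here, a 2-point cycle).
assert abs(simulate(r=2.2) - 100.0) > 1.0
```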
If a discrete time model has more than one state variable, then the analysis is similar to
that in continuous-time models. The first step is to find equilibria. The second step is to
linearize the model at the equilibrium state, i.e., to estimate the Jacobian matrix. The
third step is to estimate eigenvalues of this matrix. The only difference from continuous
models is the condition of stability. Discrete-time models are stable (asymptotically
stable) if and only if all eigenvalues lie inside the circle of radius 1 in the complex
plane.
Robert May (1973) suggested to measure system stability by the maximum real part of
eigenvalues of the linearized model. It was shown that this value correlates with the
variance of population fluctuations in stochastic models.
Sharov (1991, 1992) suggested measures of m- and v-stability that characterize the
stability of the mean (m) and variance (v) of population density (initially these measures
were called coefficients of buffering and homeostasis, see Sharov [1985, 1986]).
Later they were re-invented by Ives (1995a, 1995b). They can be used to predict the
effect of environmental changes (e.g., global warming or pest management) on the
mean and variance of population numbers
M-stability (MS) was defined as the ratio of the change in the mean value of some
environmental factor, v, to the resulting change in mean log population density, N:

MS = Δv / ΔN

M-stability is the reciprocal of the sensitivity of mean population density to the mean
value of factor v. Log-transformation of population density is important because it
makes population models closer to linear.
V-stability (VS) was defined as the ratio of the variance of additive random noise, σ²ε, to
the variance of log population numbers, σ²N:

VS = σ²ε / σ²N
Population that has smaller fluctuations of population numbers than another population
that experience the same intensity of additive environmental noise has a higher v-
stability. To estimate v-stability in the Ricker model we can use the linearized model
at the equilibrium point:

Nt+1 = (1 - r) Nt + εt

where N is the deviation of log population density from its equilibrium value, and ε is
white noise with a zero mean. Noise is not correlated with log population numbers. Thus:

σ²N = σ²ε / (1 - (1 - r)²),   and so   VS = 1 - (1 - r)² = r(2 - r)
This graph shows that v-stability equals to zero at r=0 and r=2 (these are the boundaries
of quantitative stability). V-stability has a maximum at r = 1.
(Figures: simulations of the Ricker model with K = 200 for increasing values of r; the
last panel, r = 3.0, shows chaos.)
In the two upper figures the model has a stable equilibrium, only the patterns of
approaching the equilibrium are different. In the three lower figures there is no stable
equilibrium. Non-equilibrium dynamics may be of 2 types: a limit cycle when the
trajectory repeats itself, and chaotic when the trajectory does not repeat itself.
The bifurcation plot (below) helps to visualize all types of dynamics generated by the
Ricker model. This graph is plotted as follows: for each value of the parameter r, which is
incremented in 0.05 steps, population dynamics was simulated for 200 generations. The
first 125 generations were discarded because the population may not have reached its
asymptotic behavior. Population numbers in the remaining 75 generations are plotted
versus the value of parameter r:
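The recipe for the bifurcation plot can be reproduced numerically; a sketch that counts the distinct population values remaining after the transient:

```python
import math

def attractor_points(r, K=100.0, n0=10.0, discard=125, keep=75):
    """Follow the bifurcation-plot recipe: simulate, discard the
    transient generations, keep the remaining ones."""
    n = n0
    for _ in range(discard):
        n = n * math.exp(r * (1 - n / K))
    points = set()
    for _ in range(keep):
        n = n * math.exp(r * (1 - n / K))
        points.add(round(n, 4))
    return points

print(len(attractor_points(1.5)))  # stable equilibrium: 1 point
print(len(attractor_points(2.2)))  # limit cycle: 2 points
print(len(attractor_points(3.0)))  # chaos: many distinct points
```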
One of the questions often discussed in the ecological literature is "does chaos
really exist in population dynamics?". The major argument in favor of chaos is: when
model parameters are fitted to known time series of population dynamics, then the
dynamics of the model with these parameters is chaotic. Another kind of argument is
based on attempts to separate chaotic dynamics from stochastic noise. However,
detection of chaos is difficult because of several problems:
• There is no evidence that the model is correct. Usually these models ignore
many ecological processes (natural enemies, etc.). It is necessary to use multiple
models for detecting chaos (Ellner and Turchin 1995, Amer. Natur. 145: 343-
375).
• The confidence interval for parameter values is usually large enough to cover
both chaotic and non-chaotic (limit cycle) model dynamics.
• Time series of population dynamics are usually not long enough to separate
chaotic dynamics from stochastic noise.
At this point, chaotic dynamics was detected consistently only in a few microtine
populations (Ellner and Turchin 1995). I suspect that in these cases chaotic dynamics
was induced by the seasonal cycle in population numbers. Chaos was never detected in
time series with 1 year as a time step. Probably chaos is a rare phenomenon in
population dynamics.
Draw schematically the phase portrait (rate of population growth vs. population
density). Find all equilibrium points and characterize their stability.
9.2. Characterize the type of attractor in population dynamics: (a) stable equilibrium; (b)
limit cycle, (c) chaos.
Lecture 10. Predation and parasitism
10.1. Introduction
Predation and parasitism are examples of antagonistic ecological interactions in which
one species takes advantage of another species. Predators use their prey
as a source of food only, whereas parasites use their hosts both as food
and as a habitat. Predation and parasitism are stage-specific interactions rather than
species-specific. Many species are predators or parasites only on specific stages in their
life cycle.
1. Keep prey population without predators and estimate their intrinsic rate of
increase (r).
2. Put one predator in cages with different densities of prey and estimate prey
mortality rate and the corresponding k-value in each cage. As we know, the k-value
equals the instantaneous mortality rate multiplied by time. Thus, the predation
rate (a) equals the k-value divided by the duration of the experiment.
Example: lady-beetle killed 60 aphids out of 100 in 2 days. Then,
the k-value = -ln(1-60/100) = 0.92, and a = 0.92/2 = 0.46.
Note: if a-values estimated at different prey densities are not close
enough to each other, then the Lotka-Volterra model will not
work! However, the model can be modified to incorporate the dependence of a on
prey density.
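The lady-beetle calculation can be written out; a minimal sketch:

```python
import math

# Lady-beetle example from the text: 60 of 100 aphids killed in 2 days.
killed, total, days = 60, 100, 2

k = -math.log(1 - killed / total)   # k-value of predation, about 0.92
a = k / days                        # predation rate per day, about 0.46
print(round(k, 2), round(a, 2))
```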
3. Estimation of parameters b and m:
Keep constant density of prey (e.g., H = 0, 5, 10, 20, 100 prey/cage), and
estimate the intrinsic rate of predator population increase (rP) at these densities
of prey. Plot the intrinsic rate of predator population increase versus prey
density: The linear regression of this line is:
Note: If points do not fit to a straight line (e.g., the intrinsic rate of predator
population growth may level off), then the Lotka-Volterra model is not adequate
and should be modified. Now, parameters b and m can be taken from this
regression equation.
The simplest and least accurate is Euler's method. Consider a stationary (autonomous)
differential equation:

dx/dt = f(x)

First we need initial conditions. We will assume that at time t0 the function value is
x(t0). Now we can estimate x-values at later (or earlier) times using the equation:

x(t + Δt) = x(t) + f(x(t)) Δt

Euler's method can be improved if the derivative (slope) is estimated at the center of the
time interval Δt. However, the derivative at the center depends on the function value at
the center, which is unknown. Thus, first we need to estimate the function value at the
middle point using the simple Euler's method:

k = x(t) + f(x(t)) Δt/2

Here k is the function value at the center of the time interval Δt. Finally, we can estimate
the function value at the end of the time interval:

x(t + Δt) = x(t) + f(k) Δt
First, we estimate prey and predator densities (H' and P', respectively) at the center of the
time interval:

H' = H + f(H, P) Δt/2,   P' = P + g(H, P) Δt/2

where f and g are the right-hand sides of the prey and predator equations. The second
step is to estimate prey and predator densities (H" and P") at the end of the time step Δt:

H" = H + f(H', P') Δt,   P" = P + g(H', P') Δt
These two graphs were plotted using the same model parameters. The only difference is
in the initial density of prey. This model has no asymptotic stability; it does not converge
to an attractor (it does not "forget" initial conditions).
This figure shows relative changes in prey and predator density for both initial conditions.
Trajectories are closed lines.
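The midpoint scheme applied to the Lotka-Volterra model can be sketched as follows; the parameter values and initial densities are illustrative, chosen so that the equilibrium is H* = m/(ab) = 10 and P* = r/a = 10:

```python
# Midpoint simulation of the Lotka-Volterra model:
# dH/dt = r*H - a*H*P,  dP/dt = a*b*H*P - m*P (illustrative parameters).
r, a, b, m = 1.0, 0.1, 0.5, 0.5
dt = 0.01

def derivatives(H, P):
    return r * H - a * H * P, a * b * H * P - m * P

def midpoint_step(H, P):
    dH, dP = derivatives(H, P)
    Hm, Pm = H + dH * dt / 2, P + dP * dt / 2   # value at interval center
    dH, dP = derivatives(Hm, Pm)                # slope at interval center
    return H + dH * dt, P + dP * dt

H, P = 20.0, 5.0
history = [(H, P)]
for _ in range(5000):
    H, P = midpoint_step(H, P)
    history.append((H, P))

# The trajectory cycles around the equilibrium (H* = 10, P* = 10)
# instead of converging to it: no asymptotic stability.
crossings = sum(1 for (h0, _), (h1, _) in zip(history, history[1:])
                if (h0 - 10.0) * (h1 - 10.0) < 0)
print(crossings > 2)
```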
The model of Lotka and Volterra is not very realistic. It does not consider any
competition among prey or predators. As a result, prey population may grow infinitely
without any resource limits. Predators have no saturation: their consumption rate is
unlimited. The rate of prey consumption is proportional to prey density. Thus, it is not
surprising that model behavior is unnatural showing no asymptotic stability. However
numerous modifications of this model exist which make it more realistic.
Additional information on the Lotka-Volterra model can be found at other WWW sites:
References:
Lotka, A. J. 1925. Elements of physical biology. Baltimore: Williams & Wilkins Co.
Volterra, V. 1926. Variazioni e fluttuazioni del numero d'individui in specie animali
conviventi. Mem. R. Accad. Naz. dei Lincei. Ser. VI, vol. 2.
This model illustrates the principle of the time budget in behavioral ecology. It assumes
that a predator spends its time on 2 kinds of activities:
Consumption rate of a predator is limited in this model because even if prey are so
abundant that no time is needed for search, a predator still needs to spend time on prey
handling.
Total time equals the sum of time spent on searching and time spent on handling:

T = Tsearch + Thandling

Assume that a predator captured Ha prey during time T. Handling time should be
proportional to the number of prey captured:

Thandling = Th Ha

where Th is the time spent handling one prey.
Capturing prey is assumed to be a random process. A predator examines area a per time
unit (only search time is considered here) and captures all prey that were found there.
Parameter a is often called "area of discovery", however it can be called "search rate" as
well.
After spending time Tsearch on searching, a predator examines the area a Tsearch and
captures Ha = a H Tsearch prey, where H is prey density per unit area. Hence:

Tsearch = Ha / (a H)

Now we can balance the time budget:

T = Tsearch + Thandling = Ha / (a H) + Th Ha

Solving for the number of captured prey gives:

Ha = a H T / (1 + a Th H)
The graph of functional response that corresponds to this equation is shown below:
This function indicates the number of prey killed by 1 predator at various prey densities.
This is a typical shape of functional response of many predator species. At low prey
densities, predators spend most of their time on search, whereas at high prey densities,
predators spend most of their time on prey handling.
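The disc equation is easy to evaluate directly. The short sketch below (parameter values are illustrative) shows the saturation described above: the number of prey attacked approaches the handling-limited ceiling T/Th as prey density grows.

```python
# Holling's disc equation (type II functional response):
# number of prey attacked by one predator during time T.
def holling2(H, a, Th, T=1.0):
    """H: prey density; a: search rate; Th: handling time per prey."""
    return a * H * T / (1.0 + a * Th * H)

# At low prey density almost all time goes to search; at high density
# consumption saturates near the ceiling T/Th (= 20 for these values).
for H in (1, 10, 100, 1000, 10000):
    print(H, round(holling2(H, a=0.5, Th=0.05), 2))
```

Note that the per-prey risk, Ha/H, declines monotonically with H, which is why type II predators cause maximum mortality at low prey density.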
Type II functional response is most typical and corresponds to the equation above.
Search rate is constant. Plateau represents predator saturation. Prey mortality declines
with prey density. Predators of this type cause maximum mortality at low prey density.
For example, small mammals destroy most gypsy moth pupae in sparse gypsy moth populations. However, in high-density defoliating populations, small mammals kill a negligible proportion of pupae.
Type III functional response occurs in predators that increase their search activity with increasing prey density. For example, many predators respond to kairomones (chemicals emitted by prey) and increase their activity. Polyphagous vertebrate predators (e.g., birds) can switch to the most abundant prey species by learning to recognize it visually. Mortality first increases with increasing prey density, and then declines.
If predator density is constant (e.g., birds, small mammals) then they can regulate prey
density only if they have a type III functional response because this is the only type of
functional response for which prey mortality can increase with increasing prey density.
However, the regulating effect of predators is limited to the interval of prey density where mortality increases. If prey density exceeds the upper limit of this interval, then mortality due to predation starts declining, and predation causes a positive feedback. As a result, the number of prey gets out of control: prey grow in numbers until some other factors (diseases or food shortage) stop their reproduction. This phenomenon, known as "escape from natural enemies", was first described by Takahashi.
No. of prey     No. of         Total prey   Average no. of
per cage, H     replications   killed       prey killed, Ha   1/Ha    1/(HT)
  5             20             50            2.5              0.400   0.1000
 10             10             40            4.0              0.250   0.0500
 20              7             55            7.9              0.127   0.0250
 40              5             45            9.0              0.111   0.0125
 80              3             38           12.6              0.079   0.0062
160              3             35           11.6              0.086   0.0031
Cage area was 10 sq.m., and duration of experiment was T=2 days.
Holling's equation can be transformed to a linear form by taking reciprocals:

1/Ha = (1/a) · 1/(H T) + Th/T

Regression of y = 1/Ha against x = 1/(HT) gives:

y = 3.43 x + 0.0612

The slope estimates 1/a and the intercept estimates Th/T, so a = 1/3.43 ≈ 0.29 cages per day and Th = 0.0612 × 2 ≈ 0.12 days per prey.
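A least-squares fit to the table above can be reproduced in a few lines. The sketch below uses only the published table values; because those values are rounded, the fitted coefficients come out very close to, but not exactly equal to, the published line y = 3.43x + 0.0612.

```python
# Least-squares fit of the linearized disc equation to the cage data:
# y = 1/Ha, x = 1/(HT); slope = 1/a, intercept = Th/T (here T = 2 days).
H  = [5, 10, 20, 40, 80, 160]          # prey per cage
Ha = [2.5, 4.0, 7.9, 9.0, 12.6, 11.6]  # average prey killed
T  = 2.0

x = [1.0 / (h * T) for h in H]
y = [1.0 / ha for ha in Ha]
n = len(x)
sx, sy = sum(x), sum(y)
sxy = sum(xi * yi for xi, yi in zip(x, y))
sxx = sum(xi * xi for xi in x)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

a  = 1.0 / slope       # search rate (cages per day)
Th = intercept * T     # handling time (days per prey)
print(round(slope, 2), round(intercept, 3), round(a, 2), round(Th, 2))
```

With these rounded table values the fit gives a slope near 3.4 and an intercept near 0.06, i.e., a ≈ 0.3 cages per day and Th ≈ 0.13 days per prey.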
Type III functional response can be simulated using the same Holling equation with the search rate (a) made an increasing function of prey density.
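The exact form of the density-dependent search rate is not reproduced here, so the sketch below uses a hypothetical linear form a(H) = bH; any increasing a(H) produces the sigmoid (S-shaped) type III curve, with per-prey mortality first rising and then falling with density.

```python
# Type III response sketch: search rate increases with prey density.
# ASSUMPTION: a(H) = b*H is a hypothetical form (the lecture's exact
# expression is not shown); any increasing a(H) yields the sigmoid
# curve characteristic of a type III functional response.
def holling3(H, b, Th, T=1.0):
    a = b * H                            # density-dependent search rate
    return a * H * T / (1.0 + a * Th * H)

# Per-prey mortality (Ha / H) first rises, then falls with density:
mort = [holling3(H, b=0.01, Th=0.1) / H for H in (1, 5, 10, 50, 200)]
print([round(m, 3) for m in mort])
```

The hump in per-prey mortality is the key property: only in the rising part of this curve can predators regulate prey density.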
Numerical Response
Numerical response means that predators become more abundant as prey density increases. However, the term "numerical response" is rather confusing because it may result from 2 different mechanisms:
• growth of the local predator population due to increased reproduction and survival, and
• aggregation of predators in areas of high prey density.
The reproduction rate of predators naturally depends on their predation rate: the more prey consumed, the more energy the predator can allocate to reproduction. Mortality also declines with increased prey consumption.
The simplest model of the predator's numerical response is based on the assumption that the reproduction rate of predators is proportional to the number of prey consumed. This is like conversion of prey into new predators: for example, for every 10 prey consumed, a new predator is born.
We will start with the prey population. Predation rate is simulated using Holling's "disc equation" of functional response. The rate of prey consumption by all predators per unit time equals

a H P / (1 + a Th H)

Assuming that without predators prey population density increases according to the logistic model, the prey equation is:

dH/dt = rH H (1 − H/K) − a H P / (1 + a Th H)

Predator density follows a logistic equation with a carrying capacity proportional to prey density:

dP/dt = rP P (1 − P / (k H))

This equation represents the numerical response of the predator population to prey density.
Simulation results are presented below. This model exhibits a greater variety of dynamic regimes than the Lotka-Volterra model.
rH = 0.2, K = 500, a = 0.001, Th = 0.5, rP = 0.1, k = 0.2   No oscillations
rH = 0.2, K = 500, a = 0.1,   Th = 0.5, rP = 0.1, k = 0.2   Damping oscillations converging to a stable equilibrium
rH = 0.2, K = 500, a = 0.3,   Th = 0.5, rP = 0.1, k = 0.2   Limit cycle
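These regimes can be reproduced numerically. The sketch below assumes a prey equation of logistic growth minus type II consumption, and a predator equation of logistic form with carrying capacity proportional to prey density (kH); this form is consistent with the parameter set (rH, K, a, Th, rP, k) and with the equilibrium near N* = 351 reported below for a = 0.001, but it is a reconstruction, not the lecture's verbatim equations.

```python
# Predator-prey model with type II functional response (reconstructed form):
#   dH/dt = rH*H*(1 - H/K) - a*H*P/(1 + a*Th*H)
#   dP/dt = rP*P*(1 - P/(k*H))

def simulate(a, rH=0.2, K=500.0, Th=0.5, rP=0.1, k=0.2,
             H0=500.0, P0=10.0, dt=0.01, steps=100_000):
    H, P = H0, P0
    for _ in range(steps):
        cons = a * H * P / (1.0 + a * Th * H)     # total consumption rate
        H += (rH * H * (1.0 - H / K) - cons) * dt
        P += (rP * P * (1.0 - P / (k * H))) * dt
    return H, P

# With a weak predator (a = 0.001) the system settles at a stable
# equilibrium near H* = 351 with P* = k*H* (no oscillations):
H, P = simulate(a=0.001)
print(round(H), round(P))
```

Raising the search rate a destabilizes the equilibrium, producing first damped oscillations and eventually a limit cycle, as listed in the table above.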
This model can be used to simulate biological control. The goal of biological control is to suppress the density of the pest population using natural enemies. We will assume that the prey in our model is a dangerous pest, and that the predator was introduced to suppress its density. Without predators, the density of the prey population equals the carrying capacity, K = 500. After a predator with a search rate a = 0.001 was introduced, the equilibrium population density, N*, declined to the value of 351. Beddington et al. (1978, Nature, 273: 573-579) suggested measuring the degree of pest suppression by the ratio:

q = N* / K

It could be expected that more effective predators will cause greater suppression of the prey population density. But this is not true, because more effective natural enemies also cause larger oscillations in population density. For example, at a = 0.3 the equilibrium is not stable and populations exhibit periodic cycles (see graph #3 above). Periodically, host density reaches the value of 190.
The transition between the stable equilibrium and the limit cycle occurs at approximately a = 0.244. The transition from one type of dynamics to another is often called a "phase transition" (cf. transitions between liquid and gas phases, or between solid and liquid phases). The phase transition in our model becomes less abrupt if we introduce noise. With noise, the system exhibits oscillations even when the equilibrium is stable, and the closer we are to the critical value a = 0.244, the larger these oscillations become. These oscillations result from the interaction of predators with prey.
This example illustrates that pest regulation (or control) by natural enemies is an ambiguous notion. First, it does not specify the type of dynamics (stable equilibrium vs. limit cycle), and second, an excess of "regulation" may cause large oscillations of prey density.
Parasitoids and their hosts often have synchronized life cycles, e.g., both have one generation per year (univoltine). Thus, host-parasitoid models usually use discrete time steps that correspond to generations (years).
If parasitoid eggs are distributed randomly among hosts, the number of eggs per host follows the Poisson distribution:

p(i) = exp(−M) M^i / i!

where p(i) is the proportion of hosts that receive i parasitoid eggs, and M is the mean number of parasitoid eggs per host. Surviving hosts are those that receive 0 parasitoid eggs; the proportion of surviving hosts equals p(0) = exp(−M).
Variables: H = host density; P = parasitoid density.
Parameters: Ro = host reproduction rate; F = fecundity of a parasitoid female; q = proportion of females among parasitoid progeny.

In the model of Thompson, the mean number of eggs per host is M = F P / H, and the model is:

H(t+1) = Ro H(t) exp(−F P(t) / H(t))
P(t+1) = q H(t) [1 − exp(−F P(t) / H(t))]

The first equation describes host survival and reproduction: the number of surviving hosts is multiplied by Ro, which represents reproduction. In the second equation, each parasitized host produces one adult parasitoid in the next generation. P is the density of females only; thus, the number of parasitized hosts is multiplied by the proportion of females, q.
In the model of Thompson, it is assumed that parasitoids always lay all their eggs; thus, realized fecundity equals potential fecundity. This assumption implies unlimited search abilities of parasitoids. In nature, parasitoids often do not realize their potential fecundity simply because they cannot find enough hosts. Thus, the model of Thompson may overestimate parasitism rates, especially if host density is low.
Because each encounter with a host results in depositing one egg, the realized fecundity equals the product of the area of discovery and host density: F = aH. Substituting this value of F into the Thompson model (so that M = aP), we get the Nicholson and Bailey model:

H(t+1) = Ro H(t) exp(−a P(t))
P(t+1) = q H(t) [1 − exp(−a P(t))]
In the Nicholson and Bailey model, the potential fecundity of parasitoids is not limited: a parasitoid lays an egg at every encounter with a host, even if the number of encounters is very large (e.g., if host density is high). Thus, this model may overestimate parasitism rates at high host density.
We will use Holling's disc equation (see section 10.3) to model the functional response of parasitoids. The number of hosts attacked by one parasitoid female is equal to

Ha = a H T / (1 + a Th H)

We can modify this equation by setting T = 1 because the search rate is considered per lifetime of a parasitoid female; the lifetime can be coded as 1 because the time step equals one generation. The ratio 1/Th is then the maximum fecundity of a parasitoid female. Then:

Ha = a H / (1 + a Th H)

When a parasitoid female attacks a host, it lays an egg. Thus, realized fecundity is F = Ha, and the mean number of eggs per host is M = F P / H = aP / (1 + a Th H). Substituting this value into the Thompson model, we get the model of Rogers:

H(t+1) = Ro H(t) exp(−a P(t) / (1 + a Th H(t)))
P(t+1) = q H(t) [1 − exp(−a P(t) / (1 + a Th H(t)))]
In the model of Rogers, realized fecundity differs from potential fecundity, whereas in the previous models this distinction was absent.
All of these host-parasitoid models are unstable: they generate oscillations with increasing amplitude. This is the dynamics of the model of Nicholson and Bailey.
Parameter values are: r = 1.5 per yr; Th = 0.0003 yr; P = 0.01 birds/sq.m.; b = 1500 sq.m./yr; c = 30 larvae/sq.m.
Find all equilibrium densities of the spruce budworm population. Which equilibria are stable and which are unstable? At what population density do birds fail to control the spruce budworm population?
10.3. The tachinid fly Parasetigena silvestris is a parasitoid of the gypsy moth. Both the host and the parasitoid have one generation per year. P. silvestris attacks large larvae and emerges from pre-pupae or pupae. The following results were obtained after three years of study:
Year   Density of host large larvae (per sq.m.)   Percentage of parasitism
1988   0.45                                       12
1989   2.15                                        8
1990   4.10                                        ?
10.4. Why may the distribution of parasitoid eggs laid on hosts differ from the Poisson distribution?
• Fecundity is limited
• Search rate is limited
• Host mortality does not depend on host density
• Host mortality does not depend on parasite density
10.7. Host density is 5 individuals per square meter; parasitoid density is 1 female per square meter; one parasitoid female can parasitize a maximum of 100 hosts, and its search rate (area of discovery) is 1 sq.m. per lifetime. What proportion of parasitism is predicted by each of the three models: Thompson, Nicholson & Bailey, and Rogers?
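As a sketch of how these three predictions could be computed, the snippet below evaluates the parasitism fraction 1 − p(0) = 1 − exp(−M), with the mean egg number M taken from each model as described in the sections above (Thompson: M = F P/H; Nicholson & Bailey: M = aP; Rogers: M = aP/(1 + a Th H) with Th = 1/maximum fecundity).

```python
import math

# Parasitism fraction 1 - p(0) = 1 - exp(-M) for the three models,
# with the mean egg number M defined by each model:
H, P = 5.0, 1.0          # host and parasitoid densities (per sq.m)
Fmax = 100.0             # maximum fecundity of a parasitoid female
a = 1.0                  # search rate (sq.m per lifetime)
Th = 1.0 / Fmax          # handling time implied by maximum fecundity

thompson = 1 - math.exp(-Fmax * P / H)            # M = F*P/H with F = Fmax
nich_bailey = 1 - math.exp(-a * P)                # F = a*H  =>  M = a*P
rogers = 1 - math.exp(-a * P / (1 + a * Th * H))  # M = a*P/(1 + a*Th*H)
print(round(thompson, 3), round(nich_bailey, 3), round(rogers, 3))
```

The three models rank as expected: Thompson (unlimited search) predicts nearly complete parasitism, Nicholson & Bailey about 63%, and Rogers slightly less because handling time reduces the realized fecundity.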
Lecture 11. Competition and Cooperation
11.1. Intra-specific competition
Intraspecific competition results in a reduction of population growth rate as population density increases. We have already studied several models that consider intraspecific competition: the logistic model and Ricker's model. In these models, population growth rate steadily declines with increasing population density.
Forest insect defoliators often exhibit "scramble" competition. As long as larvae can find foliage they survive, but when all foliage is destroyed, mortality increases very rapidly. This is because these insects have seasonally synchronized life cycles: if food is exhausted before the larvae can pupate, then none of them pupates.
Mechanisms of competition:
Direct interaction among organisms makes competition balanced and usually results in a gradual decline of population growth rate as the amount of resources decreases.
Major problems:
1. In conservation ecology: to prevent extinction of particular species; predict
potential losses in species composition after introduction of competitors; to
reduce competition effects.
2. In biocontrol: to find an exotic natural enemy that will successfully fit into the
   community of existing natural enemies; to find exotic non-pest competitors that
   may oust the pest species.
Now, we will introduce the second (competing) species. As a result, the figure becomes
two-dimensional:
In this example, species #1 becomes extinct as a result of its competition with species
#2.
The competitive exclusion principle was first formulated by Grinnell (1904), who wrote:
"Two species of approximately the same food habits are not likely to remain long evenly
balanced in numbers in the same region. One will crowd out the other; the one longest
exposed to local conditions, and hence best fitted, though ever so slightly, will survive,
to the exclusion of any less favored would-be invader."
If competing species are ecologically identical (use the same resource), then inter-
specific competition is equivalent to intra-specific competition. Each organism
competes with all organisms of both populations. As a result, population growth rate of
each population is determined by the sum of numbers of both populations:
Excel spreadsheet "lotkcomp.xls"
In this case, both isoclines are parallel and have a slope of 45° (see figures above). The species with the higher carrying capacity (K) always wins. A higher carrying capacity means that the species can endure more crowding than the other (e.g., due to a more effective search for resources). This kind of competitive exclusion is called K-selection because it always goes in the direction of increasing K.
Theoretically, it is possible that the weights wi > 1. This means that organisms of the other species are stronger competitors than organisms of the same population. I don't know any example of this sort, but this situation is always discussed in ecological textbooks. If wi > 1 and the isoclines intersect, then one species will oust the other, but which species is excluded depends on initial conditions (the initial numbers of both populations):
This system has an unstable equilibrium that separates two areas of attraction: (1) where the first species ousts the second, and (2) where the second species ousts the first.
When the principle of competitive exclusion became widely known among ecologists, it seemed to contradict some well-known facts, and these contradictions were formulated as "paradoxes". For example, the "plankton paradox" concerned the diversity of plankton organisms, which all seem to use the same resources. All plankton algae use solar energy and minerals dissolved in the water, yet there are far fewer mineral components than there are species of plankton algae.
There is no final solution to this paradox. However, it has become clear that coexistence of species that use the same resource is a common phenomenon. The mathematical models described above are correct, but they are oversimplified; thus it is difficult to apply them to real species. More complicated and more realistic models indicate that species coexistence is possible. For example, plankton algae have distinct seasonality in their abundance, which is ignored in the simple Lotka-Volterra model: a cyclical dynamic regime allows species to coexist even if they cannot coexist in stable systems. Another important factor is spatial heterogeneity, whose effect is substantial even in such seemingly homogeneous systems as the ocean.
The group effect is an increase of population growth rate with increasing population density when density is low. Populations with a group effect face a greater danger of extinction because there is a minimum population density below which the population declines until it is gone.
Cooperation between different species is relatively rare in nature, and we will not study its models; it usually takes the form of symbiosis.
Lecture 12. Dispersal and Spatial Dynamics
12.1. Random walk
Early population studies concentrated on local population dynamics. However, spatial processes are very important in the life systems of most species. They may modify system behavior so significantly that a local model would be unable to predict population changes.
Let's take the problem of insect pest control as an example. The first question is what area to treat: if the area is too small, it will be immediately colonized by immigrants. Crop rotation is often used to prevent the propagation of pests, but fields planted with the same crop in two consecutive years should be separated by more than the migration distance. Finally, many insect pests are sampled using traps (pheromone-baited traps or UV traps); to estimate pest density from trap catches, it is important to know the dispersal abilities of the insect.
The main problem: how many organisms disperse beyond a specific distance?
A random walk is simulated here assuming that 50% of individuals stay at the same place, 25% move to the left, and 25% move to the right. After several time steps, the distribution of organisms becomes close to the normal distribution:
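This random walk is straightforward to simulate. The sketch below uses exactly the stated step probabilities (stay 0.5, left 0.25, right 0.25); each step then has variance 0.5, so after t steps the positions should be approximately normal with mean 0 and variance 0.5t.

```python
# Random walk on a line: each time step an organism stays with
# probability 0.5, moves one step left with probability 0.25,
# or one step right with probability 0.25.
import random

random.seed(1)

def walk(steps):
    x = 0
    for _ in range(steps):
        u = random.random()
        if u < 0.25:
            x -= 1
        elif u < 0.5:
            x += 1
    return x

# After 100 steps, positions are roughly normal: mean 0, variance 50.
positions = [walk(100) for _ in range(2000)]
mean = sum(positions) / len(positions)
var = sum((p - mean) ** 2 for p in positions) / len(positions)
print(round(mean, 2), round(var, 1))
```

A histogram of `positions` would show the bell shape emerging, which is the diffusion approximation that Skellam's model builds on.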
Skellam's model predicts that if a population was released at a single point, then its spatial distribution will be a two-dimensional normal distribution.
One of the most interesting features of this model is that it predicts the asymptotic rate of expansion of the population front. The rate of population expansion, V, is defined as the distance between sites with equal population densities in two successive years. Skellam's model gives the following equation for the rate of population expansion:

V = 2 sqrt(r D)

where r is the intrinsic rate of population increase and D is the diffusion coefficient. The diffusion coefficient can be estimated from mark-recapture data as

D = [M(t)]^2 / (π t)

where M(t) is the mean displacement of organisms recaptured t time units after their release (Skellam 1973).
Example:
The muskrat (Ondatra zibethica) was introduced to Europe in 1905 near Prague. Since that time its range has expanded, and the front has moved at rates ranging from 0.9 to 25.4 km/yr. The intrinsic rate of population increase was estimated as 0.2-1.1 per year, and the diffusion coefficient ranged from 51 to 230 sq.km/yr. The predicted spread rate (6.4-31.8 km/yr) corresponds well to the actual rates of spread.
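The predicted range quoted for the muskrat follows directly from Skellam's spread-rate formula V = 2·sqrt(rD); the sketch below plugs in the extreme parameter estimates and recovers the 6.4-31.8 km/yr interval.

```python
import math

# Asymptotic rate of range expansion in Skellam's model:
# V = 2*sqrt(r*D), r = intrinsic rate of increase, D = diffusion coefficient.
def spread_rate(r, D):
    return 2.0 * math.sqrt(r * D)

# Muskrat example: r = 0.2-1.1 per yr, D = 51-230 sq.km/yr.
low  = spread_rate(0.2, 51)    # slowest predicted spread (km/yr)
high = spread_rate(1.1, 230)   # fastest predicted spread (km/yr)
print(round(low, 1), round(high, 1))
```

Note how strongly the rate depends on both parameters: because V grows as the square root of the product rD, a tenfold increase in either r or D only roughly triples the spread rate.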
Passive transportation mechanisms are most important for discontinuous dispersal. They
include wind-borne transfer of small organisms (especially, spores of fungi, small insects,
mites); transportation of organisms on human vehicles and boats. Discontinuous long-distance
dispersal usually occurs in combination with short-distance continuous dispersal. This
combination of long- and short-distance dispersal mechanisms is known as stratified dispersal
(Hengeveld 1989).
n(a) = n0 exp(r a)

where n(a) is the number of individuals in a colony of age a, n0 is the initial number of individuals in a newly established colony, and r is the intrinsic rate of population increase.
The population front is defined as the farthest point where the average density of individuals
per unit area, N, reaches the carrying capacity, K:
N = K.
The rate of spread, v, can be determined using the traveling wave equation. We assume that the velocity of population spread, v, is stationary. Then the density of colonies m(a, x) of age a at distance x from the population front equals the colony establishment rate, b(x), evaluated a time units ago. At that time, the distance from the population front was x + av; thus, m(a, x) = b(x + av). The average number of individuals per unit area at distance x from the population front is equal to:
N(x) = ∫ b(x + a v) n(a) da

where n(a) is the number of individuals in a colony of age a. The population front is defined by the condition N(0) = K. Thus, the traveling wave equation is

K = ∫ b(a v) n(a) da
This equation can be used for estimating the rate of population spread. To evaluate the integral we need to define the functions b(x) and n(a) explicitly. We assume that the rate of colony establishment, b(x), declines linearly with distance from the front, reaching zero at x = xmax. After substituting these functions b(x) and n(a) into the traveling wave equation, we obtain an equation in the relative rate of population spread, V = v/xmax. This equation can be solved numerically for V, and the rate of spread is then estimated as v = V xmax.
This model can be used to predict how barrier zones (where isolated colonies are detected and
eradicated) reduce the rate of population spread. We will assume that the barrier zone is placed
in the transition zone at some particular distance from the population front. Because new
colonies are eradicated in the barrier zone, we can set the colony establishment rate, b(x), equal
to zero within the barrier zone as it is shown in the figure below
If this new function b(x) is used in the traveling wave equation, we get the rate of spread with the barrier zone. The figure below shows the effect of the barrier zone on the rate of population spread. The relative width of the barrier zone is measured as its proportion of the width of the transition zone. The relative reduction of population spread is measured as 1 minus the ratio of the spread rate with the barrier zone to the maximum rate of population spread (without the barrier zone).
This model predicted that barrier zones used in the Slow-the-Spread project should reduce the rate of gypsy moth spread by 54%. This prediction was close to the 59% reduction in the rate of gypsy moth spread observed in the Central Appalachian Mountains since 1990 (when the strategy of eradicating isolated colonies was started).
The figure above shows 3 basic rules for the dynamics of the cellular automaton that simulates stratified dispersal. Results of the cellular-automaton simulation for several sequential time steps are shown in the figure at the right. It can be seen how isolated colonies become established, grow, and then coalesce. This model was used for predicting the barrier-zone effect on the rate of population spread, and the results were similar to those obtained with the metapopulation model.
References
Sharov, A. A., and A. M. Liebhold. 1998. Model of slowing the spread of gypsy moth (Lepidoptera: Lymantriidae) with a barrier zone. Ecol. Appl. 8: 1170-1179.
Sharov, A. A., and A. M. Liebhold. 1998. Bioeconomics of managing the spread of exotic pest species with barrier zones. Ecol. Appl. 8: 833-845.
Sharov, A. A., and A. M. Liebhold. 1998. Quantitative analysis of gypsy moth spread in the Central Appalachians. Pp. 99-110. In: J. Braumgartner, P. Brandmayer and B.F.J. Manly [eds.], Population and Community Ecology for Insect Management and Conservation. Balkema, Rotterdam.
Sharov, A. A., A. M. Liebhold, and E. A. Roberts. 1998. Optimizing the use of barrier zones to slow the spread of gypsy moth (Lepidoptera: Lymantriidae) in North America. J. Econ. Entomol. 91: 165-174.
Local populations usually inhabit isolated patches of resources, and the degree of
isolation may vary depending on the distance among patches:
One of the first metapopulation models was developed by MacArthur and Wilson (1967). They considered immigration of organisms (e.g., birds) from a continent to islands in the ocean. The proportion of islands colonized by a species, p, changes according to the equation:

dp/dt = c (1 − p) − e p

where c is the colonization rate and e is the extinction rate.
The equilibrium proportion of colonized islands can be found by solving the equation dp/dt = 0:

p* = c / (c + e)
Assume that the extinction rate declines with increasing island diameter S:

e = e0 exp(−α S)

and that the colonization rate declines with increasing distance D from the continent:

c = c0 exp(−β D)

Now the proportion of colonized islands becomes a function of island size and its distance from the continent:

p* = 1 / [1 + (e0/c0) exp(β D − α S)]
If there is a group of species with similar biology and similar migration capabilities, then the proportion of colonized islands is proportional to the number of species that live on an island. Now the model can be tested using regression:

ln(p* / (1 − p*)) = γ + α S − β D

where γ = ln(c0/e0).
Example: There are 100 bird species on the continent and the number of species on
islands is in the following table:
Using linear regression of ln(p*/(1 − p*)) against the two factors S and D, we get the following model parameters: α = 0.229; β = 0.0467; and γ = −1.29. These parameters can be used to predict the number of species on other islands from information about island size and distance from the continent.
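Given the fitted parameters, a prediction is one inverse-logit away. The sketch below assumes the regression form ln(p*/(1 − p*)) = γ + αS − βD implied by the fitted values of α, β, and γ; the two example islands (and the units of S and D) are hypothetical.

```python
import math

# Predicted share of the continental species pool present on an island,
# using the fitted regression ln(p*/(1-p*)) = gamma + alpha*S - beta*D
# (form reconstructed from the regression described above):
alpha, beta, gamma = 0.229, 0.0467, -1.29
POOL = 100  # species on the continent

def species(S, D):
    """S: island diameter; D: distance from the continent (hypothetical units)."""
    z = gamma + alpha * S - beta * D
    p = 1.0 / (1.0 + math.exp(-z))   # inverse logit gives p*
    return POOL * p

# Hypothetical islands: a large near island vs. a small far island.
print(round(species(S=10, D=5)), round(species(S=5, D=50)))
```

As expected from the signs of α and β, larger and closer islands are predicted to hold more species.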
This model can be expanded by incorporating island size and degree of isolation in the
same way as the previous model.
Lecture 13. Population Outbreaks
Ecological Mechanisms of Outbreaks
Population outbreaks are characterized by a rapid change in population density over several orders of magnitude. Only a small number of species have outbreaks, e.g., some insect pests, pathogens, and rodents. Population outbreaks often cause serious ecological and economic problems. Examples of outbreak species: locusts, southern pine beetle, spruce budworm, gypsy moth.
The building phase in insect pests often goes unnoticed because the effect of pests on
host plants is still very small. Regular monitoring helps to detect population growth
before the pest species devastates host plants over large areas.
Examples of "amplifiers":
• Destruction of resources
• Natural enemies
• Unfavorable weather
A Model of an Outbreak
The eastern spruce budworm (Choristoneura fumiferana) is a forest pest insect that defoliates spruce stands in Canada and Maine. Outbreaks occur at intervals of 30-40 years; the last outbreaks were in 1910, 1940, and 1970, and they resulted in defoliation of 10, 25, and 55 million hectares, respectively.
Clark and Holling (1979, Fortschr. Zool. 25: 29-52) developed a simple model of the spruce budworm population dynamics. It includes (1) logistic population growth, and (2) a type III functional response of polyphagous predators (birds). The biological background of this model is not very solid because several important factors were ignored (e.g., parasitism, diseases). However, the model captures the dynamic features of the system, and we will use it as an example of an outbreak model.
This model describes the start of an outbreak due to escape from natural enemies. The phase plot of the model is shown below. There are 2 stable equilibria: at the lower equilibrium (N* = 2.5) population numbers are stabilized by predation, and at the higher equilibrium the population reaches its carrying capacity. The switch point between these equilibria is N = 10.
If we run the model with a small environmental noise, we will get the following output:
The population density once came very close to the switch point (N = 10) but then returned due to unfavorable conditions at that particular time. If we increase the amplitude of the noise, the population eventually passes the switch point and the outbreak starts. The lower graph is a magnification of the upper graph:
Now we will increase parameter r (intrinsic rate of increase) to the value of r = 1.1. The
phase plot changes and the distance between the lower equilibrium and the switch point
becomes smaller:
Now the outbreak starts even with low noise (as in the first graph) and much earlier. The lower graph is a magnification of the upper graph.
This model does not include mechanisms that may cause the collapse of outbreak populations; thus, the outbreak continues forever in the model population. In nature, outbreak populations of the spruce budworm cause severe defoliation and destroy the forest, and as a result the population collapses. The interaction of the spruce budworm with its host trees is considered in the following section.
Catastrophe theory
Catastrophe theory was very fashionable in the 1970s and 1980s; René Thom was one of its spiritual leaders. The theory originated from the qualitative theory of differential equations, and it has nothing in common with the Apocalypse or UFOs.
A catastrophe means the loss of stability in a dynamic system. The major method of this theory is to sort dynamic variables into slow and fast. The stability features of the fast variables may then change gradually due to the dynamics of the slow variables.
The performance of spruce budworm populations is better in mature spruce stands than in young stands. Thus, we will assume that the intrinsic rate of increase (r) and the carrying capacity (K) both increase with A, the average age of trees in a stand. Now the model is represented by the equation:

dN/dt = r(A) N (1 − N / K(A)) − b P N² / (c² + N²)

The first term is the logistic model, and the second term describes mortality caused by generalist predators which have a type III functional response. Equilibrium points can be found by setting the right-hand side to zero (dN/dt = 0).
The left graph shows phase plots for various forest ages from A = 35 to A = 85, and the right graph shows the equilibrium points (where the derivative equals 0). Only one non-zero equilibrium exists if A < 38 or A > 74. If 38 < A < 74, then there are two stable equilibria separated by one unstable equilibrium: the equilibrium line folds.
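Equilibria of this kind of model can be located numerically by scanning dN/dt for sign changes. The sketch below assumes the commonly used form of logistic growth minus type III predation, dN/dt = rN(1 − N/K) − bPN²/(c² + N²); the parameter values are illustrative (chosen to put the system in the folded, three-equilibria region), not the lecture's.

```python
# Equilibria of an outbreak model: logistic growth minus type III
# predation (illustrative parameters, chosen so that three non-zero
# equilibria exist):
#   dN/dt = r*N*(1 - N/K) - bP*N**2 / (c**2 + N**2)

r, K = 0.2, 1000.0
bP, c = 15.0, 30.0   # bP: total predation pressure; c: half-saturation density

def growth(N):
    return r * N * (1.0 - N / K) - bP * N**2 / (c**2 + N**2)

def bisect(lo, hi, tol=1e-6):
    # growth() changes sign on [lo, hi]; narrow the bracket by bisection.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if growth(lo) * growth(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Scan for sign changes of dN/dt to locate the non-zero equilibria:
grid = [0.5 * i for i in range(1, 2001)]   # N from 0.5 to 1000
eq = [bisect(x1, x2) for x1, x2 in zip(grid, grid[1:])
      if growth(x1) * growth(x2) < 0]
print([round(n, 1) for n in eq])
```

With these parameters the scan finds three equilibria: a low stable one (maintained by predation), a middle unstable one (the switch point), and a high stable one near the carrying capacity.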
The age of trees continues to increase with time. Age can be considered a "slow" variable compared with population density, which is a "fast" variable. The dynamics of the system can be explained using the graph:
Fast processes are vertical arrows; slow processes are thick arrows. A slow process goes along a stable line until that line ends; then there is a fast "jump" to another stable line.
Direction of slow processes. When the density of the spruce budworm is low, there is little tree mortality and the average age of trees increases. Thus, the slow process on the lower branch of stable budworm density is directed to the right (increasing stand age). The upper branch of stable budworm density corresponds to outbreak populations. Old trees are more susceptible to defoliation and die first; thus, the mean age of a defoliated stand decreases, and the slow process on the upper branch goes backwards.
Population dynamics can be described as a limit cycle that includes 2 periods of slow change and 2 periods of fast change. The transition to a fast process is a catastrophe. This model is built in Excel:
We can add stochastic fluctuations due to weather or other factors. As the age of trees increases, the domain of attraction of the endemic (= low-density) equilibrium becomes smaller. As a result, the probability of an outbreak increases with increasing forest age. If an outbreak occurs in a young forest stand, it is possible to suppress the population and return it to the area of stability. But if the stand is old, the endemic equilibrium has a very narrow domain of attraction, and thus the probability of an outbreak is very high. Finally, the lower equilibrium disappears, and it is no longer possible to avoid an outbreak by suppressing the budworm population.
The model suggests reducing forest age by cutting the oldest trees; this will move the system back into the stable area.
Classification of Outbreaks
Classification of insect outbreaks was independently developed by Berryman (1987)
and Isaev and Khlebopros (1984).
Stable high equilibrium: sustained eruption
Unstable high equilibrium: pulse eruption
Below is a classification of all types of population dynamics (not only those of outbreak species):
• Gradient populations: respond directly to external factors (no density-dependent amplification). They have high density in favorable conditions and low density in unfavorable conditions (both in space and time). High-density populations never spread (never cause a population increase in surrounding populations).
• Eruptive populations: the effect of external factors is amplified by inverse
density-dependence (=release effect). Amplifying mechanisms were discussed in
the first section. Outbreaks of eruptive populations are able to spread (traveling
wave). See the spreading outbreak of the southern pine beetle.
• Sustained eruption: environmental fluctuations may cause the transition of the
population from the low equilibrium to the high equilibrium. Examples: bark
beetles, spruce budworm
• Pulse eruption: environmental fluctuations trigger an outbreak which collapses
immediately (e.g., due to parasites). Examples: gypsy moth, pine sawflies, etc.
• Cyclical eruption: both equilibria are unstable and populations cycle around them. Examples: Zeiraphera diniana, Cardiaspina albitextura.
In bark beetles and some sawyer beetles (Cerambycidae), two stable population equilibria exist because of a positive feedback: a massive beetle attack on a tree overcomes its resistance, and thus the greater the population density, the more resources (weakened trees) are available.
Mechanisms of beetle attack may differ. Bark beetles bore holes in the bark. If there are only a few beetles, the holes become filled with resin and the beetles die. If thousands of beetles bore their holes simultaneously, the tree does not have enough resin for self-defense.
Sawyer beetles (Monochamus urussovi) oviposit into the boles of weakened trees. Adults feed on tree twigs and can weaken a tree if population density is high. As a result, more oviposition sites become available.