
REVIEW
2014 Canadian Geotechnical Colloquium: Landslide runout
analysis — current practice and challenges
Scott McDougall

Abstract: Flow-like landslides, such as debris flows and rock avalanches, travel at extremely rapid velocities and can impact large
areas far from their source. When hazards like these are identified, runout analyses are often needed to delineate potential
inundation areas, estimate risks, and design mitigation structures. A variety of tools and methods have been developed for these
purposes, ranging from simple empirical–statistical correlations to advanced three-dimensional computer models. This paper
provides an overview of the tools and methods that are currently available and discusses some of the main challenges that are
currently being addressed by researchers, including the need for better guidance in the selection of model input parameter
values, the challenge of translating model results into vulnerability estimates, the problem with too much initial spreading in
the simulation of certain types of landslides, the challenge of accounting for sudden channel obstructions in the simulation of
debris flows, and the sensitivity of models to topographic resolution and filtering methods.

Key words: landslides, runout analysis, modelling, risk assessment.

Introduction
Landslide runout analysis is used to simulate the motion of past landslides and to predict the motion of potential future landslides. It is
often a key step in landslide risk assessment and mitigation design, especially in cases involving extremely rapid, flow-like landslides, such as
debris flows and rock avalanches (cf. Hungr et al. 2014 for landslide type definitions).
The high mobility and destructive potential of these types of landslides was demonstrated by the 2010 Mount Meager
rock slide – debris flow in southwestern British Columbia (Fig. 1). Approximately 50 × 10⁶ m³ of weak volcanic rock failed on the flank
of Mount Meager and travelled down Capricorn Creek, reaching estimated velocities of over 80 m/s (290 km/h), before temporarily
damming Meager Creek and then Lillooet River more than 12 km from the landslide source (Guthrie et al. 2012). The risk of a sudden
landslide dam outburst flood on Meager Creek forced the temporary evacuation of 1500 residents from the lower Lillooet River valley
(Guthrie et al. 2012); however, the dam breached gradually within a couple of hours.
In spite of substantial direct economic costs of about CAD$10 million
(Guthrie et al. 2012), the outcome of the Mount Meager event was very fortunate in the sense that no lives were lost, in part because of its
relatively remote location (although the landslide did narrowly miss an occupied campground located next to Lillooet River).
However, worldwide, landslides cause thousands of fatalities every year (Petley 2012). Significantly, most landslide-related deaths occur
on relatively flat land in the distal runout zone (D. Petley, personal communication, 2014), where human development is generally
concentrated and high-velocity landslide impacts can still occur, often with little or no advance warning. The landslide that occurred
near Oso, Washington, in March 2014, which travelled more than 1 km across the valley bottom and caused 43 fatalities in the
community of Steelhead Haven (Keaton et al. 2014; Iverson et al. 2015; Hibert et al. 2015; Wartman et al. 2016), is a recent example of
this widespread problem. In some cases, agricultural development on the flat land in the runout zone may also contribute to the
mobility of landslides and exacerbate the consequences (e.g., Evans et al. 2007). Because stabilization of landslide source areas is not
always possible, tools and methods are needed to predict landslide runout behaviour and help manage land use and (or) design
protection in the runout zone.
The main objectives of this paper are to provide an overview of the runout analysis tools and methods that are currently available and
to discuss some of the main challenges that are currently being addressed by researchers. Some ideas for future research are also
discussed briefly. The material described in this paper was presented during the annual Colloquium Lecture at the 2014 Canadian
Geotechnical Society conference in Regina, but includes several updates.

Received 22 February 2016. Accepted 1 November 2016.


S. McDougall. Department of Earth, Ocean and Atmospheric Sciences, The University of British Columbia, Vancouver, BC, Canada.
Email for correspondence: smcdouga@eoas.ubc.ca.
Copyright remains with the author(s) or their institution(s). Permission for reuse (free in most cases) can be obtained from RightsLink.

Can. Geotech. J. 54: 605–620 (2017) dx.doi.org/10.1139/cgj-2016-0104 Published at www.nrcresearchpress.com/cgj on 6 December 2016.

Fig. 1. View looking up Capricorn Creek towards the source of the 2010 Mount Meager rock slide – debris flow. Note the dramatic
superelevation in channel bends and the complete stripping of vegetation along the path. (Photograph courtesy of John Clague, Simon Fraser
University.) [Colour online.]

Landslide runout analysis


Landslide runout analysis is the analysis of post-mobilization landslide motion. It can involve both the forensic-style
back-analysis (simulation) of previous events and the forward-analysis (prediction or forecasting) of potential future events. Runout
prediction is often required in the context of a landslide hazard or risk assessment (e.g., Willenberg et al. 2009; Froese et al. 2012;
Jakob et al. 2013; Loew et al. 2016), in which case it is desirable to be able to assign conditional probabilities to a range of potential
mobility outcomes.
Figure 2 illustrates the concept of probabilistic runout mapping in the context of Turtle Mountain in southwestern Alberta, site of the
1903 Frank Slide (McConnell and Brock 1904). The coloured dashed lines shown in Fig. 2 represent conceptual runout exceedance
probability isolines for a potential failure of the South Peak of Turtle Mountain, where various unstable rock masses have been
identified and are currently being monitored (Froese et al. 2012; Froese and Moreno 2014). Figure 2 is not an actual hazard map, but
is intended to illustrate the concept that the runout exceedance probability (i.e., the probability that a future event of a given size
will travel past each isoline) decreases with distance from the source slope. The analyses on which Fig. 2 is based are described in O.
Hungr Geotechnical Research Ltd. (2007). Runout analysis is also used to design mitigation structures, including debris barriers, berms,
and nets (Mancarella and Hungr 2010; Ashwood 2014). Runup heights and impact loads on such structures can be modelled directly or
estimated indirectly based on estimated flow depths and velocities at specific points of interest (e.g., Hübl et al. 2009; Kwan 2012).
Runout analysis is also used to help assess the potential secondary effects of landslides, including landslide-generated waves (Pastor et
al. 2009a; BGC 2012; Wang et al. 2015; Yavari-Ramshe and Ataie-Ashtiani 2015) and flooding caused by landslide dams (both upstream
flooding behind a dam and downstream flooding following a dam breach) (Schneider et al. 2014; Worni et al. 2014). Other secondary
effects, such as air blasts and dust cloud cover, can also be delineated on the basis of estimated runout limits.
Unfortunately, limited guidance is currently provided to practitioners carrying out runout analysis. Landslide guidelines published by the
Association of Professional Engineers and Geoscientists of British Columbia (APEGBC 2010) describe runout analysis and the associated
design of control structures as “specialty services” that may be beyond the scope of typical landslide assessments or may require expert
help. Some guidance on the selection of appropriate runout analysis tools and methods is provided in the national landslide guidelines
that are being published online by the Geological Survey of Canada (Lato et al. 2016). In contrast, relatively prescriptive guidance is
provided to practitioners in Hong Kong by the Geotechnical Engineering Office (GEO 2011).

Overview of methods
Runout analysis methods can be grouped into two broad categories (Fig. 3): (i) empirical–statistical methods that rely on statistical
geometric correlations and (ii) analytical methods that rely on process-based modelling. Numerical models, including both
continuum and discontinuum models, fall into the second category. Within this subcategory, hybrid “semi-empirical” numerical
models that rely on some form of parameter calibration are more common than pure mechanistic models that rely on independent
material property estimates.
Landslide modellers tend to rely heavily on empiricism because there are no universal constitutive laws governing landslides that are
straightforward to incorporate into numerical models (Pastor et al. 2012). Iverson and George (2014) recently formulated a two-phase
model that is capable of simulating the effects of dilatancy on evolving pore pressure response, which is a step towards more purely
mechanistic modelling of debris flows. However, landslides as a collective phenomenon are extremely diverse and complex. There is
still debate in the landslide community about the mechanisms of long runout behaviour, which include pore pressure response (Heim
1932; Abele 1997; Iverson 1997, 2012; Hungr and Evans 2004; Legros 2006; Iverson et al. 2011), but could also involve more exotic
behaviour, including lubrication by snow or ice (Evans and Clague 1988; Delaney and Evans 2014); fluidization by trapped air, vapour or
dust (Kent 1966; Hsu 1975; Manzanal et al. 2016); mechanical or acoustic fluidization of particles (Melosh 1979; Johnson et al.
2016); frictional weakening by flash heating (Lucas et al. 2014); and (or) forces generated by dynamic rock fragmentation (Bowman et al.
2012; Davies and McSaveney 2012). Such theories are very difficult to test, and the physical properties that go along with them are
very hard to measure because shear rates, pressures, and temperatures at the field scale are challenging to replicate in the laboratory.
On the other hand, numerical models that rely heavily on empirical calibration have been criticized for simulating the bulk
behaviour of landslides simply through “tuning” of parameters that may have questionable physical significance (Iverson 2003).

Fig. 3. Available runout analysis methods fall into two broad categories: empirical–statistical or analytical. Red dashed line indicates a
subcategory of hybrid “semi-empirical” numerical models that require parameter calibration. [Colour online.]

Empirical–statistical methods
The most practical empirical methods are based on simple geometric correlations. Two well-established examples are shown
schematically in Fig. 4. Figure 4a illustrates an inverse correlation between landslide volume and angle of reach or “fahrböschung” (the
angle of the line connecting the crest of the source with the toe of the deposit), which has been documented by several workers for a
variety of landslide types (e.g., Scheidegger 1973; Li 1983; Nicoletti and Sorriso-Valvo 1991; Corominas 1996; Hunter and Fell 2003). Figure
4b illustrates a similar simple correlation, based on Galileo scaling laws, between landslide volume and the area covered by the deposit,
which has been documented for rock avalanches and lahars (e.g., Li 1983; Hungr 1990; Iverson et al. 1998; Griswold 2004). The latter
correlation is the basis for the GIS-based computer program LAHARZ (Iverson et al. 1998), which is used by the U.S. Geological Survey to
map lahar hazards around U.S. volcanoes. Modifications were made to LAHARZ by Berti and Simoni (2014) to develop the program
DFLOWZ for unconfined flow conditions. Other empirical methods have been presented by Hsu (1975), Davies (1982), and Fannin and
Wise (2001).
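To make the angle-of-reach correlation concrete, a minimal sketch in Python is given below, assuming the log-linear form of Fig. 4a. The coefficient values are placeholders of the order reported for rock avalanches (e.g., Scheidegger 1973) and would need to be refitted to a relevant group of case histories before any real use.

```python
import math

def runout_length(volume_m3, drop_height_m, a=-0.157, b=0.624):
    """Estimate horizontal runout length L from the angle-of-reach
    correlation log10(H/L) = a*log10(V) + b (Fig. 4a form).
    Coefficients a and b are illustrative placeholders only."""
    log_tan_alpha = a * math.log10(volume_m3) + b
    tan_alpha = 10.0 ** log_tan_alpha  # H/L = tan(angle of reach)
    return drop_height_m / tan_alpha

# Hypothetical example: a 50 Mm3 failure with 2000 m of vertical drop
print(f"L = {runout_length(50e6, 2000.0):.0f} m")
```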
Whittall (2015) demonstrated that the empirical methods described above are also applicable to open-pit slope failures, for which
the mobility of events depends largely on the nature of the source material (with failures of weathered or altered rock masses being more
mobile than similarly sized failures of massive crystalline rock masses). Whittall (2015) proposed decision-making methods based on
these correlations to help mine operators improve their trigger action response plans and reduce the risk to workers and equipment
if an imminent pit wall failure is detected.
These types of empirical–statistical methods are simple, but extremely powerful, because the inherent data scatter shown
schematically in Fig. 4 can be expressed in quantitative statistical terms. The statistical results can be used to establish limits of confidence
for prediction (Hungr et al. 2005a; Iverson 2008; Schilling et al. 2008; Berti and Simoni 2014), which can then be used for quantitative risk
assessment. For example, using the angle of reach method shown in Fig. 4a, if the volume of a potential failure can be estimated, a
range of travel angles can be determined that bound the data points in that magnitude range, and those uncertainties can be translated into
estimates of runout exceedance probability. An example of this probabilistic approach is illustrated in Fig. 5. Using a dataset of case
histories that are similar to the case in question, runout estimates based on the best-fit (orange line) could be associated with an
exceedance probability of 50% (i.e., a 50% chance that future landslides of this type and size will travel farther), while runout estimates
based on the lower 10th percentile prediction interval (yellow line) could be associated with an exceedance probability of 10% (i.e., a
10% chance that future landslides of this type and size will travel farther). Such an approach provides useful context for decision-makers
and is consistent with evolving professional practice guidelines for landslide assessments in Canada and around the world (APEGBC
2010; Porter and Morgenstern 2013; Corominas et al. 2014).
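As a hedged sketch of how the percentile lines in Fig. 5 might be computed, the snippet below assumes normally distributed residuals about the fitted log-linear trend and ignores parameter-estimation uncertainty; a rigorous treatment would use t-based prediction intervals fitted to the case history dataset.

```python
import math
from scipy import stats

def travel_angle_at_exceedance(volume_m3, a, b, sigma, p_exceed=0.10):
    """Angle of reach (degrees) whose runout is exceeded with
    probability p_exceed. Flatter angles mean longer runout, so the
    p_exceed runout quantile corresponds to the p_exceed lower tail
    of the log10(H/L) residuals (assumed normal, std dev sigma)."""
    mean_log_tan = a * math.log10(volume_m3) + b
    z = stats.norm.ppf(p_exceed)  # e.g., -1.28 for p_exceed = 0.10
    return math.degrees(math.atan(10.0 ** (mean_log_tan + z * sigma)))
```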

Fig. 4. Schematic illustrations of two landslide geometric correlations: (a) volume, V, versus angle of reach, α; and (b) volume, V, versus
deposit area, A. [Colour online.]

This probabilistic framework is not new for natural hazards. Similar approaches are used for weather forecasting (DeMaria et al.
2009), flood mapping (APEGBC 2012), and snow avalanche mapping (CAA 2016). A good example is the probabilistic forecasts that are
provided for hurricanes and tropical storms by the U.S. National Hurricane Center, which issues real-time predictions of hurricane
tracks and of the probability of tropical-storm-force wind speeds. These maps are used routinely by various agencies to make life-saving
decisions, and users of the maps are familiar with working within the inherent uncertainty.

Numerical models
The empirical methods described above are very useful for estimating runout distances and inundation areas, but numerical models
have the potential to provide more information because they can also be used to estimate relevant landslide intensity parameters, such
as flow depths, flow velocities, and impact pressures, within these limits. Animations or time-lapse images generated from
numerical model outputs are also useful visualization and communication tools. An example of a time-lapse image based on a
back-analysis of the 2010 Mount Meager rock slide – debris flow (Guthrie et al. 2012) using the numerical model DAN3D (McDougall and
Hungr 2004) is shown in Fig. 6.
At least 20 different numerical runout models have been developed over the past two decades, the majority of which are continuum
models that are based on established hydrodynamic modelling methods, but with some landslide-specific modifications to account for the
effects of entrainment, internal stresses, and spatial variations in rheology. A unique model benchmarking workshop was held in Hong
Kong in 2007 to compare the performance of 17 different models that were in development at the time using a series of validation
tests, laboratory experiments, and full-scale case studies (Hungr et al. 2007). A more recent overview of selected numerical runout
models was provided by Pastor et al. (2012). Newer models have recently been introduced by Mergili et al. (2012, 2017), Horton et al.
(2013), and Iverson and George (2014).
A consolidated list of selected numerical models that are currently available or at an advanced stage of development is provided in
Table 1. Note that, for simplicity, models denoted as two-dimensional (2D) or three-dimensional (3D) in Table 1 are capable of
simulating motion along a 2D path or 3D surface, respectively, regardless of the numerical integration scheme that is used.
With the exception of TOCHNOG (Roddeman 2002), all of the continuum models listed in Table 1 are based on depth-averaged
shallow flow equations that have been adapted to simulate the flow of earth materials, as described in the pioneering work of
Savage and Hutter (1989). The resultant forces acting on each computational element in these types of models look very similar to the
forces acting on the columns of soil in a limit equilibrium slope-stability analysis (Fig. 7). Gravity (W) is the main driver of motion.
There are also internal stress gradients (P and S) that arise because of the sloping free surface; these forces influence how the flow
spreads out. There may also be some inertial resistance if the flow is entraining new material from the path (momentum flux
component, E) because momentum needs to be transferred from the moving mass to accelerate that material up to speed. However,
most of the resistance to motion typically comes from basal shear stress (T), which may be moderated by pore pressure and (or)
other possible mechanisms described earlier.
In continuum models, the mass and momentum balance equations are solved at each time step at several locations within the landslide
mass. In depth-averaged 2D models, reference “slices” are used and the flow direction and path width need to be pre-defined by the
user. In depth-averaged 3D models, reference “columns” are used that allow for lateral movement, so that the flow direction and path
width do not need to be pre-defined and instead become key outputs of the model. Different computational methods are available to
solve the equations of motion, including Eulerian (fixed frame of reference) and Lagrangian (moving frame of reference) approaches.
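For reference, a simplified depth-averaged momentum balance in the local downslope direction x can be written as follows. This is a schematic form intended to map onto the forces in Fig. 7, not the exact equation set of any particular model in Table 1:

\[
\rho h \frac{D v_x}{D t} = \underbrace{\rho g_x h}_{W} - \underbrace{k\,\frac{\partial}{\partial x}\!\left(\tfrac{1}{2}\rho g_z h^2\right)}_{P} - \underbrace{\tau_b}_{T} - \underbrace{\rho v_x E_t}_{E}
\]

where h is the flow depth, v_x the depth-averaged velocity, g_x and g_z the slope-parallel and slope-normal components of gravity, k a lateral earth pressure coefficient, τ_b the basal shear resistance, and E_t the bed-normal entrainment rate; the transverse shear term S is omitted here for brevity.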

Fig. 5. Probabilistic runout prediction framework based on volume versus angle of reach data. [Colour online.]

Fig. 6. Time-lapse simulation of 2010 Mount Meager landslide using the DAN3D program. Images are shown at 1 min intervals. See Fig. 1 for a
photo of the event and Fig. 12 for a summary of the numerical analysis. (Digital elevation model used for simulation provided by Richard
Guthrie, Stantec.) t, time. [Colour online.]
Table 1. List of selected numerical landslide runout models that are currently available or in development.

Model              Type                       Selected reference
3dDMM              3D, continuum              Kwan and Sun (2007)
DAN                2D, continuum              Hungr (1995)
DAN3D              3D, continuum              McDougall (2006)
D-Claw             3D, continuum              Iverson and George (2014)
FLATModel          3D, continuum              Medina et al. (2008)
FLO-2D             3D, continuum              FLO-2D Software Inc. (2007)
Flow-R             3D, spreading algorithm    Horton et al. (2013)
GeoFlow-SPH        3D, continuum              Pastor et al. (2009b)
MADFLOW            3D, continuum              Chen and Lee (2000)
MassMov2D          3D, continuum              Begueria et al. (2009)
PFC                3D, discontinuum           Poisel and Preh (2008)
RAMMS              3D, continuum              Christen et al. (2010)
RASH3D             3D, continuum              Pirulli (2005)
r.avalanche        3D, continuum              Mergili et al. (2012)
r.avaflow          3D, continuum              Mergili et al. (2017)
Sassa-Wang         3D, continuum              Wang and Sassa (2002)
SCIDDICA S3-hex    3D, cellular automata      D’Ambrosio et al. (2003)
SHALTOP-2D         3D, continuum              Mangeney-Castelnau et al. (2003)
TITAN2D            3D, continuum              Pitman et al. (2003)
TOCHNOG            3D, continuum              Roddeman (2002)
VolcFlow           3D, continuum              Kelfoun and Druitt (2005)
Wang               2D, continuum              Wang (2008)

Since the early 2000s, the author has been closely involved in the development of the model DAN3D (McDougall and Hungr
2004, 2005; McDougall 2006), which is a 3D extension of the 2D model DAN-W (Hungr 1995). The computational method used in
DAN3D is based on the meshless Lagrangian numerical technique known as “smoothed particle hydrodynamics” (SPH), which was
originally developed in the 1970s for the simulation of astrophysical phenomena like galaxy collisions (Lucy 1977; Gingold and
Monaghan 1977). Using this approach, the landslide mass is divided up into a collection of so-called “smooth particles”. In the depth-
averaged context, these particles can be visualized as bell-shaped objects moving across the sliding surface (Fig. 8). The free surface of the
landslide, which defines the flow depths and depth gradients that are used in the equations of motion, is constructed by superposition of
the particles (i.e., the depth of the slide mass at any given location is the sum of the contributing depths of each individual particle at that
location). In effect, each particle pushes on all of its neighbours, so that in areas with a denser concentration of particles, the flow
depth and depth gradients will be greater and, in general, so will the spreading forces. The equations of motion are solved at each
particle location and their positions are advanced in time. A major advantage of this method is that the particles are free to split apart
from each other to move around obstacles in the path without causing mesh distortion problems. This ability to handle large
deformations and flow splitting can
be very important when dealing with steep, complex terrain. For example, in the 2012 Johnson’s Landing landslide at Kootenay Lake,
B.C. (Fig. 9), an approximately 380 000 m³ debris avalanche – debris flow, most of the damage was caused by a large lobe of debris that
left the main creek channel, while approximately half of the debris stayed in the creek and eventually flowed into the lake (Nicol et al.
2013). The 1970 rock–ice fall – debris flow at Nevado Huascarán in Peru, in which the town of Yungay was buried by a large lobe of
material that separated from the main flow, is another striking example of this behaviour (Plafker and Ericksen 1978; Evans et al.
2009).
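A minimal sketch of the depth-by-superposition idea is given below, assuming a 2D Gaussian interpolation kernel for illustration; the actual kernel and smoothing-length treatment in DAN3D differ in detail (McDougall 2006).

```python
import numpy as np

def flow_depth(x, y, positions, volumes, ell):
    """Flow depth at (x, y) as the superposition of bell-shaped
    particle contributions. A Gaussian kernel with smoothing length
    ell is assumed; it integrates to 1 over the plane, so each
    particle contributes its full volume.
    positions: (N, 2) particle coordinates; volumes: (N,) volumes."""
    r2 = (positions[:, 0] - x) ** 2 + (positions[:, 1] - y) ** 2
    kernel = np.exp(-r2 / ell**2) / (np.pi * ell**2)
    return float(np.sum(volumes * kernel))
```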


Fig. 7. Simplified depth-averaged forces acting on a column of flowing material. E, momentum flux due to entrainment; P, differential earth
pressure; S, differential transverse shear; T, basal shear; W, gravity. [Colour online.]

Fig. 8. Schematic illustration of the “smoothed particle hydrodynamics” method used in DAN3D model. Landslide mass is discretized into a
collection of “smooth particles”, which can be visualized as bell-shaped objects. Local flow depths and depth gradients are constructed by
superposition of particles. [Colour online.]

Fig. 9. Example of significant flow splitting during 2012 fatal landslide at Johnson’s Landing, B.C. Inset shows view looking downstream
above point where part of flow jumped out of creek channel. (Photographs courtesy of Peter Jordan and Dwain Boyer, B.C. Ministry of Forests,
Lands and Natural Resource Operations.) [Colour online.]

Landslide models also need to be able to account for nonhydrostatic internal pressures that can develop within deforming earth
materials (Savage and Hutter 1989), similar to the passive and active earth pressures that develop next to a deflecting retaining wall.
These nonhydrostatic pressures develop because of internal shear strength, which resists internal deformation. This resistance to
deformation means that landslides do not spread out as readily as water, which has no internal shear strength.
The majority of continuum landslide runout models listed in
Table 1 use methods to estimate internal pressure distributions that are based on the Rankine earth pressure theory, following
methods that were originally developed by Savage and Hutter (1989). In DAN3D, internal strains and stresses are tracked at each time
step based on the relative change in position of the moving particles, which allows the simulation of anisotropic pressure distributions
that can develop, for example, if the flow is converging in one principal direction and diverging in the other. This capability can have a
significant influence on the extent and shape of the modelled inundation area. To illustrate this effect, Fig. 10 shows DAN3D
simulations of hypothetical experiments involving idealized material flowing down a ramp onto a flat surface (after McDougall 2006).
The material on the left (Fig. 10a) has zero internal shear strength (like water), while the material on the right (Fig. 10b) is a frictional
material with an internal friction angle, φi, of 40°. Both materials have the same basal friction angle (25°). In both cases, the centre of
mass travels the same distance; however, with the material that has internal strength, high passive pressures that develop during
converging movement through the slope transition zone result in more longitudinal spreading and therefore longer runout of the
flow front.
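For reference, the Rankine-type coefficients mentioned above relate the depth-averaged longitudinal stress to the bed-normal stress, σ_x = k σ_z. In the widely quoted Savage and Hutter (1989) form, they depend on the internal friction angle φi and the basal friction angle φb:

\[
k_{a/p} = 2\,\frac{1 \mp \sqrt{1 - \cos^2\phi_i \left(1 + \tan^2\phi_b\right)}}{\cos^2\phi_i} - 1
\]

with the minus sign giving the active coefficient (diverging flow) and the plus sign the passive coefficient (converging flow). This expression is quoted as an illustration; individual models implement variations of it.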

Fig. 10. DAN3D simulations of idealized material flowing down a ramp onto a flat surface: (a) material with zero internal shear strength (like
water) and (b) material with an internal friction angle of 40° (after McDougall 2006).

The simulation of entrainment of material along the flow path is also an important model capability. This process involves volume
change and momentum transfer from the moving mass to the stationary path material, which gives rise to the momentum flux
component, E, shown in Fig. 7. Debris flows, in particular, can sometimes gather most of their material through entrainment
(Takahashi 1991; Revellino et al. 2004; Hungr et al. 2005b; Iverson 2012). But the process of entrainment and plowing of path material can
also be critical to the behaviour of large rock avalanches (Hungr and Evans 2004; Evans et al. 2009). In the case of the 1903 Frank Slide,
most of the damage in the town of Frank was actually caused by alluvium that was mobilized when the rock avalanche impacted the
valley floor (McConnell and Brock 1904; Cruden and Hungr 1986).
Different approaches to simulating material entrainment have been proposed, ranging from empirical methods that require the input
of user-prescribed volume growth rates (e.g., McDougall and Hungr 2005; Chen et al. 2006) to process-based methods that simulate
entrainment as a function of basal shear stress conditions (e.g., Crosta et al. 2009; Iverson 2012). Rheology changes along the path can
accompany entrainment and are also important to consider. Undrained loading of weak, wet path material has been recognized as a
long runout mechanism for well over a century, dating back to interpretations of the 1881 Elm Slide in Switzerland by Albert Heim
(Heim 1932; Abele 1997; Hungr and Evans 2004; Legros 2006; Iverson 2012).
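As a sketch of the user-prescribed volume growth approach cited above, entrainment can be idealized as exponential growth of the flow volume with displacement s along the path, with the growth rate back-calculated from the initial and final volumes of comparable events. This reflects the spirit of the McDougall and Hungr (2005) treatment, though the implemented details differ:

\[
\frac{dV}{ds} = E_s V \;\;\Rightarrow\;\; V(s) = V_0\, e^{E_s s}, \qquad E_s \approx \frac{\ln\left(V_{\mathrm{final}}/V_0\right)}{\bar{s}}
\]

where E_s is a volumetric growth rate per metre of travel and s̄ is the approximate path length over which entrainment occurs.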

Current challenges
Model calibration
All of the landslide-specific features described above present modelling challenges and have been the focus of the model
development work that has been carried out over the past two decades. But now that we have models that incorporate these key features,
how do we select the input parameter values and actually use these models for reliable landslide runout forecasting? In the author’s
opinion, this is the biggest current challenge for researchers and practitioners involved in this type of work.
One modelling approach is to base the input parameter values on physical material properties that are measured in the field or
laboratory (e.g., Iverson and George 2014, 2016). This approach typically involves complex constitutive relationships with a relatively
large number of input parameters and requires the use of material sampling and testing methods that are appropriate for the scale
and velocity of real landslides, which can be a significant challenge. A variation of this approach using parameter values based on a
combination of laboratory experiments and field-scale stress field observations has been proposed by Pellegrino et al. (2015).
An alternative modelling approach, which has been adopted previously with the majority of the models listed in Table 1, is to base the
selected parameter values on calibrated values obtained through numerical back-analysis of past landslides. This approach can be used
with relatively simple rheological models that do not necessarily capture the complex micromechanics of real landslides, but are still
able to simulate their bulk behaviour (e.g., flow velocities, inundation area, distribution of deposits), which is the main goal in runout
forecasting. In some cases, this approach may also have expediency and cost advantages in practice because specialized material testing
is not required. Furthermore, as discussed later in this paper, models that are calibrated to “groups” of events are also potentially well-
suited to probabilistic analysis. Analogous parameter calibration is carried out routinely in geotechnical practice with limit equilibrium
slope stability analyses (e.g., back-analyzing a failed slope to help constrain shear strength values). However, in contrast to landslide runout
modelling, selection of the parameter values in slope stability modelling can be more readily informed by the results of conventional field and
(or) laboratory strength tests.
The focus of the calibration-based approach is on the main external aspects of landslide behaviour (i.e., how fast and how far do
they travel?). The landslide mass is treated as an “equivalent fluid” (Hungr 1995), a material governed by simple basal resistance
relationships with a limited number of adjustable parameters. The resistance parameter values in an equivalent fluid model are not
necessarily real material properties that can be measured; instead, they are adjusted (calibrated) by the user to produce the best possible
simulation of a given real event. Calibration trends amongst groups of similar landslides are then sought that can be used for
prediction.
A variety of simple rheological models can be used for this purpose. The selection of the most appropriate rheological model
depends on the type of landslide in question and, often, the nature of the materials along the path. Two rheological models that are
referred to later in this paper, the frictional model and the Voellmy model, are shown in Fig. 11. With the frictional model (Fig. 11a),
the basal resistance is controlled by a single parameter, the bulk basal friction angle, φb, which accounts for pore pressure
implicitly. The Voellmy model (Fig. 11b) also includes a frictional component (again with implicit pore pressure effects), but adds a
turbulence term to account for velocity-dependent resistance. Voellmy (1955) originally developed this model for snow avalanches,
but it has since been adopted by landslide modellers (Körner 1976) because it is able to simulate the range of velocities and shape of
deposits that are observed in many real landslides, as described below.
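In equation form, using the symbols defined in Fig. 11, the two basal resistance relationships are

\[
\tau = \sigma \tan\phi_b \quad \text{(frictional)}, \qquad \tau = f\sigma + \frac{\rho g v^2}{\xi} \quad \text{(Voellmy)}
\]

where the single adjustable parameter of the frictional model is φb, and the two adjustable parameters of the Voellmy model are the friction coefficient f and the turbulence parameter ξ.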

Fig. 11. Two simple rheological models that can be used in an equivalent fluid framework: (a) frictional and (b) Voellmy. Adjustable
parameters are highlighted in red. EGL, energy grade line; f, friction coefficient; g, gravitational acceleration; v, velocity; ξ, turbulence
parameter; ρ, bulk density; σ, basal normal stress; φb, bulk basal friction angle. [Colour online.]

The frictional model produces forward-tapering deposits and relatively high peak velocities. In comparison, the Voellmy model
produces relatively uniform deposits. The turbulence component of the Voellmy model can also limit the peak velocities, in the same
way that air resistance limits the freefall speed of a skydiver. This effect can be visualized using the energy grade line (EGL) concept
shown in Fig. 11. The EGL in this case connects the centres of mass of the stationary source and deposit material. During motion, the
vertical distance between the centre of mass (which can be closely approximated by the elevation of the sliding surface) and the EGL
approximates the velocity head, v2/2g, of the centre of mass. In the frictional case (Fig. 11a), the EGL slopes uniformly downward at
the same angle as the bulk basal friction angle; the velocity head would therefore increase along the path as long as the friction angle
is lower than the slope angle. In contrast, in the Voellmy case (Fig. 11b), the EGL bends towards the sliding surface as the flow velocity
increases.
One approach to model calibration is to visually compare simulation results with observations and adjust the parameter values by
trial-and-error to achieve a satisfactory match in terms of the simulated runout distance, deposit distribution, and velocities (Hungr
1995). This subjective approach is simple to implement using 2D models with one or two adjustable parameters that dominate different
characteristics of the simulation and can therefore be adjusted relatively independently of each other. For example, using a 2D runout
model with the two-parameter Voellmy rheology (Fig. 11b), the friction coefficient — which governs the slope angle on which material
begins to deposit — can be adjusted to achieve a satisfactory visual match of the observed runout distance, while the turbulence
parameter — which limits flow velocities as described above — can be adjusted simultaneously to achieve a satisfactory visual match
of independent velocity estimates along the path.
With 3D models, this visual approach tends to require more interpretation, and is therefore even more subjective. An example of
subjective visual calibration using a 3D model (DAN3D) with the two-parameter Voellmy rheology (Fig. 11b) is shown in Fig. 12. The
results of a series of model calibration runs are presented as two visual matrices, one showing simulated deposit depths (Fig. 12a) and the
other showing simulated flow velocities (Fig. 12b). In each matrix, the friction coefficient increases (and therefore reduces the simulated
runout distances) from left to right and the turbulence parameter increases (and therefore increases the simulated velocities) from top to
bottom. The range of best-match (subjectively or visually) parameter combinations, in terms of the simulated deposit and velocity
distributions, are indicated in each case. The best overall match occurs where those two independent results intersect, in this case, at f = 0.05
and ξ = 500 m/s².
A more objective and efficient calibration method has been proposed by Aaron et al. (2016a) using DAN3D automated batch runs and
the parameter estimation package PEST (Watermark Numerical Computing 2005). PEST uses a systematic inverse analysis algorithm to
determine the combination of model input parameters that best minimize the error variance between model outputs and observed
data. In the example shown in Fig. 13, DAN3D model results using the two-parameter Voellmy rheology are being judged by PEST
based on how well they simulate the extent of the actual flow trimline in a series of simulations of the 1903 Frank Slide. The red line
highlighted in Fig. 13 identifies the parameter combinations that resulted in quantitatively comparable best fits.
Trimline fitness can be judged using automated methods that can be coded directly into models. Figure 14 shows one method based
on maximization of the ratio of the intersection and union of the simulated and observed inundation areas, as proposed by Galas et al.
(2007). The intersection is represented by the purple area where the simulated and observed trimlines overlap, while the union is
represented by the whole area covered by both. A perfect fit would occur when the intersection to union ratio is exactly 1. A caveat of
automated methods like this is that they can sometimes produce nonunique results (e.g., two loci of possible best-fit parameter
combinations). Subjective judgment is therefore still required when interpreting the results.
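On rasterized inundation maps, the Galas et al. (2007) measure is straightforward to implement; a minimal sketch, assuming boolean grids on a common raster:

```python
import numpy as np

def trimline_fitness(simulated, observed):
    """Ratio of intersection to union of two inundation maps
    (Galas et al. 2007). Inputs are boolean arrays on the same grid,
    True marking inundated cells; returns 1.0 for a perfect fit."""
    intersection = np.logical_and(simulated, observed).sum()
    union = np.logical_or(simulated, observed).sum()
    return intersection / union
```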
Cepeda et al. (2010) proposed an alternative, multi-criteria calibration method based on a technique known as receiver operating
characteristic (ROC) analysis. This method permits the evaluation of results against multiple calibration criteria simultaneously and allows
the user to subjectively assign weights to the criteria to reflect their relative importance and (or) reliability. For example, more weight
may be placed on an accurate estimate of the total runout distance than on a point velocity estimate that was back-calculated from
relatively unreliable flow superelevation measurements.
In addition to the calibration methods described above, geomorphic clues can also be used to help constrain model input
parameters. For example, the friction parameters in both the frictional and Voellmy rheologies (Fig. 11) control the slope angles on which
material decelerates and deposits in runout models. Therefore, if one expects material to deposit in a certain area (e.g., downslope of
the fan apex on a well-defined debris flow fan), the field-observed local slope angles can be used to constrain the friction input.

Fig. 12. Example of subjective visual calibration using a 3D model (DAN3D) with two-parameter Voellmy rheology to simulate 2010 Mount
Meager landslide described by Guthrie et al. (2012). Best overall match was achieved using f = 0.05 and ξ = 500 m/s², where best-match
(a) deposit depth and (b) flow velocity results intersect (green box). (Digital elevation model used for simulations provided by Richard Guthrie,
Stantec.) [Colour online.]

The methods described above can be used to produce very good simulations of past events on a case-by-case basis, and many examples
of successful case-specific landslide back-analyses have been documented. A thorough recent compilation of over 300 documented
back-analyses was presented by Quan Luna (2012). Although one successful runout prediction based on a case-specific calibration
was documented recently (Loew et al. 2016), case-specific calibration parameters have limited use in the prediction of future events.
Model calibration is more powerful when “groups” of similar events are back-analyzed together because the resulting patterns are more
broadly applicable and can be used in a statistically justifiable probabilistic way.
An early attempt at group calibration was carried out by Hungr and Evans (1996). Using the 2D model DAN-W to back-analyze 23 rock
avalanches, they found that the total runout distance in 70% of the cases could be simulated within an error of approximately 10% using
the Voellmy rheology with a single combination of input parameters. Similar group calibration exercises were carried out by Ayotte
and Hungr (2000), Revellino et al. (2004), and Pirulli (2005). The explicit error bounds reported in all of these studies provide
extremely useful information for probabilistic prediction. Like the data scatter of the empirical–statistical methods described earlier,
such error bounds can be translated into estimates of runout exceedance probability.

Fig. 13. Example of a trimline fitness test using PEST with output from DAN3D model to simulate 1903 Frank Slide: (a) trimline fitness test
for all parameter combinations that were run and (b) output from simulation using f = 0.1 and ξ = 500 m/s², which falls on locus of best-fit
combinations (yellow star in (a)). (Images courtesy of Jordan Aaron, The University of British Columbia.) [Colour online.]

Fig. 14. Simple calibration method proposed by Galas et al. (2007) based on maximization of ratio of intersection (purple area) and union (blue
+ red + purple areas) of simulated and observed inundation areas. Perfect fit would correspond to a ratio of 1. [Colour online.]

An extension of this calibration approach was studied by McKinnon (2010). Forty flow-like landslides were back-analyzed using
DAN-W with the same wide range of input parameter combinations. The results for each parameter combination were plotted as histograms
showing the number of cases that simulated the observed runout distances within certain error range bins, with the goal of identifying
the input parameter combinations that minimized the error and variance of the results. McKinnon’s results were comparable to those
of Hungr and Evans (1996).
A similar calibration study was carried out by Quan Luna (2012), who fitted 2D probability density functions to groups of calibrated
parameter values (based on back-analyses using various landslide runout models). Such probability density functions can be used
directly in Monte-Carlo style probabilistic analysis, similar to the routine methods that are built into several existing rockfall modelling
programs. The same probabilistic approach to parameter value selection can also be applied to landslide runout models that require
the input of measured material properties, which have inherent variability that can be quantified during material testing; this
approach is used in slope stability modelling to predict the probability of failure (Nadim 2007). Unfortunately, landslide runout models
are still limited by relatively long model run times, which can make the Monte-Carlo approach time-prohibitive in practice (Dalbey et al.
2008).
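A minimal sketch of the Monte Carlo idea is shown below. The parameter statistics and the bivariate normal form are placeholders, not values from Quan Luna (2012) or any other study, and launch_model_run stands in for a hypothetical call to any batch-capable runout model:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Placeholder statistics for calibrated Voellmy parameters (f, xi);
# in practice these would come from probability density functions
# fitted to a group of comparable back-analyzed events.
mean = np.array([0.10, 500.0])            # f (-), xi (m/s^2)
cov = np.array([[0.03**2, 0.0],
                [0.0,     150.0**2]])     # uncorrelated, for simplicity

samples = rng.multivariate_normal(mean, cov, size=200)
samples[:, 0] = np.clip(samples[:, 0], 0.01, None)  # keep f positive
samples[:, 1] = np.clip(samples[:, 1], 10.0, None)  # keep xi positive

for f, xi in samples:
    # launch_model_run(f, xi)  # hypothetical hook into a batch runner
    pass
```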
Despite the significant advancements described above, more work is still needed to expand the record of calibrated case studies to
provide better guidance to practitioners on the selection of model input parameters. An emphasis should be placed on seeking
calibration patterns for different types of landslides that can be applied in a probabilistic framework. Most of the existing numerical
models listed in Table 1 can be used in this context.
Estimating vulnerability of elements at risk
Besides being used to estimate inundation limits and associated spatial impact probabilities, runout models can also be used to
estimate the vulnerability of elements at risk within the impact area. One approach to estimating vulnerability that appears
promising is based on a parameter called the debris flow intensity index, IDF, defined by Jakob et al. (2012) as the product of the
square of the flow velocity, v, and the flow depth, h (Fig. 15). The intensity index represents a simple proxy for dynamic impact
pressure. Jakob et al. (2012) noted a good correlation between the intensity index and the degree of damage to buildings that have been
impacted by debris flows, and defined four building damage classes ranging from “some sedimentation” to “complete destruction”.
Kang and Kim (2016) carried out a similar study of the vulnerability of both reinforced concrete and nonreinforced concrete
buildings to a series of debris flows in South Korea in 2011, and proposed three different vulnerability curves based on estimated flow
depths, flow velocities, and impact pressures. Similar approaches to developing vulnerability curves were presented by Quan Luna et
al. (2011) using data from a series of damaging debris flows in Italy in 2008 and Eidsvig et al. (2014) using data from a debris flow event in
Italy in 1987.
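In symbols, the intensity index of Jakob et al. (2012) is simply

\[
I_{DF} = h v^2
\]

with h in metres and v in metres per second, giving units of m³/s². Because ρv² is a dynamic pressure, the index acts as a depth-weighted proxy for impact pressure.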
Using one of the approaches described above to estimate building damage, the vulnerability of building occupants can then be
estimated. However, because historical fatalities and their relationship to building damage are not well-documented, fairly wide
vulnerability uncertainty bounds need to be carried through the risk assessment calculations. More work to compile historical fatality
records and correlate them with building damage estimates should be carried out.

Limiting initial spreading


Another modelling issue that researchers are currently working on is how to limit the initial spreading of the slide mass. Continuum
models based on shallow flow theory assume that the landslide fluidizes instantaneously upon failure, but in reality this process can
be progressive. The result is that continuum models tend to overestimate the amount of spreading during the early stages of motion.
A DAN3D simulation of the early stages of the Mount Meager landslide described by Guthrie et al. (2012) is shown in Fig. 16, which
demonstrates overestimation of initial spreading by a continuum model.

Fig. 15. Sample runout analysis showing modelled debris flow intensity and associated building damage classes based on definitions proposed
by Jakob et al. (2012). (Digital elevation model used for simulation provided by Dave Southam, B.C. Ministry of Forests, Lands and Natural
Resource Operations.) [Colour online.]

Aaron and Hungr (2016) developed a modified version of DAN3D that allows the user to delay fluidization. The method treats the
landslide mass as a coherent body until it reaches a certain user-specified distance, at which point the original DAN3D algorithm takes
over and spreading is allowed to begin. The coherent motion stage is simulated using the method of columns, similar to the approach
used in 3D limit equilibrium slope stability analysis (e.g., Hungr 1987). The individual forces and torques on each column are calculated
at each time step and then combined to determine the total force and torque acting on the whole column assembly. This total force and
torque are then used to determine the translational and rotational accelerations of the landslide for that time step.
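A schematic of this coherent-motion step is sketched below, for plan-view (2D) motion with rotation about a vertical axis through the centroid; it illustrates the force and torque summation only, not the Aaron and Hungr (2016) implementation.

```python
import numpy as np

def assembly_accelerations(positions, forces, masses):
    """Translational and rotational accelerations of a coherent
    column assembly: per-column forces are summed into one total
    force and one total torque, which act on the assembly as a
    rigid body. positions: (N, 2) column centroids; forces: (N, 2)
    net horizontal column forces; masses: (N,) column masses."""
    m_total = masses.sum()
    centroid = (masses[:, None] * positions).sum(axis=0) / m_total
    r = positions - centroid
    torque = np.sum(r[:, 0] * forces[:, 1] - r[:, 1] * forces[:, 0])
    inertia = np.sum(masses * (r**2).sum(axis=1))  # about vertical axis
    accel = forces.sum(axis=0) / m_total           # translational
    alpha = torque / inertia                       # rotational
    return accel, alpha
```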
The ability to delay fluidization can be very important when analyzing rock slides, which can travel hundreds of metres before they
fragment enough to be treated as a fluid body. The translational rock slide that occurred in Goldau, Switzerland, in 1806 (Heim 1932)
is a good example of this behaviour. A comparative simulation of the Goldau rock slide using Aaron and Hungr’s (2016) flexible
block version of DAN3D is shown in Fig. 17. A detailed description of this analysis is presented in Aaron and Hungr (2016). As shown in
Fig. 17, the modified model produces a better simulation of the actual flow trimline. The only extra parameter that needs to be specified
is the location where fluidization starts. Aaron and Hungr (2016) suggest that this parameter can be selected based on examination of
the pre-slide topography, to identify topographic obstacles or sudden changes in slope (e.g., at the point where a rock slide leaves its
planar source area) that could cause the mass to fragment.

Simulating obstructions and avulsions


Another big challenge practitioners face is predicting where debris flows might jump out of their channel, as occurred in the Nevado
Huascarán (Evans et al. 2009) and Johnson’s Landing (Nicol et al. 2013) cases described earlier. Three-dimensional runout models can
simulate superelevation and runup around channel bends, which can help indicate the most likely avulsion locations, but avulsions can
also happen if the channel becomes blocked by woody debris or coarse deposits. The presence of wood itself in the flow can influence
where this type of mass deposition occurs, particularly around channel bends (Lancaster and Hayes 2003). At present, this type of
behaviour requires makeshift modelling assumptions to simulate.
The Johnson’s Landing case study shown in Fig. 9 is a good example of a major flow avulsion that would have been very difficult to
predict in advance. Nicol et al. (2013) hypothesized that the deeply incised channel at this location was temporarily choked with
woody debris, which forced more material than expected to jump the bank. They simulated this behaviour by manually modifying the
local topographic surface to force an avulsion in their DAN3D model. A subsequent re-examination of this event by Aaron et al. (2016b)
using different rheological assumptions also required manual modifications to the topography to adequately simulate the observed
channel avulsion.
Fig. 16. DAN3D simulation of early stages of Mount Meager landslide demonstrating overestimation of initial spreading by a continuum
model. (Digital elevation model used for simulations provided by Richard Guthrie, Stantec.) [Colour online.]

Bouldery debris flow surges can also deposit in the channel and cause avulsions. Muddy afterflows can then bypass this coarse
material; such afterflows may not be as life-threatening as coarse debris flow surges, but can still cause considerable property
damage.
Figure 18 shows a shaded slope LiDAR image of a relatively active debris flow fan. Abandoned paleochannels are visible on both
sides of the current active channel, indicating that avulsions are common on this fan. Standard desktop GIS tools can be used to map
drainage pathways on the fan and help identify potential avulsion locations. However, judgment is then needed to select the
locations where avulsions are most likely to occur during future debris flow events. In a risk assessment, the number of simulated
avulsion scenarios needs to be representative enough to capture the spectrum of potential outcomes without making the risk event
tree unnecessarily complicated and (or) unmanageable. Significant judgment is also needed to assign reasonable conditional
probabilities to these subscenarios.

Sensitivity to topographic input


Numerical models are also sensitive to the roughness of the topographic input. All other factors being equal, the rougher the
surface, the higher the simulated momentum losses and the shorter the modelled runout distance. A very rough sliding surface can
also cause numerical instability in depth-averaged flow models.
To address this issue, in the 2D model DAN-W, the user builds the sliding surface by adding points along the path and the model fits a
smooth spline function to the points. In DAN3D, this process must be mimicked by filtering or smoothing the input digital elevation
model (DEM), which is analogous to draping a blanket over the surface to smooth out small-scale roughness. Figure 19 shows the
visual effect of filtering on a bare earth LiDAR data sample. In this case, most of the important (large-scale) topographic details
were preserved.
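The kind of pre-processing shown in Fig. 19 can be reproduced with standard raster tools; a minimal sketch using SciPy, where the grid spacings, number of passes, and kernel width are illustrative choices rather than recommended values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def prepare_dem(dem, source_spacing=1.0, target_spacing=5.0,
                n_passes=3, sigma=1.0):
    """Resample a bare-earth DEM to a coarser grid, then apply
    repeated Gaussian smoothing to suppress small-scale roughness
    (cf. Fig. 19: 1 m LiDAR resampled to 5 m, filtered three times)."""
    dem = zoom(dem, source_spacing / target_spacing, order=1)  # bilinear
    for _ in range(n_passes):
        dem = gaussian_filter(dem, sigma=sigma)
    return dem
```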
Figure 20 compares DAN3D simulation results from three different model runs using three different degrees of smoothing and
demonstrates that modelled landslides travel farther over smoother topography, all other things being equal. With DAN3D, the filtering
method used in Fig. 20b tends to produce results that are similar to the spline interpolation method used in the 2D model DAN-W.
Standardization of this model setup approach is desirable so that calibration results can be compared directly for landslides of different
types and scales. The optimum approach may be scale-dependent (e.g., the optimum number of filtering passes could be a function
of the characteristic flow depth or the spacing of grid nodes in the original digital elevation data). With the increasing availability of
high-resolution bare earth topographic models, there is a strong temptation to use the high-resolution data directly in runout models
to get a more accurate solution, but such results may not be comparable with, for example, model results based on more widely
available shuttle radar topography mission (SRTM) data.

Fig. 17. Simulations of Goldau rock slide using (a) original DAN3D model and (b) flexible block version of DAN3D. Actual flow trimline is
shown by dashed line. (Images courtesy of Jordan Aaron, The University of British Columbia, modified from figs. 3 and 10 of Aaron and
Hungr (2016), with permission of Elsevier.) [Colour online.]

Fig. 18. Shaded slope LiDAR image of relatively active debris flow fan showing abandoned paleochannels that indicate high avulsion
potential.

Fig. 19. Shaded slope images showing effect of filtering on bare earth LiDAR data from a location in the Coast Mountains, B.C.: (a) raw LiDAR
data at 1 m grid spacing; (b) LiDAR data resampled at 5 m grid spacing and filtered three times using a Gaussian algorithm.

Fig. 20. Simulations of hypothetical 10 000 m³ debris avalanche – debris flow showing influence of surface roughness on runout models. Bare
earth LiDAR data were resampled at 5 m grid spacing with (a) no additional filtering; (b) 3× Gaussian filtering; (c) 10× Gaussian filtering. Source
area is at upper right in each image. Scale markers on horizontal and vertical axes are in metres. (Digital elevation model used for simulations
provided by Dave Southam, B.C. Ministry of Forests, Lands and Natural Resource Operations.) [Colour online.]

Summary and conclusions


Runout analysis is a key step in landslide risk assessment and mitigation design. This paper has provided an overview of the tools
and methods that are currently available to practitioners. Although significant advancements in this field have been made over the
past decade, particularly with respect to the development of numerical models, several key challenges remain, including the need for
better guidance in the selection of model input parameter values, the challenge of translating model results into vulnerability
estimates, the problem with too much initial spreading in the simulation of certain types of landslides, the challenge of accounting for
sudden channel obstructions in the simulation of debris flows, and the sensitivity of models to topographic resolution and filtering
methods.
In addition to these main current challenges, other emerging topics that warrant more attention from researchers and practitioners
include
• Improved model efficiency and user-friendliness, including shorter model setup, run, and processing times. For example,
the next generation of landslide runout models could potentially make use of the physical realism, high efficiency, and
intuitiveness of advanced 3D video game engines, as has been favourably demonstrated for rockfall applications recently by
Ondercin et al. (2015).
• Improved model availability and cost. Many models are being developed noncommercially for research purposes and are
therefore difficult for practitioners to obtain, while models that are commercially available tend to be expensive.
• Improved simulation of mitigation elements. For example, models could include built-in berms and barriers that can be easily
adjusted to test sensitivity and optimize their effectiveness. Reliability-based design of mitigation structures may also be possible in a
probabilistic analysis framework (e.g., the probability of a deflection berm being overtopped could be estimated based on the
probability distribution of the model input parameters).
• Integration of model results directly into risk assessment calculations. For example, automated hazard mapping and risk estimates could be developed using batch model runs.
• Coupling of landslide and landslide-generated wave models or development of stand-alone models that can simulate both processes
equally well.
Of all of the challenges summarized above, the selection of model input parameters within a framework that is suited to
quantitative risk assessment remains the biggest challenge for practitioners. Work should therefore continue to focus on the
collection of case history data and the probabilistic calibration of runout models for a variety of landslide types. Researchers and
practitioners carrying out this work should recognize that calibrated parameter values can depend strongly on the roughness of the
input topography; therefore, until a standard approach to model setup is adopted widely, calibration results documented by different
workers using different models may not be directly comparable.

Acknowledgements
This paper was prepared for the 2014 Canadian Geotechnical Society Colloquium Lecture. The author would like to thank the
Canadian Geotechnical Society and the Canadian Foundation for Geotechnique for this opportunity. The author would also like to
acknowledge the following individuals for their contributions to the paper: Oldrich Hungr, Jordan Aaron, Matthias Jakob, Richard
Guthrie, John Clague, Peter Jordan, Dwain Boyer, and Dave Southam. Finally, the author would like to thank Stephen Evans and an
anonymous reviewer for providing constructive feedback that substantially improved the paper.
