2013 SEI Paper - Alvi - Final
ABSTRACT
Ultimately, structural engineers are interested in the real behavior of structures, both
existing and proposed. We begin gaining confidence that such real behavior can be
accurately predicted during our engineering education, when we are first exposed to
the mathematically sophisticated and beautiful theories taught in our courses on
statics, mechanics of materials, analysis, dynamics, etc. With these theories, we can
seemingly create ‘clean’ models of structures which can be solved exactly – so clean
that we believe that the models do indeed directly and accurately represent the
structures of interest. Solve some algebraic or differential equations by hand and you
have your results. Amazing, how nice to be involved in an ‘exact science’!
But as we progress through our education towards design courses and are exposed to
design codes, we start to notice overt approximations being made for
loading conditions, distribution of loads through structures, material behavior,
ultimate strengths, etc., and it becomes evident that models are being used – models
which cannot possibly represent the exact behavior of real structures. How are these
heuristic approximations justified, and how much can we trust them?
At some point we encounter our first finite element modeling course, where we chop
up structures into bits and pieces and generate some very impressive graphics – but
what is the ‘best’ way to chop up the structure and what are the ‘best’ ways of
modeling the pieces? Linear elastic? Should we account for cracking? Plasticity?
As a sort of last resort, we might suppose that experience is the final arbiter of what
constitutes a ‘good enough’ model, even if not perfect. If the codes have produced a
set of structures which have a low failure rate over a long period of time, the
associated models have been validated. And if you are a seasoned engineer who has
been successful in designing structures – maybe even complex and innovative
structures – that is evidence that you personally are a good modeler of structures. But
not so fast: real structures are always designed with a relatively large factor of
safety, and this may conceal many cases where a structure was poorly modeled but
has not failed because the actual factor of safety is still, say, 1.5 – only half the
target of, say, 3.0. In other words, our design conservatism prevents our
structural models from truly being put to the test and ‘falsified’ even when they are
substantially inaccurate. True, sometimes structures do fail, and that experience
could potentially be used to learn something about the accuracy of the associated
structural models. However, forensic investigations have consistently revealed that
failures are typically due to unfortunate accumulations of human errors of various
types, rather than solely or primarily problems with structural models, so even here
we face a sharp limitation when it comes to getting feedback on how well our
modeling is going.
This paper explores these and related questions. Definitive answers to the questions
may be out of reach, but it is at least hoped that philosophical scrutiny of the
modeling process will lead to useful insights and ultimately better models – even if
we can rarely be quite sure just how good the models are. In this spirit, based on this
exploration of the issues, practical modeling suggestions are offered.
Arguably, we cannot access reality directly or even ‘mirror’ it in an exact way (Rorty
1991). Instead, both philosophically and cognitively, our interaction with reality is
always mediated by models. The form of a model may be conceptual, mathematical,
computational, or physical, and all of these forms are used in structural engineering.
In addition to the obvious quantitative aspects of most models, all models also have a
qualitative structure. This structure is sometimes left implicit and overlooked, rather
than explicitly examined, but it remains a fundamental element of models, one to
which much of ‘qualitative physics’ applies (Bobrow 1985). In this regard, we often
draw on our ordinary daily experiences to incorporate mechanistic features into
models. But as our intellectual capabilities grow, particularly in the direction of
higher mathematics, our models can become so abstract that they defy any attempt
at visualization, which arguably poses risks due to the resulting loss of intuition
(Ferguson 1992). We
may even intentionally disregard reality and build models of hypothetical ‘imagined’
systems to explore the properties of those systems (Weisberg 2013), and indeed much
of current research in mathematics consists of exactly this. Interestingly, such
imagined systems have sometimes later had unanticipated usefulness in developing
models of reality, such as when non-Euclidean geometry was found to have
application in Einstein’s general theory of relativity, which raises the question of the
‘unreasonable effectiveness of mathematics in the natural sciences’ (Wigner 1960).
Similarly, much theoretical work in structural mechanics was done primarily for its
theoretical interest and was not implemented in engineering practice for many
decades (and some of it may never see such application).
From an engineering standpoint, all designed systems, including structures, are also
imagined systems until they are constructed, but here the goal is to make real what
was imagined. This illustrates the pragmatic orientation of engineering towards
actively shaping reality, in contrast to the scientific emphasis on describing reality.
This leads to different goals in developing models, a theme which is taken up next.
Our interactions with reality are diverse, so our models are accordingly developed
and used for diverse goals as well (Bailer-Jones 2009; Weisberg 2013). Some key
goals of models, both generally and in structural engineering, are as follows:
Control – Often the motivation for modeling is to provide tools which aid in
the control of behavior so that systems (including structures) function as
desired. Prediction is related to control, but a substantial degree of control is
sometimes possible even with limited predictive ability, as long as a
These diverse goals of modeling make it clear that there will rarely be a single, one-
size-fits-all model which is best for all goals (Weisberg 2013). Instead, a multitude
of possible models can typically be developed relative to a given phenomenon –
including an imagined phenomenon, such as a structure yet to be built – and so an
explicit goal should be specified when developing a model. This does not preclude
having more than one goal for a model, but in that case there will likely be trade-offs
in trying to achieve the various goals, and the modeler will then need to choose what
weights to assign to each goal (Weisberg 2013).
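One simple way to picture such trade-offs is to score candidate models against
explicitly weighted goals, as in the following minimal Python sketch (a schematic
illustration added here; the candidate models, goals, scores, and weights are all
invented for demonstration):

    # Hypothetical scores (0 to 1) of three candidate models against two goals.
    candidates = {
        "hand-calc model":    {"accuracy": 0.60, "transparency": 0.90},
        "linear FE model":    {"accuracy": 0.90, "transparency": 0.50},
        "nonlinear FE model": {"accuracy": 0.95, "transparency": 0.30},
    }
    weights = {"accuracy": 0.7, "transparency": 0.3}  # chosen by the modeler

    def weighted_score(scores):
        # Combine per-goal scores using the modeler's chosen weights.
        return sum(weights[goal] * value for goal, value in scores.items())

    best = max(candidates, key=lambda name: weighted_score(candidates[name]))
    print(best)  # a different choice of weights can select a different model

The point of the sketch is not the arithmetic but the fact that the weights must be
chosen, and that choice is itself part of the modeling process.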
All of these considerations highlight the fact that a model is essentially what Koen
(2003) provocatively defines as a heuristic: “anything that provides a plausible aid or
direction in the solution of a problem but is in the final analysis unjustified, incapable
of justification, and potentially fallible.” We explore the development of models in
further detail next.
Looking at the objective side, models rely on reasonably stable empirical patterns in
data, and provide a means to summarize that data, as well as interpolate and
extrapolate from that data, thus reaching beyond the known to the unknown. As more
data becomes available, models may evolve to account for the new data, possibly
changing both quantitatively and qualitatively in the process. This raises the
possibility (and hope) that our collective modeling efforts reflect an ‘antifragile’
process (Taleb 2012) in which, over time, the confrontations with reality to which we
subject our models result in not just ‘weeding out’ inadequate (fragile) models and
maintaining adequate (robust) models, but also natural improvement of certain types
of (antifragile) models. In this way, our overall set of models becomes better over
time – without excluding temporary regressions – ideally with our models eventually
‘converging on reality’, even if no proof of such convergence is possible.
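To illustrate this reach from the known to the unknown, the following minimal Python
sketch (an illustration added here, with invented data) fits a deliberately simple
linear model to noisy quadratic data and then queries it inside and outside the
domain of the data:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 10)             # domain where data exist
    y = x**2 + rng.normal(0.0, 0.01, x.size)  # noisy observations of assumed behavior

    model = np.poly1d(np.polyfit(x, y, deg=1))  # a deliberately simple linear model

    print("interpolation at x = 0.5:", model(0.5), "vs. true value", 0.5**2)
    print("extrapolation at x = 3.0:", model(3.0), "vs. true value", 3.0**2)

Within the domain of the data, the simple model summarizes the pattern tolerably
well, but its extrapolation error grows rapidly – which is precisely why
confrontations of models with new data can force them to evolve.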
Regardless of how motivated we are to apply our rationality and intuition, Herbert
Simon (1957) has noted that humans have ‘bounded rationality’, being subject to
limited availability of information, limited cognitive capacity, and limited time to
complete tasks (such as modeling). An implication is that, rather than optimizing or
striving for ‘perfect’ models or Truth, in practice we ‘satisfice’, continuing our efforts
only until ‘good enough’ results have been achieved. This is closely related to
Koen’s (2003) idea of applying heuristics as described above; heuristics are
incorporated into models (e.g., the many approximations in structural design codes)
(Bulleit 2008), and models themselves serve as heuristics. However, modeling need
not be a solo effort, and Page (2007) has shown that when solving complex problems
(such as development of models), teams composed of individuals bringing diverse
cognitive perspectives will generally be more effective than homogeneous teams.
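The logic of satisficing can be expressed as a short Python sketch (a schematic
illustration added here; refine(), error(), and the effort budget are hypothetical
stand-ins rather than anything from the literature cited above):

    def satisfice(model, refine, error, tolerance, max_effort):
        # Accept the first 'good enough' model rather than searching for the best one.
        for _ in range(max_effort):        # bounded time and cognitive capacity
            if error(model) <= tolerance:  # 'good enough' -- stop refining
                return model
            model = refine(model)          # add one small increment of complexity
        return model                       # effort budget exhausted; accept what we have

The essential point is the stopping rule, not the refinement step: effort ends when
results are adequate, not when they are optimal.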
Along the same lines, the principle of Occam’s razor advises that we should favor
simpler models, and likewise Einstein famously advised that “everything should be
made as simple as possible, but not simpler.” Methodologically, these considerations
generally support starting with simpler models, focusing on primary features first,
adding complexity in relatively small increments, and evaluating the behavior of the
resulting models in the process; this is analogous to perturbation modeling in physics
(Weisberg 2013). To take a somewhat contrived structural engineering example,
consider the following progression of models used to compute support reactions for a
series of continuous spans (a short sketch comparing the first two models follows the
list):
1. Divide total vertical load equally among supports. The inaccuracy of this
model is obvious, but it at least ensures equilibrium and provides a first
approximation of reactions.
2. Compute support reactions based on tributary areas. This is equivalent to
assuming simple spans, and thus ignoring continuity, but it adds the
refinement of providing some reflection of varying span lengths.
3. Account for continuity, but assume constant EI. Though it clearly entails
some inaccuracy, this model is not uncommon in practice, and considerably
simplifies analysis, to the extent that hand calculations are often feasible.
4. Account for varying EI, but still assume linear elastic behavior. This
increases model complexity to the extent that computer modeling will usually
be mandatory in engineering practice.
5. Adjust EI to account for cracking, plasticity, creep, etc. Thoroughly
addressing such factors increases model complexity to an extent that many
projects will not warrant.
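To make the first two steps of this progression concrete, the following minimal
Python sketch (an illustration added here, not part of the original development)
computes support reactions under both models for a uniformly loaded beam; the load
intensity and span lengths are assumed values chosen only for demonstration:

    def reactions_equal_split(w, spans):
        # Model 1: divide the total vertical load equally among all supports.
        total_load = w * sum(spans)
        n_supports = len(spans) + 1
        return [total_load / n_supports] * n_supports

    def reactions_tributary(w, spans):
        # Model 2: tributary areas, i.e., treat each span as simply supported.
        n_supports = len(spans) + 1
        reactions = [0.0] * n_supports
        for i, length in enumerate(spans):
            reactions[i] += w * length / 2.0      # half the span's load to its left support
            reactions[i + 1] += w * length / 2.0  # half to its right support
        return reactions

    w = 10.0            # kN/m, assumed uniform load intensity
    spans = [6.0, 8.0]  # m, assumed span lengths
    print("Equal split:", reactions_equal_split(w, spans))  # approx. [46.7, 46.7, 46.7]
    print("Tributary:  ", reactions_tributary(w, spans))    # [30.0, 70.0, 40.0]

Both models satisfy overall vertical equilibrium, but only the second reflects the
varying span lengths; neither captures continuity, which models 3 through 5
progressively address.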
BIASES IN MODELING
Reviewing such a long list of biases that can adversely affect the modeling process
may prove disconcerting, but one of the best (and perhaps only) ways to address these
biases is to consciously search for their presence during modeling and actively try
to correct for them.
To help reduce uncertainties, models can be tested and calibrated against empirical
data over some domain, but such testing will always be finite and generally will not
cover all cases for which a model may be used. In other words, we generally cannot
avoid going from the ‘known’ to the ‘unknown’, and we are faced with the
philosophical ‘problem of induction’: no series of observations can conclusively prove
that a particular observation will follow – there is always an assumption, ‘an act of
faith’, in making such an inference.
These issues present a genuine challenge with regard to evaluation and validation of
models, but some pragmatic suggestions can still be offered:
CONCLUSION
Perhaps the most central conclusion following from our exploration is that we
engineers don’t fully know what we’re doing when we model structures, and never
will – we can’t fully ‘get real’ – and safety factors are shielding us against structural
failure, thereby possibly fueling overconfidence. With this in mind, we should reduce
safety factors with great caution, only in circumstances in which a clear ability to
reduce uncertainty enables extra confidence to be placed in our models. Conversely,
when modeling atypical structures or in other circumstances in which uncertainty is
increased, we should be ready to increase safety factors and generally apply more
conservatism than required by structural codes. In short, be aware, and be humble!
REFERENCES