Engineers Need to Get Real, But Can’t: The Role of Models

By Irfan A. Alvi, PE, M. ASCE1

1 President & Chief Engineer, Alvi Associates, Inc., 110 West Road, Towson, MD
21204; PH (410) 321-8877; FAX (410) 321-8970; email: ialvi@alviassociates.com

ABSTRACT

Engineers routinely use conceptual, mathematical, and computational models in their
work, and sometimes physical models as well. These models serve as heuristic tools
which abstract from objectively real or ‘imagined’ (yet to be built) engineering
systems to represent them incompletely and inaccurately, with the extent of
inaccuracy often being substantially uncertain. The modeling process is also heavily
shaped by the background knowledge, creativity, goals, and preferences of engineers,
thus incorporating a subjective aspect which involves a variety of unconscious biases
that can adversely affect models. This paper explores the nature and implications of
modeling in engineering, mainly at a philosophical level, and also offers practical
modeling suggestions derived from this exploration.

INTRODUCTION AND OVERVIEW

Ultimately, structural engineers are interested in the real behavior of structures, both
existing and proposed. We gain confidence that such real behavior can be accurately
predicted starting with our engineering education, when we are first exposed to the
mathematically sophisticated and beautiful theories taught in our courses on statics,
mechanics of materials, analysis, dynamics, etc. With these theories, we can
seemingly create ‘clean’ models of structures which can be solved exactly – so clean
that we believe that the models do indeed directly and accurately represent the
structures of interest. Solve some algebraic or differential equations by hand and you
have your results. Amazing, how nice to be involved in an ‘exact science’!

But as we progress through our education towards design courses, and are introduced
to design codes, we start to notice overt approximations being made for
loading conditions, distribution of loads through structures, material behavior,
ultimate strengths, etc., and it becomes evident that models are being used – models
which cannot possibly represent the exact behavior of real structures. How are these
heuristic approximations justified, and how much can we trust them?

At some point we encounter our first finite element modeling course, where we chop
up structures into bits and pieces and generate some very impressive graphics – but
what is the ‘best’ way to chop up the structure and what are the ‘best’ ways of
modeling the pieces? Linear elastic? Should we account for cracking? Plasticity?
What about geometric nonlinearity of the structure as a whole and associated
complex buckling modes? What sorts of boundary conditions do the foundations
provide? By this point, uncertainties – maybe even worries – about what constitutes a
reliable model are growing in the mind of the attentive student.

Of course, after school and upon embarking on a career of professional structural
engineering practice, all of this becomes magnified. Now we are indeed in the realm
of dealing with real structures that cost real dollars and whose failure can result in the
loss of real lives, along with many other real (and adverse) consequences. As any
practitioner who has had to use her PE seal can attest, the stakes are high – we need to
get things right. But how do you assure yourself that your models of structures are
‘right’, or at least right enough? Is a simpler model usually more robust and
understandable, and thus more reliable? Or is it better to go with a more complex
model to capture the nuances of structural behavior better? Is it helpful to compare
the results from different models of the same structure, in the hope that the models
can serve as cross-checks? Or might the models latently incorporate a common and
seriously flawed assumption? And what do we do when these models disagree
substantially – which do we go with?

As a sort of last resort, we might suppose that experience is the final arbiter of what
constitutes a ‘good enough’ model, even if not perfect. If the codes have produced a
set of structures which have a low failure rate over a long period of time, then
presumably the associated models have been validated. And if you are a seasoned
engineer who has
been successful in designing structures – maybe even complex and innovative
structures – that is evidence that you personally are a good modeler of structures. But
not so fast: real structures are always designed with a relatively large factor of
safety, and this may conceal many cases where a structure was poorly modeled but
has not failed because the actual factor of safety is still, say, 1.5 – only
half the target of, say, 3.0. In other words, our design conservatism prevents our
structural models from truly being put to the test and ‘falsified’ even when they are
substantially inaccurate. True, sometimes structures do fail, and that experience
could potentially be used to learn something about the accuracy of the associated
structural models. However, forensic investigations have consistently revealed that
failures are typically due to unfortunate accumulations of human errors of various
types, rather than solely or primarily to problems with structural models, so even here
we face a sharp limitation when it comes to getting feedback on how well our
modeling is going.
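
To make the arithmetic of this concealment concrete, the following minimal Python
sketch works through the kind of case described above; the specific loads and
capacities (a model that overestimates capacity by a factor of two) are assumed
purely for illustration.

    # Minimal sketch (assumed numbers) of how design conservatism can hide model error:
    # the model overestimates capacity by a factor of two, yet the structure still
    # stands, so the error is never exposed in service.

    design_load = 100.0          # kN (assumed)
    predicted_capacity = 300.0   # kN, per the structural model (target factor of safety = 3.0)
    true_capacity = 150.0        # kN, the unknown real capacity; the model is off by 2x

    believed_fs = predicted_capacity / design_load   # what the engineer believes: 3.0
    actual_fs = true_capacity / design_load          # what reality delivers: 1.5

    print(f"believed FS = {believed_fs:.1f}, actual FS = {actual_fs:.1f}, "
          f"structure fails: {actual_fs < 1.0}")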

This paper explores these and related questions. Definitive answers to the questions
may be out of reach, but it is at least hoped that philosophical scrutiny of the
modeling process will lead to useful insights and ultimately better models – even if
we can rarely be quite sure just how good the models are. In this spirit, based on this
exploration of the issues, practical modeling suggestions are offered.

NATURE OF MODELS

Arguably, we cannot access reality directly or even ‘mirror’ it in an exact way (Rorty
1991). Instead, both philosophically and cognitively, our interaction with reality is
always mediated by models. The form of a model may be conceptual, mathematical,
computational, or physical, and all of these forms are used in structural engineering.

All models are simplifications of reality, incomplete and abstracted representations of
phenomena which emphasize selected features of reality and deemphasize or ignore
others (Bailer-Jones 2009; Weisberg 2013), analogous to the various types of maps
we develop to suit various purposes (Giere 1999). Because of what they leave out
and the distortions they introduce, we can boldly assert that models are always and
necessarily false and wrong, never capable of providing ‘the truth, the whole truth,
and nothing but the truth’. But wrongness (inaccuracy) of models is not all-or-
nothing, and model accuracy can range from low (e.g., in much of the social sciences)
to very high (e.g., in much of physics). Indeed, the extreme quantitative accuracy of
some models in physics (e.g., quantum electrodynamics or QED) is part of the reason
why we are sometimes tempted to believe that models can provide Truth!

In addition to the obvious quantitative aspects of most models, all models also have a
qualitative structure. This structure is sometimes left implicit and overlooked, rather
than explicitly examined, but it is still a fundamental element of models, one to which
much of ‘qualitative physics’ applies (Bobrow 1985). In this regard, we often draw on our
ordinary daily experiences to incorporate mechanistic features into models, but as our
intellectual capabilities grow, particularly in the direction of higher mathematics, our
models can become abstract to the extent that they defy any attempts at visualization,
which arguably poses risks due to the resulting loss of intuition (Ferguson 1992). We
may even intentionally disregard reality and build models of hypothetical ‘imagined’
systems in order to explore their properties (Weisberg 2013), and indeed much of
current research in mathematics consists of just such exploration. Interestingly, such
imagined systems have sometimes later had unanticipated usefulness in developing
models of reality, such as when non-Euclidean geometry was found to have
application in Einstein’s general theory of relativity, which raises the question of the
‘unreasonable effectiveness of mathematics in the natural sciences’ (Wigner 1960).
Similarly, much theoretical work in structural mechanics was done primarily for its
theoretical interest and was not implemented in engineering practice for many
decades (and some of it may never see such application).

From an engineering standpoint, all designed systems, including structures, are also
imagined systems until they are constructed, but here the goal is to make real what
was imagined. This illustrates the pragmatic orientation of engineering towards
actively shaping reality, in contrast to the scientific emphasis on describing reality.
This leads to different goals in developing models, a theme which is taken up next.

GOALS OF MODELS

Our interactions with reality are diverse, so our models are accordingly developed
and used for diverse goals as well (Bailer-Jones 2009; Weisberg 2013). Some key
goals of models, both generally and in structural engineering, are as follows:

Explanation – Explanation involves addressing our need and desire to
cognitively ‘understand’ why reality works the way it does. Of course, all
explanations (and thus models) are founded on assumptions which must be
taken as axioms and are not themselves explained. Likewise, explanations
and models may be more or less detailed, and are never ‘complete’ or able to
provide us with Truth, though we may still aim in that direction – which is
often exactly what philosophers and theoretical physicists have been doing, as
when the latter aim for a ‘theory of everything’. In structural engineering, it
may be argued that satisfactory explanations/understandings of behavior are
mandatory requirements for an engineer to be considered competent.

Development of Intuition – Tied with explanation is the goal of development
of intuition, such that models become embedded within our unconscious
cognition and are used more implicitly than explicitly. Indeed, research
suggests that development of expertise in any field relies, in part, on
development of intuition through implicit use of progressively refined and
effective models (Bereiter and Scardamalia 1993), and such intuition is
largely the basis of what we commonly refer to as ‘engineering judgment’.

Instruction – Instruction goes a step beyond explanation by using models in
order to teach. This is an important goal in both academic and practice
settings. Models used for this purpose may be intentionally and especially
oversimplified – and thus especially inaccurate – in order to increase clarity of
explanations; hence the difference between ‘textbook’ cases as compared to
cases encountered in practice.

Prediction – In many applications, especially structural engineering, the
ability to predict system behavior accurately is a primary goal of modeling.
Such prediction may be qualitative, but more often the goal is quantitative
accuracy. The particular behaviors to be predicted generally need to be
explicitly specified, since a given model will likely be more accurate with
respect to some behaviors than others. For example, different models may be
used to predict deflections as compared to stress resultants (moment, shear,
axial force, and torsion), and different models may even be used to predict
short-term deflections as compared to long-term deflections.

Control – Often the motivation for modeling is to provide tools which aid in
the control of behavior so that systems (including structures) function as
desired. Prediction is related to control, but a substantial degree of control is
sometimes possible even with limited predictive ability, as long as a
behavioral target is clearly specified and can be closely maintained, such as
with the simple application of negative feedback loops (Wiener 1948); a minimal
sketch of such a loop is given after this list of goals.

Design – In engineering, we go beyond the goal of control and instead
specifically design systems to perform intended functions. Such design
always relies on use of models. In practice, the use of models for design is
subject to constraints with respect to schedules, costs, compliance with design
codes, required level of accuracy, need for relevant expertise, etc., so an
idealized philosophical or scientific search for Truth is never the goal.
Moreover, because design is inherently a creative process, design models have
a two-way interaction with creativity, in which creativity fosters development
of models, but the specific features of models can also spur design creativity.

Evaluation – Design is always done in the context of specific performance
criteria (both explicit and implicit), but models can also be used to evaluate
existing systems in specific ways. For example, in bridge engineering, ‘load
rating’ models are commonly developed to evaluate the load capacity of
existing bridges for particular vehicle configurations (in terms of axle
spacings and weight distributions), and these models consider a limited subset
of failure modes as compared to design models (e.g., bridge abutments and
piers are often not evaluated at all).

Experimentation and Data Collection – Thomas Kuhn (1962) and others
noted that ‘all observation is theory-laden’, meaning that we can only gather
data from observation and experimentation in the context of models, since
models are what enable data to have meaning. There is thus a two-way
interaction between data and models, where models enable the collection of
data, but the collected data can then reshape the models.
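
As a minimal illustration of the control goal noted above (an assumed sketch, not an
example taken from Wiener or from this paper), the following Python fragment uses a
crude proportional negative feedback loop to drive a measured response toward a
specified target, even though the loop contains only a very rough notion of how the
underlying system behaves; the response function, target, and gain are all hypothetical.

    # Minimal sketch (assumed illustration) of control via negative feedback: a crude
    # proportional loop holds a response near a target even though it has no accurate
    # model of the underlying system. All numbers are hypothetical.

    def system_response(load):
        # 'True' behavior of the system, unknown to the controller (hypothetical).
        return 0.012 * load + 0.00002 * load ** 2

    target = 1.5   # desired response (arbitrary units)
    gain = 20.0    # proportional gain, chosen by trial rather than analysis
    load = 0.0     # quantity adjusted by the feedback loop

    for _ in range(50):
        error = target - system_response(load)   # compare measurement to target
        load += gain * error                     # negative feedback correction

    print(f"final load = {load:.1f}, response = {system_response(load):.3f} "
          f"(target = {target})")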

These diverse goals of modeling make it clear that there will rarely be a single, one-
size-fits-all model which is best for all goals (Weisberg 2013). Instead, a multitude
of possible models can typically be developed relative to a given phenomenon –
including an imagined phenomenon, such as a structure yet to be built – and so an
explicit goal should be specified when developing a model. This does not preclude
having more than one goal for a model, but in that case there will likely be trade-offs
in trying to achieve the various goals, and the modeler will then need to choose what
weights to assign to each goal (Weisberg 2013).

All of these considerations highlight the fact that a model is essentially what Koen
(2003) provocatively defines as a heuristic: “anything that provides a plausible aid or
direction in the solution of a problem but is in the final analysis unjustified, incapable
of justification, and potentially fallible.” We explore the development of models in
further detail next.

DEVELOPMENT OF MODELS

Are models created or discovered? Arguably, the development of models is a
combination of both (Bailer-Jones 2009; Weisberg 2013). On one side, models are
based on objective empirical phenomena and aim to represent them, so discovery is
involved. On the other side, we modelers choose the form and goals of our models,
and we likewise evaluate our models according to criteria of our choosing, so a
creative and subjective element is involved as well.

Looking at the objective side, models rely on reasonably stable empirical patterns in
data, and provide a means to summarize that data, as well as interpolate and
extrapolate from that data, thus reaching beyond the known to the unknown. As more
data becomes available, models may evolve to account for the new data, possibly
changing both quantitatively and qualitatively in the process. This raises the
possibility (and hope) that our collective modeling efforts reflect an ‘antifragile’
process (Taleb 2012) in which, over time, the confrontations with reality to which we
subject our models result in not just ‘weeding out’ inadequate (fragile) models and
maintaining adequate (robust) models, but also natural improvement of certain types
of (antifragile) models. In this way, our overall set of models becomes better over
time – without excluding temporary regressions – ideally with our models eventually
‘converging on reality’, even if no proof of such convergence is possible.

On the subjective side, the cognitive processes involved in modeling remain
somewhat mysterious, but there is no question that, in addition to rationality, intuition
plays a large role. As noted above, models shape our intuition, and development of
expertise involves incorporating ever better models into our intuition. However, this
is a two-way interaction, with intuition also playing a vital role in the development of
models. Particularly within structural engineering, it has also been argued (Gomez-
Rivas et al. 2012) that use and exploration of physical models is an especially helpful
means to develop intuition regarding structural behavior and aid in validation of
computational models; structural engineers who grew up playing with Legos or
Lincoln Logs are likely to agree!

Regardless of how motivated we are to apply our rationality and intuition, Herbert
Simon (1957) has noted that humans have ‘bounded rationality’, being subject to
limited availability of information, limited cognitive capacity, and limited time to
complete tasks (such as modeling). An implication is that, rather than optimizing or
striving for ‘perfect’ models or Truth, in practice we ‘satisfice’, continuing our efforts
only until ‘good enough’ results have been achieved. This is closely related to
Koen’s (2003) idea of applying heuristics as described above; heuristics are
incorporated into models (e.g., the many approximations in structural design codes)
(Bulleit 2008), and models themselves serve as heuristics. However, modeling need
not be a solo effort, and Page (2007) has shown that when solving complex problems
(such as development of models), teams composed of individuals bringing diverse
cognitive perspectives will generally be more effective.

A potential strategy to reduce the extent of simplification and distortion involved in
modeling, and thus capture reality better, is to make models more complex, possibly to
the extent that computers are mandatory (e.g., finite element models). However, this
strategy provides no guarantee of accuracy either, and can even sometimes backfire
because more complexity also means more things which can go wrong. For example,
finite element models, despite their apparent sophistication, may neglect key modes
of behavior such as buckling, as was the case in the collapse of the roof of the
Hartford Civic Center coliseum (Delatte 2009). In a similar vein, the computational
models developed by specialists in quantitative finance (‘quants’), which contributed
to the collapse of global financial markets in late 2007, were extraordinarily
complex, to the extent of defying meaningful intuitive understanding and instead
functioning as ‘black boxes’ focused on predictive ability (Patterson 2011).

Along the same lines, the principle of Occam’s razor advises that we should favor
simpler models, and likewise Einstein famously advised that “everything should be
made as simple as possible, but not simpler.” Methodologically, these considerations
generally support starting with simpler models, focusing on primary features first,
adding complexity in relatively small increments, and evaluating the behavior of the
resulting models in the process; this is analogous to perturbation modeling in physics
(Weisberg 2013). To take a somewhat contrived structural engineering example,
consider the following progression of models used to compute support reactions for a
series of continuous spans (the first two models are illustrated in the sketch after
the list):

Divide total vertical load equally among supports. The inaccuracy of this
model is obvious, but it at least ensures equilibrium and provides a first
approximation of reactions.
Compute support reactions based on tributary areas. This is equivalent to
assuming simple spans, and thus ignoring continuity, but it adds the
refinement of providing some reflection of varying span lengths.
Account for continuity, but assume constant EI. Though it clearly entails
some inaccuracy, this model is not uncommon in practice, and considerably
simplifies analysis, to the extent that hand calculations are often feasible.
Account for varying EI, but still assume linear elastic behavior. This
increases model complexity to the extent that computer modeling will usually
be mandatory in engineering practice.
Adjust EI to account for cracking, plasticity, creep, etc. Thoroughly
addressing such factors increases model complexity to an extent that many
projects will not warrant.
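
To make the first two models in this progression concrete, the following minimal
Python sketch computes their support reactions for an assumed arrangement of three
spans (8 m, 12 m, 8 m) under a uniform load of 10 kN/m; all numbers are invented for
illustration, and the continuity models later in the progression would require a
stiffness (or similar) solution beyond the scope of this sketch.

    # Minimal sketch (assumed example) of the first two models in the progression
    # above, applied to three spans under a uniform load.

    w = 10.0                    # uniform load, kN/m (assumed)
    spans = [8.0, 12.0, 8.0]    # span lengths, m (assumed)
    total_load = w * sum(spans)
    n_supports = len(spans) + 1

    # Model 1: divide the total vertical load equally among the supports.
    model1 = [total_load / n_supports] * n_supports

    # Model 2: tributary areas, i.e. treat each span as simply supported.
    model2 = [w * spans[0] / 2.0]
    model2 += [w * (spans[i] + spans[i + 1]) / 2.0 for i in range(len(spans) - 1)]
    model2 += [w * spans[-1] / 2.0]

    print("Model 1 reactions (kN):", model1)   # [70.0, 70.0, 70.0, 70.0]
    print("Model 2 reactions (kN):", model2)   # [40.0, 100.0, 100.0, 40.0]
    print("Both satisfy vertical equilibrium:",
          abs(sum(model1) - total_load) < 1e-9 and abs(sum(model2) - total_load) < 1e-9)

For this uniform-load case, accounting for continuity would typically shift some
additional load from the end supports toward the interior supports, which is exactly
the kind of refinement for which computer modeling usually becomes warranted.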

Regardless of the complexity of a given model, models are generally used in
combinations rather than in isolation. These will typically include hierarchical
relationships where some models are founded on other more fundamental models, the
latter possibly being implicit and even taken for granted. Indeed, what we commonly
refer to as ‘theories’ may be viewed simply as models which are general in scope and
not intended to be applied to specific phenomena, but provide a background basis to
develop models of such phenomena (Bailer-Jones 2009). For example, Newtonian
mechanics provides a theoretical framework for development of structural mechanics,
which is in turn a theoretical framework for development of models of specific
structures. In addition, models may often be combined with each other via non-
hierarchical networked links. For example, a bridge may have a model of the
superstructure, the reactions from which are ‘inputs’ to individual models of each pier
and abutment, all of which are linked with models of the foundations and subsurface
materials to account for soil-structure interactions.
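
The following Python sketch is an assumed, highly simplified illustration of such
linking: a reaction from a superstructure model becomes the input to a hypothetical
pier model, whose base force in turn feeds a hypothetical foundation model. The
functions and numbers are invented for illustration and do not represent any real
design procedure.

    # Minimal sketch (assumed illustration) of networked models: the output of one
    # model serves as input to the next. All functions and numbers are hypothetical.

    def superstructure_reaction_at_pier(w, left_span, right_span):
        # Superstructure model (tributary-area approximation): deck reaction at a pier.
        return w * (left_span + right_span) / 2.0

    def pier_base_force(deck_reaction, pier_self_weight):
        # Pier model: carry the deck reaction plus the pier's own weight to the base.
        return deck_reaction + pier_self_weight

    def required_footing_area(base_force, allowable_bearing_pressure):
        # Foundation model: size a spread footing for an assumed allowable pressure.
        return base_force / allowable_bearing_pressure

    reaction = superstructure_reaction_at_pier(w=10.0, left_span=8.0, right_span=12.0)  # kN
    base_force = pier_base_force(reaction, pier_self_weight=50.0)                       # kN
    area = required_footing_area(base_force, allowable_bearing_pressure=150.0)          # m^2

    print(f"deck reaction = {reaction:.0f} kN, base force = {base_force:.0f} kN, "
          f"required footing area = {area:.2f} m^2")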

BIASES IN MODELING

Tversky and Kahneman (1974), among others, were pioneers in performing
experimental psychology studies which revealed a variety of heuristics and biases in
human cognition, often leading to decisions with sub-optimal outcomes. Heuristics
have been discussed above, so we now focus on biases. While the literature
exploring biases specifically in modeling appears to be very limited, many well-
established biases (Plous 1993; Pompian 2012) are likely candidates for adversely
affecting the modeling process; for example (referencing models where appropriate):

Availability bias – Estimating the probability of an outcome based on how
prevalent it has been in one’s personal experience, despite that personal
experience likely not being representative of broader experience (e.g., in
choosing the ‘best’ model for a particular type of structure).
Confirmation bias – Selective emphasis on evidence which confirms a model,
while deemphasizing or rejecting evidence which contradicts it.
Conservatism bias – Adherence to a prior model at the expense of
underutilizing new evidence.
Framing bias – Responding to a situation (e.g., modeling) differently based
on the context in which it is framed, despite no change in the situation itself.
Loss aversion bias – Giving irrationally greater preference to avoiding losses
as compared to acquiring gains (which can result in excessive model
conservatism).
Outcome bias – Doing something based on past outcomes while ignoring the
process leading to those outcomes (e.g., use of a computational model in a
‘black box’ manner because the model has ‘worked’ so far).
Overconfidence bias – Unwarranted faith in one’s intuitive reasoning,
judgments, and cognitive abilities (which can lead to overconfidence in one’s
models, resulting in exposure to increased risks).
Recency bias – Giving greater weight to more recent events, as compared to
prior events, without justification (e.g., using a particular model because it
was used recently, despite overall experience supporting a different model).
Regret aversion bias – Excessively seeking to avoid regret associated with a
poor decision; especially common after a bad outcome (e.g., designing overly
conservatively after a recent structural failure).
Representativeness bias – Incorrectly classifying something into a prior
category (e.g., type of structure appropriate for a given model) based on an
invalid analogy.
Self-serving attribution bias – Ascribing successes to one’s ability and failures
to outside factors such as luck (e.g., resulting in invalid defense of one’s
model when a structure fails).
Status quo bias – Emotional predisposition to prefer that things stay the same
(which has a legitimate benefit in terms of mitigating risk in engineering, but
at the expense of curbing innovation).

Review of such a long list of biases which can adversely affect the modeling process
may prove disconcerting, but one of the best (and perhaps only) ways to address such
biases is to make a conscious search for their presence during modeling and actively
try to correct for them.

EVALUATION AND VALIDATION OF MODELS

As discussed above, models can be developed to achieve various subjectively chosen
goals with associated weights, and the adequacy of models needs to be evaluated with
respect to these goals. For example, a model intended for instructional purposes
should be conceptually clear, whereas a model intended for prediction should be
quantitatively accurate, with its accuracy possibly measured statistically. Considering
the diverse goals of models, no standard and objective means for evaluating models
can be prescribed, and instead judgment is generally necessary.

However, if we focus on predictive accuracy, as noted above, models are always
incomplete and inaccurate representations of phenomena, with the extent of
inaccuracy being uncertain to varying degrees. This uncertainty can be classified into
two categories (Bulleit 2008; Der Kiureghian and Ditlevsen 2009). First, ‘aleatory
uncertainty’ roughly corresponds to ‘known unknowns’. These are factors which are
uncertain but included in our models and for which statistical representation may be
feasible, so various statistical methods, such as the Monte Carlo method (Melchers
1999), can be used for uncertainty propagation. Second, ‘epistemic uncertainty’
roughly corresponds to ‘unknown unknowns’. These are factors excluded from our
models; their effects may surprise us, may be large, and may be rationalized only in
hindsight; Taleb (2010) has called such events ‘black swans’.
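
For the aleatory category, a minimal Monte Carlo sketch is given below; the limit
state and the assumed distributions of resistance and load effect are purely
illustrative and are not taken from Melchers (1999) or from any design code.

    # Minimal Monte Carlo sketch (assumed distributions) for propagating aleatory
    # uncertainty through a simple limit state g = R - S (resistance minus load effect).
    import math
    import random

    random.seed(0)
    n_samples = 100_000
    failures = 0

    for _ in range(n_samples):
        R = random.lognormvariate(math.log(300.0), 0.10)   # resistance, kN (assumed lognormal)
        S = random.gauss(150.0, 30.0)                       # load effect, kN (assumed normal)
        if R < S:                                           # limit state violated
            failures += 1

    print(f"estimated probability of failure = {failures / n_samples:.1e}")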

To help reduce uncertainties, models can be tested and calibrated against empirical
data over some domain, but such testing will always be finite and generally will not
cover all cases for which a model may be used. In other words, we generally cannot
avoid going from the ‘known’ to the ‘unknown’, and we are faced with the
philosophical ‘problem of induction’: no series of observations can conclusively prove
that a particular observation will follow – there is always an assumption, ‘an act of
faith’, in making such an inference.

In structural engineering, a further complication is that we perform limited testing of
structures to check our models. Instead, our models are mainly based on the
application of structural mechanics, as well as models of structural elements
calibrated by lab testing, commonly using reduced-scale specimens. Thus, designers
rarely get feedback on how well their models represent the specific full-scale
structures that they have designed. It may be argued that adequate performance of
those structures, both individually and collectively, validates the associated models;
but as noted above, substantial safety factors prevent models from truly being ‘put to
the test’ and thus ‘falsified’ unless they are highly inaccurate (Popper 1959).

These issues present a genuine challenge with regard to evaluation and validation of
models, but some pragmatic suggestions can still be offered:

Generally, treat models as ‘guilty until proven innocent’. Be skeptical and
critical.
Explicitly identify the assumptions underlying a model, in writing.
Explicitly identify what a model leaves out, in writing. By definition, we
cannot fully ‘know what we don’t know’, but applying imagination and
bringing multiple perspectives from a diverse team can help with this.
Conduct independent peer reviews and checks during and after model
development.
Evaluate models against your experience, intuition, and judgment, using
visualization tools where applicable (e.g., deflected shapes of structures). Do
the model and its predictions seem reasonable? Again, this test is subject to
many biases, but it can still help, especially if multiple perspectives are
brought to bear.
To check a model’s robustness, perform sensitivity studies of its parameters.
This includes both varying parameters around ‘best estimates’, as well as
testing what happens when parameters are taken to extreme values, to see if
the resulting behavior still makes sense.
To increase predictive accuracy, average results from multiple diverse models.
This approach is not only intuitively reasonable, but has been shown by Page
(2007) to be rigorously valid, resulting in derivation of a ‘diversity prediction
theorem’ based on the fact that errors among diverse models tend to cancel (a
small numerical check of this identity is sketched after this list).
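
As a small numerical check of that identity (with a hypothetical ‘true’ value and
hypothetical model predictions), the following Python sketch verifies that the squared
error of the average prediction equals the average squared error of the individual
models minus the variance (‘diversity’) of their predictions.

    # Numerical check (hypothetical numbers) of the diversity prediction theorem:
    # collective squared error = average individual squared error - prediction diversity.

    true_value = 100.0                         # quantity being predicted (assumed)
    predictions = [92.0, 105.0, 110.0, 97.0]   # hypothetical results from four diverse models

    n = len(predictions)
    mean_prediction = sum(predictions) / n

    collective_error = (mean_prediction - true_value) ** 2
    avg_individual_error = sum((p - true_value) ** 2 for p in predictions) / n
    diversity = sum((p - mean_prediction) ** 2 for p in predictions) / n

    print(f"collective squared error     = {collective_error:.2f}")      # 1.00
    print(f"average individual sq error  = {avg_individual_error:.2f}")  # 49.50
    print(f"prediction diversity         = {diversity:.2f}")             # 48.50
    print("identity holds:",
          abs(collective_error - (avg_individual_error - diversity)) < 1e-9)
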
Also, the following are some suggestions specifically applicable in structural
engineering:

Develop and apply a strong understanding of structural mechanics, at both
conceptual and detailed levels. What they taught you in engineering school is
important!
When using computational models and their software implementations,
understand their assumptions, limitations, and methodologies, as well as
proper interpretation of outputs. Never use them as ‘black boxes’.
Perform equilibrium checks and test models with simplified load cases (a
simple equilibrium check is sketched after this list).
Develop multiple models of a given structure, preferably keeping the models
relatively diverse. Always have at least one model based entirely on hand
calculations among them, for comparison with any computational models and
software implementations. And when using computational models and
software implementations which are not known to have been extensively
validated, or which are being applied to a structure for which the model is not
typically used, develop at least one different computational model or software
implementation for comparison.
Investigate discrepancies between results from different models of the same
structure until they can be explained. Do not simply pick what appears to be
the best model and ignore the other models. And recognize that agreement
among models is not a proof of correctness, since it may be due to shared
and/or cancelling flaws in models.
To assess structural safety, consider the net effect of all partial safety factors,
inherent conservatism or unconservatism built into models, fragility or
robustness of a structure which is not explicitly accounted for by codes and
models, and the possibility of unknown unknowns – use your imagination and
try to make them known!
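
As a trivial example of the equilibrium check mentioned above (with hypothetical
numbers), the following Python sketch compares the sum of computed support reactions
to the total applied vertical load and flags any imbalance beyond a small tolerance.

    # Minimal sketch (hypothetical numbers) of an equilibrium sanity check: the sum of
    # computed support reactions should equal the total applied vertical load.

    w = 10.0                                 # uniform load, kN/m (assumed)
    spans = [8.0, 12.0, 8.0]                 # span lengths, m (assumed)
    reactions = [39.0, 101.0, 101.0, 39.0]   # hypothetical output of an analysis model, kN

    total_applied = w * sum(spans)
    total_reactions = sum(reactions)
    imbalance = abs(total_applied - total_reactions)
    tolerance = 0.01 * total_applied         # flag anything beyond 1% for investigation

    print(f"applied = {total_applied:.1f} kN, reactions = {total_reactions:.1f} kN, "
          f"imbalance = {imbalance:.2f} kN "
          f"({'OK' if imbalance <= tolerance else 'INVESTIGATE'})")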

Further suggestions specific to modeling structures, many of which are more
technical and detailed than the more general and philosophical suggestions offered
above, can be found in MacLeod (2005), Dym (1997), and Dym and Williams (2012).

CONCLUSION

Perhaps the most central conclusion following from our exploration is that we
engineers don’t fully know what we’re doing when we model structures, and never
will – we can’t fully ‘get real’ – and safety factors are shielding us against structural
failure, thereby possibly fueling overconfidence. With this in mind, we should reduce
safety factors with great caution, only in circumstances in which a clear ability to
reduce uncertainty enables extra confidence to be placed in our models. Conversely,
when modeling atypical structures or in other circumstances in which uncertainty is
increased, we should be ready to increase safety factors and generally apply more
conservatism than required by structural codes. In short, be aware, and be humble!

REFERENCES

Bailer-Jones, D. (2009) Scientific Models in Philosophy of Science, University of
Pittsburgh Press, Pittsburgh, PA.
Bereiter, C. and Scardamalia, M. (1993) Surpassing Ourselves: An Inquiry into the
Nature and Implications of Expertise, Open Court Publishing, Chicago, IL.
Bobrow, D. (Ed.) (1985) Qualitative Reasoning about Physical Systems, MIT Press,
Cambridge, MA.
Bulleit, W. (2008) “Uncertainty in Structural Engineering.” Practice Periodical on
Structural Design and Construction, 13:1, 24-30.
Delatte, Jr., N. J. (2009) Beyond Failure: Forensic Case Studies for Civil Engineers,
ASCE Press, Reston, VA.
Der Kiureghian, A. and Ditlevsen, O. (2009) “Aleatory or epistemic? Does it
matter?” Structural Safety, 31:2, 105–112.
Dym, C. (1997) Structural Modeling and Analysis, Cambridge University Press,
Cambridge, UK.
Dym, C. and Williams, H. (2012) Analytical Estimates of Structural Behavior, CRC
Press, Boca Raton, FL.
Ferguson, E. S. (1992) Engineering and the Mind’s Eye, MIT Press, Cambridge,
MA.
Giere, R. (1999) Science without Laws, University of Chicago Press, Chicago, IL.
Gomez-Rivas, A., Pincus, G., and Tito, J. (2012) “Models, Computers and Structural
Analysis.” Proceedings of 10th Latin American and Caribbean Conference for
Engineering and Technology (LACCEI 2012).
Koen, B. V. (2003) Discussion of the Method: Conducting the Engineer’s Approach
to Problem Solving, Oxford University Press, Oxford, UK.
Kuhn, T. (1962) The Structure of Scientific Revolutions, University of Chicago Press,
Chicago, IL.
MacLeod, I. (2005) Modern Structural Analysis: Modelling Process and Guidance,
Thomas Telford, Reston, VA.
Melchers, R. (1999) Structural Reliability Analysis and Prediction, 2nd Ed., Wiley,
New York, NY.
Page, S. (2007) The Difference, Princeton University Press, Princeton, NJ.
Patterson, S. (2011) The Quants: How a New Breed of Math Whizzes Conquered Wall
Street and Nearly Destroyed It, Crown Business, New York, NY.
Plous, S. (1993) The Psychology of Judgment and Decision Making, McGraw-Hill,
New York, NY.
Pompian, M. (2012) Behavioral Finance and Wealth Management, 2nd Ed., Wiley,
Hoboken, NJ.
Popper, K. (1959) The Logic of Scientific Discovery, Basic Books, New York, NY.
Rorty, R. (1991) Philosophy and the Mirror of Nature, Princeton University Press,
Princeton, NJ.
Simon, H. (1957) “A Behavioral Model of Rational Choice” in Models of Man,
Social and Rational, Wiley, New York, NY.
Taleb, N. (2010) The Black Swan: The Impact of the Highly Improbable, 2nd Ed.,
Random House, New York, NY.
Taleb, N. (2012) Antifragile: Things that Gain from Disorder, Random House, New
York, NY.
Tversky, A. and Kahneman, D. (1974) “Judgment under uncertainty: Heuristics and
biases.” Science, 185: 1124-1131.
Weisberg, M. (2013) Simulation and Similarity: Using Models to Understand the
World, Oxford University Press, Oxford, UK.
Wiener, N. (1948) Cybernetics: or Control and Communication in the Animal and the
Machine, MIT Press, Cambridge, MA.
Wigner, E. (1960) “The Unreasonable Effectiveness of Mathematics in the Natural
Sciences.” Communications on Pure and Applied Mathematics, 13: 1-14.
