
RELIABILITY BASED DESIGN OPTIMIZATION: FORMULATIONS AND

METHODOLOGIES
A Dissertation
Submitted to the Graduate School
of the University of Notre Dame
in Partial Fulfillment of the Requirements
for the Degree of
Doctor of Philosophy
by
Harish Agarwal, B.Tech.(Hons), M.S.M.E.
John E. Renaud, Director
Graduate Program in Aerospace and Mechanical Engineering
Notre Dame, Indiana
December 2004
RELIABILITY BASED DESIGN OPTIMIZATION: FORMULATIONS AND
METHODOLOGIES
Abstract
by
Harish Agarwal
Modern products ranging from simple components to complex systems should
be designed to be optimal and reliable. The challenge of modern engineering is to
ensure that manufacturing costs are reduced and design cycle times are minimized
while achieving requirements for performance and reliability. If the market for the
product is competitive, improved quality and reliability can generate very strong
competitive advantages. Simulation based design plays an important role in design-
ing almost any kind of automotive, aerospace, and consumer products under these
competitive conditions. Single discipline simulations used for analysis are being cou-
pled together to create complex coupled simulation tools. This investigation focuses
on the development of efficient and robust methodologies for reliability based design
optimization in a simulation based design environment.
Original contributions of this research are the development of a novel efficient
and robust unilevel methodology for reliability based design optimization, the de-
velopment of an innovative decoupled reliability based design optimization method-
ology, the application of homotopy techniques in unilevel reliability based design
optimization methodology, and the development of a new framework for reliability
based design optimization under epistemic uncertainty.
The unilevel methodology for reliability based design optimization is shown to
be mathematically equivalent to the traditional nested formulation. Numerical test
problems show that the unilevel methodology can reduce computational cost by at
least 50% as compared to the nested approach. The decoupled reliability based de-
sign optimization methodology is an approximate technique to obtain consistent reliable designs at reduced computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for
performing reliability based design optimization under epistemic uncertainty is also
developed. A trust region managed sequential approximate optimization method-
ology is employed for this purpose. Results from numerical test studies indicate
that the methodology can be used for performing design optimization under severe
uncertainty.
To Baba, Maa, Papa, Mom, Pawan, and Sweta
CONTENTS

FIGURES

TABLES

ACKNOWLEDGMENTS

CHAPTER 1: INTRODUCTION
1.1 Design optimization
1.2 Reliability based design optimization
1.3 Research Objectives
1.3.1 Decoupled methodology for reliability based design optimization
1.3.2 Unilevel methodology for reliability based design optimization
1.3.3 Application of continuation methods in unilevel reliability based design optimization formulation
1.3.4 Reliability based design optimization under epistemic uncertainty
1.4 Overview of the dissertation

CHAPTER 2: DESIGN AND OPTIMIZATION
2.1 Deterministic design optimization
2.1.1 Multidisciplinary systems design
2.1.2 Multidisciplinary design optimization
2.1.3 MDO Algorithms
2.2 Summary

CHAPTER 3: UNCERTAINTY: TYPES AND THEORIES
3.1 Classification of uncertainties
3.1.1 Aleatory Uncertainty
3.1.2 Epistemic Uncertainty
3.1.3 Error (Numerical Uncertainty)
3.2 Uncertainty modeling techniques
3.2.1 Probability Theory
3.2.2 Dempster-Shafer Theory
3.2.3 Convex Models of Uncertainty and Interval Analysis
3.2.4 Possibility/Fuzzy Set Theory Based Approaches
3.3 Optimization Under Uncertainty
3.3.1 Robust Design
3.3.2 Reliability based design optimization
3.3.3 Fuzzy Optimization
3.3.4 Reliability Based Design Optimization Using Evidence Theory
3.4 Summary

CHAPTER 4: RELIABILITY BASED DESIGN OPTIMIZATION
4.1 RBDO formulations: efficiency and robustness
4.1.1 Double Loop Methods for RBDO
4.1.2 Probabilistic reliability analysis
4.1.3 Sequential Methods for RBDO
4.1.4 Unilevel Methods for RBDO
4.2 Summary

CHAPTER 5: DECOUPLED RBDO METHODOLOGY
5.1 A new sequential RBDO methodology
5.1.1 Sensitivity of Optimal Solution to Problem Parameters
5.2 Test Problems
5.2.1 Short Rectangular Column
5.2.2 Analytical Problem
5.2.3 Cantilever Beam
5.2.4 Steel Column
5.3 Summary

CHAPTER 6: UNILEVEL RBDO METHODOLOGY
6.1 A new unilevel RBDO methodology
6.2 Test Problems
6.2.1 Analytical Problem
6.2.2 Control Augmented Structures Problem
6.3 Summary

CHAPTER 7: CONTINUATION METHODS IN OPTIMIZATION
7.1 Proposed Algorithm
7.2 Test Problems
7.2.1 Short Rectangular Column
7.2.2 Steel Column
7.3 Summary

CHAPTER 8: RELIABILITY BASED DESIGN OPTIMIZATION UNDER EPISTEMIC UNCERTAINTY
8.1 Epistemic uncertainty quantification
8.2 Deterministic Optimization
8.3 Optimization under epistemic uncertainty
8.4 Sequential Approximate Optimization
8.5 Test Problems
8.6 Analytic Test Problem
8.7 Aircraft concept sizing problem
8.8 Summary

CHAPTER 9: CONCLUSIONS AND FUTURE WORK
9.1 Summary and conclusions
9.1.1 Decoupled methodology for reliability based design optimization
9.1.2 Unilevel methodology for reliability based design optimization
9.1.3 Continuation methods for unilevel RBDO
9.1.4 Reliability based design optimization under epistemic uncertainty
9.2 Recommendations for future work
9.2.1 Decoupled RBDO using higher order methods
9.2.2 RBDO for system reliability
9.2.3 Homotopy curve tracking for solving unilevel RBDO
9.2.4 Considering total uncertainty in design optimization
9.2.5 Variable fidelity reliability based design optimization

BIBLIOGRAPHY
FIGURES

2.1 Model of a multidisciplinary system analysis
3.1 Sources of Uncertainty
3.2 Belief (Bel) and Plausibility (Pl) [6]
3.3 Design trade-off in RBDO
4.1 Reliability analysis
4.2 Approximate MPP Estimation
5.1 Proposed RBDO Methodology
5.2 Convergence history for the example problem
5.3 Contours of objective and constraints
5.4 Plot showing two reliable optima
6.1 Contours of objective and constraints
6.2 Plot showing two reliable optima
6.3 Control augmented structures problem
6.4 Coupling in control augmented structures problem
7.1 Convergence of objective function
7.2 Convergence of optimization variables
7.3 Convergence of objective function
8.1 Simplified Multidisciplinary Model
8.2 Known BPA structure
8.3 Complementary Cumulative Belief and Plausibility Function
8.4 Experts' Opinion for Parameter 1
8.5 Experts' Opinion for Parameter 2
8.6 Design Variable History
8.7 Convergence of objective function
8.8 Aircraft Concept Sizing Problem
8.9 Expert Opinion for p3 and p4
8.10 Convergence of the Objective Function (ACS Problem)
TABLES

5.1 STOCHASTIC PARAMETERS IN SHORT COLUMN PROBLEM
5.2 COMPUTATIONAL COMPARISON OF RESULTS (SHORT RECTANGULAR COLUMN)
5.3 STARTING POINT [-5,3], SOLUTION [-3.006,0.049]
5.4 STARTING POINT [5,3], SOLUTION [2.9277,1.3426]
5.5 STOCHASTIC PARAMETERS IN CANTILEVER BEAM PROBLEM
5.6 STOCHASTIC PARAMETERS IN STEEL COLUMN PROBLEM
6.1 STARTING POINT [-5,3], SOLUTION [-3.006,0.049]
6.2 STARTING POINT [5,3], SOLUTION [2.9277,1.3426]
6.3 STATISTICAL INFORMATION FOR THE RANDOM VARIABLES
6.4 MERIT FUNCTION AT THE INITIAL AND FINAL DESIGNS
6.5 HARD CONSTRAINTS AT THE FINAL DESIGN
6.6 COMPARISON OF COMPUTATIONAL COST OF RBDO METHODS
7.1 STOCHASTIC PARAMETERS IN SHORT COLUMN PROBLEM
7.2 STOCHASTIC PARAMETERS IN STEEL COLUMN PROBLEM
8.1 COMPARISON OF DESIGNS
8.2 DESIGN VARIABLES IN ACS PROBLEM
8.3 LIST OF PARAMETERS IN THE ACS PROBLEM
8.4 LIST OF STATES IN THE ACS PROBLEM
8.5 COMPARISON OF DESIGNS (ACS PROBLEM)
ACKNOWLEDGMENTS
First, I would like to express my sincere gratitude to my advisor, Dr. John E.
Renaud, for his support, encouragement, and guidance during my stay at Notre
Dame. He has been a constant source of help and inspiration to me. He gave a lot
of freedom in my course and research work, and has been extremely supportive and
understanding at all times.
I thank my readers, Dr. John E. Renaud, Dr. Stephen M. Batill,
Dr. Steven B. Skaar, and Dr. Alan P. Bowling, for their help in the successful
completion of this dissertation.
I appreciate the administrative help provided by the department administrative
assistants Ms. Nancy Davis, Ms. Evelyn Addington, Ms. Judith Kenna, and Ms. Nancy O'Connor.
I would like to thank the National Science Foundation, the Center for Applied
Mathematics at the University of Notre Dame, the Office of Naval Research, and the Department of Aerospace and Mechanical Engineering for their financial support.
I would like to extend my thanks to Dr. Dhanesh Padmanabhan for helping me
settle in at Notre Dame and for providing help in my research activities. I would like to thank Dr. Victor Perez for getting me involved in some of my initial research work and for providing information on cutting edge research.
Last but not least, I thank my other lab members Dr. Xiaoyu Gu, Dr. Weiyu
Liu, Andres, Shawn, Alejandro, and Neal. I thank my roommates, Kameshwar,
Rajkumar, Sharad, Wyatt, Fabian, Parveen, Shishir, Anupam, and Himanshu.
CHAPTER 1
INTRODUCTION
Modern competitive markets require that engineers design inexpensive and reli-
able systems and products. These requirements apply broadly across a variety of
businesses and products, ranging from small toys for children, to passenger cars
and space systems such as satellites or space stations. Reduced design cycle times and products characterized by lower prices, higher quality, and improved reliability are the driving factors behind the modern engineering design process. Engineers largely accomplish
these objectives through the use of better simulation models, continuously growing
in both complexity and fidelity. The modern design process is thus being increasingly viewed as one of simulation based design. Depending upon the complexity of the system to be designed, simulation based design can be practiced both in the conceptual and preliminary design stages and in the final detailed design stages, albeit with different fidelity design tools.
1.1 Design optimization
The computational speed of computers has increased exponentially during the last
50 years. This has led to the development of large-scale simulation tools, such as finite element methods and computational fluid dynamics codes, for the analysis of complex engineering systems. The availability of complex simulation models that provide a better representation of the actual physical system has provided engineers
with an opportunity to obtain improved designs. The process of obtaining optimal
designs is known as design optimization.
Increasingly the modern engineering community is employing optimization as
a tool for design. Optimization is used to find optimal designs characterized by lower cost while satisfying performance requirements. Typical engineering examples include minimizing the weight of a cantilever beam while satisfying constraints on maximum stress and allowable deflection, maximizing the lift on an aircraft subject to a constraint on acceptable range, and so on. The basic paradigm in design optimization is to find a set of design variables that optimizes an objective function
while satisfying the performance constraints.
Most engineers, when using optimization for design purposes, assume that the
design variables in the problem are deterministic. In this dissertation, this is referred
to as deterministic design optimization. A deterministic design optimization does
not account for the uncertainties that exist in modeling and simulation, manufactur-
ing processes, design variables and parameters, etc. However, a variety of different
kinds of uncertainties are present and need to be accounted for appropriately in the
design optimization process.
1.2 Reliability based design optimization
In a deterministic design optimization, the designs are often driven to the limit of
the design constraints, leaving little or no latitude for uncertainties. The resulting
deterministic optimal solution is usually associated with a high chance of failure of the artifact being designed, due to the influence of uncertainties inherently present
during the modeling and manufacturing phases of the artifact and due to uncertain-
ties in the external operating conditions of the artifact. The uncertainties include
variations in certain parameters, which are either controllable (e.g. dimensions) or
uncontrollable (e.g. material properties), and model uncertainties and errors asso-
ciated with the simulation tools used for simulation based design [44].
Uncertainties in simulation based design are inherently present and need to be
accounted for in the design optimization process. Uncertainties may lead to large
variations in the performance characteristics of the system and a high chance of fail-
ure of the artifact. Optimized deterministic designs determined without considering
uncertainties can be unreliable and might lead to catastrophic failure of the artifact
being designed. Robust design optimization and reliability based design optimization
are methodologies that address these problems. Robust designs are designs at which
the variation in the performance functions is minimal. Reliable designs are designs
at which the chance of failure of the system is low. It is extremely desirable that
engineers design for robustness and reliability, as it helps in obtaining large market shares for products under competitive economic conditions. This dissertation specifically focuses on reliability based design optimization problems.
Reliability based design optimization (RBDO) deals with obtaining optimal de-
signs characterized by a low probability of failure. In RBDO problems, there is a
trade-off between obtaining higher reliability and lowering cost. The first step in
RBDO is to characterize the important uncertain variables and the failure modes.
In most engineering applications, the uncertainty is generally characterized using
probability theory. The probability distributions of the random variables are ob-
tained using statistical models. In designing artifacts with multiple failure modes,
it is important that an artifact be designed such that it is sufficiently reliable with respect to each of the critical failure modes or to the overall system failure. In a RBDO formulation, the critical failure modes in deterministic optimization are replaced with constraints on probabilities of failure corresponding to each of the failure modes, or with a single constraint on the system probability of failure. The reliability index, or the probability of failure corresponding to either a failure mode or the system, can be computed by performing a probabilistic reliability analysis. Some of the techniques used in reliability analysis are the first order reliability method (FORM), the second order reliability method (SORM), and Monte Carlo simulation (MCS) techniques [20].
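To make the FORM calculation concrete, the following is a minimal sketch of a first order reliability analysis: the most probable point (MPP) of failure is found as the point on the limit state surface closest to the origin in standard normal space, and its distance gives the reliability index. The limit state function below is a hypothetical linear example, assumed already transformed to standard normal variables; it is illustrative only, not the formulation used later in this dissertation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical limit state in standard normal (u) space; g(u) <= 0 denotes failure.
def g(u):
    return 3.0 - u[0] - u[1]

# FORM: the MPP of failure is the point on g(u) = 0 closest to the origin;
# its distance from the origin is the reliability index beta.
res = minimize(lambda u: u @ u, x0=np.array([1.0, 1.0]),
               constraints={'type': 'eq', 'fun': g})
beta = np.sqrt(res.fun)          # reliability index
pf = norm.cdf(-beta)             # first order estimate of the failure probability
print(f"beta = {beta:.3f}, pf = {pf:.3e}")
```

For this linear limit state the FORM estimate is exact; for nonlinear limit states it is a first order approximation, which is what motivates SORM corrections and MCS verification.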
Traditionally, researchers have formulated RBDO as a nested optimization prob-
lem (also known as a double-loop method). Such a formulation is, by nature, com-
putationally expensive because of the inherent computational expense required for
the reliability analysis, which itself involves the solution to an optimization problem.
Solving such nested optimization problems is cost prohibitive, especially for mul-
tidisciplinary systems, which are themselves computationally intensive. Moreover,
the computational cost associated with RBDO grows exponentially as the number of
random variables and the number of critical failure modes increase. To alleviate the
high computational cost, researchers have developed sequential RBDO methods. In
these methods, a deterministic optimization and a reliability analysis are decoupled,
and the procedure is repeated until desired convergence is achieved. However, such
techniques are not provably convergent and may yield spurious optimal designs.
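A minimal sketch of the nested (double-loop) structure, reusing the FORM idea from the sketch above, may make the expense concrete: every evaluation of the reliability constraint in the outer design loop triggers an inner optimization for the reliability index. The limit state, cost model, and target index here are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import minimize

beta_target = 2.0   # hypothetical target reliability index

def g(u, d):
    # Hypothetical limit state in standard normal space, parameterized by design d.
    return d[0] + d[1] - u[0] - u[1]

def beta(d):
    # Inner loop: a full FORM reliability analysis per constraint evaluation.
    res = minimize(lambda u: u @ u, x0=np.ones(2),
                   constraints={'type': 'eq', 'fun': lambda u: g(u, d)})
    return np.sqrt(res.fun)

# Outer loop: design optimization with the reliability index as a constraint.
res = minimize(lambda d: d[0] + d[1], x0=[3.0, 3.0],
               bounds=[(0.5, 5.0)] * 2,
               constraints={'type': 'ineq', 'fun': lambda d: beta(d) - beta_target})
print(res.x, res.fun)
```

Each outer iteration multiplies the cost of the inner reliability analyses; removing this nesting is precisely what the unilevel and decoupled formulations developed in this dissertation aim to do.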
For decades, uncertainty has been formulated solely in terms of probability the-
ory. Such representation is now being questioned. This is because several other
mathematical theories, distinct from probability theory, are demonstrably capable
of characterizing situations under uncertainty[33, 49]. The risk assessment com-
munity has recognized different types of uncertainties and has argued that it is
inappropriate to represent all of them solely by probabilistic means when stochastic
information is not available. This is typically referred to as epistemic uncertainty.
Therefore, a need exists to develop a methodology to perform reliability based design
optimization under epistemic uncertainty.
1.3 Research Objectives
This dissertation investigates and develops formulations and methodologies for re-
liability based design optimization (RBDO). The main focus is to develop method-
ologies that are computationally efficient and mathematically robust. Efforts are focused on reducing the cost of the optimization by developing innovative formulations for RBDO. An efficient and robust unilevel formulation in which the lower
level optimization problem is replaced at the system level by Karush-Kuhn-Tucker
(KKT) optimality conditions is developed. In addition to the unilevel formulation,
a novel sequential methodology for RBDO is developed to address the concerns with
some of the existing sequential RBDO methodologies. In this investigation a frame-
work for design optimization under epistemic uncertainty is also developed. The
epistemic uncertainty is quantied using Dempster-Shafer theory.
1.3.1 Decoupled methodology for reliability based design optimization
In this dissertation, a new decoupled methodology for reliability based design opti-
mization is developed. Methodologies based on similar thoughts have been devel-
oped by other researchers [13, 72]. In these methodologies, a deterministic opti-
mization and a reliability analysis are performed separately, and the procedure is
repeated until desired convergence is achieved. Such techniques are referred to as
sequential or decoupled RBDO techniques in the rest of the manuscript. The se-
quential RBDO techniques offer a practical means of obtaining a consistent reliable
design at considerably reduced computational cost. In the methodology developed,
the sensitivities of the Most Probable Point (MPP) of failure with respect to the
decision variables are computed to update the MPPs during the deterministic op-
timization phase of the new RBDO approach. The MPP update is based on the
first order Taylor series expansion around the design point from the last cycle. The MPP update is found to be extremely accurate, especially in the vicinity of the point from the previous cycle. The methodology not only finds the true optimal
solution but also the exact MPPs of failure, which is important to ensure that the
target reliability index is satised.
1.3.2 Unilevel methodology for reliability based design optimization
In this dissertation, a new unilevel formulation for performing RBDO is developed.
The proposed formulation provides improved robustness and provable convergence
as compared to a unilevel variant given by Kuschel and Rackwitz[36]. The formu-
lation given by Kuschel and Rackwitz[36] replaces the direct first order reliability
method (FORM) problems (lower level optimization in the reliability index method
(RIA)) by their rst order necessary KKT optimality conditions. The FORM prob-
lem in RIA is numerically ill-conditioned [69]; the same is true for the formulation
given by Kuschel and Rackwitz[36]. In this research, the basic idea is to replace the
inverse FORM problem (lower level optimization in the performance measure ap-
proach (PMA)) by its first order Karush-Kuhn-Tucker (KKT) necessary optimality
conditions at the upper level optimization. It was shown in Tu et al [69] that PMA
is robust in terms of probabilistic constraint evaluation. The method developed in
this work is shown to be computationally equivalent to the original nested opti-
mization problem if the lower level optimization problem is solved by satisfying the
KKT necessary condition (which is what most numerical optimization algorithms
actually do). The unilevel method developed in this investigation is observed to be
more robust and has a provably convergent structure as compared to the one given
in Kuschel and Rackwitz[36].
1.3.3 Application of continuation methods in unilevel reliability based design op-
timization formulation
Optimization problems accompanied by many equality constraints are usually difficult to solve with most commercially available optimization algorithms. The unilevel
formulation for RBDO developed in this investigation is usually accompanied by a
large number of equality constraints which can cause numerical instability. Continu-
ation methods are employed to relax the constraints and to obtain a relaxed feasible
design. A series of less difficult optimization problems is solved for different values of the continuation parameter. The relaxed problem is continuously deformed to find the solution to the original problem.
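As an illustration of this idea, the sketch below relaxes a single equality constraint to a band of width t and shrinks t toward zero, warm starting each subproblem from the previous solution. The objective, the constraint, and the schedule for the continuation parameter are all hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def f(x): return (x[0] - 2.0)**2 + (x[1] - 1.0)**2   # hypothetical objective
def h(x): return x[0]**2 + x[1]**2 - 2.0             # hypothetical equality constraint

x = np.array([3.0, 3.0])
for t in (1.0, 0.1, 0.01, 0.0):                      # continuation schedule
    # Relax h(x) = 0 to -t <= h(x) <= t and re-solve from the last solution.
    cons = [{'type': 'ineq', 'fun': lambda x, t=t: t - h(x)},
            {'type': 'ineq', 'fun': lambda x, t=t: t + h(x)}]
    x = minimize(f, x, constraints=cons).x
print(x)   # solution of the original equality-constrained problem
```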
1.3.4 Reliability based design optimization under epistemic uncertainty
A traditional RBDO using probability theory typically requires complete statistical
information of the uncertainties. However, there exist cases where all the uncertain parameters in a system cannot be described probabilistically. Such uncertainties, known as epistemic or subjective uncertainties, are usually extremely difficult to characterize by mathematical means. In this dissertation, epistemic un-
certainty associated with the disciplinary design tools and the input parameters
in terms of the uncertain measures is quantified using Dempster-Shafer theory. A
trust region managed sequential approximate optimization (SAO) framework is used
for driving reliability based design optimization under epistemic uncertainty. The
uncertain measures of belief and plausibility provided by evidence theory are discon-
tinuous functions. In order to use gradient based optimization techniques, response
surface approximations are employed to smooth the discontinuous uncertain mea-
sures. Results indicate the methodology is effective in locating reliable designs under
epistemic uncertainty.
1.4 Overview of the dissertation
This dissertation is organized as follows. In chapter 2, a deterministic optimization
problem formulation is presented, an overview of multidisciplinary systems is given
and some multidisciplinary algorithms are discussed.
Chapter 3 presents an overview of the different kinds of uncertainties in simulation based design and different uncertainty modeling theories. The details of reliabil-
ity analysis using probability theory and uncertainty analysis using Dempster-Shafer
theory are described.
Chapter 4 presents a typical reliability based design optimization formulation
that employs probabilistic reliability analysis for reliability calculation. Background work in this area by other researchers is detailed, along with issues related to the high computational cost of reliability based design optimization, the numerical robustness of the formulations, and spurious optimal designs.
In chapter 5, a new sequential methodology for traditional reliability based de-
sign optimization is presented. The main optimization and the reliability analysis
corresponding to the failure modes are decoupled from each other. The methodology
is computationally efficient compared to the nested approach.
In chapter 6, a unilevel methodology for reliability based design optimization
is developed. The lower level optimization is replaced at the system level by its KKT optimality conditions. The methodology is computationally efficient with respect to the traditional RBDO approaches.
Chapter 7 presents continuation methods for unilevel RBDO methodology. The
relaxation of the unilevel formulation using the continuation parameter, identifying
an initial feasible solution, and the deformation of the relaxed problem into the
original problem are discussed.
In chapter 8, a framework for reliability based design optimization under epis-
temic uncertainty is presented. Research efforts in the area of epistemic uncertainty quantification are discussed, and how epistemic uncertainty can be accounted for in design optimization is detailed. The methodology is described in application to some test problems.
In chapter 9, the advantages and limitations of the reliability based design op-
timization methodologies developed in this investigation are presented. Important
conclusions are drawn and some future work in this area is recommended.
CHAPTER 2
DESIGN AND OPTIMIZATION
Necessity is the mother of invention. A need or problem encourages creative efforts to meet the need or solve the problem. In the engineering world, meeting a need or solving a problem involves a number of activities, including analysis, design, fabrication, sales, research, and the development of systems. In this dissertation, the main subject is the design of systems, a major field of the engineering profession. The process
of designing systems has been developed and used for centuries, and the existence
of buildings, bridges, highways, automobiles, airplanes, space vehicles and other
complex systems is an excellent testimonial.
Design is an iterative process. The designer's experience, intuition, and ingenuity are required in the design of systems in most fields of engineering (aerospace, automotive, civil, chemical, industrial, electrical, mechanical, and so on). Iterative implies analyzing several trial systems in sequence before an acceptable design is obtained. Engineers strive to design the best systems and, depending on the specifications, best can have different connotations for different systems. In general, it implies cost effective, efficient, reliable, and durable systems. The process can involve teams of specialists from different disciplines requiring considerable interaction.
Design, in engineering terminology, is transformed into specifications and requirements. The requirements can be expressed in terms of mathematical constraints. The region delimited by the constraints is known as the feasible region.
The designer is faced with the challenge of designing artifacts that are consistent
with the set of constraints. Competitive pressures continue to force product im-
provement demands on engineering and design departments. An improved design
is one that complies with the same requirements, but improves the value of the
merit function. When the constraints and merit function can be expressed in an
explicit mathematical form and they are functions of quantifiable characteristics,
then a mathematical optimization problem can be posed. Being able to optimize a
product for a desired performance outcome in the pre-design phase can mean more
time for product innovation and shorter time to market.
The advent of high speed computing has led to the development of large scale
simulation models for complex engineering systems. Typical applications include
models of nature for weather forecasting, stock market models for investment deci-
sion making, engineering models for analyzing complex flow patterns over an airfoil,
nuclear power plant models, and so on. The modern design engineer frequently
employs advanced simulation models for design applications.
The development of good simulation models has allowed engineers to design better systems and products. The modern engineering design process is, therefore, viewed as simulation based design. A prospective design can be identified much more easily, long before an actual prototype is built. The designer can locate improved designs that are consistent with the constraints and have a better merit function value by simply varying the design inputs and executing the simulation model. In numerical optimization, the process of locating an improved design is automated, and advanced mathematical algorithms are employed to locate better designs efficiently.
2.1 Deterministic design optimization
In a deterministic design optimization, the designer seeks the optimum values of
design variables for which the merit function is the minimum and the deterministic
constraints are satisfied. A typical deterministic design optimization problem can
be formulated as
$$
\begin{aligned}
\min \quad & f(d, p, y(d, p)) && (2.1)\\
\text{subject to} \quad & g^R_i(d, p, y(d, p)) \geq 0, \quad i = 1, \ldots, N_{hard}, && (2.2)\\
& g^D_j(d, p, y(d, p)) \geq 0, \quad j = 1, \ldots, N_{soft}, && (2.3)\\
& d^l \leq d \leq d^u, && (2.4)
\end{aligned}
$$
where $d$ are the design variables and $p$ are the fixed parameters of the optimization problem. $g^R_i$ is the $i$th hard constraint that models the $i$th critical failure mechanism of the system (e.g., stress, deflection, loads, etc.). $g^D_j$ is the $j$th soft constraint that models the $j$th deterministic constraint due to other design considerations (e.g., cost, marketing, etc.). The design space is bounded by $d^l$ and $d^u$. If $g^R_i < 0$ at a given design $d$, then the artifact is said to have failed with respect to the $i$th failure mode. The complete failure of the artifact depends on how all the failure modes contribute to what is known as the system failure of the artifact.
The merit function and the constraints in the above formulation are explicit
functions of d, p, and y(d, p). y(d, p) are the outputs of analysis tools that are used
to predict performance characteristics of an artifact. These intermediate quantities
are referred to in this study as state variables, and the analysis tools are referred to as contributing analyses.
Though a clear distinction is made between hard and soft constraints, deterministic design optimization treats both types of constraints similarly, and the failure of the artifact due to the presence of uncertainties is not taken into consideration. The distinction between hard and soft constraints is made to facilitate the
introduction of concepts on reliability analysis and reliability based design optimiza-
tion in subsequent chapters. It should be noted that equality constraints could also
be included in the optimization formulation, but have been omitted here without
any loss of generality.
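A minimal sketch of Eqns. (2.1)-(2.4) in code may help fix ideas. The merit function, the single hard and soft constraints, and the state variable below are hypothetical stand-ins for the outputs of real contributing analyses.

```python
from scipy.optimize import minimize

p = 1.0                                    # fixed parameter
def y(d):   return p * d[0] * d[1]         # hypothetical state variable y(d, p)
def f(d):   return d[0] + 2.0 * d[1]       # merit function, Eq. (2.1)
def g_R(d): return y(d) - 1.0              # hard constraint g_R >= 0, Eq. (2.2)
def g_D(d): return 5.0 - d[0] - d[1]       # soft constraint g_D >= 0, Eq. (2.3)

res = minimize(f, x0=[1.0, 1.0],
               bounds=[(0.1, 4.0), (0.1, 4.0)],      # d_l <= d <= d_u, Eq. (2.4)
               constraints=[{'type': 'ineq', 'fun': g_R},
                            {'type': 'ineq', 'fun': g_D}])
print(res.x, res.fun)
```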
2.1.1 Multidisciplinary systems design
The concept of multidisciplinary problems, though inherent to most engineering design problems, is more evident when analyzing complex systems such as automobiles or aircraft systems. This class of problems is characterized by two or more disciplinary analyses, such as controls, structures, and aerodynamics in an aeroservoelastic structure. Each discipline analysis can be a simulation tool or a collection
of simulation tools which are coupled through performance inputs and outputs of
the individual disciplines. Coupling between the various simulation tools exists in
the form of shared design variables and both input and output state performances.
The solution of these coupled simulation tools is referred to as a system analysis
(SA) or a multidisciplinary analysis (MDA). For a given set of independent vari-
ables that uniquely define the artifact, referred to as design variables, d, the SA
is intended to determine the corresponding set of attributes which characterize the
system performance, referred to as state variables (SVs), y.
An analysis of multidisciplinary systems often requires users to iterate between
individual disciplines until the states are converged. The convergence of the states is
based on some convergence criterion, which is typically based on the change in input
parameters. A typical multidisciplinary system with full coupling is illustrated in
Figure 2.1. Here the system analysis consists of three disciplines or contributing
analyses (CAs). Each contributing analysis (CA) makes use of a simulation based
[Figure 2.1. Model of a multidisciplinary system analysis: the design variables and parameters (d, p) drive three contributing analyses (CA1, CA2, CA3), which exchange state information (y1, y2, y3) within the system analysis (SA) until the converged state variables y are obtained.]
discipline design tool. State information is exchanged between the three CAs and
an iterative solution is pursued until the states are converged and consistent. A
consistent design refers to one in which the output states y are not in contradiction
with the system of discipline equations (2.5) for a given design d.
A single evaluation of each discipline is referred to as a contributing analysis (CA).
In general, a CA in a multidisciplinary system can be expressed as
$$y_i = CA_i(d_i, d_s, y^c_i) \qquad (2.5)$$
The inputs for $CA_i$ are the discipline design vector $d_i$, the vector of shared variables $d_s$, and the vector of input states from the other CAs, $y^c_i$. The output state vector is $y_i$.
In general, a multidisciplinary system with nss CAs can be expressed as
$$y = SA(d, p), \qquad (2.6)$$
$$\text{where} \quad d = \{d_1, d_2, \ldots, d_{nss}, d_s\}, \qquad (2.7)$$
$$y = \{y_1, y_2, \ldots, y_{nss}\}. \qquad (2.8)$$
It might be possible to have more than one solution (or no solution may exist) for a
given set of design variables and parameters. However, this is not very common in
practice. A coupled system analysis might take several iterations to converge and
therefore can be extremely expensive to evaluate.
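The fixed point iteration underlying a system analysis can be sketched as follows for two hypothetical contributing analyses; real CAs would be expensive simulation codes, which is why each converged SA is costly.

```python
import numpy as np

# Two hypothetical coupled contributing analyses:
#   y1 = CA1(d, y2),  y2 = CA2(d, y1)
def CA1(d, y2): return d[0] + 0.3 * y2
def CA2(d, y1): return d[1] - 0.2 * y1

def system_analysis(d, tol=1e-10, max_iter=200):
    y1 = y2 = 0.0
    for _ in range(max_iter):
        y1_new = CA1(d, y2)
        y2_new = CA2(d, y1_new)          # Gauss-Seidel sweep: reuse the fresh y1
        if abs(y1_new - y1) + abs(y2_new - y2) < tol:
            return np.array([y1_new, y2_new])   # converged, consistent states
        y1, y2 = y1_new, y2_new
    raise RuntimeError("system analysis did not converge")

print(system_analysis(np.array([1.0, 2.0])))
```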
2.1.2 Multidisciplinary design optimization
The optimization of multidisciplinary problems is known as multidisciplinary design
optimization (MDO). To perform optimization, the sensitivities of the converged SVs
can be calculated by solving the global sensitivity equations (GSEs) [55] obtained
by the implicit function differentiation rule. The total sensitivity can be calculated by using the following expressions after a SA has been performed:
$$\nabla_d\, y = C^{-1}\, \partial_d\, y, \qquad (2.9)$$
$$\nabla_p\, y = C^{-1}\, \partial_p\, y, \qquad (2.10)$$
where $\nabla_d$ and $\nabla_p$ are the total differential operators with respect to $d$ and $p$, whereas $\partial_d$ and $\partial_p$ are the partial differential operators with respect to $d$ and $p$, respectively. $C$ is the coupling matrix obtained from the global sensitivity equations. $y$ can be divided into $nss$ components $y_1, y_2, \ldots, y_{nss}$, which are the outputs of $CA_1, CA_2, \ldots, CA_{nss}$, respectively. $C$ (which is a function of $d$ and $p$) is a matrix of dimension $M \times M$, where $M$ is the length of $y$, and consists of an array of rectangular matrices
and identity matrices as shown below.
$$C = \begin{bmatrix}
I & -\dfrac{\partial y_1}{\partial y_2} & \cdots & -\dfrac{\partial y_1}{\partial y_{nss}} \\[6pt]
-\dfrac{\partial y_2}{\partial y_1} & I & \cdots & -\dfrac{\partial y_2}{\partial y_{nss}} \\[6pt]
\vdots & \vdots & \ddots & \vdots \\[6pt]
-\dfrac{\partial y_{nss}}{\partial y_1} & -\dfrac{\partial y_{nss}}{\partial y_2} & \cdots & I
\end{bmatrix} \qquad (2.11)$$
The SA and sensitivities from the solution of GSE can be used to perform the de-
terministic design optimization (Eqns. (2.1)-(2.4)), using many standard nonlinear
programming techniques.
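For the two-CA sketch above, the GSE reduce to a small linear solve; the numbers below are the partial derivatives of those hypothetical CAs, so the result can be checked against direct differentiation of the converged states.

```python
import numpy as np

# Partial derivatives of the two hypothetical CAs from the earlier sketch:
#   y1 = d1 + 0.3*y2  ->  dy1/dy2 = 0.3,  partial dy1/dd = [1, 0]
#   y2 = d2 - 0.2*y1  ->  dy2/dy1 = -0.2, partial dy2/dd = [0, 1]
C = np.array([[1.0, -0.3],
              [0.2,  1.0]])               # coupling matrix, Eq. (2.11)
partial = np.array([[1.0, 0.0],
                    [0.0, 1.0]])          # partial y / partial d
total = np.linalg.solve(C, partial)       # total sensitivities, Eqns. (2.9)-(2.10)
print(total)   # total[0, 0] = 1/1.06, the exact dy1/dd1 for this coupled system
```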
2.1.3 MDO Algorithms
The optimization of a multidisciplinary system is usually computationally expensive
because of the cost associated with each SA evaluation. To reduce the computational
cost of MDO, a variety of different approaches have been developed over the last 20 years. The vast majority of them take advantage of the multidisciplinary nature of the problem to divide it into multilevel subproblems for solution.
Alexandrov and Lewis [4] categorized the MDO approaches based on two different perspectives: a structural perspective and an algorithmic perspective. The structural perspective comprises approaches where the original problem is decom-
posed based on individual disciplines (structures) of the problem. The main idea
behind this approach is to provide disciplinary autonomy, with each discipline solving an optimization subproblem while a system level optimization coordinates the procedure. The
advantage of this approach is that structural organization of the problem is main-
tained and the coordination between individual disciplines is reduced. This results
in a bi-level formulation for these approaches. Well known examples of structural
decomposition approaches for MDO are the optimization by linear decomposition
(OLD) introduced in Sobieszczanski-Sobieski[63], concurrent subspace optimization
(CSSO) introduced in Sobieszczanski-Sobieski [64], collaborative optimization (CO), proposed in Kroo [34] and Braun and Kroo [8], and the recently introduced bi-level integrated system synthesis (BLISS) of Sobieszczanski et al. [62].
In the algorithmic perspective, the MDO problem is reformulated to take ad-
vantage of the optimization algorithms. The idea here is to perform reliable and efficient optimization. Under this perspective, robustness and efficiency in the optimization are more important than disciplinary autonomy. The formulations that arise
are single level approaches. Some of the MDO methods that fall in this category
are the simultaneous analysis and design (SAND) [25] or the all-at-once approach
[16] and the individual discipline feasible (IDF) approaches [38, 16, 17].
2.2 Summary
In a deterministic design optimization, the basic idea is to optimize a merit function
subject to deterministic constraints and design variable bounds. The merit function
and constraints explicitly depend on intermediate variables also called state vari-
ables. Multidisciplinary systems require iteration between various disciplines until
the states are converged and consistent. The solution to multidisciplinary systems
typically requires iterative root solving methods. The sensitivities of the state variables can be efficiently obtained by solving the global sensitivity equations (GSE), which are based on the implicit differentiation rule.
CHAPTER 3
UNCERTAINTY: TYPES AND THEORIES
Uncertainties are inherently present in any form of simulation based design. Over
the last few years there has been an increased emphasis on accounting for the various forms of uncertainty that are introduced in mathematical models and simulation tools. Engineers, scientists, and decision makers are differentiating and characterizing the different forms of uncertainty. Various representations of uncertainty exist, and it is very important that each one of them is accounted for
by appropriate means depending upon the information available.
For decades, uncertainty has been formulated in terms of probability theory.
Such representation is now being questioned. This is because several other math-
ematical theories, distinct from probability theory, are demonstrably capable of
characterizing situations under uncertainty[33, 49]. The risk assessment community
has recognized different types of uncertainties and has argued that it is inappropri-
ate to represent all of them solely by probabilistic means when enough information
is not available.
3.1 Classification of uncertainties
Advances in computational capabilities have led to the development of large scale
simulation tools for use in design. It is important that the uncertainties in mathe-
matical models (i.e., simulation tools) used for nondeterministic engineering systems
design and optimization are quantified appropriately. The nondeterministic nature
of the mathematical model of the system exists because: a) the system responses
of the model can be non-unique due to the existence of uncertainties in the input
parameters of the model, or b) there are multiple alternative mathematical models
for the system and the environment. The simulation tool is deterministic in the
sense that for all input data, the simulation tool gives unique values of response
quantities[47].
In general, a distinction can be made between aleatory uncertainty (also re-
ferred to as stochastic uncertainty, irreducible uncertainty, inherent uncertainty,
variability), epistemic uncertainty (also referred to as reducible uncertainty, sub-
jective uncertainty, model form uncertainty or simply uncertainty) and numerical
uncertainty (also known as error) [43, 47]. Oberkampf et al. [45] have described various methods for estimating the total uncertainty by identifying all possible sources
of variability, uncertainty, and error in mathematical models and simulation tools.
Numerical models of engineering systems, such as finite element analysis (FEA) or computational fluid dynamics (CFD), are extensively used by designers. The
uncertainty in these simulation tools comes from a variety of sources (see Figure
3.1). First, there is epistemic uncertainty when a physical model is converted into
a mathematical model because all the non-linearity of the physical model cannot be
exactly transformed into mathematical equations. Second, there is uncertainty (this
can be either aleatory or epistemic) in the data that are inputs to the system. Third, the mathematical equations can be solved using a variety of techniques, and usually these different methods provide slightly different results. For example, simple an-
alytic models for beam theory make the assumption that plane sections through a
beam, taken normal to the axis, remain plane after the beam is subject to bending.
A more rigorous solution from the mathematical theory of elasticity would show
Figure 3.1. Sources of Uncertainty
that a slight warpage of these planes can occur. Thus, we can have different fidelity models to carry out the required analysis, with each fidelity model giving a different result. This is commonly referred to as model form uncertainty. Hence there is uncertainty in the computer model used. Last but not least, there is numerical error due to round-off.
Some of the modern uncertainty theories are the theory of fuzzy sets[75], Dempster-
Shafer theory[22], possibility theory[18], and the theory of upper and lower previsions[71].
Some of these theories only deal with epistemic uncertainty; most deal with both
epistemic and aleatory uncertainty; and some deal with other varieties of uncertainty and logic appropriate for artificial intelligence and expert systems. Many of
these new representations of uncertainty are able to more accurately represent epis-
temic uncertainty than is possible with traditional probability theory. Engineering
applications of some of these theories can be found in recent publications[23, 10].
3.1.1 Aleatory Uncertainty
Variation-type uncertainties, also known as aleatory uncertainty, describe the inherent variation of the physical system. Such variation is usually due to the random nature of the input data. It can occur in the form of manufacturing tolerances or uncontrollable variations in the external environment. These uncertainties are usually modeled as random phenomena, which are characterized by probability distributions. The probability distributions are constructed using the relative frequency of occurrence of events, which requires a large amount of information.
Most often such information does not exist, and designers usually make assumptions for the characteristics (means, variances, correlation coefficients) of the random phenomena causing the variation.
One can have the following different cases:
(i) The bounds on certain variations are known exactly, but the probability density functions or distributions governing the variations are not known. For example, the maximum and minimum temperatures under which an artifact may operate are known, but not the characteristics of the underlying random process.
(ii) Individual probability density functions of variations in certain parameters or design variables are known, but the correlations among them are unknown.
In most engineering applications, even though it is reasonable to represent the
stochastic uncertainty (variability) associated with design variables and parameters
by probabilistic means, there are places where such a representation is questionable
or not adequate. There are cases where some of the inputs to the system are known
to exist in interval(s) and nothing else is known about the distribution, mean or
variance. Sometimes data is available only as discrete points obtained from previous
experiments. When such inputs are part of an engineering system, it is extremely difficult to estimate the corresponding uncertainty for the outputs of the system and to use it for design applications. Such uncertainties, characterized by incomplete information, can be represented using techniques such as convex models of uncertainty, interval methods, possibility theory, etc.
3.1.2 Epistemic Uncertainty
Epistemic uncertainty, also known as subjective uncertainty, arises due to ignorance, lack of knowledge, or incomplete information. In engineering systems, the epistemic
uncertainty can be either parametric or model-based. Epistemic uncertainty also
arises in decision making.
Parametric and Tool Uncertainty
In engineering systems, the epistemic uncertainty is mostly either parametric or
model-based. Parametric uncertainty is associated with the uncertain parameters
for which the information available is sparse or inadequate. Model-form uncertainty, also known as tool uncertainty, is associated with improper models of the system due to a lack of knowledge of the physics of the system. In some applications, models (tools) are conservative, consistently over-predicting or under-predicting certain characteristics. For example, in structural dynamics, the use of a consistent mass matrix is known to consistently overestimate the natural frequencies of a given structure, whereas using a lumped mass matrix does not follow such a trend. Usually, the model uncertainty is lower for higher fidelity analysis tools.
Model uncertainty also results from the selection of different mathematical models to simulate different conditions. It arises because of the lack of information about the conditions, or the range of conditions, the system could operate under, and also due to the lack of a unified modeling technique. An example is employing different models for laminar and turbulent flows in a typical fluid mechanics application.
Uncertainty related to decision making
Uncertainty associated with decision making is also known as vagueness and imprecision. In a decision making problem, which is mostly a multiobjective optimization problem, it is usually not possible to minimize all objectives (or satisfy all goals), due to constraints and conflicting objectives. In such circumstances, designers usually make decisions based on which objectives to trade off, or how to perform the trade-off. Design optimization under such situations is done using traditional multiobjective optimization methods, fuzzy multiobjective optimization, and preference based design. Fuzzy sets are typically used to model such multiobjective problems.
3.1.3 Error (Numerical Uncertainty)
Error, also known as numerical uncertainty, is commonly associated with the numerical models used for modeling and simulation. Some common examples of such errors are the error tolerance in the convergence of a coupled system analysis, round-off errors, truncation errors, and errors associated with the solution of ODEs and PDEs, which typically involves discretization schemes.
3.2 Uncertainty modeling techniques
Uncertainties can be quantified using various uncertainty theories. Some of them are probability theory, Dempster-Shafer theory, convex models of uncertainty, and possibility or fuzzy set theory. Probability theory is popularly employed to model uncertainties, especially when sufficient data is available. A brief description of some of these uncertainty theories is presented below.
3.2.1 Probability Theory
Probability theory represents the uncertainties as random variables. In general, the
variables can be both discrete and continuous. In this dissertation, we will consider
only continuous random variables. A random variable will be represented by an uppercase letter (e.g., X), and a particular realization of a random variable will be represented by a lowercase letter (e.g., x). The nature of randomness and the
information on probability is represented by the probability density function (PDF), $f_X(x)$. To calculate the probability of $X$ having a value between $x_1$ and $x_2$, the area under the PDF between these two limits needs to be calculated. This can be expressed as
$$P(x_1 < X \leq x_2) = \int_{x_1}^{x_2} f_X(x)\,dx. \qquad (3.1)$$
To calculate $P(X \leq x)$, which is specifically denoted as $F_X(x)$ and is known as the cumulative distribution function (CDF) or simply as the distribution function, the area under the PDF needs to be integrated for all possible values of $X$ less than or equal to $x$; in other words, the integration needs to be carried out theoretically from $-\infty$ to $x$, and can be expressed as
$$P(X \leq x) = F_X(x) = \int_{-\infty}^{x} f_X(x)\,dx. \qquad (3.2)$$
The CDF directly gives the probability of a random variable having a value less than or equal to a specific value. The PDF is the first derivative of the CDF and can be expressed as
$$f_X(x) = \frac{dF_X(x)}{dx}. \qquad (3.3)$$
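As a quick numeric illustration of Eqns. (3.1)-(3.2), the probability of a random variable falling in an interval follows directly from differences of its CDF; here a standard normal distribution is used as a stand-in example.

```python
from scipy.stats import norm

# P(x1 < X <= x2) = F_X(x2) - F_X(x1) for X ~ N(0, 1), per Eqns. (3.1)-(3.2)
x1, x2 = -1.0, 1.0
print(norm.cdf(x2) - norm.cdf(x1))   # ~0.6827
```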
The mean of $X$, denoted as $\mu$, the standard deviation of $X$, denoted as $\sigma$, and the correlation between random variables $X_1$ and $X_2$, denoted as $\rho_{12}$, are given by
$$\mu = \int x\, f_X(x)\,dx, \qquad (3.4)$$
$$\sigma^2 = \int (x - \mu)^2 f_X(x)\,dx, \qquad (3.5)$$
$$\rho_{12}\,\sigma_1\sigma_2 = \int\!\!\int (x_1 - \mu_1)(x_2 - \mu_2)\, f_{X_1,X_2}(x_1, x_2)\,dx_1\,dx_2, \qquad (3.6)$$
where $f_{X_1,X_2}(x_1, x_2)$ is the joint probability density function of $X_1$ and $X_2$. These statistical properties are usually obtained by sampling data.
Let us say there is a scalar function $g(X_1, X_2, \ldots, X_n)$. The expected value (mean) of $g$ can be calculated as
$$\mu_g = \int \cdots \int g(x_1, x_2, \ldots, x_n)\, f_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n)\; dx_1\, dx_2 \cdots dx_n. \qquad (3.7)$$
Similarly, the distribution and other statistical properties of $g$ can be obtained. However, this is not always possible when $g$ is obtained from expensive analysis tools. Approximate techniques can be used for this purpose. They are described in the next chapter.
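When $g$ comes from a cheap model, the moments in Eqns. (3.4)-(3.7) can be estimated directly by sampling, as in the minimal sketch below with hypothetical distributions; for expensive analysis tools this brute force route is exactly what becomes impractical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical response g(X1, X2) with X1 ~ N(10, 1) and X2 ~ N(5, 0.5)
def g(x1, x2):
    return x1 * x2 - 40.0

n = 100_000
x1 = rng.normal(10.0, 1.0, n)
x2 = rng.normal(5.0, 0.5, n)
samples = g(x1, x2)

mu_g = samples.mean()              # sampling estimate of Eq. (3.7)
sigma_g = samples.std(ddof=1)      # sampling estimate of the standard deviation
pf = np.mean(samples < 0.0)        # Monte Carlo estimate of P(g < 0)
print(mu_g, sigma_g, pf)
```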
3.2.2 Dempster-Shafer Theory
In this section, the basic concepts of evidence theory are summarized. The uncertain
measures provided by evidence theory are mentioned along with their advantages
and disadvantages.
Evidence theory uses two measures of uncertainty, belief and plausibility. In
comparison, probability theory uses just one measure, the probability of an event.
Belief and plausibility measures are determined from the known evidence for a
proposition without it being necessary to distribute the evidence to subsets of the
Figure 3.2. Belief (Bel) and Plausibility (Pl)[6]
proposition. This means that evidence in the form of experimental data or expert
opinion can be obtained for a parameter value within an interval. The evidence
does not assume a particular value within the interval or the likelihood of any value
with regard to any other value in the interval. Since there is uncertainty in the
given information, the evidential measure for the occurrence of an event and the
evidential measure for its negation do not have to sum to unity, as shown in Figure 3.2 ($Bel(\mathcal{A}) + Bel(\bar{\mathcal{A}}) \neq 1$).
The basic measure in evidence theory is known as the Basic Probability Assign-
ment (BPA). It is a function $m$ that maps the power set ($2^{\mathcal{U}}$) of the universal set $\mathcal{U}$ (also known as the frame of discernment) to $[0, 1]$. The power set simply represents all the possible subsets of the universal set $\mathcal{U}$. Let $\mathcal{E}$ represent some event which is a subset of the universal set $\mathcal{U}$. Then $m(\mathcal{E})$ refers to the BPA corresponding exactly to the event $\mathcal{E}$, and it expresses the degree of support of the evidential claim that the true alternative (prediction, diagnosis, etc.) is in the set $\mathcal{E}$ but not in any special subset of $\mathcal{E}$. Any additional evidence supporting the claim that the true alternative is in a subset of $\mathcal{E}$, say $\mathcal{A} \subset \mathcal{E}$, must be expressed by another nonzero value $m(\mathcal{A})$.
$m(\mathcal{E})$ must satisfy the following axioms of evidence theory:
(1) $m(\mathcal{E}) \geq 0$ for any $\mathcal{E} \in 2^{\mathcal{U}}$
(2) $m(\emptyset) = 0$
(3) $\sum_{\mathcal{E} \in 2^{\mathcal{U}}} m(\mathcal{E}) = 1$
where $\emptyset$ denotes the empty set. All the possible events $\mathcal{E}$ which are subsets of the universal set $\mathcal{U}$ ($\mathcal{E} \subseteq \mathcal{U}$) and have $m(\mathcal{E}) > 0$ are known as the focal elements. The pair $\langle \mathcal{F}, m \rangle$, where $\mathcal{F}$ denotes the set of all focal elements induced by $m$, is called a body of evidence.
The measures of uncertainty provided by evidence theory are known as belief (Bel) and plausibility (Pl). Once a body of evidence is given, these measures can be obtained by using the following formulas:
$$Bel(\mathcal{A}) = \sum_{\mathcal{B} \,|\, \mathcal{B} \subseteq \mathcal{A}} m(\mathcal{B}) \qquad (3.8)$$
$$Pl(\mathcal{A}) = \sum_{\mathcal{B} \,|\, \mathcal{B} \cap \mathcal{A} \neq \emptyset} m(\mathcal{B}) \qquad (3.9)$$
Observe that the belief of an event, $Bel(\mathcal{A})$, is calculated by summing the BPAs of the propositions that totally agree with the event $\mathcal{A}$, whereas the plausibility of an event is calculated by summing the BPAs of propositions that agree with the event $\mathcal{A}$ totally and partially. In other words, Bel and Pl give the lower and upper bounds of the event, respectively. They are related to each other by the following equation:
$$Pl(\mathcal{A}) + Bel(\bar{\mathcal{A}}) = 1 \qquad (3.10)$$
where $\bar{\mathcal{A}}$ represents the negation of the event $\mathcal{A}$. The BPA can be obtained from the belief measure with the following inverse relation:
$$m(\mathcal{A}) = \sum_{\mathcal{B} \,|\, \mathcal{B} \subseteq \mathcal{A}} (-1)^{|\mathcal{A} - \mathcal{B}|} Bel(\mathcal{B}) \qquad (3.11)$$
where $|\mathcal{A} - \mathcal{B}|$ is the cardinality of the set difference of $\mathcal{A}$ and $\mathcal{B}$.
Sometimes the available evidence can come from different sources. Such bodies
of evidence can be aggregated using existing rules of combination. Commonly used
combination rules are listed below [61].
(1) The Dempster rule of combination
(2) Discount + Combine method
(3) Yager's modified Dempster's rule
(4) Inagaki's unified combination rule
(5) Zhang's center combination rule
(6) Dubois and Prade's disjunctive consensus rule
(7) Mixing or averaging
Dempster-Shafer theory is based on the assumption that these sources are inde-
pendent. However, information obtained from a variety of sources needs to be
properly aggregated. There is always a debate about which combination rule is
most appropriate. Dempster's rule of combination (1) is one of the most popular
rules of combination used. It is given by the following formula:
rules of combination used. It is given by the following formula.
m(/) =

BC=A
m
1
(B)m
2
(()
1

BC=
m
1
(B)m
2
(()
, / ,= (3.12)
Dempster's rule has been subject to some criticism in the sense that it tends to com-
pletely ignore the conflicts that exist between the available evidence from different
sources. This is because of the normalization factor in the denominator. Thus, it is
not suitable for cases where there is a lot of inconsistency in the available evidence.
However, it is appropriate to apply when there is some degree of consistency or suf-
ficient agreement among the opinions of different sources. In the present research,
it will be assumed that there is some consistency in the available evidence from
different sources. Hence, Dempster's rule of combination will be applied to combine
evidence. When there is little or no consistency among the evidence from different
sources, it is appropriate to use the mixing or averaging rule [47].
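The combination rule and the evidential measures above are simple enough to
sketch in code. The following Python fragment is a minimal sketch, assuming BPAs
are stored as dictionaries that map focal elements (frozensets) to their assignments;
the two bodies of evidence at the end are invented for illustration.

    from itertools import product

    def dempster_combine(m1, m2):
        # Combine two BPAs by Dempster's rule, Eq. (3.12). Assumes the
        # sources are independent and not totally conflicting.
        combined, conflict = {}, 0.0
        for (b, mb), (c, mc) in product(m1.items(), m2.items()):
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc   # mass assigned to the empty set
        if conflict >= 1.0:
            raise ValueError("Total conflict: Dempster's rule is undefined")
        return {a: v / (1.0 - conflict) for a, v in combined.items()}

    def belief(m, a):
        # Bel(A), Eq. (3.8): sum of BPAs of subsets of A.
        return sum(v for b, v in m.items() if b <= a)

    def plausibility(m, a):
        # Pl(A), Eq. (3.9): sum of BPAs of sets intersecting A.
        return sum(v for b, v in m.items() if b & a)

    # Illustrative bodies of evidence over the frame U = {1, 2, 3}.
    m1 = {frozenset({1}): 0.6, frozenset({1, 2, 3}): 0.4}
    m2 = {frozenset({1, 2}): 0.7, frozenset({1, 2, 3}): 0.3}
    m = dempster_combine(m1, m2)
    A = frozenset({1})
    print(belief(m, A), plausibility(m, A))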
3.2.3 Convex Models of Uncertainty and Interval Analysis
In some cases, uncertain events form patterns that can be modeled using Convex
Models of Uncertainty [7]. Examples of convex models include intervals, ellipses, or
other convex sets. Convex models of uncertainty require less detailed information
to characterize uncertainties than a probabilistic model does. In design applications
they often call for a worst case analysis, which can be formulated as a constrained
optimization problem. Depending on the nature of the performance function, local
or global optimization techniques will be required. When the convex models are
intervals, techniques from interval analysis can be used.
3.2.4 Possibility/Fuzzy Set Theory Based Approaches
Fuzzy set theory [19] can be used to model uncertainties when there is little infor-
mation or sparse data. Conventional sets have fixed boundaries and are called
crisp sets, which are a special case of fuzzy sets. Let A be a fuzzy set or event (e.g., a
quantity is equal to 10) in a universe of discourse U (e.g., all possible values of the
quantity) and let x ∈ U; then the degree of membership of x in A is defined using a
membership function, also called the characteristic function, \mu_A(x) (e.g., a triangle-
shaped function with a peak of 1 at x = 10, non-zero only when 9 < x < 11, and
0 elsewhere).
Possibility theory can be used when there is insufficient information about ran-
dom variations. Possibility distributions can be assigned to such variations that
are analogous to cumulative distribution functions in probability theory. The basic
definitions in possibility theory associated with the possibility of a complement of an
event, or a union or intersection of events, are very different from those used in prob-
ability theory. The membership function associated with a fuzzy set can be assumed
to be a possibility distribution of that set. The possibility distribution (membership
function) of a function of a variable (fuzzy set) with a given possibility distribution
(membership function) can be found using Zadeh's extension principle, also called
the vertex method. The vertex method is based on combinatorial interval analysis;
its computational expense increases exponentially with the dimension of the uncertain
variables and increases with the nonlinearity of the function. The vertex method can
be used to find the induced preferences on performance parameters due to pre-
scribed preferences for design variables, and the possibility distributions of performance
parameters due to uncertain variables characterized by possibility distributions.
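A minimal sketch of the vertex method at a single interval level (e.g., one alpha-cut
of the membership functions) is given below; the function and the intervals are
invented for illustration. Note that evaluating only the 2^n vertices is exact only
when the function is monotonic in each variable over the box, which is the premise
of combinatorial interval analysis.

    from itertools import product

    def vertex_method(f, intervals):
        # Propagate interval inputs through f by the vertex method.
        # intervals: list of (lo, hi) pairs, one per uncertain variable.
        # Evaluates f at all 2^n vertex combinations, returns (min, max).
        vals = [f(*v) for v in product(*intervals)]
        return min(vals), max(vals)

    # Illustrative: alpha-cut intervals for two fuzzy variables.
    f = lambda x, y: x * y + x**2
    print(vertex_method(f, [(9.0, 11.0), (-1.0, 1.0)]))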
Researchers [11, 40, 10] have shown that using possibility theory can yield more
conservative designs as compared to probability theory. This is especially true when
the available information is scarce or when the design criterion is to achieve a low
probability or possibility of failure.
3.3 Optimization Under Uncertainty
A deterministic optimization formulation does not account for the uncertainties in
the design variables, parameters, and simulation models. Optimized designs
based on a deterministic formulation are usually associated with a high probability
of failure because of the violation of certain hard constraints, and can be subject
to failure in service. This is particularly true if the hard constraints are active at
the deterministic optimum. In today's competitive marketplace, it is very impor-
tant that the resulting designs are optimal and at the same time reliable. Hence,
it is extremely important that design optimization accounts for the uncertainties.
Some of the existing methodologies that account for the uncertainties during
optimization are discussed in the following sections.
3.3.1 Robust Design
In some applications, it is important to ensure that the performance function is
insensitive to variations. A robust design needs to be found for such applications.
In [52], a formulation for robust design optimization is presented, which corresponds
to finding designs with minimum variation of certain performance characteristics.
A Signal to Noise Ratio is maximized, where noise corresponds to
variation-type uncertainties and signal corresponds to a performance parameter,
PP. In Taguchi techniques, experimental arrays are used to conduct experiments
at various levels of control factors (design variables, d), and for each experimental
control setting a Signal to Noise Ratio is calculated. This usually requires another
experimental array for different settings of the uncertain variables, x. The Signal
to Noise Ratio (S/N) is calculated as

S/N(d_j) = -10 \log_{10}\left[\frac{1}{m}\sum_{i=1}^{m} \big(PP(d_j, x_i)\big)^2\right].   (3.13)
When the uncertainties are known probabilistically, a robust design optimization
corresponds to minimizing the variance of the performance function.
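The following short Python function evaluates the Signal to Noise Ratio in the
smaller-the-better form of Eq. (3.13) as reconstructed above; the four sampled PP
values are invented for illustration.

    import numpy as np

    def signal_to_noise(pp_values):
        # Smaller-the-better Taguchi S/N ratio, Eq. (3.13). pp_values are
        # the PP samples at the m noise settings x_i for one control
        # setting d_j.
        pp = np.asarray(pp_values, dtype=float)
        return -10.0 * np.log10(np.mean(pp**2))

    print(signal_to_noise([1.2, 0.9, 1.1, 1.0]))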
3.3.2 Reliability based design optimization
The basic idea in reliability based design optimization is to employ numerical opti-
mization algorithms to obtain optimal designs ensuring reliability. When the opti-
mization is performed without accounting for the uncertainties, certain hard constraints
that are active at the deterministic solution may lead to system failure. Figure 3.3
illustrates such a case, where the chance that the deterministic solution fails is about
75% due to uncertainties in the design variable settings. The reliable solution is
characterized by a slightly higher merit function value and is located inside the
feasible region. In most practical applications, the uncertainties are modeled using
probability theory. The probability of failure corresponding to a failure mode can be
obtained and posed as a constraint in the optimization problem to obtain safer designs.

Figure 3.3. Design trade-off in RBDO
3.3.3 Fuzzy Optimization
In fuzzy optimization, the uncertainties (fuzzy requirements or preferences) are mod-
eled by using fuzzy sets. If the performance parameters are PP_i and the corre-
sponding preferences are \mu_i, the design optimization problem is to maximize all the
preferences and hence is a multiobjective problem, usually called fuzzy program-
ming or fuzzy multi-objective optimization [60]. Usually all the preferences cannot
be simultaneously maximized, and hence a trade-off between preferences is required.
This is achieved through an aggregation operator P(\cdot). The optimization problem
therefore consists of maximizing the aggregated preference. Typical examples of P
are

P(\mu_1, \mu_2, \ldots, \mu_k) = \min(\mu_1, \mu_2, \ldots, \mu_k),   (3.14)

P(\mu_1, \mu_2, \ldots, \mu_k) = (\mu_1 \mu_2 \cdots \mu_k)^{1/k}.   (3.15)

Eqn. (3.14) represents a non-compensating trade-off while Eqn. (3.15) represents
an equally compensating trade-off among the various preferences.
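The two aggregation operators can be compared in a few lines of Python; the
preference values below are invented for illustration.

    import numpy as np

    def aggregate_min(mu):
        # Non-compensating trade-off, Eq. (3.14).
        return float(np.min(mu))

    def aggregate_geometric(mu):
        # Equally compensating trade-off, Eq. (3.15).
        mu = np.asarray(mu, dtype=float)
        return float(np.prod(mu) ** (1.0 / mu.size))

    prefs = [0.9, 0.6, 0.8]
    print(aggregate_min(prefs), aggregate_geometric(prefs))

With these example preferences, the min operator is governed entirely by the worst
preference (0.6), while the geometric mean (about 0.76) lets the stronger preferences
partially compensate for it.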
Antonsson and Otto [5] developed a method of imprecision (MoI) in which de-
signer preferences for design variables and performance variables are modeled in
terms of membership functions over the entire design variable and performance pa-
rameter space. A design trade-off strategy is identified, and the optimal solution is
obtained by maximizing the aggregate preference functions for the design variables
and the induced preference on the performance parameters. The vertex method
can be used for computing induced preferences as well as for identifying the design
variables corresponding to the maxima of the aggregate preference [37].
Jensen and Sepulveda [29] provided a fuzzy optimization methodology in which
a trust region based sequential approximate optimization framework is used for
minimizing a non-compensating aggregate preference function. The methodology
develops approximations for intermediate variables and employs the vertex method
for computing the preference functions of the approximate intermediate variables in
an inexpensive way.
3.3.4 Reliability Based Design Optimization Using Evidence Theory
Uncertainty in engineering systems can be either aleatory or epistemic. Aleatory
uncertainty, also known as stochastic uncertainty, is associated with the inherent
random nature of the parameters of the system. It can be described mathematically
using probabilistic means. Once the probabilistic description is available for the
random parameters, the risk associated with the system's responses can be quanti-
fied in terms of a probability measure using appropriate methods such as FORM,
SORM, HORM (higher-order reliability method), Monte Carlo, etc. However, there
exist cases where all the uncertain parameters in a system cannot be described
probabilistically. In such cases, the usual practice is to assume some distribution
for the parameters for which a probabilistic description is not available and perform
probabilistic analysis. The results obtained from such an analysis can be faulty. Epis-
temic uncertainty, also known as subjective uncertainty, arises due to ignorance, lack
of knowledge, or incomplete information. A variety of different theories exist to
quantify such uncertainty. In engineering systems, epistemic uncertainty can
be either parametric or model-form. Parametric uncertainty is associated with
uncertain parameters for which the available information is sparse or inadequate and
which hence cannot be described probabilistically. Model-form uncertainty is associated
with improper models of the system due to a lack of knowledge of the physics of
the system. Model-form uncertainties also arise when variable-fidelity mathematical
models are employed for simulation and design.
Fuzzy sets, possibility theory, Dempster-Shafer theory, etc., provide means for
mathematically quantifying epistemic uncertainty. In this dissertation, an attempt
is made to quantify uncertainty in multidisciplinary systems analysis
subject to epistemic uncertainty associated with the disciplinary design tools and
input parameters. Evidence theory is used to quantify uncertainty in terms of the
uncertain measures of belief and plausibility.
After the epistemic uncertainty has been quantified mathematically, the designer
seeks the optimum design under uncertainty. The measures of uncertainty provided
by evidence theory are discontinuous functions. Such non-smooth functions can-
not be used in traditional gradient-based optimizers because the sensitivities of the
uncertain measures do not exist. In this research, surrogate models are used to repre-
sent the uncertain measures as continuous functions. A formal trust region managed
sequential approximate optimization approach is used to drive the optimization pro-
cess. The trust region is managed by a trust region ratio based on the performance
of the Lagrangian, which is a penalty function of the objective and the constraints.
The methodology is illustrated in application to multidisciplinary problems.
3.4 Summary
A variety of uncertainties exist during simulation based design of an engineering
system. These include aleatory uncertainty, epistemic uncertainty, and errors. In
general, probability theory is used to model aleatory uncertainty. Other uncertainty
theories such as Dempster-Shafer theory, fuzzy set theory, possibility theory, and
convex models of uncertainty, can be used to model epistemic uncertainty. It is ex-
tremely important that the uncertainties are taken into account in design optimiza-
tion. A deterministic design optimization does not account for the uncertainties. A
variety of techniques have been developed in the last few decades to address this
issue. These techniques include robust design, reliability based design optimization,
fuzzy optimization, and so on. This dissertation mainly focuses on reliability based
design optimization.
CHAPTER 4
RELIABILITY BASED DESIGN OPTIMIZATION
In this chapter, aleatory uncertainty is considered in design optimization. This
is typically referred to as reliability based design optimization. Reliability based de-
sign optimization (RBDO) is a methodology for finding optimized designs that are
characterized by a low probability of failure. Primarily, reliability based design
optimization consists of optimizing a merit function while satisfying reliability con-
straints. The reliability constraints are constraints on the probability of failure
corresponding to each of the failure modes of the system, or a single constraint on
the system probability of failure. The probability of failure is usually estimated by
performing a reliability analysis. During the last few years, a variety of different
formulations have been developed for reliability based design optimization. This
chapter presents RBDO formulations and research issues associated with standard
methodologies.
4.1 RBDO formulations: eciency and robustness
There are two important concepts in relation to a RBDO formulation. They are
eciency and robustness. An ecient formulation is one in which the solution can
be obtained faster as compared to the other formulations. A real engineering de-
sign problem usually consists of large number of failure modes. Traditional RBDO
formulations requires solutions to nested optimization which is computationally in-
36
ecient. Thus it is important that the formulation that is solved for obtaining
reliable designs is computationally ecient.
Robustness, on the other hand, means that the RBDO formulation does not
depend on the starting point, etc. It implies that if the optimizer is invoked, it will
provide a local optimal solution. Some of the existing RBDO formulations are not
robust in the sense that there could be designs at which the formulation may not
hold. Hence it is also important that the formulation used is robust.
In the last two decades, researchers have proposed a variety of frameworks for
eciently performing reliability based design optimization. A careful survey of the
literature reveals that the various RBDO methods can be divided into three broad
categories.
4.1.1 Double Loop Methods for RBDO
A deterministic optimization formulation does not account for the uncertainties in
the design variables and parameters. Optimized designs based on a deterministic
formulation are usually associated with a high probability of failure because of the
likely violation of certain hard constraints in service. This is particularly true if
the hard constraints are active at the deterministic optimal solution. To obtain a
reliable optimal solution, a deterministic optimization formulation is replaced with
a reliability based design optimization formulation.
Traditionally, the reliability based optimization problem has been formulated as a
double loop optimization problem. In a typical RBDO formulation, the critical hard
constraints from the deterministic formulation are replaced by reliability constraints,
as in

\min \; f(d, p, y(d, p))   (4.1)

subject to \; g_{rc}(X, \theta) \ge 0,   (4.2)

g_{D_j}(d, p, y(d, p)) \ge 0, \quad j = 1, \ldots, N_{soft},   (4.3)

d^l \le d \le d^u,   (4.4)
where g_{rc} are the reliability constraints. They are either constraints on the probabilities
of failure corresponding to each hard constraint or a single constraint on the
overall system probability of failure. In this dissertation, only component failure
modes are considered. It should be noted that the reliability constraints depend on
the random variables X and the limit state parameters \theta. The distribution parameters
of the random variables are obtained from the design variables d and the fixed
parameters p (see section 4.1.2 on reliability analysis below). g_{rc} can be formulated
as

g_{rc_i} = P_{allow_i} - P_i, \quad i = 1, \ldots, N_{hard},   (4.5)

where P_i is the failure probability of the hard constraint g_{R_i} at a given design, and
P_{allow_i} is the allowable probability of failure for this failure mode. The probability
of failure is usually estimated by employing standard reliability techniques. A brief
description of standard reliability methods is given in the next section. It has to
be noted that the RBDO formulation given above (Equations (4.1)-(4.4)) assumes
that violations of the soft constraints due to variational uncertainties are permissible
and can be traded off for more reliable designs. For practical problems, design
robustness, represented by the merit function and the soft constraints, could be a
significant issue, one that would require the solution of a hybrid robustness and
reliability based design optimization formulation.
4.1.2 Probabilistic reliability analysis
Reliability analysis is a tool to compute the reliability index or the probability of
failure corresponding to a given failure mode or for the entire system [27]. The un-
certainties are modeled as continuous random variables, X = (X_1, X_2, \ldots, X_n)^T, with
a known (or assumed) joint cumulative distribution function (CDF), F_X(x). The de-
sign variables, d, consist of either distribution parameters of the random variables
X, such as means, modes, standard deviations, and coefficients of variation, or de-
terministic parameters, also called limit state parameters, denoted by \theta. The design
parameters p consist of either the means, the modes, or other first order distribution
quantities of certain random variables. Mathematically this can be represented by
the statements

[p, d] = [\mu, \theta],   (4.6)

p is a subvector of \mu.   (4.7)
Random variables can be consistently denoted as X(\mu), and the i-th failure mode
can be denoted as g_{R_i}(X, \theta). In the following, x denotes a realization of the random
variables X, and the subscript i is dropped without loss of clarity. Letting g_R(x, \theta) \le 0
represent the failure domain, and g_R(x, \theta) = 0 be the so-called limit state function,
the time-invariant probability of failure for the hard constraint is given by

P(\mu, \theta) = \int_{g_R(x, \theta) \le 0} f_X(x)\, dx,   (4.8)

where f_X(x) is the joint probability density function (PDF) of X. It is usually
impossible to find an analytical expression for the above integral. In standard
reliability techniques, a probability distribution transformation T : R^n \to R^n is
usually employed. An arbitrary n-dimensional random vector X = (X_1, X_2, \ldots, X_n)^T
is mapped into an independent standard normal vector U = (U_1, U_2, \ldots, U_n)^T. This
transformation is known as the Rosenblatt Transformation [58]. The standard normal
variables are characterized by a zero mean and unit variance. The limit state func-
tion in U-space can be obtained as g_R(x, \theta) = g_R(T^{-1}(u), \theta) = G_R(u, \theta) = 0. The
failure domain in U-space is G_R(u, \theta) \le 0. Equation (4.8) thus transforms to

P(\mu, \theta) = \int_{G_R(u, \theta) \le 0} \phi_U(u)\, du,   (4.9)

where \phi_U(u) is the standard normal density. If the limit state function in U-space is
affine, i.e., if G_R(u, \theta) = \alpha^T u + \beta with \|\alpha\| = 1, then an exact result for the
probability of failure is P_f = \Phi(-\beta), where \Phi(\cdot) is the cumulative Gaussian
distribution function. If the limit state function is close to being affine, i.e., if
G_R(u, \theta) \approx \alpha^T u + \beta with \beta = -\alpha^T u^*, where u^* is the solution of the following
optimization problem,

\min \; \|u\|   (4.10)

subject to \; G_R(u, \theta) = 0,   (4.11)

then the first order estimate of the probability of failure is P_f = \Phi(-\beta), where
\alpha represents a unit normal to the manifold (4.11) at the solution point. The solution
u^* of the above optimization problem, the so-called design point, \beta-point, or
most probable point (MPP) of failure, defines the reliability index \beta_p = \|u^*\|. This
method of estimating the probability of failure is known as the first order reliability
method (FORM) [27].
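To make the FORM computation concrete, the following Python sketch solves the
problem (4.10)-(4.11) with a general-purpose SQP solver. The limit state and the
distributions are invented for illustration, the random variables are taken as
independent normals so that the Rosenblatt transformation reduces to the simple
shift-and-scale mapping x = \mu + \sigma u, and the squared norm is minimized as a
smooth equivalent of \|u\|.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    mu = np.array([5.0, 3.0])      # illustrative means
    sigma = np.array([0.8, 0.6])   # illustrative standard deviations

    def G(u):
        # Illustrative limit state in U-space: g_R(x) = x1 * x2 - 12.
        x = mu + sigma * u
        return x[0] * x[1] - 12.0

    # FORM problem, Eqs. (4.10)-(4.11): min ||u|| subject to G(u) = 0.
    res = minimize(lambda u: u @ u, x0=np.zeros(2),
                   constraints={'type': 'eq', 'fun': G}, method='SLSQP')
    u_star = res.x
    beta = np.linalg.norm(u_star)
    pf = norm.cdf(-beta)   # first order estimate, P_f = Phi(-beta)
    print(u_star, beta, pf)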
In the second order reliability method (SORM), the limit state function is ap-
proximated as a quadratic surface. A simple closed form solution for the probability
computation using a second order approximation was given by Breitung [9] using
the theory of asymptotic approximations as

P_f(\mu, \theta) = \int_{G_R(u, \theta) \le 0} \phi_U(u)\, du \;\approx\; \Phi(-\beta_p) \prod_{l=1}^{n-1} (1 + \beta_p \kappa_l)^{-1/2},   (4.12)
where the \kappa_l are related to the principal curvatures of the limit state function at the
minimum distance point u^*, and \beta_p is the reliability index from FORM. Breitung
[9] showed that the second order probability estimate asymptotically approaches the
first order estimate as \beta_p approaches infinity with \beta_p \kappa_l held constant. Tvedt [70]
presented a numerical integration scheme to obtain the exact probability of failure for
a general quadratic limit state function. Kiureghian et al. [32] presented a method
of computing the probability of failure by fitting a parabolic approximation to the
limit state surface. The probability of failure can also be computed by using impor-
tance sampling techniques [28, 21] that employ sampling around the MPP, thereby
requiring fewer samples than a traditional Monte Carlo technique. The concept of
FORM and SORM is illustrated in Figure 4.1 for an example with two random
variables, X_1 and X_2.
Figure 4.1. Reliability analysis (safe and failed regions of the limit state in the
original space and in the standard space)
The first order approximation, P_f \approx \Phi(-\beta_p), is sufficiently accurate for most
practical cases. Thus, only first order approximations of the probability of failure are
used in practice. Using the FORM estimate, the reliability constraints in Equation
(4.5) can be written in terms of reliability indices as

g_{rc_i} = \beta_i - \beta_{reqd_i},   (4.13)

where \beta_i is the first order reliability index, and \beta_{reqd_i} = -\Phi^{-1}(P_{allow_i}) is the desired
reliability index for the i-th hard constraint. When the reliability constraints are
formulated as given in equation (4.13), the approach is referred to as the reliability
index approach (RIA).
The sensitivities of the probability of failure and the reliability index with respect
to the distribution parameters and limit state parameters can also be obtained. The
sensitivities in FORM are given as follows:

\frac{\partial \beta}{\partial \mu} = -\frac{\nabla_u G_R(u^*, \theta)^T}{\|\nabla_u G_R(u^*, \theta)\|} \frac{\partial T(x^*, \mu)}{\partial \mu},   (4.14)

\frac{\partial \beta}{\partial \theta} = \frac{1}{\|\nabla_u G_R(u^*, \theta)\|} \frac{\partial G_R(u^*, \theta)}{\partial \theta}.   (4.15)

A big advantage of reliability analysis is that the influence of the uncertainties on
the probability of failure can be found from the components of \partial\beta/\partial\mu. The designer
can then recommend, through appropriate quality control measures, reducing those
uncertainties that influence the probability of failure the most.
The system probability of failure can be computed if the components constituting
its failure are known. The system failure can either be a series event or a parallel
event. In a series system, failure of any component (failure mode) corresponds to
the failure of the system. In a parallel system, the failure modes that constitute
system failure have to be defined. In a K-out-of-N system, K component modes have
to fail for the system to fail. For complex systems, fault tree diagrams are used to
analyze the system failure. The system probability of failure for series or parallel
systems can be bounded by using unimodal bounds or relaxed bimodal bounds. The
unimodal bounds for series and parallel systems are as follows:

\max_i P(G_{R_i} \le 0) \;\le\; P\Big(\bigcup_i \{G_{R_i} \le 0\}\Big) \;\le\; \sum_{i=1}^{n} P(G_{R_i} \le 0),   (4.16)

0 \;\le\; P\Big(\bigcap_i \{G_{R_i} \le 0\}\Big) \;\le\; \min_i P(G_{R_i} \le 0).   (4.17)
In this dissertation, only series systems are considered. Moreover, the first order
approximation to the probability of failure, P_f(\mu, \theta) \approx \Phi(-\beta_p), is reasonably accurate
for most practical cases. Thus, only first order approximations of the probability of
failure will be employed.
It should be noted that first order reliability analysis involves a probability
distribution transformation, the search for the MPP, and the evaluation of the cu-
mulative Gaussian distribution function. To solve the FORM problem (Equations
(4.10)-(4.11)), various algorithms have been reported in the literature [39]. One of
the approaches is the Hasofer-Lind and Rackwitz-Fiessler (HL-RF) algorithm, which
is based on a Newton-Raphson root solving approach. Variants of the HL-RF method
exist that add line searches to the HL-RF scheme. The family of HL-RF
algorithms can exhibit poor convergence for highly nonlinear or badly scaled prob-
lems, since they are based on first order approximations of the hard constraint.
Using a sequential quadratic programming (SQP) algorithm is often a more robust
approach. The solution typically requires many system analysis evaluations. More-
over, there might be cases where the optimizer fails to provide a solution to
the FORM problem, especially when the limit state surface is far from the origin in
U-space or when G_R(u, \theta) = 0 never occurs at a particular design variable
setting.
In design automation it cannot be known a priori what design points the upper
level optimizer (minimizing the merit function subject to reliability and determin-
istic constraints) will visit; therefore it is not known whether the optimizer for the
FORM problem (evaluation of reliability constraints) will provide a consistent result.
This problem was addressed recently by Padmanabhan et al. [48] by using a trust
region algorithm for equality constrained problems. For cases when G_R(u, \theta) = 0
does not occur, the algorithm provides the best possible solution for the problem
through

\min \; \|u\|   (4.18)

subject to \; G_R(u, \theta) = c.   (4.19)
The reliability constraints formulated by the RIA are therefore not robust. RIA
is usually more effective if the probabilistic constraint is violated, but it yields a
singularity if the design has zero failure probability [69]. To overcome this difficulty,
Tu et al. [69] provided an improved formulation to solve the RBDO problem. In
this method, known as the performance measure approach (PMA), the reliability
constraints are stated by an inverse formulation as

g_{rc_i} = G_{R_i}(u_i^*, \theta), \quad i = 1, \ldots, N_{hard}.   (4.20)
u_i^* is the solution to the inverse reliability analysis (IRA) optimization problem

\min \; G_{R_i}(u, \theta)   (4.21)

subject to \; \|u\| = \beta_{reqd_i},   (4.22)

where the optimum solution u_i^* corresponds to the MPP in IRA of the i-th hard con-
straint. Solving RBDO by the PMA formulation is usually more efficient and robust
than the RIA formulation, where the reliability is evaluated directly. The efficiency
lies in the fact that the search for the MPP of an inverse reliability problem is
easier than the search for the MPP corresponding to an actual reliability
[69]. The RIA and the PMA approaches for RBDO are essentially inverses of one
another and would yield the same solution if the constraints are active at the op-
timum [69]. If the constraint on the reliability index (as in the RIA formulation)
or the constraint on the optimum value of the limit state function (as in the PMA
formulation) is not active at the solution, the reliable solutions obtained from the
two approaches might differ. In general, the RIA formulation yields a conservative
solution. Similar RBDO formulations were independently developed by other re-
searchers [53, 59, 31]. In these RBDO formulations, constraint (4.22) is considered
as an inequality constraint (\|u\| \le \beta_{reqd_i}), which is a more robust way of handling
the constraint on the reliability index. The major difference lies in the fact that
in these papers semi-infinite optimization algorithms were employed to solve the
RBDO problem. Semi-infinite optimization algorithms solve the inner optimization
problem approximately. However, the overall RBDO is still a nested double-loop
optimization procedure. As mentioned earlier, such formulations are computation-
ally intensive for problems where the function evaluations are expensive. Moreover,
the formulation becomes impractical when the number of hard constraints increases,
which is often the case in real-life design problems. To alleviate the computational
cost associated with the nested formulation, sequential RBDO methods have been
developed.
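The inverse reliability analysis of Eqs. (4.21)-(4.22) can be sketched in the same
illustrative setting used for the FORM sketch earlier; the limit state, distributions,
and choice of SQP solver are again assumptions made only for illustration.

    import numpy as np
    from scipy.optimize import minimize

    mu = np.array([5.0, 3.0])
    sigma = np.array([0.8, 0.6])
    beta_reqd = 3.0

    def G(u):
        # Same illustrative limit state as before, in U-space.
        x = mu + sigma * u
        return x[0] * x[1] - 12.0

    # Inverse FORM, Eqs. (4.21)-(4.22): minimize G(u) on ||u|| = beta_reqd.
    con = {'type': 'eq', 'fun': lambda u: u @ u - beta_reqd**2}
    res = minimize(G, x0=np.full(2, beta_reqd / np.sqrt(2.0)),
                   constraints=con, method='SLSQP')
    u_mpp = res.x
    # PMA constraint value, Eq. (4.20): the design meets the target
    # reliability if the minimum performance on the sphere is nonnegative.
    print(u_mpp, G(u_mpp))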
4.1.3 Sequential Methods for RBDO
The basic concept behind sequential RBDO techniques is to decouple the upper level
optimization from the reliability analysis to avoid a nested optimization problem.
In sequential RBDO methods, the main optimization and the search for the MPPs
of failure (reliability analysis) are performed separately, and the procedure is repeated
until the desired convergence is achieved. The idea is to find a consistent reliable de-
sign at considerably lower computational cost as compared to the nested approach.
A consistent reliable design is a feasible design that satisfies all the reliability con-
straints and other soft constraints. The reliability analysis is used to check whether a
given design meets the desired reliability level. In most sequential RBDO techniques,
a design obtained by performing a deterministic optimization is updated based on
the information obtained from the reliability analysis or by using some nonlinear
transformations, and the updated design is used as a starting point for the next
cycle.
Chen et al. [13] proposed a sequential RBDO methodology for normally dis-
tributed random variables. Wang and Kodiyalam [72] generalized this methodol-
ogy for nonnormal random variables and reported enormous computational savings
when compared to the nested RBDO formulation. The methodology was extended
to multidisciplinary systems in Agarwal et al. [2]. Instead of using the reliability
analysis (inverse reliability problem) to obtain the true MPP of failure (u^*), Wang
and Kodiyalam [72] use the direction cosines of the probabilistic constraint at the
mean values of the random variables in the standard space (\alpha = \nabla_u g_R / \|\nabla_u g_R\|)
and the target reliability index (\beta) to make an estimate of the MPP of failure
(u^* \approx -\beta \alpha) (see Figure 4.2). It should be noted that the estimated MPPs lie on
the target reliability sphere.

Figure 4.2. Approximate MPP Estimation

During optimization the corresponding MPP in X-space needs to be calculated
to evaluate the probabilistic performance functions. The MPP of failure in X-space
is found by mapping u^* to the original space. If the random variables in X-space
are independent and normally distributed, then the MPP in the original space is
given by x^* = \mu_x + \sigma_x u^*. If the variables have a nonnormal distribution, then the
equivalent means (\mu'_x) and equivalent standard deviations (\sigma'_x) of an approximate
normal distribution are computed and used in the above expression to estimate the
MPP in X-space [72].
The advantage of this methodology is that it completely eliminates the lower
level optimization for evaluating the reliability based constraints. The most probable
point (MPP) of failure for each failure driven constraint is estimated approximately.
If the limit state function is close to linear in the standard space, then the estimate
of the MPP in U-space will be accurate enough and the final solution may be close
to the actual solution. However, if the limit state function in the standard space is
sufficiently nonlinear, which is often the case in most real-life design problems, then
the MPP estimates might be extremely inaccurate, which might result in a design
which is not truly optimal. This is referred to as a spurious optimal design.
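The approximate MPP estimate of [72] amounts to a single gradient evaluation at
the mean point; a minimal sketch follows, in which the gradient values are invented
and the sign convention assumes a failure domain g_R \le 0.

    import numpy as np

    def approximate_mpp(grad_G_at_mean, beta_reqd):
        # Direction cosines of the limit state gradient in U-space at the
        # mean point, scaled to the target reliability sphere.
        g = np.asarray(grad_G_at_mean, dtype=float)
        alpha = g / np.linalg.norm(g)
        return -beta_reqd * alpha   # estimated MPP on the beta-sphere

    print(approximate_mpp([2.4, 4.0], beta_reqd=3.0))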
Chen and Du [12] also proposed a sequential optimization and reliability assess-
ment methodology (SORA). In SORA, the boundaries of the violated constraints
(with low reliability) are shifted in the feasible direction based on the reliability
information obtained in the previous iteration. Two different formulations were
used for reliability assessment, the probability formulation (RIA) and the percentile
performance formulation (PMA). The percentile formulation was reported to be
computationally less demanding than the probability formulation, and the
overall cost of RBDO was reported to be significantly less than that of the nested
formulation. It should be noted that in this methodology an exact first order re-
liability analysis is performed to obtain the MPP of failure for each failure driven
constraint, which was not the case in the approximate RBDO methodology of [72].
Therefore, a consistent reliable design is almost guaranteed to be obtained from this
framework. However, a true local optimum cannot be guaranteed. This is because
the MPPs of failure for the hard constraints are obtained at the previous design point.
A shift factor, s_i, from the mean values of the random variables is calculated and
is used to update the MPP of failure for probabilistic constraint evaluation during
the deterministic optimization phase of the next iteration, as the optimizer varies
the mean values of the random variables. This MPP update might be inaccurate
because, as the optimizer varies the design variables, the MPP of failure (and hence
the shift factor) also changes, which is not addressed in SORA. This might lead to
spurious optimal designs.
4.1.4 Unilevel Methods for RBDO
During the last few years, researchers in the areas of structural and multidisciplinary
optimization have continuously faced the challenge of developing more efficient tech-
niques to solve the RBDO problem. As outlined before, RBDO is typically a nested
optimization problem, requiring a large number of system analysis evaluations. The
major concern in evaluating reliability constraints is the fact that the reliability
analysis methods are themselves formulated as optimization problems [54]. To over-
come this difficulty, a unilevel formulation was developed by Kuschel and Rackwitz
[36]. In their method, the direct FORM problem (the lower level optimization, Eqs.
(4.10)-(4.11)) is replaced by the corresponding first order Karush-Kuhn-Tucker
(KKT) optimality conditions of the first order reliability problem. As mentioned
earlier, the direct FORM problem can be ill conditioned, and the same may be true
for the unilevel formulation given by Kuschel and Rackwitz [36]. The reason is that
the probabilistic hard constraints might have a zero failure probability at a partic-
ular design setting, and hence the optimizer might not converge due to the hard
constraints (which are posed as equality constraints) not being satisfied. Moreover,
the conditions under which such a replacement is equivalent to the original bi-level
formulation were not detailed in Kuschel and Rackwitz [36]. Therefore, it is not
guaranteed that their unilevel approach is mathematically equivalent to the bilevel
approach. In this investigation, a new unilevel method is developed which enforces
the constraint qualification of the KKT conditions and avoids the singularities
associated with zero probability of failure.
4.2 Summary
It has been noted that the traditional reliability based optimization problem is a
nested optimization problem. Solving such nested optimization problems for a large
number of failure driven constraints and/or nondeterministic variables is extremely
expensive. Researchers have developed sequential approaches to speed up the opti-
mization process and to obtain a consistent reliability based design. However, these
methodologies are not guaranteed to provide the true optimal solution. A unilevel
formulation has been developed to perform the optimization and reliability eval-
uation in a single optimization. However, the existing formulation does not guarantee
mathematical equivalence to the original bi-level problem.
CHAPTER 5
DECOUPLED RBDO METHODOLOGY
In this chapter, a novel decoupled methodology for reliability based design opti-
mization (RBDO) is developed. In the proposed method, the sensitivities of the
Most Probable Point (MPP) of failure with respect to the decision variables are
introduced to update the MPPs during the deterministic optimization phase of the
proposed RBDO approach. For the test problems considered, the method not only
finds the optimal solution but also locates the exact MPP of failure, which is impor-
tant to ensure that the target reliability index is met. The MPP update is based on
a first order Taylor series expansion around the design point from the last cycle. The
update is found to be extremely accurate, especially in the vicinity of the point from
the previous cycle.
5.1 A new sequential RBDO methodology
In this investigation, a framework for efficiently performing RBDO is also developed.
As described earlier, a traditional nested RBDO formulation is extremely impracti-
cal for most real-life design problems of reasonable size (on the order of 100 variables
and 10 failure modes) and scope (e.g., multidisciplinary systems). Researchers have
proposed sequential RBDO approaches to speed up the optimization process and
thereby obtain a consistent reliability based design. These approaches are practical
and attractive because a workable design can be obtained at considerably
lower computational cost. In the following, a new sequential RBDO methodology
is described in which the main optimization and reliability assessment phases are
decoupled. The sensitivities of the MPPs with respect to the design variables are
used to update them during the deterministic optimization phase. This yields a
good estimate of the MPP as the design variables are varied by the optimizer during
the deterministic optimization phase.
The flowchart of the proposed RBDO methodology is shown in Figure 5.1.

Figure 5.1. Proposed RBDO Methodology (deterministic optimization, inverse
reliability assessment, and calculation of optimal MPP sensitivities, repeated until
convergence)
methodology consists of the following steps.

1. Given an initial design d_0, set the iteration counter k = 0.

2. Solve the following deterministic optimization problem starting from design
   d_k to get a new design d_{k+1}:

   \min_d \; f(d, p)   (5.1)

   subject to \; g_{R_i}(x_i^*, \theta) \ge 0, \quad i = 1, \ldots, N_{hard},   (5.2)

   g_{D_j}(d, p) \ge 0, \quad j = 1, \ldots, N_{soft},   (5.3)

   d^l \le d \le d^u.   (5.4)

   During the first iteration (k = 0), the MPP of failure for evaluating the
   probabilistic constraints is set equal to the mean values of the random vari-
   ables (x_i^* = \mu_x). It should be noted that the means of the random variables
   (distribution parameters) are a subset of the design variables and fixed pa-
   rameters (see equation (4.6)). This corresponds to solving the deterministic
   optimization problem (equations (2.1)-(2.4)). From the authors' experience,
   it has been observed that starting from a deterministic solution results in
   lower computational cost for RBDO.

   In subsequent iterations (k > 0), the MPP of failure for evaluating the
   probabilistic constraints is obtained from the first order Taylor series expan-
   sion about the previous design point:

   u_i^* = u_i^{*,k} + \frac{\partial u_i^{*,k}}{\partial d} (d - d_k), \quad i = 1, \ldots, N_{hard}.   (5.5)

   Note that \partial u_i^{*,k}/\partial d is a matrix whose columns contain the gradient of
   the MPP with respect to each of the decision variables. For example, the first
   column of the matrix contains the gradient of the MPP vector, u_i^*, with
   respect to the first design variable d_1. The MPPs in X-space are obtained by
   using the transformation. A sketch of this update appears after this list.

3. At the new design d_{k+1}, perform an exact first order inverse reliability analysis
   (equations (4.21)-(4.22)) for each hard constraint. This gives the MPP of
   failure of each hard constraint, u_i^{*,k+1}.

4. Check for convergence of the design variables and the MPPs, and that the
   constraints are satisfied. If converged, stop. Else, go to the next step.

5. Compute the post-optimal sensitivities \partial u_i^{*,k}/\partial d for each hard constraint
   (i.e., how the MPP of failure changes with a change in the design variables;
   see the section below).

6. Set k = k + 1 and go to step 2.
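A minimal sketch of the Taylor series MPP update of Eq. (5.5) used in step 2 is
given below; the stored MPP, sensitivity matrix, and design vectors are invented
for illustration.

    import numpy as np

    def update_mpp(u_prev, du_dd_prev, d_new, d_prev):
        # First order Taylor update of the MPP, Eq. (5.5). Each column of
        # du_dd_prev holds the gradient of the MPP vector with respect to
        # one design variable.
        return u_prev + du_dd_prev @ (np.asarray(d_new, float) -
                                      np.asarray(d_prev, float))

    u_k = np.array([1.2, -0.8])            # MPP found in the last cycle
    du_dd_k = np.array([[0.3, -0.1],
                        [0.05, 0.2]])      # post-optimal sensitivities
    print(update_mpp(u_k, du_dd_k, [4.9, 15.2], [5.0, 15.0]))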
5.1.1 Sensitivity of Optimal Solution to Problem Parameters
The proposed RBDO framework requires the sensitivities of the MPPs with re-
spect to the design variables. The post-optimal sensitivities are needed to update
the MPPs based on a linearization around the previous design point. The following
techniques could be used to compute the post-optimal sensitivities of the MPPs.
1. The sensitivity of the optimal solution to problem parameters can be computed
by differentiating the first order Karush-Kuhn-Tucker (KKT) optimality con-
ditions [55]. The Lagrangian L for the inverse reliability optimization problem
(equations (4.21)-(4.22)) is

L = G_R(u, \theta) + \lambda (u^T u - \beta^2),   (5.6)

where \lambda is the Lagrange multiplier corresponding to the equality constraint
(say h_R). The first order optimality condition for this problem is

\frac{\partial L}{\partial u} = \frac{\partial G_R}{\partial u}\Big|_{u^*} + 2\lambda u^* = 0.   (5.7)

Differentiating the first order KKT optimality conditions with respect to a
parameter in the vector z = [\mu, \theta]^T, the following linear system of equations
is obtained:

\begin{bmatrix} \frac{\partial^2 L}{\partial u^2} & \frac{\partial h_R}{\partial u} \\ \left(\frac{\partial h_R}{\partial u}\right)^T & 0 \end{bmatrix} \begin{bmatrix} \frac{\partial u^*}{\partial z_l} \\ \frac{\partial \lambda}{\partial z_l} \end{bmatrix} + \begin{bmatrix} \frac{\partial^2 L}{\partial u\, \partial z_l} \\ \frac{\partial h_R}{\partial z_l} \end{bmatrix} = 0.   (5.8)

The above system needs to be solved for each parameter in the optimization
problem to obtain the sensitivity of the optimal solution with respect to that
parameter, \partial u^*/\partial z_l. In the proposed sequential RBDO framework, the system
of equations needs to be solved only for those parameters that are decision
variables in the upper level optimization. It should be noted that the Hessian
of the limit state function needs to be computed when using this technique. If
the Hessian of the limit state function is not available or is difficult to obtain,
other techniques have to be used. In the present implementation, a damped
BFGS update is used to obtain the second order information [42] (a sketch of
this update appears after this list). This method is defined by
r_k = \theta_k y_k + (1 - \theta_k) H_k s_k,   (5.9)

where the scalar

\theta_k = \begin{cases} 1, & s_k^T y_k \ge 0.2\, s_k^T H_k s_k, \\ \dfrac{0.8\, s_k^T H_k s_k}{s_k^T H_k s_k - s_k^T y_k}, & s_k^T y_k < 0.2\, s_k^T H_k s_k, \end{cases}   (5.10)

and y_k and s_k are the changes in the gradient and in the variables from the
previous iteration to the current iteration, respectively. The Hessian update
is

H_{k+1} = H_k - \frac{H_k s_k s_k^T H_k}{s_k^T H_k s_k} + \frac{r_k r_k^T}{s_k^T r_k}.   (5.11)
2. The sensitivity of the optimal solution to problem parameters can also be
obtained by using finite difference techniques. These techniques can be ex-
tremely expensive as the dimension of the decision variables and the number
of hard constraints increase. This is because a full optimization is required to
compute the sensitivity of the MPP with respect to each decision variable, and
this has to be performed for each hard constraint. However, significant com-
putational savings can be achieved if the previous optimum MPP is used as
a warm starting point to compute the change in the MPP as the design variables
are perturbed.
3. Approximations to the limit state function can also be utilized to compute
the sensitivity of the optimal solution to problem parameters. This technique is
described below.

(a) At a given design d_k, perform inverse reliability analysis to obtain the exact
MPP, x_i^{*,k}.

(b) Construct linear approximations of the hard constraints as follows:

\tilde{g}_{R_i} = g_{R_i}(x_i^{*,k}, \theta^k) + \nabla_x g_{R_i}\big|_{x_i^{*,k}, \theta^k}^T (x - x_i^{*,k}) + \nabla_\theta g_{R_i}\big|_{x_i^{*,k}, \theta^k}^T (\theta - \theta^k).   (5.12)

(c) Perform inverse reliability analysis over the linear approximation at per-
turbed values of the design variables to obtain approximate sensitivities.
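As referenced in technique 1 above, the damped BFGS update of Eqs. (5.9)-(5.11)
can be sketched in a few lines of Python; the step and gradient-change vectors at
the end are invented for illustration.

    import numpy as np

    def damped_bfgs_update(H, s, y):
        # Damped BFGS update, Eqs. (5.9)-(5.11). The damping factor keeps
        # the Hessian approximation positive definite even when s'y is
        # small or negative.
        s = np.asarray(s, dtype=float)
        y = np.asarray(y, dtype=float)
        Hs = H @ s
        sHs = s @ Hs
        sy = s @ y
        theta = 1.0 if sy >= 0.2 * sHs else 0.8 * sHs / (sHs - sy)  # Eq. (5.10)
        r = theta * y + (1.0 - theta) * Hs                          # Eq. (5.9)
        return H - np.outer(Hs, Hs) / sHs + np.outer(r, r) / (s @ r)  # Eq. (5.11)

    H = np.eye(2)
    print(damped_bfgs_update(H, s=[0.1, -0.05], y=[0.12, -0.03]))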
5.2 Test Problems
The decoupled RBDO methodology developed in this investigation is applied to
a series of analytical, structural, and multidisciplinary design problems. The
methodology is compared to the nested RBDO approach, with PMA used for
probabilistic constraint evaluation.
5.2.1 Short Rectangular Column
This problem has been used for testing and comparing RBDO methodologies in
Kuschel and Rackwitz [35]. The design problem is to determine the depth h and
width b of a short column with rectangular cross section with minimal total mass
bh, assuming unit mass per unit area. The uncertain vector, X = (P, M, Y), the
stochastic parameters, and the correlations of the vector elements are listed in Table
5.1.

Table 5.1
STOCHASTIC PARAMETERS IN SHORT COLUMN PROBLEM

Variable             Dist.   Mean/St. dev.   Corr. P   Corr. M   Corr. Y
Axial force (P)      N       500/100         1         0.5       0
Bending moment (M)   N       2000/400        0.5       1         0
Yield stress (Y)     LogN    5/0.5           0         0         1

The limit state function in terms of the random vector, X = (P, M, Y), and
the limit state parameters, \theta = (b, h) (which happens to be the same as the design
vector d in this problem), is given by

g_R(x, \theta) = 1 - \frac{4M}{b h^2 Y} - \frac{P^2}{(b h Y)^2}.   (5.13)
The objective function is given by
f(d) = bh. (5.14)
The depth h and the width b of the rectangular column must satisfy 15 \le h \le 25
and 5 \le b \le 15. The allowable failure probability is 0.00621, or in other words, a
reliability index for the failure mode greater than or equal to 2.5 is required. The
optimization process was started from the point (u_0, d_0) = ((1, 1, 1), (5, 15)). Both
approaches result in the optimal solution d^* = (8.668, 25.0). The computational
effort for this problem is compared in Table 5.2. The nested approach requires 77
evaluations of the limit state function and 85 evaluations of its gradients, as compared
to 31 evaluations of the limit state function and 31 evaluations of its gradients for the
proposed framework. Therefore, it is noted that the proposed methodology for
RBDO is computationally more efficient than the traditional RBDO approach for
this particular problem. The proposed method took three cycles to converge;
the design history is shown in Figure 5.2. It is observed that after the
Table 5.2
COMPUTATIONAL COMPARISON OF RESULTS (SHORT RECTANGULAR
COLUMN)

Formulation        f    df/dd   g_R   dg_R/dd   dg_R/du
Nested (PMA)       8    8       77    8         77
Decoupled Method   12   12      31    12        19

first cycle, the new design is very close to the optimal solution. However, at this
design the MPP is not converged. Therefore, by using the proposed methodology,
we are able to converge to the true MPP within a few cycles.
5.2.2 Analytical Problem
This is an analytical multidisciplinary test problem. Even though the problem is
just two-dimensional, it is sufficiently nonlinear and has the attributes of a general
multidisciplinary problem. This problem has two design variables, d_1 and d_2, and
two parameters, p_1 and p_2. There are two random variables, X_1 and X_2. The
design parameters, p_1 and p_2, are the means of the random variables, X_1 and X_2,
respectively. This problem involves a coupled system analysis and has two con-
tributing analyses (CAs). The problem has two hard constraints, g_{R_1} and g_{R_2}.
Figure 5.2. Convergence history for the example problem (design variables b, h and
MPP components u_1, u_2, u_3 versus cycle number)
The reliability based design optimization problem in standard form is as follows:

Minimize \; d_1^2 + 10 d_2^2 + y_1

subject to \; g_{R_1} = Y_1(X, \theta)/8 - 1 \ge 0,
           \; g_{R_2} = 1 - Y_2(X, \theta)/5 \ge 0,
           \; -10 \le d_1 \le 10,
           \; 0 \le d_2 \le 10,

where d_1 = \theta_1, d_2 = \theta_2, and p_1 = \mu_{X_1} = 0, p_2 = \mu_{X_2} = 0, with

CA_1: \; Y_1(X, \theta) = \theta_1^2 + \theta_2 - 0.2\, Y_2(X, \theta) + X_1;
      \; y_1(d, p) = d_1^2 + d_2 - 0.2\, y_2(d, p) + p_1;

CA_2: \; Y_2(X, \theta) = \theta_1 - \theta_2^2 + \sqrt{Y_1(X, \theta)} + X_2;
      \; y_2(d, p) = d_1 - d_2^2 + \sqrt{y_1(d, p)} + p_2.
It is assumed that the random variables X_1 and X_2 have uniform distributions over
the intervals [-1, 1] and [-0.75, 0.75] respectively. The desired value of the reliability
index \beta_{reqd_i} (for i = 1, 2) is chosen as 3 for both hard constraints.
Figure 5.3 shows the contours of the merit function and the constraints. The
zero contours of the hard constraints are plotted at the design parameters, p_1 and
p_2 (the means of the random variables, X_1 and X_2). It should be noted that in
deterministic optimization, two local optima exist for this problem. At the global
solution, only the first hard constraint is active, whereas at the local solution both
the hard constraints are active. They are shown by star symbols. Both of these
solutions can be located easily by choosing different starting points in the design
space.

Figure 5.3. Contours of objective and constraints
Similarly, two local optimum designs exist for the RBDO problem as well. Both
reliable designs get pushed into the feasible region, characterized by a higher merit
function value and a lower probability of failure. They are shown by the shaded
squares in Figure 5.4.

Figure 5.4. Plot showing two reliable optima.

To locate the two local optimal solutions of this problem, two different starting
points, [-5, 3] and [5, 3], are chosen. The results corresponding to the starting point
[-5,3] are listed in Table 5.3.
Table 5.3
STARTING POINT [-5,3], SOLUTION [-3.006,0.049]

Cost Measure   Double-Loop (RIA)   Double-Loop (PMA)   Decoupled RBDO
SA calls       Not converged       225                 65
Starting at the design d = [-5, 3], the proposed decoupled RBDO framework
converges to the reliable optimum point without any difficulty. The proposed
decoupled method requires 65 system analysis evaluations as compared to 225 when
using the traditional double loop PMA method. Analytical gradients were used in
implementing this problem for all methods. Note that the double loop method that
uses the reliability index approach to prescribe the probabilistic constraints does
not converge. For the designs that are visited by the upper level optimizer (say, d_k
at the k-th iteration), the FORM problem does not have a solution (because of zero
failure probability at these designs). Starting from the design [-5, 3], the optimizer
tries to find the local design [-3.006, 0.049]. However, it turns out that at this design,
the second hard constraint, g_{R_2}, is never zero in the space of uniformly distributed
random variables, X. Since in the RIA method the limit state function is enforced
as an equality constraint, the lower level optimizer does not converge.
The results corresponding to the starting point [5, 3] are listed in Table 5.4.

Table 5.4
STARTING POINT [5,3], SOLUTION [2.9277,1.3426]

Cost Measure   Double-Loop (RIA)   Double-Loop (PMA)   Decoupled RBDO
SA calls       Not converged       184                 65

Note that the double loop method that uses the RIA for probabilistic constraint
evaluation fails to converge for this starting point too. Again, the reason for this
is that there is zero failure probability (an infinite reliability index) at the designs
visited by the upper-level optimizer, and therefore the lower level optimizer does
not provide any true solution. All the other methods converge to the same local
optimum solution. The decoupled methodology developed in this investigation is
found to be significantly more efficient as compared to the nested formulation.
5.2.3 Cantilever Beam
This problem is taken from Thanedar and Kodiyalam [68]. A cantilever beam is
subjected to an oscillatory fatigue loading, Q_1, and a random design load in service,
Q_2. The random variables in the problem are assumed to be independent, with
statistical parameters given in Table 5.5.

Table 5.5
STOCHASTIC PARAMETERS IN CANTILEVER BEAM PROBLEM

Variable                      Symbol   Distribution   Mean/St. dev.            Unit
Young's modulus               E        Normal         30000/3000               ksi
Fatigue load                  Q_1      Lognormal      0.5056/0.1492            klb
Random load                   Q_2      Lognormal      0.4045/0.1492            klb
Unit yield strength           R        Weibull        50/6                     ksi
Fatigue strength coefficient  A        Lognormal      1.6323 x 10^10/0.4724    ksi
The design variables in the problem are the width (b) and depth (d) of the beam.
The objective is to minimize the weight of the beam (bd) (assuming unit weight per
unit volume) subject to the following hard constraints:

g_{R_1} = \frac{0.3\, E\, b^3 d}{900} - Q_2 \ge 0,

g_{R_2} = A\, (6 Q_1 L / (b d^2)) - N_0 \ge 0,

g_{R_3} = \delta_0 - \frac{4 Q_2 L^3}{E b d^3} \ge 0,

g_{R_4} = R - \frac{6 Q_2 L}{b d^2} \ge 0,

where N_0 = 2 \times 10^6, \delta_0 = 0.15 in., and L = 30 in. A minimum reliability index of 3
is desired for each failure mode. It is clear that the beam design problem exhibits
nonlinear limit state functions (g_{R_1} through g_{R_4}), nonnormal random variables, and
multicriteria constraints.
The optimization process was started from the point d_0 = (1, 1). Both ap-
proaches result in an optimal solution d^* = (0.2941, 4.5559). The computational
cost for the two methods is compared in terms of the total number of g-function
evaluations taken by each method. The proposed decoupled RBDO method took 238
g-function evaluations as compared to 523 evaluations by the nested RBDO method.
This does not include derivative calculations, as analytical first order derivatives were
used. Therefore, it is noted that the proposed methodology is significantly more ef-
ficient compared to the traditional approach while providing the same solution.
5.2.4 Steel Column
This problem is taken from Kuschel and Rackwitz [35]. The problem is a steel
column with design vector d = (b, d, h), where
b = mean flange breadth,
d = mean flange thickness, and
h = mean height of the steel profile.
The length of the steel column (s) is 7500 mm. The objective is to min-
imize the cost function f = bd + 5h. The independent random vector, X =
(F_s, P_1, P_2, P_3, B, D, H, F_0, E), and its stochastic characteristics are given in Table
5.6.
Table 5.6
STOCHASTIC PARAMETERS IN STEEL COLUMN PROBLEM

Variable             Symbol   Distribution   Mean/Standard deviation   Unit
Yield stress         F_s      Lognormal      400/35                    MPa
Dead weight load     P_1      Normal         500000/50000              N
Variable load        P_2      Gumbel         600000/90000              N
Variable load        P_3      Gumbel         600000/90000              N
Flange breadth       B        Lognormal      b/3                       mm
Flange thickness     D        Lognormal      d/2                       mm
Height of profile    H        Lognormal      h/5                       mm
Initial deflection   F_0      Normal         30/10                     mm
Young's modulus      E        Weibull        21000/4200                MPa

The limit state function in terms of the random vector, X, and the limit state pa-
rameters, \theta = d, is given as

G_R(X, \theta) = F_s - T \left( \frac{1}{A_s} + \frac{F_0}{M_s} \cdot \frac{E_b}{E_b - T} \right),

where

T = P_1 + P_2 + P_3,   (total axial load)
A_s = 2BD,   (area of the section)
M_s = BDH,   (section modulus)
M_i = \frac{1}{2} BDH^2,   (moment of inertia)
E_b = \frac{\pi^2 E M_i}{s^2}.   (Euler buckling load)

The means of the flange breadth b and flange thickness d must be within the intervals
[200, 400] and [10, 30] respectively. The interval [100, 500] defines the admissible
mean height h of the T-shaped steel profile. It is required that the optimal design
satisfies a reliability level of 3.
Again, both methods yield the same optimal solution, d = (200, 17.1831, 100).
The computational cost of the two approaches is compared in terms of the number
of g-function evaluations taken by each method. The proposed decoupled RBDO
methodology took 236 evaluations of the limit state function as compared to 457
evaluations taken by the nested RBDO approach. This does not include derivative
calculations, as analytical first order derivatives were used. Again, it is noted that
the proposed methodology is significantly more efficient compared to the traditional
approach while providing the same solution.
5.3 Summary
A new decoupled iterative RBDO methodology is presented. The deterministic optimization phase is separated from the reliability analysis phase. During the deterministic optimization phase, the most probable point of failure corresponding to each failure mode is obtained by using a first order Taylor series expansion about the design point from the previous cycle. The most probable point update during deterministic optimization requires the sensitivities of the MPPs with respect to the design vector, which in turn require second order derivatives of the failure modes. In this investigation, a damped BFGS update scheme is employed to approximate the second order derivatives. It is observed that the estimated most probable points converge to the exact values in a few cycles, which implies that the Hessian update scheme gives an accurate estimate of the second order information of the limit state functions. The framework is tested using a series of structural and multidisciplinary design problems. It is found that the proposed methodology provides the same solution as the traditional nested optimization formulation and is significantly more computationally efficient. For the problems considered, the decoupled RBDO methodology reduces the computational cost by a factor of 2 to 3 compared to the traditional approach.
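To make the MPP update concrete, a sketch in our own notation (not a formula reproduced from the dissertation): if u_i*(d^k) denotes the MPP of the i-th failure mode computed at the previous design d^k, the deterministic optimization phase uses the first order estimate

    ũ_i(d) ≈ u_i*(d^k) + (∂u_i*/∂d)|_{d^k} (d − d^k),

where the MPP sensitivity ∂u_i*/∂d is the term that requires second order derivatives of the limit state, approximated here by the damped BFGS scheme.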
CHAPTER 6
UNILEVEL RBDO METHODOLOGY
In this chapter, a novel unilevel formulation for reliability based design optimization is developed. In this formulation, the lower level optimization (the evaluation of reliability constraints in the double-loop formulation) is replaced by its corresponding first order Karush-Kuhn-Tucker (KKT) necessary optimality conditions at the upper level optimization. It is shown that such a replacement is computationally equivalent to solving the original nested optimization if the lower level optimization problem is solved by numerically satisfying the KKT conditions (which is typically the case). Numerical studies show that the proposed formulation is numerically robust (stable) and computationally efficient compared to existing approaches for reliability based design optimization.
6.1 A new unilevel RBDO methodology
The main focus of this research has been to develop a robust and efficient formulation for performing RBDO. As mentioned earlier, the probabilistic constraint specification using the performance measure approach is robust compared to the reliability index approach. However, the methodology is still nested and is hence expensive. In this research, the inverse reliability analysis optimization problem is replaced by the corresponding first order necessary Karush-Kuhn-Tucker (KKT) optimality conditions. The KKT conditions for the reliability constraints similar to PMA (Eqns. (4.21)-(4.22)) are used. The treatment of Eqn. (4.22) is a bit subtle. No simple modification of Eqn. (4.22) will result in an equality constraint that is both quasiconvex and quasiconcave, which would be required for the sufficiency of the KKT conditions. For necessity of the KKT conditions, observe that ‖u‖ is convex and ‖u‖ − β ≤ 0 trivially satisfies Slater's constraint qualification (the feasible set has a strictly interior point) [41]. Assume that G^R(u, θ) is pseudoconvex with respect to u for each fixed θ. Now, G^R(u, θ) pseudoconvex and ‖u‖ convex mean that the KKT conditions are also sufficient; hence the original and KKT formulations will be equivalent. Therefore, to facilitate development of the current method, the inverse FORM problem can be restated as

    min_u G^R_i(u, θ)                                   (6.1)
    subject to ‖u‖ ≤ β.                                 (6.2)
The Lagrangian corresponding to this optimization problem is

    L = G^R(u, θ) + λ (‖u‖ − β),                        (6.3)

where λ is the scalar Lagrange multiplier. The first order necessary conditions for the problem are

    ∇_u G^R(u*, θ) + λ ∇_u(‖u*‖ − β) = 0,               (6.4)
    ‖u*‖ − β ≤ 0,                                       (6.5)
    λ ≥ 0,                                              (6.6)
    λ (‖u*‖ − β) = 0,                                   (6.7)

where u* is the solution point of the inverse reliability optimization problem when ‖u*‖ = β. The point u* = 0 is a special degenerate case, so assume henceforth that u* ≠ 0. From equation (6.4), we have (assuming λ ≠ 0)

    u* = −(1/λ) ‖u*‖ ∇_u G^R(u*, θ).                    (6.8)
Observe that Eqn. (6.8) implies

    λ = ‖∇_u G^R(u*, θ)‖ ≥ 0,                           (6.9)

which is consistent with Eqn. (6.6) and is valid even if λ = 0. Substituting for λ in equation (6.8) and rearranging,

    ∇_u G^R(u*, θ) / ‖∇_u G^R(u*, θ)‖ = −u*/‖u*‖.       (6.10)

Eqn. (6.10) says that u* and ∇_u G^R(u*, θ) point in opposite directions, which is consistent with u* being the closest point to the origin on the manifold G^R(u, θ) = constant.
Eqn. (6.10) holds whenever u* is the solution to the inverse reliability optimization problem, because ‖u‖ − β ≥ 0 satisfies the reverse convex constraint qualification (the equality constraint (4.22) is equivalent to the convex constraint (6.2) together with ‖u‖ − β ≥ 0; hence constraint qualifications are satisfied and the KKT condition (6.10) is necessary). In general, without the pseudoconvexity assumption on G^R, solving equation (6.10) does not necessarily imply that u* is the optimal solution to the optimization problem.
It should be noted that the KKT conditions for the direct and inverse FORM problems differ only in terms of which constraints are presented as equality constraints to the upper level optimizer. When using the KKT conditions of the direct FORM problem in the upper level optimization, the limit state function is presented as an equality constraint and the constraint on the reliability index is an inequality constraint. As mentioned earlier, it is possible to have cases where the limit state function never becomes zero; in other words, it is associated with zero (or one) failure probability. When such a case occurs, the formulation given by Kuschel and Rackwitz [36] might fail to yield a solution. In other words, it is numerically unstable.
In the current unilevel formulation, the first order conditions of the inverse FORM problem are used. The corresponding KKT conditions for the inverse reliability problem (Eqns. (6.1)-(6.2)) are

    h1_i ≡ ∇_u G^R_i(u_i, θ) + λ_i u_i/‖u_i‖ = 0,       (6.11)
    g1_i ≡ ‖u_i‖ − β^reqd_i ≤ 0,                        (6.12)
    h2_i ≡ λ_i (‖u_i‖ − β^reqd_i) = 0,                  (6.13)
    g2_i ≡ λ_i ≥ 0.                                     (6.14)
Using these first order optimality conditions, the unilevel RBDO architecture can be stated as follows:

    min_{d_aug} f(d, p, y(d, p))                        (6.15)
    d_aug = [d, u_1, .., u_{N_hard}, λ_1, .., λ_{N_hard}]
    subject to G^R_i(u_i, θ) ≥ 0,  i = 1, .., N_hard,   (6.16)
    h1_i = 0,  i = 1, .., N_hard,                       (6.17)
    h2_i = 0,  i = 1, .., N_hard,                       (6.18)
    g1_i ≤ 0,  i = 1, .., N_hard,                       (6.19)
    g2_i ≥ 0,  i = 1, .., N_hard,                       (6.20)
    g^D_j(d, p, y(d, p)) ≥ 0,  j = 1, .., N_soft,       (6.21)
    d^l ≤ d ≤ d^u.                                      (6.22)

If d* is a solution of Eqns. (4.1)-(4.4), then there exist u*_i and λ*_i such that [d*, u*_1, .., u*_{N_hard}, λ*_1, .., λ*_{N_hard}] is a solution of Eqns. (6.15)-(6.22). The converse is true under the mild assumption that all the functions G^R_i(u, θ) are pseudoconvex in u for each fixed θ.
It should be noted that the dimensionality of the problem has increased, as in the unilevel method given in Kuschel and Rackwitz [36]. The optimization is performed with respect to the design variables d, the MPPs of failure, and the Lagrange multipliers simultaneously. At the beginning of the optimization, u_i does not correspond to the true MPP at the design d; the exact MPPs of failure u*_i and the optimum design d* are found at convergence.
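To make the architecture concrete, the following is a minimal sketch (not the dissertation's implementation) of the unilevel PMA idea for a single hard constraint, using SciPy's SLSQP optimizer. The linear limit state G(u, d) = d − u_1 − 0.5 u_2, the cost d², and the starting point are assumptions chosen purely for illustration.

    import numpy as np
    from scipy.optimize import minimize

    beta_reqd = 3.0

    def G(u, d):
        # Toy limit state, safe when G >= 0 (an assumption for illustration)
        return d - u[0] - 0.5 * u[1]

    def grad_G_u(u, d):
        return np.array([-1.0, -0.5])

    # Augmented design vector z = [d, u1, u2, lam], cf. d_aug in Eqn. (6.15)
    constraints = [
        # (6.16): performance measure constraint G(u, d) >= 0
        {"type": "ineq", "fun": lambda z: G(z[1:3], z[0])},
        # (6.17): stationarity h1 = grad_u G + lam * u / ||u|| = 0
        {"type": "eq", "fun": lambda z: grad_G_u(z[1:3], z[0])
                                        + z[3] * z[1:3] / np.linalg.norm(z[1:3])},
        # (6.18): complementarity h2 = lam * (||u|| - beta_reqd) = 0
        {"type": "eq", "fun": lambda z: z[3] * (np.linalg.norm(z[1:3]) - beta_reqd)},
        # (6.19): g1 = ||u|| - beta_reqd <= 0, written as beta_reqd - ||u|| >= 0
        {"type": "ineq", "fun": lambda z: beta_reqd - np.linalg.norm(z[1:3])},
        # (6.20): g2 = lam >= 0
        {"type": "ineq", "fun": lambda z: z[3]},
    ]

    z0 = np.array([5.0, 2.0, 1.0, 1.0])   # start u away from the degenerate u = 0
    res = minimize(lambda z: z[0] ** 2, z0, method="SLSQP",
                   constraints=constraints)
    print(res.x)   # converges to d ~ 3.354 with ||u|| = 3 for this toy case

For this toy case the optimizer drives the MPP to the boundary ‖u‖ = β and the design to the smallest d for which the performance measure remains nonnegative, exactly the behavior the unilevel constraints (6.16)-(6.20) are meant to enforce.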
6.2 Test Problems
The proposed unilevel method is implemented for a simple analytical problem to illustrate the method. A higher dimensional multidisciplinary structures control problem is used to illustrate the efficacy of the method for medium size (20-30 variable) engineering problems. The double loop methods (DLM) for RBDO that use the reliability index approach (RIA) or the performance measure approach (PMA) for reliability constraint evaluation are compared to the unilevel methods on the analytical problem. The unilevel method developed by Kuschel and Rackwitz [36] is referred to as unilevel-RIA, and the method developed in this investigation is referred to as unilevel-PMA. The proposed methodology (unilevel-PMA) is also compared with the double-loop PMA approach on the structures control problem.
6.2.1 Analytical Problem
The method is illustrated with a small analytical multidisciplinary test problem. This problem is chosen to illustrate the robustness of the proposed formulation compared to other methods. Even though the problem is just two-dimensional, it is sufficiently nonlinear and has the attributes of a general multidisciplinary problem. The problem has two design variables, d_1 and d_2, and two parameters, p_1 and p_2. There are two random variables, X_1 and X_2. The design parameters p_1 and p_2 are the means of the random variables X_1 and X_2, respectively. The problem involves a coupled system analysis with two CAs, and it has two hard constraints, g^R_1 and g^R_2. The reliability-based design optimization problem in standard form is
69
as follows.
Minimize : d
2
1
+ 10d
2
2
+y
1
subject to : g
R
1
= Y
1
(X, )/8 1 0
g
R
2
= 1 Y
2
(X, )/5 0
10 d
1
10
0 d
2
10
where d
1
=
1
, d
2
=
2
and p
1
=
X
1
= 0, p
2
=
X
2
= 0
CA
1
: Y
1
(X, ) =
2
1
+
2
0.2Y
2
(X, ) +X
1
;
y
1
(d, p) = d
2
1
+d
2
0.2y
2
(d, p) +p
1
;
CA
2
: Y
2
(X, ) =
1

2
2
+
_
Y
1
(X, ) +X
2
;
y
2
(d, p) = d
1
d
2
2
+
_
y
1
(d, p) +p
2
.
It is assumed that the random variables X
1
and X
2
have a uniform distribution over
the intervals [-1,1] and [-0.75,0.75] respectively. The desired value of the reliability
index
reqd
i
(for i = 1,2) is chosen as 3 for both the hard constraints.
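As a brief aside, reliability analysis with these bounded variables relies on the standard mapping between the original space and the standard normal u-space, X = F^{−1}(Φ(u)); a minimal sketch of that transformation for a uniform variable (illustrative code, not the dissertation's implementation):

    from scipy.stats import norm

    def u_to_x(u, lo, hi):
        # X uniform on [lo, hi]: X = F^{-1}(Phi(u)) = lo + (hi - lo) * Phi(u)
        return lo + (hi - lo) * norm.cdf(u)

    print(u_to_x(3.0, -1.0, 1.0))      # X1 at u = 3:  ~0.9973
    print(u_to_x(-3.0, -0.75, 0.75))   # X2 at u = -3: ~-0.748

This shows why a reliability index of 3 probes nearly, but not quite, the full support of the uniform variables.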
Figure 6.1 shows the contours of the merit function and the constraints. The zero contours of the hard constraints are plotted at the design parameters p_1 and p_2 (the means of the random variables X_1 and X_2). It should be noted that in deterministic optimization, two local optima exist for this problem. At the global solution, only the first hard constraint is active, whereas at the local solution both hard constraints are active. The two solutions are shown by star symbols, and both can be located easily by choosing different starting points in the design space.
Similarly, two local optimum designs exist for the RBDO problem as well. Both reliable designs get pushed into the feasible region, characterized by a higher
[Figure: contour plot over (d_1, d_2) showing objective contours and the zero contours g^R_1(p_1, p_2) = 0 and g^R_2(p_1, p_2) = 0.]
Figure 6.1. Contours of objective and constraints
merit function value and a lower probability of failure. They are shown by the shaded squares in Figure 6.2.
To locate the two local optimal solutions of this problem, two different starting points, [−5, 3] and [5, 3], are chosen. The results corresponding to the starting point [−5, 3] are listed in Table 6.1.
Table 6.1
STARTING POINT [−5,3], SOLUTION [−3.006,0.049]

Cost Measure   Double-Loop RIA   Double-Loop PMA   Unilevel RIA    Unilevel PMA
SA calls       Not converged     225               Not converged   24
Starting at the design d = [−5, 3], the proposed unilevel method, which uses the KKT conditions of the performance measure approach (PMA) to prescribe the probabilistic constraints, converges to the reliable optimum point without any difficulty.
[Figure: two contour plots over (d_1, d_2) marking the two reliable optimum designs.]
Figure 6.2. Plot showing two reliable optima.
The proposed unilevel method requires 24 system analysis evaluations as compared to 225 when using the traditional double loop PMA method. Analytical gradients were used in implementing this problem for all methods. Note that the double loop method that uses the reliability index approach to prescribe the probabilistic constraints does not converge: for the designs visited by the upper-level optimizer (say, d_k at the k-th iteration), the FORM problem does not have a solution (because of zero failure probability at these designs). Similar conclusions can be drawn for the unilevel-RIA method. Starting from the design [−5, 3], the optimizer tries to find the local design [−3.006, 0.049]. However, it turns out that at this design the second hard constraint, g^R_2, is never zero in the space of uniformly distributed random variables X. Since the unilevel-RIA method enforces the limit state function as an equality constraint, the optimizer does not converge.
The results corresponding to the starting point [5, 3] are listed in Table 6.2. Note that the double loop method that uses the RIA for probabilistic constraint
Table 6.2
STARTING POINT [5,3], SOLUTION [2.9277,1.3426]

Cost Measure   Double-Loop RIA   Double-Loop PMA   Unilevel RIA   Unilevel PMA
SA calls       Not converged     184               24             21
evaluation fails to converge for this starting point too. Again, the reason is that there is zero failure probability (an infinite reliability index) at the designs visited by the upper-level optimizer, and therefore the lower level optimizer does not provide any true solution. All the other methods converge to the same local optimum solution. The computational cost associated with the two unilevel methods is comparable. However, the unilevel-PMA method developed here is numerically robust compared to the unilevel-RIA method. Both unilevel methods are found to be computationally more efficient than the double loop methods.
6.2.2 Control Augmented Structures Problem
Figure 6.3 shows the control-augmented structure as proposed by Sobieszczanski-Sobieski et al. [65]. This test problem has been used by Padmanabhan et al. [48] for testing RBDO methodologies, and the problem as described in Padmanabhan et al. [48] is given here. There are two coupled disciplines (contributing analyses (CAs)) in this problem: the structures subsystem and the controls subsystem. The structure is a 5-element cantilever beam, numbered 1-5 from the free end to the fixed end, as shown in Figure 6.3. Each element is of equal length, but the breadth and height are design variables. Three static loads, T_1, T_2, and T_3, are applied to the first three elements. The beam is also acted on by a time varying force P, which
[Figure: cantilever beam with static loads T_1, T_2, T_3, ramp force P = f(t), and controllers A and B.]
Figure 6.3. Control augmented structures problem.
is a ramp function. Controllers A and B are designed as optimal linear quadratic regulators to control the lateral and rotational displacements of the free end of the beam, respectively. The analysis is coupled since the weights of the controllers, which are assumed to be proportional to the control effort, are required for the mass matrix of the structure, while the eigenfrequencies and eigenvectors of the structure are required in the modal analysis for designing the controller, as shown in Figure 6.4. The damping matrix is taken to be proportional to the stiffness matrix by a
[Figure: block diagram of the system analysis (SA): CA1 (structures; static and dynamic analysis) exchanges eigenmodes and frequencies and controls weight with CA2 (controls; LQR); inputs are the design variables (beam dimensions and damping constant); outputs are the merit function (beam weight) and the constraints (stresses, displacements, natural frequencies).]
Figure 6.4. Coupling in control augmented structures problem.
factor c for the dynamic analysis of the structure. This damping parameter is also a design variable. The constraints arise from limits on static stresses, static and dynamic displacements, and the natural frequencies. The main objective is to minimize the total weight of the beam and the controllers.
Computation of Sensitivities of the SA
Sensitivities for the control augmented system analysis can be estimated using a finite difference scheme, or one can use analytic techniques. The sensitivities obtained from analytical techniques are superior to those calculated from finite difference techniques, especially for coupled systems, since finite difference techniques can give inaccurate and noisy derivatives, and it is also difficult to obtain accurate sensitivities for certain outputs, such as the natural frequencies and the mode shapes, using finite differences.
Since the problem being considered is coupled, one needs to use global sensitivity equations (GSEs), which are based on the implicit function differentiation rule. Sensitivities of the outputs of the structures module can be found using analytic and numerical techniques [26]. The sensitivities of the static displacements and stresses are quite easy to compute, but the computation of sensitivities of natural frequencies and corresponding mode shapes is more involved and can be done using various methods such as Nelson's method, the modal method, and the modified modal method. Sutter et al. [66] compare these methods in terms of computational cost and rate of convergence. Analytic sensitivities of the controls output require the computation of sensitivities of the solution of an algebraic Riccati equation used for obtaining a linear quadratic regulator, as described by Khot [30].
The design variables for this test problem are

    d = [b_1, b_2, b_3, b_4, b_5, h_1, h_2, h_3, h_4, h_5, c]^T,    (6.23)

where
b_i, h_i = breadth and height of the i-th element, respectively,
c = damping matrix to stiffness matrix ratio (scalar).
The random variables in the problem are
ρ = density of the beam material,
E = modulus of elasticity of the beam material, and
σ_a = ultimate static stress.
The constraints for the problem are formulated in terms of the allowable displacements (lateral and rotational), the first and second natural frequencies, and the stresses. They are

    g_i       = 1 − (dl_i/dla)²,     i = 1, .., 5,
    g_{i+5}   = 1 − (dr_i/dra)²,     i = 1, .., 5,
    g_11      = ω_1/ω_{1a} − 1,
    g_12      = ω_2/ω_{2a} − 1,
    g_{2i+11} = 1 − σ^r_i/σ_a,       i = 1, .., 5,
    g_{2i+12} = 1 − σ^l_i/σ_a,       i = 1, .., 5,
    g_{i+22}  = 1 − (ddl_i/ddla)²,   i = 1, .., 5,
    g_{i+27}  = 1 − (ddr_i/ddra)²,   i = 1, .., 5,

where
dl_i, dr_i = static lateral and rotational displacements of the i-th element, respectively,
dla, dra = maximum allowable static lateral and rotational displacements,
ω_1, ω_2 = first and second natural frequencies,
ω_{1a}, ω_{2a} = minimum required values for the first and second natural frequencies,
σ^r_i, σ^l_i = maximum static stresses at the right and left ends of the i-th element,
σ_a = maximum allowable static stress,
ddl_i, ddr_i = dynamic lateral and rotational displacements of the i-th element, and
ddla, ddra = maximum allowable dynamic lateral and rotational displacements.
The random variables for this problem, σ_a, ρ, and E, are assumed to be independent and normally distributed, with the statistical parameters given in Table 6.3.

Table 6.3
STATISTICAL INFORMATION FOR THE RANDOM VARIABLES

Random Variable   Mean     Standard Deviation
σ_a (psi)         30,000   3,000
ρ (lb/in³)        0.1      0.01
E (ksi)           10,500   1,050
For the RBDO test studies, constraints g_1, g_6, g_14, g_16, g_18, g_20, g_22, and g_28 are considered to be more important, and therefore only these are treated as hard constraints. The rest of the constraints are treated as soft (deterministic) constraints. The system probability of failure (P^allow_sys) was required to be 0.001, which was equally distributed among the 8 failure modes. This gives a desired reliability index of

    β^reqd_i = −Φ^{−1}(P^allow_sys / 8) = 3.6623

for each failure mode.
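This value can be checked numerically; a one-line sketch (assuming SciPy is available):

    # Per-mode reliability index: beta = -Phi^{-1}(P_sys_allow / 8)
    from scipy.stats import norm
    print(-norm.ppf(0.001 / 8))   # ~3.6623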
The RBDO was performed using two different methods: the double loop method that uses the PMA approach to prescribe the probabilistic constraints, and the unilevel-PMA method described earlier. The unilevel-PMA method results in a nontrivial problem for this test case. Since there are 8 failure modes and three random variables for each failure mode, there are 32 equality constraints imposed by the unilevel method. Also, since the unilevel method is solved in an augmented design space consisting of the original design variables, the MPP of failure for each failure mode, and the Lagrange multiplier for each failure mode, the dimensionality of the design vector for this test case is 43. It should be noted that the sensitivities of the first order KKT conditions (Eqn. (6.4)) require second order information for the failure modes with respect to the augmented design variable vector. In the present implementation, a damped BFGS update is used to obtain the second order information [42]. This method is defined by
the second order information [42]. This method is dened by
r
k
=
k
y
k
+ (1
k
)H
k
s
k
, (6.24)
where the scalar

k
=
_

_
1 : s
T
k
y
k
0.2s
T
k
H
k
s
k
0.8s
T
k
H
k
s
k
s
T
k
H
k
s
k
s
T
k
y
k
: s
T
k
y
k
0.2s
T
k
H
k
s
k
_

_
, (6.25)
and y
k
and s
k
are the dierences in the function and gradient values of the previous
iteration from the current iteration, respectively. The Hessian update is
H
k+1
= H
k

H
k
s
k
s
T
k
H
k
s
T
k
H
k
s
k
+
r
k
r
T
k
s
T
k
r
k
. (6.26)
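A minimal sketch of this update as a standalone routine (our own code, not the dissertation's implementation; H is the current positive definite approximation, s and y the iterate and gradient differences):

    import numpy as np

    def damped_bfgs_update(H, s, y):
        """Return the damped BFGS Hessian approximation of Eqns. (6.24)-(6.26)."""
        Hs = H @ s
        sHs = s @ Hs
        sy = s @ y
        # Damping factor theta_k of Eqn. (6.25): guarantees s^T r > 0 so that
        # the updated matrix stays positive definite even when s^T y is small
        # or negative.
        if sy >= 0.2 * sHs:
            theta = 1.0
        else:
            theta = 0.8 * sHs / (sHs - sy)
        r = theta * y + (1.0 - theta) * Hs          # Eqn. (6.24)
        # Standard BFGS formula with y replaced by r, Eqn. (6.26)
        return H - np.outer(Hs, Hs) / sHs + np.outer(r, r) / (s @ r)

The damping interpolates between the true gradient difference y and the curvature already stored in H, which is what keeps the update well defined when the exact curvature condition fails.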
The starting and final designs for RBDO are given in Table 6.4. The deterministic optimum design was chosen as the starting design for RBDO; it was noted that this significantly reduces the required number of system analysis evaluations. For the proposed method, the designer has to choose a starting point for the MPPs in the augmented design vector, and different choices for the initial MPPs may lead to different final designs. In this study, an inverse reliability analysis was performed at the initial design d to identify a suitable starting point for the MPP of failure for each hard constraint. At the initial design, g_1 is active. Both the DLM-PMA and the unilevel-PMA methods yield the same final design. Note that the value of the merit function (the weight of the beam) has increased considerably at
Table 6.4
MERIT FUNCTION AT THE INITIAL AND FINAL DESIGNS

Variable   Initial Design   Final Design
b_1-b_5    3                3
h_1        3.703            3.5805
h_2        7.040            8.5816
h_3        9.807            12.136
h_4        11.998           14.863
h_5        13.840           17.162
c          0.06             0.06
f          1493.97          1753.5
the final design. This is expected for a more reliable structure, to account for the variation in the random variables.
The values of the hard constraints at the final design are given in Table 6.5. It should be noted that the constraints g_6, g_16, g_18, g_20, and g_22 dictate the system failure: the reliability constraints corresponding to these constraints are the only active constraints in the RBDO. The other hard constraints have values greater than zero, which means that the reliability indices corresponding to those constraints are much higher than the desired index.
The computational cost of the two methods is compared in Table 6.6. The proposed method is observed to be twice as fast as the nested approach. Therefore, the proposed method is not only a robust formulation for RBDO problems, but is also computationally efficient.
6.3 Summary
A traditional RBDO methodology is very expensive for designing engineering systems. To address this issue, a new unilevel formulation for RBDO is developed.
Table 6.5
HARD CONSTRAINTS AT THE FINAL DESIGN

g_i    value at optimum
g_1    0.2232
g_6    4.7 × 10⁻⁸
g_14   0.1794
g_16   1.1 × 10⁻¹⁶
g_18   1.1 × 10⁻¹⁶
g_20   1.1 × 10⁻¹⁶
g_22   3.3 × 10⁻¹⁶
g_28   0.3749
Table 6.6
COMPARISON OF COMPUTATIONAL COST OF RBDO METHODS

Method         SA Calls
DLM-PMA        493
Unilevel-PMA   261
The first order KKT conditions corresponding to the probabilistic constraints (as in PMA) are enforced directly at the system level optimizer, thus eliminating the lower level optimizations used to compute the probabilistic constraints. The proposed formulation is solved in an augmented design space that consists of the original decision variables, the MPP of failure corresponding to each failure driven mode, and the Lagrange multipliers. It is mathematically equivalent to the original nested optimization formulation if the inner optimization problem is solved by satisfying the KKT conditions (which is effectively what most numerical optimization algorithms do). Under mild pseudoconvexity assumptions on the hard constraints, the proposed formulation is mathematically equivalent to the original nested formulation. The method is tested using a simple analytical problem and a multidisciplinary structures control problem, and it is observed to be numerically robust and computationally efficient compared to existing approaches for RBDO.
It is noted that the proposed formulation for RBDO is accompanied by a large number of equality constraints. Most commercial optimizers exhibit numerical instability or show poor convergence behavior for problems with large numbers of equality constraints. In the next chapter, continuation methods are employed to solve the unilevel RBDO problem.
CHAPTER 7
CONTINUATION METHODS IN OPTIMIZATION
In the unilevel formulation developed in the last chapter, the KKT conditions of the inner optimization for each probabilistic constraint evaluation are imposed at the system level as equality constraints. Most commercial optimizers are numerically unstable when applied to problems with many equality constraints. In this chapter, continuation methods are used for constraint relaxation, yielding a simpler problem for which the solution is known. A series of optimization problems is then solved as the relaxed optimization problem approaches the original problem.
7.1 Proposed Algorithm
Since the problem of interest is accompanied by a large number of equality constraints, it is extremely important that the constraint relaxation technique make it easy to identify an initial feasible starting point. In this investigation, continuation methods are used for this purpose [73]. Homotopy (continuation) techniques have been shown to be extremely robust in the works of Perez et al. [50]. The constraint relaxation used in this investigation is of the following form:

    gr ≡ g + (1 − ξ) b ≥ 0,                             (7.1)
    hr ≡ h + (1 − ξ) c = 0,                             (7.2)

where g and h are generic inequalities and equalities, and gr and hr are the relaxed inequalities and equalities, respectively. The constants b and c are chosen to make the relaxed constraints feasible at the beginning. For the inequalities, b is based on the value of g: if g ≥ 0, b is set equal to zero; if g < 0, b is set equal to the negative of g. Similarly, for the equalities, the constant c is evaluated to satisfy the relaxed equality at the initial design; it is set equal to the negative of the value of h in the current studies. The homotopy parameter ξ drives the relaxed constraints to the original constraints as it is gradually adjusted from ξ = 0 to ξ = 1.
After each cycle, the homotopy parameter is updated; in this investigation, it is incremented by a constant value. Note that the homotopy parameter allows a sequence of simpler problems to be solved starting from a known solution. As the parameter is changed from 0 to 1, the solution to the original problem is found.
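A minimal sketch of this relaxation and the continuation loop (illustrative names; the inner NLP solve is left as a hypothetical placeholder):

    import numpy as np

    def relaxation_offsets(g0, h0):
        # b makes every relaxed inequality feasible at the initial design:
        # b = 0 where g >= 0 already, b = -g where g < 0.
        b = np.where(g0 >= 0.0, 0.0, -g0)
        # c = -h makes every relaxed equality hold exactly at the start.
        c = -h0
        return b, c

    def relaxed_constraints(g, h, b, c, xi):
        gr = g + (1.0 - xi) * b     # relaxed inequalities, gr >= 0
        hr = h + (1.0 - xi) * c     # relaxed equalities,  hr == 0
        return gr, hr

    # Continuation loop: solve a sequence of problems as xi goes 0 -> 1,
    # warm-starting each solve from the previous solution.
    # for xi in np.linspace(0.0, 1.0, 11):
    #     x = solve_nlp(x_start=x, xi=xi)   # hypothetical NLP solve

At ξ = 0 the starting point is feasible by construction; at ξ = 1 the relaxed problem coincides with the original one.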
7.2 Test Problems
The proposed algorithm is implemented for a short rectangular column design problem and a steel column design problem. Both test problems are taken from the literature and have been used to test RBDO methodologies.
7.2.1 Short Rectangular Column
The design problem is to determine the depth h and width b of a short column with a rectangular cross section and minimal total mass bh, assuming unit mass per unit area. The uncertain vector, X = (P, M, Y), its stochastic parameters, and the correlations of the vector elements are listed in Table 7.1. The limit
Table 7.1
STOCHASTIC PARAMETERS IN SHORT COLUMN PROBLEM

Variable             Dist.   Mean/St. dev.   Cor. P   Cor. M   Cor. Y
Axial force (P)      N       500/100         1        0.5      0
Bending moment (M)   N       2000/400        0.5      1        0
Yield stress (Y)     LogN    5/0.5           0        0        1
state function, in terms of the random vector X = (P, M, Y) and the limit state parameters θ = (b, h) (which happen to be the same as the design vector d in this problem), is given by

    g^R(X, θ) = 1 − 4M/(b h² Y) − P²/(b h Y)².          (7.3)

The objective function is given by

    f(d) = bh.                                          (7.4)
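A minimal sketch evaluating the limit state (7.3) and the objective (7.4) directly; the sample call uses the mean values of X and the reported optimum purely as a sanity check:

    def short_column_g(P, M, Y, b, h):
        # Limit state (7.3): g >= 0 indicates the safe region
        return 1.0 - 4.0 * M / (b * h**2 * Y) - (P / (b * h * Y))**2

    def mass(b, h):
        return b * h   # objective (7.4), unit mass per unit area

    # At d* = (8.668, 25.0) and the means of X the design is safely inside
    # the limit state (g ~ 0.49 > 0), as expected for a reliable optimum.
    print(short_column_g(P=500.0, M=2000.0, Y=5.0, b=8.668, h=25.0))
    print(mass(8.668, 25.0))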
The depth h and the width b of the rectangular column must satisfy 15 ≤ h ≤ 25 and 5 ≤ b ≤ 15. The allowable failure probability is 0.00621; in other words, the reliability index for the failure mode must be greater than or equal to 2.5. The optimization process was started from the point (u^0, d^0, λ^0) = ((1, 1, 1), (5, 15), 0.1). The optimal solution for this problem is d* = (8.668, 25.0).
Figure 7.1 shows the history of the objective function. It is noted that as the value of the homotopy parameter increases from 0 to 1, the objective function gradually approaches the optimal solution.
Figure 7.2 shows the history of the augmented design variables. Observe that the variables gradually approach the optimal solution of the original problem. The homotopy parameter controls the progress of the optimization process. For highly nonlinear problems, it might be difficult to locate the solution directly. The use
[Figure: objective function value f versus homotopy parameter ξ.]
Figure 7.1. Convergence of objective function.
of the homotopy parameter allows the optimizer to start from a known solution and make gradual progress towards the optimal solution.
7.2.2 Steel Column
The problem is a steel column with design vector d = (b, d, h), where
b = mean of flange breadth,
d = mean of flange thickness, and
h = mean of height of the steel profile.
The length of the steel column (s) is 7500 mm. The objective is to minimize the cost function f = bd + 5h. The independent random vector, X = (F_s, P_1, P_2, P_3, B, D, H, F_0, E), and its stochastic characteristics are given in Table 7.2.
The limit state function in terms of the random vector, X, and the limit state
[Figure: convergence of the augmented optimization variables b, h, u_1, u_2, u_3, and λ versus the homotopy parameter ξ.]
Figure 7.2. Convergence of optimization variables.
parameters, θ = d, is given as

    G^R(X, θ) = F_s − T ( 1/A_s + (F_0/M_s) · E_b/(E_b − T) ),

where T = P_1 + P_2 + P_3 is the total axial load and

    A_s = 2BD,                 (area of section)
    M_s = BDH,                 (modulus of section)
    M_i = (1/2) B D H²,        (moment of inertia)
    E_b = π² E M_i / s².       (Euler buckling load)
The means of the flange breadth b and flange thickness d must be within the intervals [200, 400] and [10, 30], respectively. The interval [100, 500] defines the admissible mean height h of the T-shaped steel profile. It is required that the optimal design satisfy a reliability level of β = 3.
The optimal solution for this problem is d = (200, 17.1831, 100). A similar convergence history was observed for this test problem as well. Figure 7.3 shows the
Table 7.2
STOCHASTIC PARAMETERS IN STEEL COLUMN PROBLEM

Variable            Symbol   Distribution   Mean/Standard deviation   Unit
Yield stress        F_s      Lognormal      400/35                    MPa
Dead weight load    P_1      Normal         500000/50000              N
Variable load       P_2      Gumbel         600000/90000              N
Variable load       P_3      Gumbel         600000/90000              N
Flange breadth      B        Lognormal      b/3                       mm
Flange thickness    D        Lognormal      d/2                       mm
Height of profile   H        Lognormal      h/5                       mm
Initial deflection  F_0      Normal         30/10                     mm
Young's modulus     E        Weibull        21000/4200                MPa
convergence of the objective function. Again, it is observed that the homotopy parameter controls the progress of the optimization process.
7.3 Summary
An optimization methodology for reliability based design is presented. From the authors' experience, the unilevel formulation for RBDO, when directly coupled with an optimizer, may have convergence difficulties if there are many equality constraints or if the problem is very nonlinear. Since the unilevel formulation is usually accompanied by many equality constraints, homotopy techniques are used to relax the constraints and identify a starting point that is feasible with respect to the relaxed constraints. In this investigation, the homotopy parameter is incremented by a fixed value, and a series of optimization problems is solved for various values of the homotopy parameter as the relaxed problem approaches the original problem. It is found to be easier to solve the relaxed problem from a known solution and make gradual progress towards the solution than to solve the problem directly.
[Figure: objective function value f versus homotopy parameter ξ.]
Figure 7.3. Convergence of objective function.
The proposed strategy is tested on two design problems. It is observed that the homotopy parameter controls the progress made in each cycle of the optimization process. As the homotopy parameter approaches the value of 1, the optimal solution to the original problem is obtained.
CHAPTER 8
RELIABILITY BASED DESIGN OPTIMIZATION UNDER EPISTEMIC UNCERTAINTY
Advances in computational performance have led to the development of large-scale simulation tools for design. Systems generated using such simulation tools can fail in service if the uncertainty in the simulation tools' performance predictions is not accounted for. In this chapter, an investigation of how uncertainty can be quantified in multidisciplinary systems analysis subject to epistemic uncertainty associated with the disciplinary design tools and input parameters is undertaken. Evidence theory is used to quantify uncertainty in terms of the uncertain measures of belief and plausibility. To illustrate the methodology, multidisciplinary analysis problems are introduced as an extension to the epistemic uncertainty challenge problems identified by Sandia National Laboratories.
After uncertainty has been characterized mathematically, the designer seeks the optimum design under uncertainty. The measures of uncertainty provided by evidence theory are discontinuous functions. Such non-smooth functions cannot be used in traditional gradient-based optimizers because the sensitivities of the uncertain measures do not exist. In this investigation, surrogate models are used to represent the uncertain measures as continuous functions, and a sequential approximate optimization approach is used to drive the optimization process. The methodology is illustrated in application to multidisciplinary example problems.
8.1 Epistemic uncertainty quantification
In this section, a brief description of uncertainty quantification in multidisciplinary system analysis is given. The nature of a multidisciplinary system was explained in detail in chapter 3, along with the associated uncertainties. A simplified model is shown again in Figure 8.1.

Figure 8.1. Simplified Multidisciplinary Model

There are two subsystems, CA1 and CA2, interacting with each other. The two subsystem simulation models have model form uncertainties ε_1 and ε_2, respectively. There are also uncertain parameters p and q, which are inputs to CA1 and CA2, respectively. The model form uncertainty and the parametric uncertainty are known within intervals with prescribed BPAs (basic probability assignments), as shown in Figure 8.2 for a variable a.
In general, the intervals for an unknown parameter can be nested or non-nested, and they are usually obtained by experimental means or from expert opinion. Ex-
Figure 8.2. Known BPA structure

pert knowledge is primarily intuitive. In a given situation, an expert makes an intuitive decision based on judgement, experience, and the context of the problem, and the decision has a high probability of being correct. The intervals for the model form uncertainty ε_i usually represent the difference between the value that a given mathematical model predicts and that of the actual system. The corresponding BPA reflects the degree of belief of an expert in the uncertainty of the values given by the mathematical model. Given this information, our objective is to determine the belief and plausibility of y ≥ y_reqd. The algorithmic steps are outlined below.
(1) Combine the evidence obtained from different sources for each uncertain variable and for each model form uncertainty, e.g., (p, q, ε_1, ε_2). Dempster's rule of combination is employed in this investigation.
(2) Determine the BPAs for all possible sets of the uncertain variables and model uncertainties. For example, if p is given by 2 intervals, q is given by 3 intervals, ε_1 is given by 3 intervals, and ε_2 is given by 2 intervals, the number of possible combinations of the intervals is the product of all of them, which equals 36. The BPA for each combination is simply the product of the BPAs of its intervals, assuming them to be independent.
(3) Propagate each set C (e.g., C_ijkl = [p_i, q_j, ε_1k, ε_2l], where i, j, k, l are the indices for the intervals of p, q, ε_1, ε_2, respectively) through the system analysis (Figure 8.1) and obtain the bounds on the states y for each C. This is performed for the given design x.
(4) Determine the belief and plausibility of y ≥ y_reqd using Equations 1 and 2, respectively.
The above steps are now illustrated with an example problem. Researchers at Sandia National Laboratories have identified a suite of epistemic uncertainty challenge problems [46], and one of the problems is adopted here for illustration purposes. The mathematical model is given by the following equation:

    y = (a + b)^a,                                      (8.1)

where a and b are the input uncertain variables and y is the output response of the model. The available evidence for a and b is assumed in order to illustrate the calculation of belief and plausibility. The information from each expert is given as intervals with their BPAs.
From expert 1, for variable a:
a_11 = [0.6, 1], a_12 = [1, 2], a_13 = [2, 3]
m(a_11) = 0.3, m(a_12) = 0.6, m(a_13) = 0.1

From expert 1, for variable b:
b_11 = [1.2, 1.6], b_12 = [1.6, 2], b_13 = [2, 3]
m(b_11) = 0.3, m(b_12) = 0.3, m(b_13) = 0.4

where a_ij and b_ij refer to the j-th proposition from the i-th expert for variables a and b, respectively.
Similarly, from expert 2, for variable a:
a_21 = [0.6, 3], a_22 = [1, 2]
m(a_21) = 0.6, m(a_22) = 0.4

From expert 2, for variable b:
b_21 = [1.2, 2], b_22 = [2, 3]
m(b_21) = 0.8, m(b_22) = 0.2
Step 1: Since the evidence for a and b comes from two sources, use Dempster's rule of combination (Equation 3.12) to combine them. The combined evidence is as follows:

a_c1 = [0.6, 1], a_c2 = [1, 2], a_c3 = [2, 3]
m(a_c1) = 0.2143, m(a_c2) = 0.7143, m(a_c3) = 0.0714

b_c1 = [1.2, 1.6], b_c2 = [1.6, 2], b_c3 = [2, 3]
m(b_c1) = 0.4286, m(b_c2) = 0.4286, m(b_c3) = 0.1429

where the subscript c refers to the combined evidence.
Step 2: Using the combined evidence for a and b, obtain all possible sets of the intervals and their BPAs:

    C_ij = [a_ci, b_cj],
    m_c(C_ij) = m(a_ci) m(b_cj).
Step 3: Obtain the lower and upper bounds of the system response y corresponding to each set C_ij. Since we have assumed monotonicity of y in C_ij, Equation 8.1 is evaluated only at the vertices of C_ij. For example, for C_11 = [a_c1, b_c1] = [[0.6, 1], [1.2, 1.6]], the function is evaluated at the points [0.6, 1.2], [0.6, 1.6], [1, 1.2], and [1, 1.6] to obtain the bounds on the state y. Note that the state y given by Equation 8.1 is indeed monotonic in the intervals chosen for the variables a and b.
Bounds for y for each set and corresponding BPAs:

Set    m_c(C_ij)   Lower Bound   Upper Bound
C_11   0.0918      1.4229        2.6
C_12   0.0918      1.6049        3
C_13   0.0306      1.7741        4
C_21   0.3061      2.2000        12.96
C_22   0.3061      2.6000        16
C_23   0.1020      3.0000        25
C_31   0.0306      10.2400       97.336
C_32   0.0306      12.9600       125
C_33   0.0102      16.0000       216
Step 4: Compute the belief and plausibility of y ≥ y_reqd. The belief and plausibility plots are shown in Figure 8.3 as functions of y_reqd.
The propagation of C through the system analysis requires some discussion. In general, a global optimization problem needs to be solved to determine exact bounds on the states corresponding to each C; examples of suitable techniques are genetic algorithms and branch and bound algorithms. In our research, we have assumed that the state information is monotonic in each C. Hence, the system analysis (considered to be expensive) is evaluated only at the vertices of the set C. Using this information, the belief and plausibility are easily determined. The examples used in this investigation are monotonic in the space of uncertain variables. However, if nothing is known about the behavior of the states with respect to the uncertain variables, we must use discretization or global optimization techniques to find exact bounds on the state variables corresponding to each C.

[Figure: Bel(y ≥ y_reqd) and Pl(y ≥ y_reqd) versus y_reqd on a logarithmic axis.]
Figure 8.3. Complementary Cumulative Belief and Plausibility Function
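The four steps can be condensed into a short program. The following sketch is our own code, not the dissertation's implementation; it treats zero-width interval intersections as conflict, which reproduces the combined BPAs listed above, and it takes the event of interest to be y ≥ y_reqd as in Step 4:

    from itertools import product

    def dempster(m1, m2):
        """Combine two interval BPAs {(lo, hi): mass} by Dempster's rule."""
        combined, conflict = {}, 0.0
        for (A, mA), (B, mB) in product(m1.items(), m2.items()):
            lo, hi = max(A[0], B[0]), min(A[1], B[1])
            if lo < hi:                      # keep positive-width intersections
                combined[(lo, hi)] = combined.get((lo, hi), 0.0) + mA * mB
            else:
                conflict += mA * mB          # empty/zero-width -> conflict
        return {A: m / (1.0 - conflict) for A, m in combined.items()}

    ma = dempster({(0.6, 1): 0.3, (1, 2): 0.6, (2, 3): 0.1},
                  {(0.6, 3): 0.6, (1, 2): 0.4})      # -> 0.2143/0.7143/0.0714
    mb = dempster({(1.2, 1.6): 0.3, (1.6, 2): 0.3, (2, 3): 0.4},
                  {(1.2, 2): 0.8, (2, 3): 0.2})      # -> 0.4286/0.4286/0.1429

    def y(a, b):
        return (a + b) ** a                  # Equation (8.1)

    def bel_pl(y_reqd):
        bel = pl = 0.0
        for (A, mA), (B, mB) in product(ma.items(), mb.items()):
            vals = [y(a, b) for a in A for b in B]   # vertices; y is monotonic
            m = mA * mB
            if min(vals) >= y_reqd:
                bel += m                     # whole cell supports the event
            if max(vals) >= y_reqd:
                pl += m                      # cell is consistent with the event
        return bel, pl

    print(bel_pl(3.0))   # Bel and Pl of y >= 3 from the table above

Running the sketch reproduces the combined BPAs and the bounds of the table above, and sweeping y_reqd traces out the staircase functions plotted in Figure 8.3.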
This example illustrates how evidence theory can be used to characterize epistemic uncertainty. Our goal in this research is to use evidence theory to characterize uncertainty in multidisciplinary systems and to optimize these systems under uncertainty. In the next section, we briefly discuss the deterministic and non-deterministic forms of the optimization problem.
8.2 Deterministic Optimization
Deterministic design optimization treats the design variables, parameters, and responses as quantities that can be defined precisely. A conventional deterministic optimization problem is of the following form:

    minimize:   f(x)                                    (8.2)
    subject to: g(x) ≥ 0                                (8.3)
                x_min ≤ x ≤ x_max                       (8.4)

where f is the objective function to be minimized, subject to the inequality constraints g and the variable bounds x_min and x_max.
8.3 Optimization under epistemic uncertainty
Designs optimized without considering the uncertainty in the design variables, parameters, and simulation based design tools are unreliable and can be subject to failure in service. Therefore, it is imperative that designers consider the reliability of the resulting designs. Reliability based design optimization (RBDO) methods employ numerical optimization to obtain optimal designs that are reliable. In RBDO, the constraints for an optimization problem are formulated in terms of the probabilities of failure (or reliability indices) corresponding to the failure modes (also known as limit state functions) [54, 14, 15, 59, 1]. Such an analysis requires the exact values of the distribution parameters (means and variances) and the exact form of the distribution functions of the random variables. RBDO methods make use of non-deterministic formulations for the constraints and therefore belong to the class of non-deterministic optimization problems.
It is not always possible to have all the information for the uncertain variables. In such cases, the unknown information for the uncertain variables is normally assumed. Systems obtained by RBDO under such critical assumptions may fail in practice. Therefore, new methods must be developed for optimization under uncertainty when the information available for the uncertain parameters is sparse.
96
A typical non-deterministic optimization problem is of the following form.
minimize : f(x) (8.5)
subject to : G(x) = U
M
(g(x) 0) U
M
reqd
0 (8.6)
x
min
x x
max
(8.7)
where G(x) are the non-deterministic uncertainty based constraints. It is required
that the value of the uncertain measure U
M
to be at least U
M
reqd
. Note that U
M
(g(x)
0) is a measure of the reliability of the system since g(x) 0 implies feasibility.
When little information is available, evidence theory can be used to provide un-
certain measures such as belief and plausibility. Having quantied uncertainty in
terms of belief and plausibility measures, we are now ready to use these measures in
non-deterministic design optimization (Equations 8.5-8.7). However, a careful study
of Fig. 8.3 shows that the uncertain measures, belief and plausibility, are discontinu-
ous functions. The discontinuities are associated with the dierent combinations C.
These uncertainty measures are thus nonsmooth functions and their sensitivities do
not exist, restricting us from using traditional gradient-based optimizers. Surrogate
models can be used instead and the optimization can be performed on them. A
sequential approximate optimization approach is employed in this investigation for
optimization. It is described in the next section.
8.4 Sequential Approximate Optimization
Engineering design of multidisciplinary systems often involves the use of large-scale simulation based design tools. A common practice in industry is to use surrogate models in place of these expensive computer simulations to drive the multidisciplinary design process based on nonlinear programming techniques. In addition, if the responses of the system analysis are discontinuous or noisy, it is better to use approximation techniques to obtain smooth functions for use in the design optimization process. In this research, the constraints being dealt with are discontinuous, and they are evaluated using complex multidisciplinary simulation tools. Response surface approximations are therefore used to obtain smooth functions, and a sequential approximate optimization strategy is used for design optimization. Equations 8.5 and 8.6 are thus approximated by Equations 8.8 and 8.9, respectively, and an approximate optimization problem is formulated as shown below:

    minimize:   f̃(x)                                   (8.8)
    subject to: g̃(x) ≥ 0                               (8.9)
                x_lower ≤ x ≤ x_upper                   (8.10)

where f̃ and g̃ are the approximations to the objective function and the constraints, respectively, within local variable bounds x_lower and x_upper obtained using a trust region methodology. Second order polynomial approximations are used in this research. The approximations employ zero order matching; the gradient and Hessian terms are obtained from a least squares fit.
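A minimal sketch of such a quadratic response surface with zero order matching at the current design point (illustrative code under the stated assumptions, not the dissertation's implementation):

    import numpy as np
    from itertools import combinations_with_replacement

    def quad_features(X, x0):
        # Shifted linear and quadratic monomials; all vanish at x = x0,
        # so the fitted surrogate matches f exactly at the center point.
        D = np.atleast_2d(X) - x0
        quad = [D[:, i] * D[:, j] for i, j in
                combinations_with_replacement(range(D.shape[1]), 2)]
        return np.column_stack([D, np.column_stack(quad)])

    def fit_surrogate(X, f, x0, f0):
        """Least squares fit of f(x) ~ f0 + c^T phi(x - x0)."""
        A = quad_features(X, x0)
        c, *_ = np.linalg.lstsq(A, f - f0, rcond=None)  # gradient/Hessian terms
        return lambda x: f0 + quad_features(x, x0) @ c

The coefficient vector c carries the approximate gradient and Hessian information referred to in the text, while the constant term is pinned to the exact function value at x0 (zero order matching).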
Trust region model management strategies have been used in previous studies to drive the sequential approximate optimization of multidisciplinary systems [57, 56, 51]. Trust region approaches manage the selection of move limits (i.e., local variable bounds) for each sequence of approximate minimization based on the value of the merit function obtained in the previous sequence. They are based on a trust region ratio ρ, which is used to monitor the performance of the current approximation with respect to the actual function. At the k-th iteration, a local optimization problem of the form of Equations 8.8-8.10 is solved around the current design point x_k. The move limits are defined by the trust region Δ_k, where ‖x − x_k‖_p ≤ Δ_k and the p-norm defines the shape of the region. Δ_k is known as the trust region radius. In this particular implementation, Δ_k defines a hypercube around x_k, which defines the local bounds x_lower and x_upper.
In previous implementations [57, 56, 51] of sequential approximate optimization, the trust region ratio ρ_k was obtained based on the performance of a merit function, which is an augmented Lagrangian. An augmented Lagrangian or penalty function is required to guarantee convergence of the trust region managed sequential approximate optimization approach. In the preliminary study of Agarwal et al. [3], ρ_k is obtained based only on the performance of the objective function; to manage convergence, infeasible designs are rejected and the trust region is reduced. That approach is not provably convergent, as discussed in Agarwal et al. [3]. To address this issue, the implementation is modified here and the trust region ratio is obtained based on the performance of the Lagrangian L = f + λ^T g. Since true gradients of the constraints do not exist, the gradient information of the approximations is employed; using those approximate gradients and a least squares approach, an estimate of the Lagrange multipliers is obtained. The solution of the approximate minimization problem (Equations 8.8-8.9) subject to the local trust region gives a new point x*_{k+1}. The trust region ratio ρ_k is computed based on the value of the Lagrangian at the new design point:

    ρ_k = (L(x_k) − L(x*_{k+1})) / (L̃(x_k) − L̃(x*_{k+1})).    (8.11)
ρ_k is the ratio of the actual change in the Lagrangian to the change predicted by the approximation. A value of ρ_k close to one means that the approximation is good, whereas a negative value of ρ_k indicates a poor approximation. If the value of ρ_k is greater than zero, the new point is accepted, i.e., x_{k+1} = x*_{k+1}, and the trust region radius Δ_k is updated according to the following rules:

    Δ_{k+1} = c_1 Δ_k   if ρ_k < R_1,
    Δ_{k+1} = c_2 Δ_k   if ρ_k > R_2,                   (8.12)
    Δ_{k+1} = Δ_k       otherwise.

Commonly used values for the limiting range constants are R_1 = 0.25 and R_2 = 0.75. The trust region multiplication factors c_1 and c_2 are commonly set to 0.25 and 2, respectively.
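A minimal sketch of this management logic (constants as given above; function and variable names are illustrative):

    R1, R2 = 0.25, 0.75     # limiting range constants
    c1, c2 = 0.25, 2.0      # trust region multiplication factors

    def update_radius(delta, L_k, L_new, Lhat_k, Lhat_new):
        # Trust region ratio (8.11): actual vs. predicted reduction in the
        # Lagrangian (L from the true model, Lhat from the approximation).
        rho = (L_k - L_new) / (Lhat_k - Lhat_new)
        if rho < R1:
            delta = c1 * delta   # poor approximation: shrink the region
        elif rho > R2:
            delta = c2 * delta   # good approximation: expand the region
        accept = rho > 0.0       # accept the step only if L actually decreased
        return delta, accept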
8.5 Test Problems
The sequential approximate optimization approach is used to drive the non-deterministic optimization process, with evidence theory used to estimate the epistemic uncertainty in terms of belief. The approach is implemented in application to two multidisciplinary test problems: a small analytic problem and a higher dimensional aircraft concept sizing test problem. Both problems involve coupled system analyses. They are described in the following sections.
8.6 Analytic Test Problem
The first test problem is an analytic problem. The system analysis takes two design variables and outputs two states. The objective function is nonlinear, and there are two nonlinear constraints. The problem has two optimal solutions. This small problem is useful for visualizing the results and understanding the performance of the proposed method. The deterministic form of the optimization problem is as follows:

    minimize:   f(x) = x_1² + 10 x_2² + y_1             (8.13)
    subject to: g_1 = y_1 − 8 ≥ 0                       (8.14)
                g_2 = 5 − y_2 ≥ 0                       (8.15)
                −10 ≤ x_1 ≤ 10                          (8.16)
                0 ≤ x_2 ≤ 10                            (8.17)

where the states y_1 and y_2 are calculated by the contributing analyses CA_1 and CA_2, respectively, as

    CA_1: y_1 = x_1² + x_2 − 0.2 y_2                    (8.18)
    CA_2: y_2 = x_1 − x_2² + √y_1                       (8.19)
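Because CA_1 and CA_2 are mutually coupled, the system analysis must be iterated to consistency. A minimal sketch using fixed point (successive substitution) iteration, checked against the deterministic optimum of Table 8.1 (our own code; it assumes y_1 stays positive, as it does near the designs considered):

    import numpy as np

    def system_analysis(x1, x2, eps1=0.0, eps2=0.0, tol=1e-10, max_iter=200):
        y1, y2 = max(x1**2 + x2, 1.0), 0.0           # rough initial guesses
        for _ in range(max_iter):
            y1_new = x1**2 + x2 - 0.2 * y2 + eps1            # CA1, Eqn. (8.18)
            y2_new = x1 - x2**2 + np.sqrt(y1_new) + eps2     # CA2, Eqn. (8.19)
            if abs(y1_new - y1) < tol and abs(y2_new - y2) < tol:
                break
            y1, y2 = y1_new, y2_new
        return y1, y2

    print(system_analysis(-2.82, 0.05))   # ~(8.0, 0.006), cf. Table 8.1

The eps1 and eps2 arguments anticipate the model form uncertainties ε_1 and ε_2 introduced below.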
To be suitable for optimization under uncertainty using evidence theory, the deterministic form of the optimization problem (Equations 8.13-8.17) is modified. Suppose that the states y_1 and y_2 (Equations 8.25-8.26) given by the CAs (simulation based design tools) are not certain, i.e., there exists epistemic uncertainty in their performance predictions. Assume that the uncertainties in CA_1 and CA_2 are given by ε_1 and ε_2, respectively. Estimates of the values of the epistemic uncertainties ε_1 and ε_2 are obtained by elicitation of expert opinion. Opinions from two experts are obtained: the experts provide intervals for the uncertainty and a corresponding basic probability assignment (BPA) for each interval. The available information from the two experts is shown in Figures 8.4 and 8.5.
[Figure: interval BPA structure for ε_1; expert 1 assigns BPAs 0.2/0.5/0.3 and expert 2 assigns 0.1/0.7/0.2 over intervals spanning [−1, 1].]
Figure 8.4. Experts' Opinion for ε_1

[Figure: interval BPA structure for ε_2; expert 1 assigns BPAs 0.2/0.5/0.3 and expert 2 assigns 0.6/0.4 over intervals spanning [−0.75, 0.75].]
Figure 8.5. Experts' Opinion for ε_2

Since the CAs are no longer exact, the original problem is reformulated as shown below.
    minimize:   f(x) = x_1² + 10 x_2² + y_1             (8.20)
    subject to: G_1 = U_M(y_1 − 8 ≥ 0) − U_M^reqd ≥ 0   (8.21)
                G_2 = U_M(5 − y_2 ≥ 0) − U_M^reqd ≥ 0   (8.22)
                −10 ≤ x_1 ≤ 10                          (8.23)
                0 ≤ x_2 ≤ 10                            (8.24)
where U_M is the uncertain measure of interest and U_M^reqd is the minimum required value of the uncertain measure. To evaluate the objective function, it is assumed that ε_1 and ε_2 are both zero. Since the formulation is posed in terms of the feasibility of the original constraints, U_M^reqd is the minimum reliability that should be met. In evidence theory, U_M can be either Bel or Pl. The states y_1 and y_2 are obtained from the uncertain contributing analyses CA_1 and CA_2, respectively, as
    CA_1: y_1 = x_1² + x_2 − 0.2 y_2 + ε_1              (8.25)
    CA_2: y_2 = x_1 − x_2² + √y_1 + ε_2                 (8.26)
Note that Equations 8.21 and 8.22, which define the non-deterministic constraints for the reformulated optimization problem, are discontinuous and cannot be used directly in a gradient based optimizer. Hence, the sequential approximate optimization approach described earlier is used.
The uncertain measure used is belief (Bel), and the minimum required value of belief is taken to be 0.99. Figure 8.6 shows the history of the designs visited by the framework; the starting point and the optimal solution are marked in the figure. Figure 8.7 shows the history of the objective function. Note that the objective function drops significantly after the first few iterations; the convergence history then proceeds linearly as the trust region is reduced.
[Figure: design history in the (x_1, x_2) plane, marking the starting point, the designs visited, the optimal design, and the feasible/infeasible regions.]
Figure 8.6. Design Variable History
[Figure: objective function value f versus iteration number.]
Figure 8.7. Convergence of objective function
The design obtained using the framework for optimization under uncertainty (OUU) is compared with the deterministic optimum in Table 8.1. Note that the objective function value has increased; this is an expected trade-off for a more reliable design.
8.7 Aircraft concept sizing problem
Apart from the simple analytical problem of the last section, the sequential approximate optimization approach is also implemented in application to a higher dimensional multidisciplinary aircraft concept sizing problem. The problem was originally conceived and developed by the MDO research group at the University of Notre Dame [74]. The objective is to perform preliminary sizing of the aircraft subject to performance constraints. The design variables in this problem come from several disciplines, such as the aircraft configuration, propulsion and aerodynamics, and the flight regime. All the design variables have appropriate bounds. There are also some parameters that are fixed during the design process to
Table 8.1
COMPARISON OF DESIGNS

               Starting   Deterministic   OUU
               Point      Optimization
x_1            -5         -2.82           -2.92
x_2            3          0.05            0.0
f              94.7       15.98           17.05
y_1            29.7       8               8.52
y_2            -8.5       0.006           0.0
Bel(y_1 ≥ 8)   1.0        0.05            0.99
Bel(5 ≥ y_2)   1.0        1.0             1.0
represent constraints on mission requirements, available technologies, and aircraft class regulations.
The original deterministic design optimization problem has ten design variables and five parameters, and the design of the system is decomposed into three contributing analyses. The problem was modified by Tappeta [67] to fit the framework of multiobjective coupled MDO systems (seven design variables and eight design parameters), and it was also modified by Gu et al. [24] to illustrate the methodology of decision based collaborative optimization. It is further modified in this research to be suitable for optimization under uncertainty (OUU) using evidence theory. The modified problem of Tappeta [67] is referred to as the ACS problem from here on and is described next; the OUU version of the ACS problem follows this description.
The ACS problem has three disciplines, as shown in Figure 8.8: aerodynamics (CA_1), weight (CA_2), and performance (CA_3). The dependency diagram indicates that there are two feed-forwards and no feed-backs between the disciplines; however, CA_2 is internally coupled. The design variables and the cor-
[Figure: dependency diagram of the ACS problem. Aerodynamics (CA1) outputs the wetted area (y_1) and lift/drag (y_2); Weight (CA2) outputs the empty weight (y_3) and total weight (y_4); Performance (CA3) outputs the range (y_5) and stall speed (y_6). Subsets of the design variables x_1-x_7 feed each discipline, with y_2 and y_4 fed forward to CA3.]
Figure 8.8. Aircraft Concept Sizing Problem
responding bounds are listed in Table 8.2. Note that there are five shared design variables (x_1-x_4 and x_7). Table 8.3 lists the design parameters and their values.

Table 8.2
DESIGN VARIABLES IN ACS PROBLEM

      Description (Unit)                             Lower Bound   Upper Bound
x_1   aspect ratio of the wing                       5             9
x_2   wing area (ft²)                                100           300
x_3   fuselage length (ft)                           20            30
x_4   fuselage diameter (ft)                         4             5
x_5   density of air at cruise altitude (slug/ft³)   0.0017        0.002378
x_6   cruise speed (ft/sec)                          200           300
x_7   fuel weight (lbs)                              100           2000
Table 8.4 provides a brief description of the various states and their relations to each discipline. The state y_2 couples CA_1 and CA_3, and the state y_4 couples CA_2 and CA_3.
The objective in the ACS problem is to determine the least gross take-off weight
Table 8.3
LIST OF PARAMETERS IN THE ACS PROBLEM

      Name     Description                       Value
p_1   N_pass   number of passengers              2
p_2   N_en     number of engines                 1
p_3   W_en     engine weight                     197 (lbs)
p_4   W_pay    payload weight                    398 (lbs)
p_5   N_zult   ultimate load factor              5.7
p_6   Eta      propeller efficiency              0.85
p_7   c        specific fuel consumption         0.4495 (lbs/hr/hp)
p_8   C_Lmax   maximum lift coeff. of the wing   1.7
within the bounded design space, subject to two performance constraints on the range and stall speed of the aircraft. The deterministic optimization problem is as follows:

    minimize:   F = Weight = y_4                        (8.27)
    subject to: g_1 = 1 − y_6/V_stall^req ≥ 0           (8.28)
                g_2 = 1 − Range^req/y_5 ≥ 0             (8.29)
                V_stall^req = 70 ft/sec                 (8.30)
                Range^req = 560 miles                   (8.31)
The problem described above has been modified slightly to be suitable for design optimization under uncertainty using evidence theory. The parameters p_3 and p_4 listed in Table 8.3 are considered uncertain. The information on p_3 and p_4 is obtained through expert opinion (say). The experts' information is given as intervals with a corresponding BPA for each interval, as shown in Figure 8.9.
Table 8.4
LIST OF STATES IN THE ACS PROBLEM

      Description (Unit)                 Output From   Input To
y_1   total aircraft wetted area (ft²)   CA1
y_2   maximum lift to drag ratio         CA1           CA3
y_3   empty weight (lbs)                 CA2
y_4   gross take-off weight (lbs)        CA2           CA3
y_5   aircraft range (miles)             CA3
y_6   stall speed (ft/sec)               CA3
[Figure: BPA structures for p_3 (intervals over 180-220 lbs with BPAs 0.2, 0.4, 0.3, 0.1) and p_4 (intervals over 360-440 lbs with BPAs 0.1, 0.2, 0.5, 0.2).]
Figure 8.9. Expert Opinion for p_3 and p_4
Since the information on the parameters p_3 and p_4 is not certain, the deterministic optimization problem (Equations 8.27-8.29) is modified as follows:

    minimize:   F = Weight = y_4                                  (8.32)
    subject to: G_1 = U_M(1 − y_6/V_stall^req ≥ 0) − U_M^reqd ≥ 0   (8.33)
                G_2 = U_M(1 − Range^req/y_5 ≥ 0) − U_M^reqd ≥ 0     (8.34)
The objective function is calculated assuming the values of p_3 and p_4 given in Table 8.3. The uncertain measure used is belief (Bel), and the minimum required belief is taken as 0.98. Figure 8.10 shows the convergence of the objective function. Observe that the iterates are close to the optimal solution after a few iterations; however, many more iterations are required to meet the convergence criteria.
[Figure: objective function value versus iteration number.]
Figure 8.10. Convergence of the Objective Function (ACS Problem)
The deterministic optimum and the optimal solution under uncertainty (OUU) are compared in Table 8.5. Note that the starting point is infeasible, i.e., y_6 has a value greater than 70, making g_1 infeasible. Observe that the objective function (y_4) has increased as compared to the deterministic optimum. This is an expected trade-off for a more reliable design, as is evident from the fact that the value of y_5 has increased and the value of y_6 has decreased relative to the deterministic optimum, moving the design into the feasible region and ensuring the required belief measure. In Table 8.5, Bel_1 and Bel_2 denote Bel(1 - y_6/V_stall_req ≥ 0) and Bel(1 - Range_req/y_5 ≥ 0), respectively.
Table 8.5
COMPARISON OF DESIGNS (ACS PROBLEM)

         Starting Point   Deterministic Optimization   OUU
x_1      7.0              5                            5
x_2      200.0            176.53                       188.9
x_3      22.0             20                           20
x_4      4.2              4                            4
x_5      2.1E-03          0.0017                       0.0017
x_6      250.0            200                          200
x_7      200.0            142.86                       150.7
y_1      810.28           710.31                       742.48
y_2      12.911           10.971                       11.09
y_3      1463.3           1207.6                       1229.7
y_4      2061.3           1748.4                       1778.4
y_5      788.11           560.02                       587.87
y_6      71.407           70.001                       68.246
Bel_1    1                0.1                          0.98
Bel_2    0                0.1                          0.98
8.8 Summary
In this investigation, an approach for performing design optimization under epis-
temic uncertainty is presented. Dempster-Shafer theory (evidence theory) has been
used to model the epistemic uncertainty arising due to incomplete information or
the lack of knowledge. The constraints posed in the design optimization problem
are evaluated using uncertain measures provided by evidence theory. The belief
measure is used in this research to formulate non-deterministic constraints. Since
the belief functions are discontinuous, a formal trust region managed sequential ap-
proximate optimization approach based on the Lagrangian is employed to drive the
design optimization. The trust region is managed by a trust region ratio based on
the performance of the Lagrangian. The Lagrangian is a penalty function of the
objective and the constraints. The framework is illustrated with multidisciplinary
test problems. The strength of the investigation is the use of evidence theory for
optimization under epistemic uncertainty. As a byproduct it also shows that sequen-
tial approximate optimization approaches can be used for handling discontinuous
constraints and obtaining improved designs.
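As a sketch of the trust region management described above, one conventional ratio-based update is shown below. The thresholds and scaling factors are standard textbook choices and an assumption here, not necessarily the exact values used in this work.

```python
# Sketch of a trust region update driven by the ratio of actual to predicted
# reduction in a Lagrangian-based merit function. Thresholds 0.25/0.75 and the
# scale factors are conventional choices, not the dissertation's exact values.
def update_trust_region(actual_reduction, predicted_reduction, radius,
                        shrink=0.5, grow=2.0, radius_max=1e3):
    rho = actual_reduction / predicted_reduction   # trust region ratio
    if rho < 0.25:            # approximation poor: shrink the region
        radius *= shrink
    elif rho > 0.75:          # approximation good: allow larger steps
        radius = min(grow * radius, radius_max)
    accept_step = rho > 0.0   # accept only if the merit function improved
    return radius, accept_step
```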
CHAPTER 9
CONCLUSIONS AND FUTURE WORK
This chapter presents an overview and general conclusions related to the work devel-
oped in this dissertation. The general topic of research is developing novel reliability
based design optimization (RBDO) methodologies. In traditional RBDO, the un-
certainties are modelled using probability theory. In chapters 5 and 6, two different
methodologies for performing traditional RBDO were developed. Uncertainties in
the form of aleatory uncertainty were treated in design optimization to obtain op-
timal designs characterized by a low probability of failure. The main objective
was to reduce the computational cost associated with the existing nested methodology
for RBDO. Both the methodologies were tested on engineering design problems of
reasonable size and scope. An optimization methodology based on continuation
techniques was developed for solving the unilevel RBDO formulation in chapter 7.
A second focus in this dissertation was to develop a methodology for performing op-
timization under epistemic uncertainty. Epistemic uncertainty, by its very nature, is
difficult to characterize using standard probabilistic means. Dempster-Shafer theory
was used to quantify epistemic uncertainty in chapter 8. A trust region managed
sequential approximate optimization (SAO) framework was proposed to perform
optimization under epistemic uncertainty.
9.1 Summary and conclusions
9.1.1 Decoupled methodology for reliability based design optimization
In chapter 5, a decoupled methodology for probabilistic design optimization is de-
veloped. Traditionally, RBDO formulations involve nested optimization, making it
computationally intensive. The basic idea is to separate the main optimization phase
(optimizing an objective subject to constraints on performances) from the reliability
calculations (compute the performance that meets a given reliability requirement).
A methodology based on this paradigm is developed. During the deterministic
optimization phase, information on the most probable point (MPP) of failure corre-
sponding to each failure mode is required to calculate the performance constraints.
The most probable point of failure corresponding to each failure mode is obtained
by using the first order Taylor series expansion about the design point from the
previous cycle. This MPP update strategy during the deterministic optimization
phase requires the sensitivities of the MPP with respect to the design vector. In
practice, this requires the second order derivatives of the failure mode. In the current
implementation, a damped BFGS update scheme is employed to compute the second
order derivatives. The framework is tested using a series of structural and multidis-
ciplinary design problems taken from the literature. For the problems considered, it
is observed that the estimated most probable point converges to the exact values in
3-4 cycles. It is found that the proposed methodology provides the same solution as
the traditional nested optimization formulation, and is computationally 2-3 times
more efficient.
This methodology has its advantages and disadvantages. The major advantage is that a workable reliable design can be obtained at significantly less computational effort. The calculations in the main optimization phase and the reliability calculation phase can be solved independently, with different optimizers. By using higher order reliability calculation techniques (SORM, MCS, etc.), the methodology
has the potential to give optimal designs with high reliability. The major limitation
of the methodology is that it is not provably convergent. However, the problems
on which the methodology was tested were nonlinear and the MPP obtained were
exact, thus showing its potential.
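For reference, the damped BFGS update mentioned above can be sketched as follows. This is Powell's damping rule as used in standard SQP practice; the threshold 0.2 is the conventional choice and an assumption here, not necessarily the value used in the implementation.

```python
import numpy as np

def damped_bfgs_update(B, s, y, damp=0.2):
    """One damped BFGS update of a symmetric positive definite approximation B.

    s = x_new - x_old, y = grad_new - grad_old. Powell's damping blends y with
    B s so the update keeps B positive definite even when s^T y <= 0.
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy >= damp * sBs:
        theta = 1.0
    else:
        theta = (1.0 - damp) * sBs / (sBs - sy)
    r = theta * y + (1.0 - theta) * Bs        # damped gradient difference
    # B symmetric, so B s s^T B == outer(Bs, Bs); standard BFGS form with y -> r
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (r @ s)
```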
9.1.2 Unilevel methodology for reliability based design optimization
In chapter 6, a new unilevel formulation for RBDO is developed. As mentioned
before, traditional RBDO involves nested optimization, making it computationally
intensive. In the proposed unilevel RBDO formulation, the first order KKT con-
ditions corresponding to each probabilistic constraint (as in PMA) are enforced
directly at the system level optimizer, thus eliminating the lower level optimizations
used to compute the probabilistic constraints. The proposed formulation provides
improved robustness and provable convergence as compared to a unilevel variant
given by Kuschel and Rackwitz [36]. The formulation given by Kuschel and Rack-
witz [36] replaces the direct first order reliability method (FORM) problems (lower
level optimizations in the reliability index approach (RIA)) by their first order nec-
essary KKT optimality conditions. The FORM problem in RIA is numerically ill
conditioned [69]; the same is true for the formulation given by Kuschel and Rack-
witz [36]. It was shown in Tu et al. [69] that PMA is numerically robust in terms of
probabilistic constraint evaluation and is therefore used in this investigation. The
proposed formulation is solved in an augmented design space that consists of the
original decision variables, the MPP of failure corresponding to each failure mode,
and the Lagrange multipliers corresponding to each lower level optimization.
It is computationally equivalent to the original nested optimization formulation if
the lower level optimization problem is solved by satisfying the KKT conditions
114
(which is effectively what most numerical optimization algorithms do). It is proved
that under mild pseudoconvexity assumptions on the hard constraints, the proposed
formulation is mathematically equivalent to the original nested formulation. The
method is tested using a simple analytical problem and a multidisciplinary struc-
tures control problem, and is observed to be numerically robust and computationally
efficient compared to the existing approaches for RBDO.
One of the major advantages of this methodology is the fact that the RBDO prob-
lem can be solved in a single optimization. This helps in reducing the computational
cost of RBDO. For the structures control test problem, the unilevel methodology
was found to be two times as efficient as the nested RBDO methodology. The major
limitation of the formulation is that it is accompanied by a large number of equal-
ity constraints. Commercial optimizers sometimes exhibit numerical instability
or show poor convergence behavior for problems with large numbers of equality
constraints. Also, the unilevel methodology is applicable only for FORM based
reliability constraints.
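To make the construction concrete, the inner PMA problem for a single failure mode and the first order conditions that replace it can be written schematically as below. This is a sketch based on the description above, with g the limit state, u the standard normal variables, and beta_t the target reliability index; second order conditions and multiplier sign conventions are omitted.

```latex
% Sketch: inner PMA problem for one failure mode, and the first order
% stationarity conditions that the unilevel formulation enforces as
% equality constraints at the system level.
\min_{\mathbf{u}} \; g(\mathbf{x},\mathbf{u})
\quad \text{subject to} \quad \|\mathbf{u}\| = \beta_t
\qquad \Longrightarrow \qquad
\nabla_{\mathbf{u}} g(\mathbf{x},\mathbf{u}) + \lambda\,\mathbf{u} = \mathbf{0},
\qquad \|\mathbf{u}\| = \beta_t .
```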
9.1.3 Continuation methods for unilevel RBDO
The unilevel formulation for RBDO is usually accompanied by a large number of
equality constraints, which often cause numerical instability for many commercial
optimizers. In chapter 7, an optimization methodology employing continuation
methods is developed for reliability based design using the unilevel formulation.
Since the unilevel formulation is usually accompanied by many equality constraints,
homotopy techniques are used to relax the constraints and identify a starting point
that is feasible with respect to the relaxed constraints. In this investigation, the
homotopy parameter is incremented by a fixed value. A series of optimization
problems are solved for various values of the homotopy parameter as the relaxed
problem approaches the original problem. It is realized that it is easier to solve
the relaxed problem from a known solution and make gradual progress towards
the solution than to solve the problem directly. The proposed strategy is tested
on two design problems. It is observed that the homotopy parameter controls the
progress made in each cycle of the optimization process. As the homotopy parameter
approaches the value of 1, the local solution is obtained.
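Schematically, the fixed-increment continuation loop can be sketched as follows, where solve_relaxed is a hypothetical stand-in for one optimization of the relaxed unilevel problem at a fixed value of the homotopy parameter, and the increment of 0.1 is illustrative.

```python
# Sketch of the fixed-increment continuation strategy described above;
# solve_relaxed(t, x_start) stands in for one optimization of the relaxed
# unilevel problem with homotopy parameter t, warm-started at x_start.
def continuation(solve_relaxed, x_feasible, increment=0.1):
    x, t = x_feasible, 0.0            # x is feasible for the fully relaxed problem
    while t < 1.0:
        t = min(t + increment, 1.0)   # fixed-step update of the homotopy parameter
        x = solve_relaxed(t, x)       # gradual progress from the known solution
    return x                          # at t = 1 the original problem has been solved
```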
9.1.4 Reliability based design optimization under epistemic uncertainty
In chapter 8, a methodology for performing design optimization under epistemic
uncertainty is developed. Epistemic uncertainty in nondeterministic systems arises
due to ignorance, lack of knowledge or incomplete information. This is also known
as subjective uncertainty. In general, epistemic uncertainty is extremely difficult to
quantify using probabilistic means. Dempster-Shafer theory (evidence theory) has
been used to model the epistemic uncertainty arising due to incomplete information
or the lack of knowledge. The constraints posed in the design optimization problem
are evaluated using uncertain measures provided by evidence theory. The belief
measure is used in this research to formulate non-deterministic constraints. Since
the belief functions are discontinuous, a formal trust region managed sequential
approximate optimization approach based on the Lagrangian is employed to drive
the design optimization. The trust region is managed by a trust region ratio based
on the performance of the Lagrangian. The Lagrangian is a penalty function of the
objective and the constraints. The framework is illustrated with multidisciplinary
test problems. Optimal designs characterized by a low uncertainty of failure can be
obtained in a few cycles.
The main accomplishment of this research is the quantification of epistemic uncertainty in design optimization. As a byproduct, it also shows that sequential approximate optimization approaches can be used for handling discontinuous constraints and obtaining better designs.
9.2 Recommendations for future work
9.2.1 Decoupled RBDO using higher order methods
In the decoupled RBDO methodology developed in chapter 5, first order reliability
techniques were used for reliability analysis. Since the reliability evaluation is sepa-
rated from the main optimization, it is possible to do higher order reliability analysis.
Techniques such as the second order reliability method (SORM) and Monte Carlo simulation (MCS) can be used for obtaining higher order estimates of reliability. This will lead to better designs with more accurate reliability estimates.
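As an illustration of how such a higher order estimate could be plugged into the decoupled framework, the following is a generic Monte Carlo sketch, not the dissertation's implementation; sample_inputs is a hypothetical generator of random realizations of the uncertain variables.

```python
import numpy as np

def failure_probability_mcs(g, sample_inputs, n=100_000, seed=0):
    """Crude Monte Carlo estimate of P[g(X) < 0] and its standard error.

    g is a limit state (failure when g < 0); sample_inputs(rng, n) is a
    hypothetical generator returning n random realizations of the inputs.
    """
    rng = np.random.default_rng(seed)
    gx = np.asarray([g(x) for x in sample_inputs(rng, n)])
    pf = float(np.mean(gx < 0.0))
    se = float(np.sqrt(pf * (1.0 - pf) / n))   # binomial standard error
    return pf, se
```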
9.2.2 RBDO for system reliability
In the current work, only series systems were considered. However, there are numerous
systems where failure is governed by a combination of component failure modes.
Most of the research work in reliability based design optimization is limited to
series systems. Therefore, there is a need to develop methodologies for reliability
based design optimization for parallel systems, in which system reliability can be incorporated in design optimization. The main challenge would be to develop techniques with which system reliability could be computed efficiently.
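For reference, the series and parallel cases can be stated in terms of the component failure events F_i, together with the classical first order bounds for a series system; these are standard results, stated here only for context.

```latex
% Series and parallel system failure in terms of component failure events F_i,
% with the classical first order bounds for the series case:
P_{f,\mathrm{series}} = P\Big(\bigcup_i F_i\Big), \qquad
P_{f,\mathrm{parallel}} = P\Big(\bigcap_i F_i\Big), \qquad
\max_i P(F_i) \;\le\; P_{f,\mathrm{series}} \;\le\; \sum_i P(F_i).
```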
9.2.3 Homotopy curve tracking for solving unilevel RBDO
In this investigation, a continuation technique is employed for managing the re-
laxed unilevel reliability based design optimization problem. In the continuation
procedure, the homotopy parameter is incremented by a fixed value. A series of
optimization problems are solved for various values of the homotopy parameter as
the relaxed problem approaches the original problem. It is realized that it is easier to solve the relaxed problem from a known solution and make gradual progress towards the solution than to solve the problem directly. In continuation methods, the
homotopy parameter controls the progress made in each cycle of the optimization
process. As the homotopy parameter approaches the value of 1, the optimal solu-
tion is obtained. The heuristic approach of updating the homotopy parameter by a fixed value has worked for the problems considered as part of testing the algorithm. However, it has been shown in the literature that this may not always work. The use of formal homotopy curve tracking techniques for solving the unilevel reliability based design optimization problem will make it more robust and computationally efficient.
9.2.4 Considering total uncertainty in design optimization
In this dissertation, aleatory uncertainty and epistemic uncertainty were considered
separately in design optimization. A hybrid RBDO methodology can be developed
that incorporates both uncertainty types. Epistemic uncertainty can be quantified
using Dempster-Shafer theory and aleatory uncertainty using probability theory. A
total reliability analysis will involve full uncertainty quantification. The decoupled
RBDO methodology developed in this dissertation can be modified accordingly to
develop a hybrid framework.
9.2.5 Variable fidelity reliability based design optimization
A considerable amount of computational effort is usually required in reliability based
design optimization. Therefore, in recent years, surrogates of the simulation models
are widely employed to reduce the cost of optimization. Variable fidelity methods
employ a set of models ranging in fidelity to reduce the cost of design optimization.
The decoupled RBDO methodology and the unilevel RBDO methodology developed
in this research can be individually combined with variable fidelity methods to
further reduce the computational cost associated with RBDO.
BIBLIOGRAPHY
[1] H. Agarwal and J. E. Renaud, Reliability based design optimization for
multidisciplinary systems using response surfaces. In Proceedings of the 43rd
AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Mate-
rials Conference and Exhibit , AIAA-2002-1755 (Denver, Colorado. April 22-25
2002).
[2] H. Agarwal, J. E. Renaud and J. D. Mack, A decomposition approach for
reliability-based multidisciplinary design optimization. In Proceedings of the
44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and
Materials Conference & Exhibit , number AIAA 2003-1778, Norfolk, Virginia
(April 7-10 2003).
[3] H. Agarwal, J. E. Renaud, E. L. Preston and D. Padmanabhan, Uncertainty
quantication using evidence theory in multidisciplinary design optimization.
Reliability Engineering and System Safety (2003), (in press).
[4] N. M. Alexandrov and R. M. Lewis, Algorithmic perspectives on problem for-
mulation in MDO. In Proceedings of the 8th AIAA/NASA/USAF Multidisci-
plinary Analysis & Optimization Symposium, number AIAA-2000-4719, Long
Beach, CA (September 6-8 2000).
[5] E. K. Antonsson and K. N. Otto, Imprecision in engineering design. Journal of
Mechanical Design, 117(B): 25–32 (1995).
[6] H.-R. Bae, R. V. Grandhi and R. A. Canfield, Uncertainty quantification
of structural response using evidence theory. In Proceedings of the 43rd
AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Mate-
rials Conference, AIAA-2002-1468 (Denver, Colorado. April 22-25 2002).
[7] Y. Ben-Haim and I. Elishakoff, Convex Models of Uncertainty in Applied Me-
chanics. Studies in Applied Mechanics 25, Elsevier (1990).
[8] R. Braun and I. M. Kroo, Development and application of the collaborative
optimization architecture in a multidisciplinary design environment. In N. M.
Alexandrov and M. Y. Hussaini, editors, Multidisciplinary Design Optimiza-
tion: State of the Art, SIAM (1997).
[9] K. Breitung, Asymptotic approximations for multinormal integrals. Journal of
Engineering Mechanics, 110(3): 357–366 (1984).
[10] S. Chen, E. Nikolaidis, H. H. Cudney, R. Rosca and R. T. Haftka, Compari-
son of probabilistic and fuzzy set methods for designing under uncertainty. In
Proceedings of the 40th AIAA/ASME/ASCE/AHS/ASC Structures, Structural
Dynamics, and Materials Conference and Exhibit, pages 2860–2874, AIAA-99-
1579 (April 1999).
[11] S. Chen, E. Nikolaidis, H. H. Cudney, R. Rosca and R. T. Haftka, Compari-
son of probabilistic and fuzzy set methods for designing under uncertainty. In
Proceedings of the 40th AIAA/ASME/ASCE/AHS/ASC Structures, Structural
Dynamics, and Materials Conference & Exhibit , number AIAA 99-1579, St.
Louis (April 12-15 1999).
[12] W. Chen and X. Du, Sequential optimization and reliability assessment method
for efficient probabilistic design. In ASME Design Engineering Technical Con-
ferences and Computers and Information in Engineering Conference, number
DETC2002/DAC-34127, Montreal, Canada (2002).
[13] X. C. Chen, T. K. Hasselman and D. J. Neill, Reliability based structural
design optimization for practical applications. In Proceedings of the 38th
AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Mate-
rials Conference, number AIAA-97-1403, pages 2724–2732 (1997).
[14] K. K. Choi and B. D. Youn, Hybrid analysis method for reliability-based de-
sign optimization. In Proceedings of 2001 ASME Design Engineering Techni-
cal Conferences: 27th Design Automation Conference, number DETC/DAC-
21044, Pittsburgh, PA (September 9-12 2001).
[15] K. K. Choi, B. D. Youn and R. Yang, Moving least square method for reliability-
based design optimization. In Proceedings of the Fourth World Congress of
Structural and Multidisciplinary Optimization, Dalian, China (June 4-8, 2001).
[16] E. J. Cramer, Dennis, J. E., jr., P. D. Frank, R. M. Lewis and G. R. Shubin, On
alternative problem formulations for multidisciplinary design optimization. In
Proceedings of the 4th AIAA/NASA/ISSMO Symposium on Multidisciplinary
Analysis & Optimization, number AIAA-92-4752, pages 518–530 (1992).
[17] J. E. Dennis, Jr and R. M. Lewis, A comparison of nonlinear programming
approaches to an elliptic inverse problem and a new domain decomposition ap-
proach. Technical Report TR94-33, Department of Computational and Applied
Mathematics, Rice University, Houston, Texas 77005-1892 (1994).
[18] D. Dubois and H. Prade, Possibility Theory : An Approach to Computerized
Processing of Uncertainty. Plenum Press, first edition (1988).
[19] D. Dubois and H. Prade, Possibility Theory: An approach to Computer Pro-
cessing of Uncertainty. Plenum Press, NY (1988).
[20] I. Enevoldsen and J. D. Sorensen, Reliability-based optimization in structural
engineering. Structural Safety, 15(3): 169–196 (1994).
[21] S. Engelund and R. Rackwitz, A benchmark study on importance sampling
techniques in structural reliability. Structural Safety, 12: 255–276 (1993).
[22] M. Fedrizzi, J. Kacprzyk and R. R. Yager, Advances in Dempster-Shafer Theory
of Evidence. John Wiley & Sons Inc. (1994).
[23] T. Fetz, M. Oberguggenberger and S. Pittschmann, Applications of possibility
and evidence theory in civil engineering. In 1st International Symposium on
Imprecise Probabilities and Their Applications (29 June - 2 July 1999).
[24] X. Gu, J. E. Renaud, L. M. Ashe, S. M. Batill, S. M. Budhiraja and A. S.
Krajewski, Decision based collaborative optimization. ASME Journal of Me-
chanical Design, 124(1): 1–13 (2001).
[25] R. T. Haftka, Simultaneous analysis and design. AIAA Journal, 25(1): 139–145
(1985).
[26] R. T. Haftka, Z. Gurdal and M. P. Kamat, Elements of Structural Optimization,
volume 1. Kluwer Academic Publishers, second edition (1990).
[27] A. Haldar and S. Mahadevan, Probability, Reliability and Statistical Methods
in Engineering Design. John Wiley & Sons (2000).
[28] A. Harbitz, An efficient sampling method for probability of failure calculation.
Structural Safety, 3: 109–115 (1986).
[29] H. A. Jensen and A. E. Sepulveda, Use of approximation concepts in fuzzy
design problem. Advances in Engineering Software, 31: 263–273 (2000).
[30] N. S. Khot, Optimization of controlled structures. Advances in Design Automa-
tion (1994).
[31] C. Kirjner-Neto, E. Polak and A. der Kiureghian, An outer approximations ap-
proach to reliability-based optimal design of structures. Journal of Optimization
Theory and Applications, 98(1): 1–16 (July 1998).
[32] A. D. Kiureghian, H.-Z. Lin and S.-J. Hwang, Second order reliability approx-
imations. Journal of Engineering Mechanics, 113(8): 1208–1225 (1987).
[33] G. J. Klir and M. J. Wierman, Uncertainty Based Information : Elements of
Generalized Information Theory. Physica-Verlag (1998).
[34] I. M. Kroo, Decomposition and collaborative optimization for large-scale
aerospace design programs. In N. M. Alexandrov and M. Y. Hussaini, editors,
Multidisciplinary Design Optimization: State of the Art , SIAM (1997).
[35] N. Kuschel and R. Rackwitz, Two basic problems in reliability based struc-
tural optimization. Mathematical Methods of Operations Research, 46: 309–333
(1997).
[36] N. Kuschel and R. Rackwitz, A new approach for structural optimization of
series systems. Applications of Statistics and Probability, 2(8): 987–994 (2000).
[37] S. W. Law and E. K. Antonsson, Implementing the method of imprecision:
An engineering design example. In Proceedings of the 3rd IEEE International
Conference on Fuzzy Systems, volume 1, pages 358–363 (1994).
[38] R. M. Lewis, Practical aspects of variable reduction formulations and reduced
basis algorithms in multidisciplinary design optimization. In N. M. Alexandrov
and M. Y. Hussaini, editors, Multidisciplinary Design Optimization: State of
the Art, SIAM (1997).
[39] P.-L. Liu and A. D. Kiureghian, Optimization algorithms for structural relia-
bility. Structural Safety, 9(3): 161–177 (1991).
[40] G. Maglaras, E. Nikolaidis, R. T. Haftka and H. H. Cudney, Analytical-
experimental comparison of probabilistic and fuzzy set based methods for de-
signing under uncertainty. Structural Optimization, 13: 69–80 (1997).
[41] O. L. Mangasarian, Nonlinear Programming. Classics in Applied Mathematics,
SIAM, Philadelphia (1994).
[42] J. Nocedal and S. J. Wright, Numerical Optimization. Springer-Verlag (1999),
Springer series in Operations Research.
[43] W. L. Oberkampf, S. M. Deland, B. M. Rutherford, K. V. Diegert and K. F.
Alvin, A new methodology for the estimation of total uncertainty in computa-
tional simulation. In Proceedings of the 40th AIAA/ASME/ASCE/AHS/ASC
Structures, Structural Dynamics, and Materials Conference (April 1999).
[44] W. L. Oberkampf, S. M. DeLand, B. M. Rutherford, K. V. Diegert and K. F.
Alvin, Estimation of total uncertainty in modeling and simulation. Technical
Report SAND2000-0824, Sandia National Laboratories (April 2000).
[45] W. L. Oberkampf, K. V. Diegert, K. F. Alvin and B. M. Rutherford, Variability,
uncertainty, and error in computational simulation. In ASME Proceedings of
the 7th. AIAA/ASME Joint Thermophysics and Heat Transfer Conference,
volume 2, pages 259–272 (1998).
[46] W. L. Oberkampf, J. C. Helton, C. A. Joslyn, S. Wojtkiewicz and S. Ferson,
Challenge problems : Uncertainty in system response given uncertain parame-
ters. Reliability Engineering and System Safety (2003), (in press).
[47] W. L. Oberkampf, J. C. Helton and K. Sentz, Mathematical representation
of uncertainty. In Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC
Structures, Structural Dynamics, and Materials Conference & Exhibit , num-
ber AIAA 2001-1645, Seattle, WA (April 16-19, 2001).
[48] D. Padmanabhan and S. M. Batill, Reliability based optimization using approx-
imations with applications to multi-disciplinary system design. In Proceedings
of the 40th AIAA Aerospace Sciences Meeting & Exhibit, number AIAA-2002-0449, Reno,
NV (January 2002).
[49] S. Parsons, Qualitative Methods for Reasoning under Uncertainty. The MIT
Press (2001).
[50] V. M. Perez, J. E. Renaud and L. T. Watson, Interior point sequen-
tial approximate optimization methodology. In Proceedings of the 10th
AIAA/NASA/USAF/ISSMO Symposium on Multidisciplinary Analysis & Op-
timization, number AIAA-2002-5505, Atlanta, GA. (September 4-6 2002).
[51] V. M. Perez, J. E. Renaud and L. T. Watson, An interior point se-
quential approximate optimization methodology. In Proceedings of the 9th
AIAA/NASA/USAF Multidisciplinary Analysis & Optimization Symposium,
AIAA-2002-5505, Atlanta, GA (September 4-6 2002).
[52] M. S. Phadke, Quality Engineering Using Robust Design. Prentice Hall, Engle-
wood Cliffs, NJ (1989).
[53] E. Polak, R. J.-B. Wets and A. der Kiureghian, On an approach to optimization
problems with a probabilistic cost and or constraints. Nonlinear Optimization
and related topics, pages 299–316 (2000).
[54] R. Rackwitz, Reliability analysis-a review and some perspectives. Structural
Safety, 23(4): 365–395 (2001).
[55] J. E. Renaud, Sequential approximation in non-hierarchic system decomposi-
tion and optimization: a multidisciplinary design tool. Ph.D. thesis, Rensselaer
Polytechnic Institute, Department of Mechanical Engineering, Troy, New York
(August 1992).
[56] J. F. Rodriguez, J. E. Renaud and L. T. Watson, Convergence of trust re-
gion augmented Lagrangian methods using variable fidelity approximation data.
Structural Optimization, 15: 141–156 (1998).
[57] J. F. Rodriguez, J. E. Renaud and L. T. Watson, Convergence using variable
fidelity approximation data in a trust region managed augmented Lagrangian
approximate optimization. AIAA Journal, pages 749–768 (1998).
[58] M. Rosenblatt, Remarks on a multivariate transformation. The Annals of Math-
ematical Statistics, 23(3): 470–472 (September 1952).
[59] J. O. Royset, A. D. Kiureghian and E. Polak, Reliability based optimal struc-
tural design by the decoupling approach. Reliability Engineering and System
Safety, 73(3): 213–221 (2001).
[60] M. Sakawa, Fuzzy Sets and Interactive Multiobjective Optimization. Plenum
Press (1993).
[61] K. Sentz and S. Ferson, Combination of evidence in Dempster-Shafer theory.
Technical report, Sandia National Laboratories (April 2002), SAND 2002-0835.
[62] J. Sobieszczanski-Sobieski, J. S. Agte and R. R. Sandusky, jr., Bi-level inte-
grated system synthesis (BLISS). In Proceedings of the 7th AIAA/NASA/USAF
Multidisciplinary Analysis & Optimization Symposium, number AIAA-98-4916,
St. Louis, Missouri (September 2-4 1998), Extended paper published as Tech-
nical Report NASA/TM-1998-208715.
[63] J. Sobieszczanski-Sobieski, A linear decomposition method for large optimiza-
tion problems- blueprint for development. Technical Report TM-83248-1982,
NASA (1982).
[64] J. Sobieszczanski-Sobieski, Optimization by decomposition: A step from hierar-
chic to non-hierarchic systems. In 2nd NASA/Air Force Symposium on Recent
Advances in Multidisciplinary Analysis and Optimization, number NASA TM-
101494, CP-3031, Part 1, pages 28–30, Hampton, VA (1988).
[65] J. Sobieszczanski-Sobieski, C. L. Bloebaum and P. Hajela, Sensitivity of control-
augmented structure obtained by a system decomposition method. AIAA Jour-
nal, 29(2): 264–270 (February 1990).
[66] T. R. Sutter, C. J. Camarda, J. L. Walsh and H. M. Adelman, Comparison
of several methods for calculating vibration mode shape derivatives. AIAA
Journal, 26: 1506–1511 (1988).
[67] R. V. Tappeta, An Investigation of Alternative Problem Formulations for Mul-
tidisciplinary Optimization. Master's thesis, University of Notre Dame (Decem-
ber, 1996).
[68] P. B. Thanedar and S. Kodiyalam, Structural optimization using probabilistic
constraints. Structural Optimization, 4: 236–240 (1992).
[69] J. Tu, K. K. Choi and Y. H. Park, A new study on reliability-based design
optimization. Journal of Mechanical Design, 121: 557–564 (December 1999).
[70] L. Tvedt, Distribution of quadratic forms in normal space-application to struc-
tural reliability. Journal of Engineering Mechanics, 116(6): 1183–1197 (1990).
[71] P. Walley, Statistical Reasoning with Imprecise Probabilities. London: Chap-
man and Hall. (1991).
[72] L. Wang and S. Kodiyalam, An efficient method for probabilistic and
robust design with non-normal distribution. In Proceedings of the 43rd
AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materi-
als Conference, number AIAA 2002-1754, Denver, Colorado (April 22-25 2002).
[73] L. T. Watson, Theory of globally convergent probability-one homotopies
for nonlinear programming. SIAM Journal on Optimization, 11(3): 761–780
(2001).
[74] B. A. Wujek, J. E. Renaud, S. M. Batill, E. W. Johnson and J. B. Brockman,
Design flow management and multidisciplinary design optimization in applica-
tion to aircraft concept sizing. In 34th Aerospace Sciences Meeting & Exhibit ,
AIAA (January 1996).
[75] H. J. Zimmermann, Fuzzy Set Theory and its Applications. Kluwer Academic
Publishers, second edition (1991).