Seyedali Mirjalili
MS, BS
November 2015
Abstract
Robust optimisation refers to the process of combining good performance with
low sensitivity to possible perturbations. Due to the presence of different un-
certainties when optimising real problems, failure to employ robust optimisa-
tion techniques may result in finding unreliable solutions. Robust optimisation
techniques play key roles in finding reliable solutions when considering possible
uncertainties during optimisation.
Evolutionary optimisation algorithms have become very popular for solving
real problems in science and industry mainly due to simplicity, gradient-free
mechanism, and flexibility. Such techniques have been employed widely as very
reliable alternatives to mathematical optimisation approaches for tackling diffi-
culties of real search spaces such as constraints, local optima, multiple objectives,
and uncertainties. Despite the advances in considering the first three difficulties
in the literature, there is significant room for further improvements in the area
of robust optimisation, especially combined with multi-objective approaches.
Finding optimal solutions that are less sensitive to perturbations requires a
highly systematic robust optimisation algorithm design process. This includes
designing challenging robust test problems to compare algorithms, performance
metrics to measure how much one robust algorithm is better than another,
and computationally cheap robust algorithms to find robust solutions for optimi-
sation problems. The first two phases of a systematic algorithm design process,
developing test functions and performance metrics, are prerequisite to the third
phase, algorithm development. Firstly, this thesis identifies the current gaps in
the literature relating to each of these phases to establish a systematic robust
algorithm design process as follows:
• The need for more standard and challenging robust test functions for both
single- and multi-objective algorithms.
• The need for more standard performance metrics for quantifying the per-
formance of robust multi-objective algorithms.
• The need for more investigation and analysis of the current robustness
metrics.
Secondly, the current robustness metrics are investigated and analysed in
detail. Thirdly, several test functions and performance metrics are proposed to
fill the first two above-mentioned gaps in the literature. Fourthly, a novel
metric called the confidence measure is proposed to reduce the computational cost
and increase the reliability of the current robust optimisation methods. Lastly,
and most importantly, the proposed confidence metric is employed to establish
novel and cheap approaches for finding robust optimal solutions in single- and
multi-objective search spaces called confidence-based robust optimisation and
confidence-based robust multi-objective optimisation. The most well-regarded
evolutionary population-based algorithms such as Genetic Algorithm (GA), Par-
ticle Swarm Optimisation (PSO), and Multi-Objective Particle Swarm Optimi-
sation (MOPSO) are modified as the first confidence-based robust optimisation
algorithms.
Several experiments are conducted using the proposed benchmark problems
and performance metrics to evaluate the proposed confidence-based robust algo-
rithms qualitatively and quantitatively. The thesis also considers the application
of the proposed techniques in the design of a marine propeller to emphasise
the applicability of confidence-based optimisation in practice. The results
show that the proposed confidence-based algorithms mainly benefit from high
reliability and low computational cost when solving the benchmark problems.
The merits of the proposed benchmark problems and performance metrics in
comparing different algorithms are evidenced by the results of the test beds as
well. The results of real applications demonstrate that the proposed method is
able to confidently and reliably find robust optimal solutions without significant
extra computational burden for real problems with unknown search spaces.
Certificate of Originality
This work has not previously been submitted for a degree or diploma in any
university. To the best of my knowledge and belief, the thesis contains no material
previously published or written by another person except where due reference is
made in the thesis itself.
Approval
Name: Seyedali Mirjalili
Title: Confidence-based Robust Optimisation of Engineering Design Problems
Submission Date: 7 November, 2015
Acknowledgements
Commencing, pursuing and completing this dissertation, like any other project,
required an abundance of resources as well as strong motivation, which would not
have been possible without the guidance and support of a group of people. Therefore,
I would like to express my gratitude to the people below.
I should first like to thank my principal supervisor and mentor, Dr. Andrew
Lewis, whose advice, guidance, patience and support have always been available
for me throughout my academic journey in my Ph.D. candidature. I should
also like to thank Dr. René Hexel for his generous support and guidance as my
associate supervisor and Dr. Seyed Ali Mohammad Mirjalili for his invaluable
advice.
Reprinted from Information Sciences, Vol. 317, Seyedali Mirjalili, Andrew Lewis,
and Sanaz Mostaghim, Confidence measure: A novel metric for robust meta-
heuristic optimisation algorithms, Pages No. 114-142, Copyright
© 2015, with
permission from Elsevier.
Reprinted from Information Sciences, Vol. 300, Seyedali Mirjalili and Andrew
Lewis, Novel frameworks for creating robust multi-objective benchmark prob-
lems, Pages No. 158-192, Copyright
© 2015, with permission from Elsevier.
Reprinted from Information Sciences, Vol. 328, Seyedali Mirjalili and Andrew
Lewis, Obstacles and difficulties for robust benchmark problems: A novel penalty-
based robust optimisation method, Pages No. 485-509, Copyright
© 2016, with
permission from Elsevier.
Reprinted from Applied Soft Computing, Vol. 35, Seyedali Mirjalili and Andrew
Lewis, Hindrances for robust multi-objective test problems, Pages No. 333-348,
Copyright
© 2015, with permission from Elsevier.
Reprinted from Swarm and Evolutionary Computation, Vol. 21, Seyedali Mir-
jalili and Andrew Lewis, Novel performance metrics for robust multi-objective
optimization algorithms, Pages No. 1-23, Copyright
© 2015, with permission
from Elsevier.
1 Introduction 1
1.1 Problem Background . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Problem Statement and Objectives . . . . . . . . . . . . . . . . . 6
1.3 Scope and Significance . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Organisation of the thesis . . . . . . . . . . . . . . . . . . . . . . 8
2 Related work 11
2.1 Evolutionary single-objective optimisation . . . . . . . . . . . . . 13
2.2 Evolutionary Multi-objective optimisation . . . . . . . . . . . . . 21
2.3 Robust single-objective optimisation . . . . . . . . . . . . . . . . 29
2.3.1 Preliminaries and definitions . . . . . . . . . . . . . . . . . 30
2.3.2 Expectation measure . . . . . . . . . . . . . . . . . . . . . 33
2.3.3 Variance measure . . . . . . . . . . . . . . . . . . . . . . . 35
2.4 Robust multi-objective optimisation . . . . . . . . . . . . . . . . . 38
2.4.1 Preliminaries and definitions . . . . . . . . . . . . . . . . . 38
2.4.2 Current expectation and variance measures . . . . . . . . . 44
2.5 Benchmark problems . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.5.1 Benchmark problems for single-objective robust optimisation 54
2.5.2 Benchmark problems for multi-objective robust optimisation 56
2.6 Performance metrics . . . . . . . . . . . . . . . . . . . . . . . . . 60
2.6.1 Convergence performance indicators: . . . . . . . . . . . . 62
2.6.1.1 Generational Distance (GD): . . . . . . . . . . . 62
2.6.1.2 Inverted Generational Distance (IGD): . . . . . . 62
2.6.1.3 Delta Measure: . . . . . . . . . . . . . . . . . . . 63
2.6.1.4 Hypervolume metric: . . . . . . . . . . . . . . . . 63
2.6.1.5 Inverse hypervolume metric: . . . . . . . . . . . . 63
2.6.2 Coverage performance indicators: . . . . . . . . . . . . . . 63
2.6.2.1 Spacing (SP): . . . . . . . . . . . . . . . . . . . 63
2.6.2.2 Radial coverage metric: . . . . . . . . . . . . . . 64
2.6.2.3 Maximum Spread (M): . . . . . . . . . . . . . . . . 64
2.6.3 Success performance indicators: . . . . . . . . . . . . . . . 65
2.6.3.1 Error Ratio (ER): . . . . . . . . . . . . . . . . . 65
2.6.3.2 Success counting (SCC): . . . . . . . . . . . . . . 65
2.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3 Analysis 68
3.1 Benchmark problems . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.2 Performance metrics . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.3 Robust algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.4 Systematic robust optimisation algorithm design process . . . . . 74
3.5 Objectives and plan . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.6 Contributions and scope . . . . . . . . . . . . . . . . . . . . . . . 77
3.7 Significance of Study . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4 Benchmark problems 84
4.1 Benchmarks for robust single-objective optimisation . . . . . . . . 86
4.1.1 Framework I . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.1.2 Framework II . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.1.3 Framework III . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.1.4 Obstacles and difficulties for single-objective robust bench-
mark problems . . . . . . . . . . . . . . . . . . . . . . . . 93
4.1.4.1 Desired number of variables . . . . . . . . . . . . 93
4.1.4.2 Biased search space . . . . . . . . . . . . . . . . . 95
4.1.4.3 Deceptive search space . . . . . . . . . . . . . . . 96
4.1.4.4 Multi-modal search space . . . . . . . . . . . . . 97
4.1.4.5 Flat search space . . . . . . . . . . . . . . . . . . 99
4.2 Benchmarks for robust multi-objective optimisation . . . . . . . . 101
4.2.1 Framework 1 . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.2.2 Framework 2 . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.2.3 Framework 3 . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.2.4 Hindrances for robust multi-objective test problems . . . . 111
4.2.4.1 Biased search space . . . . . . . . . . . . . . . . . 112
4.2.4.2 Deceptive search space . . . . . . . . . . . . . . . 113
4.2.4.3 Multi-modal search space . . . . . . . . . . . . . 115
4.2.4.4 Flat (non-improving) search space . . . . . . . . 118
4.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
10 Conclusion 202
10.1 Summary and conclusions . . . . . . . . . . . . . . . . . . . . . . 202
10.2 Achievements and significance . . . . . . . . . . . . . . . . . . . . 207
10.3 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
Index 210
Bibliography 210
Appendices 229
4.20 Effect of γ on the fronts. Note that the red curve indicates the
robustness of the robust front and black curves are the fronts. . . 109
4.21 Effect of ζ on the parameter and objective spaces. Note that the
red curve indicates the robustness of the robust front and black
curves are the fronts. . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.22 Parameter and objective spaces constructed by the third frame-
work. The red curve indicates the robustness of the robust front
and black curves are the fronts. . . . . . . . . . . . . . . . . . . . 111
4.23 A non-biased objective space versus a biased objective space (50,000
random solutions). The proposed bias function requires the ran-
dom points to cluster away from the Pareto optimal front. . . . . 112
4.24 Bias of the search space is increased inversely proportional to ψ . 113
4.25 There are four deceptive non-robust optima and one robust opti-
mum in the function H(x) . . . . . . . . . . . . . . . . . . . . . . 114
4.26 Different shapes of Pareto fronts that can be obtained by manip-
ulating β . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.27 H(x) creates one robust and 2M global Pareto optimal fronts . . 117
4.28 Parameter space and objective space of the proposed multi-modal
robust multi-objective test problem . . . . . . . . . . . . . . . . . 117
4.29 Different shapes of Pareto fronts that can be obtained by manip-
ulating β . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
4.30 H(x) makes two global optima close to the boundaries . . . . . . 119
4.31 H(x) makes two global optima close to the boundaries . . . . . . 120
8.1 Robust fronts obtained for RMTP1, RMTP7, RMTP9, and RMTP27,
one test case per row. . . . . . . . . . . . . . . . . . . . . . . . . 170
8.2 Robust fronts obtained for RMTP13 to RMTP16 and RMTP19,
one test case per row. Note that the dominated (local) front is
robust and considered as reference for the performance measures. 179
8.3 Robust fronts obtained for RMTP21 to RMTP25 one test case per
row. Note that the worst front is the most robust and considered
as reference for the performance measures. . . . . . . . . . . . . 181
9.1 Airfoils along the blade define the shape of the propeller (NACA
a = 0.8 meanline and NACA 65A010 thickness) . . . . . . . . . . 188
9.2 Propeller used as the case study . . . . . . . . . . . . . . . . . . . 189
9.3 (left) Pareto optimal front obtained by the MOPSO algorithm (6
blades), (right) Pareto optimal fronts for different numbers of blades . . 190
9.4 (left) Best Pareto optimal fronts obtained for different RPM (right)
Optimal RPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
9.5 PF obtained when varying RPM compared to PFs obtained with
different RPM values . . . . . . . . . . . . . . . . . . . . . . . . . 193
9.6 Optimal RPM coordinates . . . . . . . . . . . . . . . . . . . . . . 194
Nomenclature
C Confidence Measure
DE Differential Evolution
EA Evolutionary Algorithm
EP Evolutionary Programming
ER Error Ratio
ES Evolution Strategy
GA Genetic Algorithm
GD Generational Distance
O Big O
R Robustness Measure
SA Simulated Annealing
SP Spacing
TP Test Problem
TS Tabu Search
ZDT Zitzler-Deb-Thiele
Chapter 1
Introduction
In the past, the computational engineering design process used to be mostly ex-
perimentally based [103]. This meant that a real system first had to be designed
and constructed to be able to do experiments. In other words, the design model
was an actual physical model. For instance, an actual airplane or prototype
would have to be put in a massive wind tunnel to investigate the aerodynamics of
the aircraft [1]. Obviously, the process of design was very tedious, expensive,
and slow.
After the development of computers, engineers started to simulate models in
computers to compute and investigate different aspects of real systems. This was
a revolutionary idea since there was no need for an actual model in the design
phase anymore. Another advantage of modelling problems in computers was the
reduced time and cost. It was no longer necessary to build a wind tunnel and
real model to compute and investigate the aerodynamics of an aircraft. The next
step was to investigate not only the known characteristics of the problem but also
explore and discover new features. Exploring the search space of the simulated
model in a computer allowed designers to better understand the problem and find
optimal values for design parameters. Despite the use of computers in modelling,
a designer still had to manipulate the parameters of the problem manually.
After the first two steps, people started to develop and utilise computa-
tional/optimisation algorithms to use the computer itself to find optimal solu-
tions of the simulated model for a given problem. Thus, the computer manipu-
lated and chose the parameters with minimum human involvement. This was the
birth of automated and computer-aided design fields. Evolutionary Algorithms
(EA) also became popular tools in finding the optimal solutions for optimisation
problems.
Generally speaking, EAs mostly have very similar frameworks. They first
start the optimisation process by creating an initial set of random, trial solutions
for a given problem. This random set is then iteratively evaluated by objective
function(s) of the problem and evolved to minimise or maximise the objective(s).
Although this framework is very simple, optimisation of real world problems
requires considering and addressing several issues of which the most important
ones are: local optima, expensive computational cost of function evaluations,
constraints, multiple objectives, and uncertainties.
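The generic framework described above can be expressed as a minimal sketch. The population size, truncation selection, and Gaussian perturbation below are illustrative choices, not any specific algorithm from this thesis:

```python
import random

def evolve(objective, n_vars, pop_size=30, iterations=100, bounds=(-5.0, 5.0)):
    """Sketch of the generic EA loop: create a random initial population,
    then iteratively evaluate it and evolve it toward better objective values."""
    lo, hi = bounds
    # 1) Initial set of random, trial solutions.
    pop = [[random.uniform(lo, hi) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(iterations):
        # 2) Evaluate every candidate with the objective function.
        ranked = sorted(pop, key=objective)
        # 3) Evolve: keep the better half, perturb it to form the next generation.
        parents = ranked[: pop_size // 2]
        children = [[x + random.gauss(0, 0.1) for x in p] for p in parents]
        pop = parents + children
    return min(pop, key=objective)

# Example: minimise the sphere function (sum of squares, optimum at the origin).
best = evolve(lambda v: sum(x * x for x in v), n_vars=2)
```

Because the parents are retained (elitism), the best solution found never worsens from one iteration to the next.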
Real problems have mostly unknown search spaces that may contain many
sub-optimal solutions. Stagnation in local optima is a very common phenomenon
when using EAs. In this case, the algorithm is trapped in one of the local solu-
tions and assumes it to be the global solution. Although the stochastic operators
of EAs improve the local optima avoidance ability compared to deterministic
mathematical optimisation approaches, local optima stagnation may occur in
any EA as well.
EAs are also mostly population-based paradigms. This means they iteratively
evaluate and improve a set of solutions instead of a single solution. Although
this improves the local optima avoidance as well, solving expensive problems with
EAs is sometimes infeasible due to the need for a large number of function
evaluations. In this case, different mechanisms should be designed to decrease
the required number of function evaluations. Constraints are another difficulty
of real problems, in which the search space may be divided into two regions: fea-
sible and infeasible. The search agents of EAs should be equipped with suitable
mechanisms to avoid all the infeasible regions and explore the feasible areas to
find the feasible global optimum. Handling constraints requires specific mecha-
nisms and has been a popular topic among researchers.
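One common constraint-handling mechanism, shown here purely as an illustration (a static penalty approach, not necessarily the thesis's choice), steers the search away from infeasible regions by penalising solutions in proportion to their constraint violation:

```python
def penalised(objective, constraints, penalty=1e6):
    """Wrap an objective so that infeasible solutions (violating any
    constraint g(x) <= 0) receive a large penalty proportional to the
    total violation. The penalty coefficient is an illustrative value."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return objective(x) + penalty * violation
    return wrapped

# Hypothetical problem: minimise x^2 subject to x >= 1 (i.e. 1 - x <= 0).
f = penalised(lambda x: x * x, [lambda x: 1.0 - x])
```

With this wrapper, a standard unconstrained optimiser can be applied directly, since feasible solutions always score better than infeasible ones.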
Real engineering problems often also have multiple objectives. Optimisation
in a multi-objective search space is quite different and needs special considera-
tions compared to a single-objective search space. In a single-objective problem,
there is only one objective function to be optimised and only one global solution
to be found. However, in multi-objective problems there is no longer a single so-
lution for the problem, and a set of solutions representing the trade-offs between
the multiple objectives, the Pareto optimal set, must be found.
Last but not least, another key concept in the optimisation of real engineer-
both methods start with hypotheses. However, any idea needs to be tested,
evaluated, compared, and verified to reliably and confidently prove that it is
beneficial. This process is done in different branches of optimisation including
global optimisation, dynamic optimisation, interactive optimisation, and so on
as well.
Despite the advances in all of the above-mentioned branches and importance
of considering uncertainties during optimisation of real problems, there is signifi-
cant room for further improvements in the area of robust optimisation, especially
combined with multi-objective approaches. There are few works in this field as
I will show in the next chapter. Finding optimal solutions that are less sensitive
to perturbations needs a systematic design approach. Therefore:
There is a need for a systematic design process in the field of robust optimi-
sation to better and conveniently alleviate the drawbacks of the current robust
optimisation techniques and/or propose new ones.
Consequently, the aim of this study is to investigate and fill one of the most
substantial current gaps in robust heuristic optimisation techniques, with em-
phasis on development of tools that could assist in real-world applications. The
main research question can be stated as follows:
Figure 1.1: Organisation of the thesis (Purple: literature review and related
works, Red: analysis of the literature and current gaps, Green: proposed sys-
tematic robust algorithm design process, Blue: results on the test beds and real
case study, and Orange: conclusion and future works)
Chapter 2
Related work
According to the No Free Lunch (NFL) theorem, all algorithms perform equally when averaged across all
possible optimisation problems. Therefore, one algorithm can be very effective
in solving one set of problems and not effective on a different set of problems.
This is the foundation of many works in this field.
Despite the popularity and simplicity of evolutionary algorithms, optimi-
sation using these techniques requires several considerations and has its own
challenges. There are also different types of optimisation in this field, of which
the most important ones are single-objective, multi-objective, unconstrained,
constrained, dynamic, robust, and interactive optimisation. Single-objective op-
timisation is the simplest and most fundamental expression of the optimisation pro-
cess. It deals with varying parameters, seeking to satisfy an objective. As such,
this kind of optimisation is the foundation for consideration of new, generally
applicable methods and ideas. This thesis concentrates on development of effec-
tive robust optimisation and as a starting point, must consider single-objective
approaches.
In the real world, most optimisation problems have multiple, often compet-
ing objectives. For the methods proposed to be useful, widely applicable and
effective, they must take into consideration this multi-objective nature of the
majority of problems to be addressed. Multi-objective optimisation deals with
extending and developing approaches to solve these kinds of problems. So, for the
contributions of this thesis to be broadly applicable, methods must be developed
and tested for single-objective optimisation and multi-objective optimisation.
This chapter reviews the literature of single-objective optimisation, multi-
objective optimisation, robust single-objective optimisation, and robust multi-
objective optimisation. Due to the scope of the thesis, a large portion of this
chapter covers robust optimisation.
[Figure: A system with variables (inputs), an objective (output), and constraints
determining feasibility]
Other inputs of a system that may affect its output are operating (environ-
mental) conditions. Such inputs are considered as secondary inputs that are
defined when a system is operating in the simulated/final environment. Exam-
ples of such conditions are: temperature/density of fluid when a propeller is
turning or the angle of attack when an aircraft is flying. These types of inputs
are not optimised by the optimisers but definitely have to be considered during
optimisation since they may have significant impacts on the outputs.
Without loss of generality, a single-objective optimisation can be formulated
as a minimisation problem as follows:
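The formulation referred to here is the standard one; a sketch of it, with conventional symbols rather than those of the thesis, is:

```latex
\begin{aligned}
\text{Minimise:} \quad & f(\vec{x}), \qquad \vec{x} = (x_1, x_2, \ldots, x_n)\\
\text{Subject to:} \quad & g_j(\vec{x}) \leq 0, \qquad j = 1, 2, \ldots, m\\
& h_k(\vec{x}) = 0, \qquad k = 1, 2, \ldots, p\\
& lb_i \leq x_i \leq ub_i, \qquad i = 1, 2, \ldots, n
\end{aligned}
```

where the $g_j$ are inequality constraints, the $h_k$ are equality constraints, and $lb_i$, $ub_i$ are the lower and upper bounds of the variables.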
Figure 2.2: Example of a search landscape with two variables and several con-
straints
It may be observed in Fig. 2.2 that the search space can have multiple local
optima, but one of them is the global optimum (or some of them in case of a
flat landscape). The constraints create gaps in the search space and occasionally
split it into various separated regions. In the literature, infeasible regions refer to
the areas of the search space that violate constraints. The search space of a real
problem can be very challenging. Some of the difficulties of the real search spaces
are discontinuity, a massive number of local optima, high infeasibility, global
optimum located on the boundaries of constraints, deceptive valleys toward local
optima, and isolation of the global optimum.
When formulating a problem, an optimiser would be able to tune its variables
based on the outputs and constraints. As mentioned in the introduction of this
chapter, one of the advantages of evolutionary algorithms is that they consider a
system as a black box. Fig. 2.3 shows that the optimisers only provide the system
with variables and observe the outputs. The optimisers then iteratively and
stochastically change the inputs of the system based on the feedback (output)
obtained so far until satisfaction of an end criterion. The process of changing
the variables based on the history of outputs is defined by the mechanism of
an algorithm. For instance, PSO saves the best solutions obtained so far and
encourages new solutions to relocate around them.
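As a concrete illustration of such a mechanism, a sketch of the PSO update is given below; the coefficient values (w, c1, c2) are common defaults, not values taken from this thesis:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration sketch: each particle keeps some of its previous
    direction (inertia) and is pulled toward its personal best position
    (pbest) and the swarm's best position (gbest)."""
    for i, (pos, vel) in enumerate(zip(positions, velocities)):
        for d in range(len(pos)):
            r1, r2 = random.random(), random.random()
            vel[d] = (w * vel[d]                          # inertia: previous direction
                      + c1 * r1 * (pbest[i][d] - pos[d])  # pull toward personal best
                      + c2 * r2 * (gbest[d] - pos[d]))    # pull toward global best
            pos[d] += vel[d]
    return positions, velocities
```

A driver loop would evaluate the new positions after each step and update pbest and gbest whenever an improvement is found, which is exactly the "save the best solutions obtained so far" behaviour described above.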
[Figure 2.3: The optimiser provides variables to the system (a black box) and
observes the objective and feasibility outputs; operating/environmental
conditions act as secondary inputs]
solution, and it is iteratively improved over the iterations. In the latter case, a
set of solutions (more than one) is created and improved during optimisation.
These two families are called individual-based and population-based algorithms
and illustrated in Fig. 2.4.
There are several advantages and disadvantages for each of these families.
Individual-based algorithms require less computational cost and fewer function
evaluations but can suffer from premature convergence. Premature convergence refers
to the stagnation of an optimisation technique in local optima, which prevents
it from convergence towards the global optimum. Fig. 2.4 shows that the single
candidate solution becomes entrapped in the local optimum which is very close
to the global optimum. In contrast, population-based algorithms have a greater
ability to avoid local optima since a set of solutions are involved during opti-
misation. Fig. 2.4 illustrates how the collection of candidate solutions results in
finding the global optimum. In addition, information can be exchanged between
the candidate solutions and assist them to overcome the above-mentioned diffi-
culties of search spaces. However, high computational cost and the need for more
function evaluations are two major drawbacks of population-based algorithms.
The well-known algorithms in the individual-based family are: Tabu Search
(TS) [70, 77, 78], hill climbing [41], Iterated Local Search (ILS) [117], and
Simulated Annealing (SA) [102, 25]. TS is an improved local search technique that
utilises short-term, intermediate-term, and long-term memories to ban and trun-
cate unpromising/repeated solutions. Hill climbing is also another local search
and individual-based technique that starts optimisation from a single solution.
This algorithm then iteratively attempts to improve the solution by changing its
variables. ILS is an improved hill climbing algorithm to decrease the probability
of entrapment in local optima. In this algorithm, the optimum obtained at the
end of each run is retained and considered as the starting point in the next iter-
ation. Initially, the SA algorithm tends to accept worse solutions with a
probability controlled by a variable called the cooling factor. This assists SA
to promote exploration of the search space and prevents it from becoming
trapped in local optima when it does encounter them.
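This acceptance behaviour can be sketched as follows; the geometric cooling schedule and all parameter values here are illustrative assumptions, not taken from the thesis:

```python
import math, random

def simulated_annealing(objective, x0, sigma=0.5, t0=1.0, cooling=0.95,
                        iterations=200):
    """Minimal SA sketch: worse moves are accepted with a probability that
    shrinks as the temperature cools, promoting early exploration."""
    x, fx, t = list(x0), objective(x0), t0
    best_x, best_f = x, fx
    for _ in range(iterations):
        cand = [xi + random.gauss(0, sigma) for xi in x]
        fc = objective(cand)
        # Always accept improvements; accept worse moves with probability
        # exp(-(fc - fx) / t), which shrinks toward zero as t cools.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling  # geometric cooling schedule (an illustrative choice)
    return best_x, best_f
```

Early on, when t is large, even poor candidates are frequently accepted; as t decays, the search becomes increasingly greedy around the best region found.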
Although different improvements of individual-based algorithms promote lo-
cal optima avoidance, the literature shows that population-based algorithms
are better in handling and alleviating this problem. Regardless of the differ-
ences between population-based algorithms, the common characteristic is the
separation of the optimisation process into two conflicting goals: exploration
versus exploitation [62]. Exploration encourages candidate solutions to change
abruptly and stochastically. This mechanism improves the diversity of solutions
and causes greater exploration of the search space. In PSO, for instance, the
inertia weight maintains the tendency of particles toward their previous
directions and emphasises exploration. In GA, a high probability of crossover causes
more combination of individuals and is the main mechanism for exploration.
In contrast, exploitation aims at improving the quality of solutions by lo-
cally searching around the promising solutions obtained in the exploration. In
exploitation, candidate solutions are obliged to change less suddenly and search
locally. In PSO, for instance, a low inertia weight causes low exploration and
a higher tendency toward the best personal/global solutions obtained. There-
fore, the particles converge toward best points instead of churning around the
search space. The mechanism that provides exploitation in GA is mutation. Muta-
tion causes slight random changes in the individuals and local search around the
candidate solutions.
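The two GA operators discussed above can be sketched as follows; single-point crossover drives exploration and Gaussian mutation gives a local search around a solution. The rate and step size are illustrative values, not parameters from the thesis:

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover: recombining two individuals produces abrupt
    changes and drives exploration of the search space."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(individual, rate=0.1, sigma=0.05):
    """Mutation: slight random changes to an individual's variables,
    providing a local search around the candidate solution (exploitation)."""
    return [x + random.gauss(0, sigma) if random.random() < rate else x
            for x in individual]
```

A GA iteration would select parents, apply crossover to produce offspring, then mutate the offspring before the next evaluation round.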
Exploration and exploitation are two conflicting goals where promoting one
generally results in degrading the other [4]. A correct balance between these
two goals can guarantee a very accurate approximation of the global optimum
However, most real problems have more than one objective to be optimised.
A single-objective algorithm can be applied to such problems, but multiple ob-
jectives must first be aggregated to a single objective, and then a single objective
optimisation algorithm needs to be run multiple times to find the best trade-offs
between objectives. Another drawback of this method is that it cannot solve all
types of multi-objective problems as will be discussed and explained in detail in
the following sections.
There are different challenges in solving real engineering problems, which need
specific tools to handle them. One of the most important characteristics of real
problems is multi-objectivity. A problem is called multi-objective if there is
more than one objective to be optimised. There are two common approaches for
handling multiple objectives: a priori versus a posteriori [119, 22].
The former class of optimisers combines the objectives of a multi-objective
problem to a single-objective with a set of weights (provided by decision makers),
which defines the importance of each objective, and employs a single-objective
optimiser to solve it. The single-objective nature of the combined search space
allows a single solution to be found as the optimum. In contrast, a posteriori
methods maintain the multi-objective formulation of the problem,
allowing exploration of the behaviour of the problems across a range of design
parameters and operating conditions compared to a priori approaches [42]. In
this case, decision makers will eventually choose one of the solutions obtained
based on their needs. There is also another way of handling multiple objectives
called the progressive method, in which decision makers’ preferences about the
objectives are considered during optimisation [21].
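The a priori (weighted aggregation) approach can be sketched as below; the weights and the bi-objective problem are hypothetical examples, not drawn from the thesis:

```python
def weighted_sum(objectives, weights):
    """Combine multiple objectives into one scalar using decision-maker
    weights (a priori approach). Each run with one weight vector yields a
    single solution; varying the weights traces different trade-offs."""
    def combined(x):
        return sum(w * f(x) for w, f in zip(weights, objectives))
    return combined

# Hypothetical bi-objective problem: minimise both f1 and f2.
f1 = lambda x: x[0] ** 2
f2 = lambda x: (x[0] - 2) ** 2
g = weighted_sum([f1, f2], [0.5, 0.5])
```

Any single-objective optimiser can then be applied to the combined function g; repeating the process with different weight vectors approximates different trade-off solutions.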
In contrast to single-objective optimisation, there is typically no single
solution when considering multiple objectives as the goal of the optimisation
process. In this case, a set of optimal solutions, which represents various trade-
offs between the objectives, is the “solution” of a multi-objective problem [31].
Before 1984, mathematical multi-objective optimisation techniques were popu-
lar among researchers in different fields of study such as applied mathematics,
operations research, and computer science. Since the majority of the conven-
tional approaches (including deterministic methods) suffered from stagnation in
local optima, however, such techniques were not as widely applicable as they are
nowadays.
In 1984, a revolutionary idea was proposed by David Schaffer [32]. He intro-
duced the concept of multi-objective optimisation in stochastic (including evolu-
tionary and heuristic) optimisation techniques. Since then, a significant amount
of research has been dedicated to developing and evaluating multi-objective
evolutionary/heuristic algorithms. The advantages of stochastic optimisation
techniques such as their gradient-free mechanism and local optima avoidance
22 2. Related work
made them readily applicable to real problems as well. Nowadays, the applica-
tion of multi-objective optimisation techniques can be found in many different
fields of studies: e.g. mechanical engineering [100], civil engineering [118], chem-
istry [133], and other fields [30].
Without loss of generality, multi-objective optimisation can be formulated as
a minimisation problem as follows:

Minimise: F(~x) = {f1(~x), f2(~x), ..., fo(~x)}

Definition 2.2.1 (Pareto Dominance): Suppose that there are two vectors such
as: ~x = (x1, x2, ..., xk) and ~y = (y1, y2, ..., yk). Vector ~x dominates
vector ~y (denoted as ~x ≺ ~y) iff:

∀i ∈ (1, 2, ..., o) : [fi(~x) ≤ fi(~y)] ∧ [∃i ∈ (1, 2, ..., o) : fi(~x) < fi(~y)]
Fig. 2.5 illustrates the concept of Pareto dominance (both f1 and f2 are
minimised). It can be seen in this figure that the circles dominate some of
the other solutions (squares) since they show lower values on both objectives.
However, each circle shows a lower value on one objective and a greater value
on the other compared to the remaining circles, meaning that it cannot
dominate them. The definition of Pareto optimality is as follows [125]:
Definition 2.2.2 (Pareto optimality): A solution ~x is called Pareto-optimal
iff there is no solution ~y such that ~y ≺ ~x.

Definition 2.2.3 (Pareto optimal set): The set of all Pareto-optimal
solutions:

PS := {~x | ¬∃~y : ~y ≺ ~x}

Definition 2.2.4 (Pareto optimal front): A set containing the values of the
objective functions for the Pareto optimal set:

PF := {fi(~x) | ~x ∈ PS}, ∀i ∈ (1, 2, ..., o)
The Pareto optimal set and Pareto optimal front of Fig. 2.5 are shown in
Fig. 2.6, which plots the Pareto set in parameter space (x1, x2) and the
Pareto front in objective space (f1, f2, both minimised).
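The dominance test of Definition 2.2.1 and the front extraction of Definition 2.2.4 can be sketched as follows (a minimal illustration for minimisation; the function names and example data are chosen for this sketch only):

```python
def dominates(x_obj, y_obj):
    """Pareto dominance (Definition 2.2.1) for minimisation: x dominates y
    iff it is no worse on every objective and strictly better on at least one."""
    return (all(a <= b for a, b in zip(x_obj, y_obj))
            and any(a < b for a, b in zip(x_obj, y_obj)))

def pareto_front(points):
    """Keep only the non-dominated objective vectors (Definition 2.2.4)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

objs = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
print(pareto_front(objs))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

Note that a point never strictly dominates itself, so the filter can compare each point against the whole list.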
• In order to find the Pareto optimal front, different weights with a proper
distribution should be employed. However, an even distribution of the
weights does not necessarily guarantee finding Pareto optimal solutions
with an even distribution.
• This method is not able to find the non-convex regions of the Pareto op-
timal front because negative weights are not allowed and the sum of all
the weights should be constant. In other words, the convex sum of the
objectives is usually used in conventional aggregation methods.
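The convex aggregation criticised above can be sketched as follows (a minimal illustration of a priori weighted-sum scalarisation; the helper name and example values are assumptions of this sketch):

```python
def weighted_sum(objectives, weights):
    """A priori scalarisation: a convex (non-negative, sum-to-one) weighted
    sum collapses o objectives into one value for a single-objective solver."""
    assert all(w >= 0 for w in weights) and abs(sum(weights) - 1.0) < 1e-9
    return sum(w * f for w, f in zip(weights, objectives))

# Each weight vector yields one candidate optimum; the weights must be swept
# to trace out (the convex part of) the Pareto optimal front.
print(weighted_sum((0.5, 0.5), (0.2, 0.8)))  # 0.2*0.5 + 0.8*0.5 = 0.5
```

The assertion makes the convex-sum restriction explicit: negative weights are rejected, which is exactly why non-convex front regions are unreachable by this method.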
Some works in the literature have tried to improve this method. For example,
Parsopoulos and Vrahatis used two dynamic weighted aggregations [131]:
[Figure: two maximisation problems (objectives f1 and f2) annotated with
local fronts, infeasible areas, Pareto fronts, the initial population, the
Pareto solutions, information exchange, quick movement, and the initial start
points in each run.]
and the objective functions, respectively. This order is due to the compari-
son of all individuals based on all the objective functions together (O(MN²))
and sorting into non-dominated levels (O(MN²) × N = O(MN³)). In the new
method, a hierarchical model of non-dominated levels has been proposed to
reduce this computational cost, whereby not all dominated individuals need
to be compared after the first non-dominated level (O(MN²)). Two counters
are kept for each individual, recording how many individuals dominate it
and how many individuals it dominates. These counters are used to build the
domination levels.
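The two counters described above can be sketched along the lines of fast non-dominated sorting (a simplified illustration of the counting idea, not the exact implementation of the cited method):

```python
def dominates(a, b):
    """Minimisation Pareto dominance: no worse everywhere, better somewhere."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated_sort(pop):
    """Sort objective vectors into non-domination levels using two records
    per individual: how many dominate it, and a list of whom it dominates."""
    n = len(pop)
    dominated = [[] for _ in range(n)]  # indices that individual i dominates
    counts = [0] * n                    # number of individuals dominating i
    for i in range(n):
        for j in range(n):
            if dominates(pop[i], pop[j]):
                dominated[i].append(j)
                counts[j] += 1
    fronts = []
    current = [i for i in range(n) if counts[i] == 0]  # first front
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated[i]:
                counts[j] -= 1          # peel off the current front
                if counts[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

pop = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
print(non_dominated_sort(pop))  # [[0, 1, 2], [3]]
```

Only the pairwise comparisons cost O(MN²); the later levels fall out of the counters without re-comparing all individuals.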
2. Type B: in this case the parameters of the problem may change. One of
the major sources of this kind of uncertainty is manufacturing tolerance.
3. Type C: in this case the system itself produces noise, for instance due
to sensory measurement errors or randomised simulations. The uncertainty
of the outputs of a system is caused by Type A and Type C uncertainties.
Time-varying (dynamic) systems are also considered as having Type C un-
certainty.
It should be noted that computer models (e.g. CFD) do produce errors but,
being deterministic, they do not produce noisy outputs. The source of these
errors lies either in the failure to consider uncertain parameters of Type A
or B during simulation, or in errors in the models themselves, caused by a
number of issues. Real-world systems do produce noisy outputs, but these
again are the effects of Type A and B uncertainties. Fig. 2.8 shows where
these three types of uncertainties occur during and after optimisation.
Another very important classification is between aleatory (i.e. random) and
epistemic uncertainty (i.e. due to lack of knowledge) [165].
[Figure 2.8 schematic: parameters and the environment feed the system;
Type A uncertainty affects the operating conditions, Type B the parameters,
Type C the outputs, and Type D the constraints.]
[Figure 2.9: a one-dimensional objective function f with two valleys; a
perturbation δ around the non-robust solution in the left valley causes a
large change Δ in f, while the same δ around the robust solution in the
right valley causes a smaller change Δ′.]
is its lesser sensitivity to a δ error in the variable x compared to the left
valley. Note that a robust solution should be an acceptable solution as well.
Fig. 2.9 clearly shows that a δ perturbation in the parameter causes Δ and Δ′
in the left and right valleys, respectively. What makes the right valley
robust is that Δ > Δ′. In robust optimisation such solutions are fruitful.
The same concepts are valid when considering uncertainties in operating
conditions. In such circumstances, perturbations in the operating conditions
might cause a greater change (Δ) or a lesser change (Δ′) in the output of the
system.
Handling uncertainties in parameters is mostly undertaken, in the literature,
by investigating the behaviour of a neighbourhood of solutions in the objec-
tive space. In the literature on population-based stochastic optimisation
techniques, the robustness of each individual should be verified in every
iteration. The main approaches proposed so far to enable population-based
stochastic algorithms to handle uncertainties in parameters are as follows [44].
The first two approaches are discussed with their recent developments in the
following subsections. It should be noted that there are numerous examples of
optimisation in the presence of uncertainty being dealt with by solving multi-
objective (expectation vs. variance) optimisation problems as well [98, 76, 146,
59, 107].
where Bδ(~x) denotes the δ-radius neighbourhood of the solution ~x and |Bδ(~x)|
indicates the hypervolume of the neighbourhood.
It should be noted here that the literature refers to uncertainty sets and
scenario sets in order to explicitly acknowledge the potentially asymmetric,
location-dependent nature of the uncertainty at any design location. Due to
the fixed Bδ(~x) in Equation 2.13, however, this definition does not allow for
this and is therefore limited to a subset of design problems.
It may be inferred from Equation 2.13 that the expectation measure is the
analytical integration of the main objective function over the maximum possible
perturbation in the parameters. This equation is applicable to problems where
the integral over the search space is known. For real problems with unknown
search spaces, however, the analytical integration is impossible to calculate.
In this case, the integration is approximated by the Monte Carlo method as
follows:

E(~x) = (1/H) Σ_{i=1}^{H} f(~x + ~δ_i)    (2.14)
Figure 2.10: Search space of an expectation measure versus its objective function
Deb and Gupta named this method “Type I” robust optimisation [44]. In
this case, the robust optimisation starts by creating a set of random candidate
solutions for a particular problem. Every candidate solution is evaluated by the
average of H generated random solutions around it. The random solutions are
created in the hypervolume of δmax around the solutions where δmax indicates
the maximum possible perturbation.
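Equation 2.14 can be sketched as follows (a minimal Monte Carlo illustration; the test function, the uniform sampling within the δmax-hypercube, and the sample size H are assumptions of this example, not taken from the cited works):

```python
import random

def expectation_measure(f, x, delta_max, H=50, seed=0):
    """Type I robustness (Eq. 2.14): approximate E(x) by averaging f over
    H random perturbations drawn from the delta_max-hypercube around x."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(H):
        perturbed = [xi + rng.uniform(-d, d) for xi, d in zip(x, delta_max)]
        total += f(perturbed)
    return total / H

# A sharp global optimum at x = 0 versus a flat local optimum near x = 2:
# under perturbation, the expectation measure favours the flat (robust) valley.
f = lambda x: min(50 * abs(x[0]), 1 + abs(x[0] - 2))
print(expectation_measure(f, [0.0], [0.5]) > expectation_measure(f, [2.0], [0.5]))
```

Every candidate thus costs H extra function evaluations, which is precisely the computational burden discussed later in this section.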
In the literature, there are also other expectation measures for improving
the performance of robust meta-heuristics [43, 74, 94, 106, 159, 9]. Expec-
tation measures have mostly been optimised instead of the main objective
function. Some studies [135, 115, 5], however, consider the expectation
measure as an additional objective.
where F(~x) can be selected as the effective mean or the worst function value
among the H selected solutions and η is a vector of thresholds in [0, 1].
Note that ‖·‖ in this equation can be any norm.
It may be seen in Equation 2.16 that the normalised fluctuation of the
objective is considered as a constraint. Robust solutions are increasingly
favoured as η decreases. Fig. 2.11 illustrates the effect of the variance
measure on the search space as a constraint: regions of the search space that
show greater fluctuations are considered infeasible, so a solution becomes
infeasible if it enters such regions.
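The variance measure used as a feasibility constraint (Equation 2.16) can be sketched as follows (the worst-value form of F, the normalisation, and the example function are assumptions of this illustration):

```python
import random

def is_robust_feasible(f, x, delta_max, eta, H=50, seed=0):
    """Type II robustness as a constraint: x is feasible only if the
    normalised fluctuation of f in its delta-neighbourhood stays below eta."""
    rng = random.Random(seed)
    fx = f(x)
    worst = max(f([xi + rng.uniform(-d, d) for xi, d in zip(x, delta_max)])
                for _ in range(H))
    return abs(worst - fx) / max(abs(fx), 1e-12) <= eta

# The flat valley near x = 2 satisfies the constraint; the sharp one at x = 0
# fluctuates too much and is treated as infeasible.
f = lambda x: min(1 + 50 * abs(x[0]), 2 + abs(x[0] - 2))
print(is_robust_feasible(f, [2.0], [0.5], eta=0.5))  # True
print(is_robust_feasible(f, [0.0], [0.5], eta=0.5))  # False
```

Unlike the expectation measure, the objective value itself is untouched; the algorithm instead needs a constraint handling mechanism to deal with the infeasible regions this test creates.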
Deb and Gupta named methods that utilise variance measures "Type II" robust
optimisation. After the proposal of this type of robust optimisation [44],
different variance measures were proposed in order to improve the performance
of robust meta-heuristics [74, 94]. The disadvantage of this method is that a
robust meta-heuristic must be equipped with a constraint handling method to
be able to find the robust optimum.
In summary, both measures quantify the robustness of solutions during opti-
misation. This assists optimisation techniques in quantitatively comparing
solutions and favouring robust ones. Expectation measures change the shape
of the search space and smooth out the global optimum; some of them use
Monte Carlo approximation to investigate the landscape around a solution. The
advantage of these methods is the replacement of the main objective by the
expectation measure and the separation of the robustness measure from the
optimisation algorithm. However, they may change the shape of the search
landscape and affect the expected behaviour of an algorithm. Considering the
expectation measure as an additional objective gives solutions with different
levels of robustness, but increases the difficulty of optimisation due to the
additional objective. Note that there are also other, cheaper approximations
for the expectation than Monte Carlo in the literature, such as Polynomial
Chaos [173, 59], Collocation Methods [172, 63], etc.
Variance measures do not change the search space but introduce an additional
constraint. This means that an algorithm should be equipped with a constraint
handling technique to be able to work with a variance measure. A variance
measure has the potential to create a search space dominated by infeasible
regions, which definitely needs special consideration. A search space with a
large infeasible portion is very likely to result in many infeasible search
agents in each iteration. The problem here is that, by default, most meta-
heuristics discard infeasible solutions and rely only on feasible solutions to
drive the search agents towards optimal solution(s). Therefore, a powerful
constraint handling method should be utilised when solving such problems.
Despite the success of both expectation and variance measures in assisting
algorithms to find robust solutions, they both suffer from the need for addi-
tional function evaluations. All of the current measures become unreliable as
the number of additional sampled points decreases. This is the main gap in
the literature at present. In addition, most of the current work concentrates
only on single-objective search spaces; more work is needed on computationally
cheap robust optimisation in multi-objective search spaces.
Comparison of solutions with these two metrics differs when considering single
or multiple objectives. In a single-objective search space there is only one
objective, so solutions can be compared easily by the value of the robustness
indicator. Due to the nature of single-objective problems, there is one global
robust optimum.
In a multi-objective problem, however, solutions cannot be compared with the
robustness indicators across only one objective due to the presence of multiple
objectives. In this case robustness should be calculated across all objectives,
and the solutions can then be compared using Pareto dominance operators. Due
to the nature of such problems, there is a set of robust solutions (robust
Pareto optimal solutions) as the robust designs. Considering robustness and
multiple objectives together makes the whole optimisation process very
challenging. The following section presents the preliminaries and reviews the
literature on robust multi-objective optimisation approaches.
Minimise: F(~x + ~δ) = {f1(~x + ~δ), f2(~x + ~δ), ..., fo(~x + ~δ)}    (2.17)
Definition 2.4.1 (Robust Pareto Dominance): Suppose that there are two vec-
tors such as: ~x = (x1, x2, ..., xk) and ~y = (y1, y2, ..., yk). Vector ~x
dominates vector ~y (denoted as ~x ≺ ~y) iff:

∀i ∈ (1, 2, ..., o) : [fi(~x + ~δ) ≤ fi(~y + ~δ)] ∧ [∃i ∈ (1, 2, ..., o) : fi(~x + ~δ) < fi(~y + ~δ)]
Definition 2.4.2 (Robust Pareto optimality): A solution ~x is called robust
Pareto-optimal iff there is no solution ~y such that ~y ≺ ~x in the robust
sense of Definition 2.4.1.

A set containing all the non-dominated robust solutions (robust Pareto opti-
mal solutions) is the robust answer to a multi-objective problem, and is
defined as follows [81]:

Definition 2.4.3 (Robust Pareto optimal set): The set of all robust Pareto-
optimal solutions:

RPS := {~x | ¬∃~y : ~y ≺ ~x}

The projection of the robust Pareto optimal set into the objective space is
called the robust Pareto optimal front and is defined as follows:

Definition 2.4.4 (Robust Pareto optimal front): A set containing the values
of the objective functions for the robust Pareto solutions set:

RPF := {fi(~x) | ~x ∈ RPS}, ∀i ∈ (1, 2, ..., o)
As may be inferred from the first two definitions, a solution is able to robustly
dominate another if and only if it is compared to all perturbations and found
to be better or equal under all of them. The concepts of robustness in a multi-
objective search space are illustrated in Fig. 2.12.
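Definition 2.4.1, combined with the mean-effective objectives of Equation 2.26 introduced later in this section, can be sketched as follows (the fixed perturbation set and the example problem are assumptions of this illustration):

```python
def effective_objectives(F, x, deltas):
    """Mean-effective objective vector (Eq. 2.26): average each objective
    over a fixed set of perturbation vectors."""
    samples = [F([xi + di for xi, di in zip(x, d)]) for d in deltas]
    return [sum(col) / len(col) for col in zip(*samples)]

def robust_dominates(F, x, y, deltas):
    """Robust Pareto dominance (Definition 2.4.1) evaluated on the effective
    objectives: x must be no worse everywhere and strictly better somewhere,
    with both solutions judged under the same perturbations."""
    fx = effective_objectives(F, x, deltas)
    fy = effective_objectives(F, y, deltas)
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

# Illustrative bi-objective problem: x = [0.0] is robustly better than [0.5].
F = lambda v: (v[0] ** 2, v[0] ** 2 + 1.0)
deltas = [[-0.1], [0.0], [0.1]]
print(robust_dominates(F, [0.0], [0.5], deltas))  # True
```

Sharing one perturbation set between the two candidates reflects the requirement that a solution dominate under all considered perturbations rather than under a lucky draw.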
[Figure 2.12: robustness in a multi-objective search space; solutions S1–S4
shown in both parameter space (x1, x2) and objective space (f1, f2, both
minimised), with the robust solution highlighted.]
Subject to : ~x ∈ S (2.22)
[Figure 2.13: four panels, each plotting a robust front against a Pareto
front for minimised f1 and f2. Recoverable panel captions: c) a part of the
Pareto front is robust, but there are other robust solutions; d) the Pareto
front is not robust at all, so the robust front consists of the local
front(s).]
Figure 2.13: Four possible robust Pareto optimal fronts with respect to the main
Pareto optimal front
where: fi^eff = (1/H) Σ_{j=1}^{H} fi(~x + ~δ_j)    (2.26)
Subject to: ~x ∈ S    (2.28)

where S is the feasible search space, Bδ(~x) is the neighbourhood of the solution
~x within a δ radius, and |Bδ(~x)| is the hypervolume of the neighbourhood.
The second method of handling Type B uncertainties adds an extra constraint
(a variance measure) to a problem as follows:

Subject to: ~x ∈ S    (2.30)

where S is the feasible search space, f^p(~x) can be selected as the effective
mean or the worst function value among the H selected solutions in the
neighbourhood, η is a threshold that defines the level of robustness of
solutions, ~x is the set of parameters, o is the number of objective functions,
m is the number of inequality constraints, p is the number of equality
constraints, and [lbi, ubi] are the boundaries of the i-th variable.
In these equations, robust solutions are favoured as η decreases. Equation 2.31
is called the variance measure because it defines the normalised variation of
the objectives in a neighbourhood. The η threshold can be chosen as a single
value for all objectives or a different value for each objective, according to
the decision makers' preferences.
where η is a threshold in [0, 1], S denotes the feasible search space, and
F(~x) can be selected as the effective mean or the worst function value among
the H selected solutions.
This constraint is called the variance measure because it defines the deviation
of the objective function(s) in the neighbourhood of a solution.
In 2003, Jin and Sendhoff used the current information about individuals in
each iteration to estimate the robustness of the individuals of the next
iteration, so there were no additional fitness evaluations (re-sampling) [94].
Their work is formulated to permit different uncertainty levels in the design
space. After estimating the robustness, they used it as another objective
function to be optimised in addition to the main objective function(s).
Eventually, the Pareto optimal front provided the trade-offs between
performance and robustness. They proposed two methods to optimise the original
function and minimise the variance. The difference between this method and
others is that it considers the variances of the design variables in addition
to the variance of the objective functions. The variances were calculated on
the members of a neighbourhood around each individual within distance δ. In
both methods, the robustness of an individual was calculated by dividing the
standard deviation of the neighbouring individuals' objective values by the
standard deviation of their design variables:

Vi(~x) = (1/N) Σ_{j=1}^{N} (σ_{fj} / σ_{xj})    (2.36)
where ~x_j denotes the j-th solution, N is the number of desirable solutions
in the archive within radius δ of the solution ~x, and w(~x_j) ∼ pdf(~δ) is a
weighting function that weights the importance of the previously sampled
points according to their distribution within δ.
In this method, sampled points that have not been used for a pre-defined
period of time should be removed from the archive.
In 2011, Saha et al. improved the handling of Type B uncertainties proposed
by Deb by reducing the number of function evaluations [143]. In this method,
the vicinity of a point in the search space was defined based on the
where M is the number of objectives, Fm(~x) is the mean value of the m-th
objective over the neighbouring solutions, and η is a threshold in [0, 1].
In 2008, Gaspar-Cunha and Covas proposed two measures for handling Type B
uncertainties [74]. The efficiency of combining both previously proposed
metrics of Type I and Type II robustness handling was also investigated. The
proposed measures were [74, 69]:
Type I:

E(~x_i) = (1 − (Σ_{j=0}^{N} |f̃(~x_j) − f̃(~x_i)|) / N) · f(~x_i)    (2.39)

Type II:

V(~x_i) = (1/N) Σ_{j=0}^{N} |(f̃(~x_j) − f̃(~x_i)) / (~x_j − ~x_i)|, d_{i,j} < d_max    (2.40)

where d_{i,j} is the Euclidean distance between agents i and j, N is the
number of agents at a distance less than d_max,
f̃(~x_i) = (f(~x_i) − f_min)/(f_max − f_min) for maximisation, and
f̃(~x_i) = 1 − (f(~x_i) − f_min)/(f_max − f_min) for minimisation.
The authors also investigated the efficiency of these robustness metrics in
finding robust frontiers for multi-objective problems. In order to do this, it
was suggested that the metrics be calculated for each objective function one
by one. Two methods for calculating the final robustness measurement in this
case are the effective mean and the worst function value:

V1(~x_i) = (1/M) Σ_{m=1}^{M} Vm(~x_i)    (2.41)

V2(~x_i) = max_{m=1,...,M} Vm(~x_i)    (2.42)
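Equations 2.39–2.42 can be sketched as follows (a minimal illustration of the normalisation f̃ and the two aggregation rules; the function names and example values are assumptions of this sketch):

```python
def f_tilde(f_val, f_min, f_max, maximisation=True):
    """Normalised objective value used in Eqs. 2.39 and 2.40."""
    norm = (f_val - f_min) / (f_max - f_min)
    return norm if maximisation else 1.0 - norm

def aggregate_robustness(per_objective_V, how="mean"):
    """Combine per-objective robustness values V_m into a final measurement:
    effective mean (V1, Eq. 2.41) or worst value (V2, Eq. 2.42)."""
    if how == "mean":
        return sum(per_objective_V) / len(per_objective_V)
    if how == "worst":
        return max(per_objective_V)
    raise ValueError("how must be 'mean' or 'worst'")

V = [0.2, 0.8, 0.5]                      # hypothetical V_m values, M = 3
print(aggregate_robustness(V))           # 0.5
print(aggregate_robustness(V, "worst"))  # 0.8
```

The worst-value rule (V2) is the more conservative of the two: a solution is only as robust as its least robust objective.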
2. Single distributed evaluation: in this method the disturbed input was
evaluated instead of evaluating multiple samples.
3. Re-evaluating just the best solutions: in this method all the solutions
were evaluated once without perturbations. The robustness of some of the
best solutions was then evaluated using re-sampling several times. This
method therefore took less computational time, and no time was wasted
computing the robustness of useless solutions.
4. Using sampled points from past iterations: in this method the weighted
mean of the previously sampled points around a solution over the course
of iterations was considered as the robustness measure. This method
entailed no additional computational cost, but needed memory to save the
sampled points. Note that this method is called implicit averaging in some
references.
Branke concluded that previously sampled points can provide very useful infor-
mation about the robustness of solutions during optimisation. Therefore, an
algorithm is able to find the robust optimum without extra function evalua-
tions, subject to proper use of previously sampled points. However, this
method is not reliable due to the stochastic nature of such algorithms. To
improve the reliability of such techniques, one current mechanism in the
literature is to generate new neighbouring solutions and evaluate them with
true function evaluations. However, true function evaluations directly
increase the computational cost of an algorithm, which is a vital issue when
solving real, expensive problems.
This section showed that the process of considering and handling uncertain-
ties in a multi-objective search space is very challenging. This difficulty is
perhaps one of the reasons for the lesser popularity of this field.
The literature review of this section and the preceding one indicated that
there are two main robustness measures in the literature: expectation and
variance.
On one hand, expectation measures do not add constraints for the optimiser
but may change the shape of the search space; in addition, considering the
expectation measure as an additional objective increases the difficulty of
the problem. On the other hand, a variance measure maintains the original
shape of the search space, but requires a suitable constraint handling method.
It should be noted that a variance measure can be considered as an objective
as well. Despite the effectiveness of both methods, they suffer from
unreliability when using previously sampled points and from high computational
cost when using newly sampled points. Currently, the literature lacks
simultaneously reliable and cheap robust optimisation approaches in both
single- and multi-objective search spaces.
In addition, the literature lacks specific test functions and performance met-
rics for robust optimisation. Suitable benchmark problems allow different
algorithms to be compared effectively, and performance metrics are useful for
quantifying the performance of algorithms. Without performance metrics, all
analysis must be based on qualitative results, which are not as accurate as
quantitative ones. In the field of robust optimisation, there is a negligible
number of test functions and practically no specific performance metrics. This
is the motivation for proposing several test problems and performance metrics
in this thesis. The next two sections review the current benchmark functions
and performance metrics in both global and robust optimisation.
search spaces in one benchmark problem. This section reviews the benchmark
functions in the fields of single- and multi-objective optimisation with a focus
on those for robust optimisation.
Generally speaking, the design process of a test problem has two goals.
Firstly, a test problem should be simple and modular, in order to allow
researchers to observe the behaviour of meta-heuristics and benchmark their
performance from different perspectives. Secondly, a test function should be
difficult to solve, in order to provide a challenging environment similar to
that of real search spaces. These two characteristics are in conflict: over-
simplification makes a test function readily solvable by meta-heuristics and
the resulting comparison uninformative, while a very difficult test function
may mimic real search spaces effectively but be so hard to solve that the
performance of algorithms cannot be clearly observed and compared. These two
conflicting issues make the development of test problems very challenging.
In single-objective problems there are several important characteristics of an
algorithm: exploration, exploitation, local optima avoidance, and convergence
speed. Unimodal [57], multimodal [122], and composite [116] test functions
have been designed in order to benchmark these abilities, and all three types
have been extensively utilised in the literature.
Basically, the conceptual approach to benchmark design is to recreate the
different difficulties of real search spaces in order to challenge an
algorithm. Single-objective test problems are relatively simple due to the
existence of only one global optimum. They are mostly equipped with obstacles
that benchmark the accuracy of an algorithm in finding the global optimum,
and its convergence speed. Challenging test functions with multiple local
solutions benchmark the accuracy of an algorithm due to the high likelihood
of local optima entrapment, whereas test functions with no local solutions
but different slopes and saturations test the convergence speed of a single-
objective algorithm.
In multi-objective optimisation, however, there are different performance
characteristics. In addition to the above-mentioned characteristics for
single-objective optimisation, other important performance features are
convergence towards the global front and diversity (coverage) of the Pareto
optimal solutions obtained. In order to benchmark the first characteristic, a
test problem should have a Pareto optimal set located in a unimodal, multi-
modal, or composite search space. In addition, a test function should offer
different shapes of Pareto optimal front, such as linear, convex, concave,
and discontinuous, in order to benchmark the coverage capability of a multi-
objective meta-heuristic. Due to the concepts of Pareto optimality and the
multi-optimal nature of multi-objective search spaces, developing multi-
objective test problems is significantly more challenging than developing
single-objective test functions.
Since the proposal of evolutionary multi-objective optimisation by David
Schaffer in 1984 [32], a significant number of test functions were developed up
to 1998. Some of them are as follows (classified based on the shape of the true
Pareto optimal front):
• Concave Pareto optimal front: Fonseca and Fleming in 1993 [72] and in
1995 [71], and Murata and Ishibuchi in 1995 [124]
In 1998, Van Veldhuizen and Lamont argued that the majority of the test
functions proposed until then could not be considered standard test prob-
lems [161]. They sifted the test functions and chose three of them as standard
test functions due to their large search spaces, high dimensionality, multiple
objectives, and global optima composed of shapes of bounded complexity. A
generic framework for creating a two-objective test function was first
proposed by Deb in 1999 [46]. The mathematical model of this framework is as
follows:
The main idea of this framework was to break a test function up into differ-
ent controllable components to systematically benchmark multi-objective algo-
rithms. In this framework, f1(~x) controls the distribution of true Pareto
optimal solutions and benchmarks the coverage ability of an Evolutionary
Multi-objective Optimisation algorithm.
• Identical global and robust optima: in this case the robust and global
optima coincide.
[Figure 2.14: surface plots f(x, y) of the twelve collected test problems
TP1–TP12.]
Figure 2.14: Collected current test functions in the literature for robust single-
objective optimisation. The details can be found in Appendix A.
• Neighbouring global and robust optima: The global and robust optima are
at the same peak (valley).
• Local-global robust and global optima: The global and robust optima are
at different peaks (valleys), and the robust optimum is a local optimum.
As may be seen in Fig. 2.14, the test functions are very simple. For instance,
TP8 and TP11 have stair-shaped search spaces, and the robust optimum of TP10
is very wide. Simplicity can also be observed in the other test functions.
Another major drawback is the lack of scalability: the majority of these test
functions cannot be scaled to more than 2 or 5 dimensions, while the number
of variables is one of the key factors in increasing the difficulty and
effectively benchmarking the performance of meta-heuristics. The majority of
the current test functions also have few non-robust local optima, and there
are no deceptive or flat test functions. The last gap here is the lack of
test functions with alterable parameters for defining the degree of
difficulty. All these drawbacks make the current test functions inefficient
and readily solvable by robust meta-heuristics. Therefore, the performance of
robust meta-heuristics cannot be benchmarked effectively.
[Figure 2.15: parameter-space and objective-space plots of the bi-objective
test problems RMTP1–RMTP4, together with two three-objective problems plotted
over f1, f2, and f3.]
Figure 2.15: Test problems proposed by Deb and Gupta in 2006 [44]
Deb and Gupta noted that the analytical robust front of each of these bench-
mark functions is known, so they can be utilised to benchmark both Type I
and Type II robustness handling methods. Although this set of test functions
is able to simulate different types of robust Pareto optimal front with
respect to the global Pareto optimal front, other issues arise when solving
real problems, such as discontinuous robust/global Pareto optimal fronts,
convex/concave robust/global Pareto optimal fronts, and multi-modality.
These issues have been discussed and addressed to some extent by Gaspar-
Cunha et al. in 2013 [75]. Their five new benchmark problems are illustrated
in Fig. 2.16. Note that the robustness curves in this figure (red lines) are
the cumulative value of the robustness of f1 and f2, plotted for given values
of f1 (i.e. a point on the Pareto front has its robustness plotted at the
same x position).
[Figure 2.16: parameter-space and objective-space plots of the test problems
RMTP7–RMTP11, with robustness curves overlaid on the objective space.]
As can be seen in this figure, the shapes of the test functions are very
different from those of Deb and Gupta [44]. The robust regions of the main
Pareto optimal front are on the convex section in RMTP7, whereas the robust
areas lie on concave regions of the Pareto optimal front in RMTP9. RMTP10
and RMTP11 were proposed in order to design separated robust regions in the
robust Pareto optimal fronts. As the robustness curves in Fig. 2.16 suggest
(the red lines), the robustness of the separated regions decreases from left
to right in RMTP10, while the robustness is equal for the three discontinuous
parts in RMTP11. Note that the robustness of the Pareto optimal front is
calculated by averaging re-sampled points in the neighbourhood, so a low
value in the robustness curve shows low robustness.
algorithms. This is because such test functions do not seek to test the
robustness of the solutions obtained; there might not even be a robust
optimum in a test function that has been designed for testing a global
optimiser. It was also observed that the specific test functions that do
exist for testing robust algorithms are very simple and limited: they mostly
have few local solutions, symmetric search spaces, and low numbers of
variables. On one hand, they allow us to observe some of the behaviours of a
robust algorithm; on the other hand, they are readily solvable by most
algorithms. The robust multi-objective benchmark problems suffer from the
same drawbacks despite their use in different studies. The current gaps for
both robust single-objective and multi-objective test functions are the lack
of other difficulties such as bias, deceptiveness, flatness, a large number
of local solutions (fronts), and a large number of variables.
indicators (accuracy and convergence speed) can be utilised. This includes the
accuracy of the robust optimum obtained and the convergence speed towards the
robust optimum instead of the global optimum. In addition, the sensitivity of
the solution obtained to possible uncertainties could be considered a specific
metric of robustness. This can be highlighted by solving the problem with both
global and robust optimisers to show the robustness of both solutions obtained.
The main purpose of a performance indicator in Evolutionary Multi-Objective
Optimisation (EMOO) is to quantify the performance from a specific point of
view. Generally speaking, the ultimate goal in EMOO is to find an accurate
approximation of the true Pareto optimal solutions, in large numbers and with uni-
form distribution across all objectives [181]. Therefore, the current performance
measures can be classified into three main categories: convergence, coverage, and
success metrics. The first class of performance measures quantifies the closeness
of the solutions obtained to the true Pareto front [142, 141], and the second class
of metrics defines how well the solutions obtained “cover” the range of each of
the objectives [66]. In addition, the number of Pareto optimal solutions obtained
is important [184] (the success ratio), which provides decision makers with more
designs from which to choose.
Another classification in the literature is between unary [160, 170] and bi-
nary [182, 84] performance indicators. The former class of metrics only accepts
one input and provides a real value, whereas the latter metrics have two in-
puts and one output. According to Zitzler et al. [184], each of these types has
its own disadvantages. The drawback of the unary performance indicators is
that there should be more than one measure to assess the performance of the
algorithms, and it has been proven by Zitzler et al. that designing an effective
general-purpose unary performance measure to evaluate the overall performance
of an algorithm (convergence, coverage, and success ratio) is impossible [184]. In
addition, binary performance measures provide n(n − 1) different values when
comparing n algorithms, whereas unary metrics provide n values. As the main
drawback, this makes the interpretation, analysis, and presentation of the binary
measures more challenging. It should be noted that an important characteris-
tic of a performance metric is Pareto-compliance [73]. A performance metric
is Pareto-compliant if it does not contradict the order enforced by the Pareto
dominance relation.
Despite the limitations of the unary performance indicators, there is no doubt
that they have been the most popular performance assessors in the literature.
This is probably due to their simplicity and ease of analysis. This thesis con-
centrates on the unary performance measures, but addresses the three major
aspects of performance already specified. In the following subsections a review of
the current convergence, coverage, and success ratio (number of Pareto optimal
solutions obtained) metrics is provided.
The Generational Distance (GD) metric was proposed by Veldhuizen in 1998 [161]. It calculates the dis-
tance of Pareto optimal solutions obtained from a selected reference set in the
Pareto optimal front. The mathematical formulation is as follows:
GD = \frac{\sqrt{\sum_{i=1}^{n_o} d_i^2}}{n_o}    (2.45)

where n_o is the number of obtained Pareto optimal solutions and d_i indicates the
Euclidean distance between the i-th Pareto optimal solution obtained and the
closest true Pareto optimal solution in the reference set. Note that the Euclidean
distance is calculated in the objective space.
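As a concrete illustration of Equation 2.45, GD can be computed as in the following sketch; the function name and the example fronts are hypothetical, and fronts are assumed to be given as lists of objective vectors:

```python
import math

def generational_distance(obtained, reference):
    """GD: square root of the summed squared nearest distances to the
    reference set, divided by the number of obtained solutions
    (distances are measured in objective space)."""
    total = 0.0
    for p in obtained:
        d = min(math.dist(p, r) for r in reference)  # closest reference point
        total += d ** 2
    return math.sqrt(total) / len(obtained)

# Example: an approximate front measured against a reference front
reference = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
obtained = [(0.1, 1.0), (0.5, 0.6), (1.0, 0.1)]
print(generational_distance(obtained, reference))
```

A GD of zero means every obtained solution lies exactly on the reference set; larger values indicate poorer convergence.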
As the name implies, this metric is the inverse of the hypervolume metric, in
which the area/volume of the objective space that is not dominated by the Pareto
optimal front obtained is calculated with respect to a reference set [112].
The spacing (SP) metric was first proposed by Schott in 1995 [144]. The main idea of
this metric was to calculate the variance of the Pareto optimal solutions obtained.
The mathematical expression of SP is as follows [28]:
SP \triangleq \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (\bar{d} - d_i)^2}    (2.48)
where \bar{d} is the average of all d_i, n is the number of Pareto optimal solutions ob-
tained, and d_i = \min_{j \neq i}\left(|f_1(\vec{x}_i) - f_1(\vec{x}_j)| + |f_2(\vec{x}_i) - f_2(\vec{x}_j)|\right) for i, j = 1, 2, ..., n.
A low value for this measure shows a greater number of, and more equally
spread, solutions along the Pareto optimal front obtained.
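A minimal sketch of Schott's spacing metric for a bi-objective front, using the sum-of-absolute-differences nearest-neighbour distance from the definition above (names are illustrative):

```python
import math

def spacing(front):
    """Schott's SP: standard deviation of the nearest-neighbour
    distances d_i, where the distance between two solutions is the
    sum of absolute differences in each objective."""
    n = len(front)
    d = [min(sum(abs(a - b) for a, b in zip(p, q))
             for j, q in enumerate(front) if j != i)
         for i, p in enumerate(front)]
    d_bar = sum(d) / n
    return math.sqrt(sum((d_bar - di) ** 2 for di in d) / (n - 1))

# A perfectly evenly spaced front has SP = 0
even = [(0.0, 1.0), (0.25, 0.75), (0.5, 0.5), (0.75, 0.25), (1.0, 0.0)]
print(spacing(even))  # 0.0
```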
Another similar metric was proposed by Deb et al. in 2002 [45]. This method
averages the Euclidean distance between the neighbouring Pareto optimal solu-
tions obtained as the spread of the solutions. Note that this metric is calculated
with respect to at least two extreme solutions that define the maximum extent of
each objective based on the true Pareto optimal front. This metric is as follows:
\Delta = \frac{d_f + d_l + \sum_{i=1}^{N-1} |d_i - \bar{d}|}{d_f + d_l + (N - 1)\bar{d}}    (2.49)
where N is the number of Pareto optimal solutions obtained, \bar{d} is the average
of the Euclidean distances, d_i is the Euclidean distance between the i-th solution and
its consecutive solution in the Pareto optimal set obtained, and d_f and d_l are the
Euclidean distances between the boundary solutions of the obtained front and the
extreme solutions.
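For illustration, the spread metric Δ can be sketched as follows for a bi-objective front sorted by the first objective; the extreme solutions of the true front are supplied by the caller, and all names are mine:

```python
import math

def delta_spread(front, extreme_first, extreme_last):
    """Deb's spread metric (Equation 2.49) for a bi-objective front.
    d_i: Euclidean distances between consecutive solutions (front
    sorted by f1); d_f, d_l: distances from the boundary solutions
    to the supplied extreme solutions of the true front."""
    front = sorted(front)
    d = [math.dist(front[i], front[i + 1]) for i in range(len(front) - 1)]
    d_bar = sum(d) / len(d)
    d_f = math.dist(extreme_first, front[0])
    d_l = math.dist(extreme_last, front[-1])
    num = d_f + d_l + sum(abs(di - d_bar) for di in d)
    return num / (d_f + d_l + (len(front) - 1) * d_bar)

# A uniformly spaced front touching both extremes gives delta = 0
front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(delta_spread(front, (0.0, 1.0), (1.0, 0.0)))  # 0.0
```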
This metric was proposed by Lewis et al. in 2009 [112] where the objective space
is divided into radial sectors originating from the origin. Then the number of
segments that are occupied by at least one Pareto optimal solution obtained is
calculated as the coverage of an algorithm. The mathematical expression of this
metric is as follows:
\Psi = \frac{\sum_{i=1}^{n} \psi_i}{N}    (2.50)

\psi_i = \begin{cases} 1 & (P_i \in PF^*) \wedge \left(\alpha_{i-1} \leq \tan^{-1}\frac{f_1(x)}{f_2(x)} \leq \alpha_i\right) \\ 0 & \text{otherwise} \end{cases}    (2.51)
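The radial-sector idea can be sketched as follows, assuming a bi-objective minimisation problem with non-negative objectives and equal-angle sectors over the first quadrant; the function name and sector construction are illustrative choices, not the exact formulation of [112]:

```python
import math

def sector_coverage(front, n_sectors):
    """Radial-sector coverage: divide the first quadrant of the
    objective space into equal-angle sectors from the origin and
    return the fraction of sectors occupied by at least one solution."""
    occupied = set()
    width = (math.pi / 2) / n_sectors
    for f1, f2 in front:
        angle = math.atan2(f1, f2)  # angle of the point (f1, f2)
        idx = min(int(angle / width), n_sectors - 1)
        occupied.add(idx)
    return len(occupied) / n_sectors

front = [(0.05, 1.0), (0.5, 0.5), (1.0, 0.05)]
print(sector_coverage(front, 4))  # 3 of 4 sectors occupied -> 0.75
```

A well-spread front occupies many sectors, so values close to 1 indicate good coverage.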
The maximum spread metric is defined as:

MS = \sqrt{\sum_{i=1}^{o} d(a_i, b_i)^2}    (2.52)

where o is the number of objectives, d(·) calculates the Euclidean distance, a_i is
the maximum value in the i-th objective, and b_i is the minimum value in the
i-th objective.
As this equation shows, this metric defines a hyperbox/hypercube using the
Pareto optimal front obtained and finds the maximum diagonal distance.
This metric counts the number of Pareto optimal solutions obtained that do not belong
to the set of true Pareto optimal solutions and divides it by the total number of
solutions found. The formulation of this metric was proposed by Veldhuizen in
1999 [160] as follows:
ER = \frac{\sum_{i=1}^{n} e_i}{n}    (2.53)

e_i = \begin{cases} 0 & P_i \in PF^* \\ 1 & \text{otherwise} \end{cases}    (2.54)
where n is the number of Pareto optimal solutions obtained and Pi is the i-th
Pareto optimal solution obtained.
A lower value of this measure shows a better approximation of the true
Pareto optimal solutions.
This measure was proposed by Sierra and Coello Coello [148]. This measure
counts the number of solutions obtained that are members of the true Pareto
optimal set. The mathematical formula proposed is as follows:
SCC = \sum_{i=1}^{n} s_i    (2.55)

s_i = \begin{cases} 1 & P_i \in PF^* \\ 0 & \text{otherwise} \end{cases}    (2.56)
where n is the number of Pareto optimal solutions obtained and Pi is the i-th
Pareto optimal solution obtained.
In contrast to ER, a high value for this measure shows better performance.
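Both membership-based metrics can be sketched together; the names are mine, and membership in the true front is tested by exact equality here (a real implementation would typically use a tolerance):

```python
def error_ratio(obtained, true_front):
    """ER: fraction of obtained solutions NOT in the true Pareto set
    (lower is better)."""
    errors = sum(1 for p in obtained if p not in true_front)
    return errors / len(obtained)

def success_count(obtained, true_front):
    """SCC: number of obtained solutions that are members of the
    true Pareto set (higher is better)."""
    return sum(1 for p in obtained if p in true_front)

true_front = {(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)}
obtained = [(0.0, 1.0), (0.5, 0.5), (0.9, 0.2)]
print(error_ratio(obtained, true_front))    # 1 of 3 is an error
print(success_count(obtained, true_front))  # 2
```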
The above-mentioned multi-objective performance indicators have been widely
used in the literature to compare different algorithms. Regardless of the advan-
tages and disadvantages of unary and binary measures, they are all highly suit-
able for the quantitative analysis of results from different perspectives. However,
none of them is able to measure the performance of robust multi-objective al-
gorithms effectively. There is no specific performance metric to measure
the robustness of the Pareto optimal solutions obtained. Therefore, compari-
sons of the current robust multi-objective algorithms are qualitative and
not sufficiently precise, although quantitative comparison is essential when
evaluating the performance of algorithms. This is the motivation of the work
in this thesis, where several spe-
cific performance metrics will be proposed to quantify the performance of robust
multi-objective algorithms for the first time.
2.7 Summary
This chapter first provided a comprehensive review of the evolutionary single-
objective optimisation methods. Different types of such optimisation techniques,
drawbacks, advantages, and the state-of-the-art were discussed in detail. After
that, multi-objective optimisation using evolutionary algorithms was discussed
as one of the most practical and popular branches in this field. The essential
definitions, recent advances, different techniques, difficulties, most popular al-
gorithms, benchmark problems, and performance metrics in this field were the
main topics discussed.
The chapter also included two further sections: single-objective robust opti-
misation and multi-objective robust optimisation. The former section was dedi-
cated to the literature review of the current robust single-objective optimisation
techniques, benchmark problems, and robustness measures in single-objective
search spaces. This section also identified diverse types of uncertainties and
their impacts on the real systems/problems. In the latter section, the literature
of robust multi-objective optimisation was reviewed in detail. Similarly to the robust
single-objective optimisation section, preliminaries, essential definitions, current
robust multi-objective optimisation methods, performance metrics, benchmark
problems, and robustness measures were covered.
Chapter 3
Analysis
The preceding chapter reviewed the literature of robust optimisation and re-
lated fields. Similarly to global optimisation, a robust optimisation process in-
cludes four main phases: benchmark development or preparation, development
or preparation of performance metrics, proposing and testing an algorithm or
improvement, and applying the algorithm to an actual industrial problem to be
optimised. This chapter analyses each of these phases and identifies their current
gaps.
Benchmark functions should ideally isolate particular difficulties of a real search
space. This assists us to test the abilities of an algorithm in dealing with isolated
difficulties.
In the literature of global optimisation, there is a significant number of test
suites, test cases, and frameworks for testing algorithms. The number, quality,
and popularity of the test functions in this field assure a designer that compar-
isons on them are reliable. In order to design a robust algorithm, there is a need
for adequate and proper test functions as well. Some of the challenges that an
algorithm may face when searching for robust solutions in a real search space
are: non-robust local optimal solutions, robust local optimal solutions, slow con-
vergence, deceptive non-robust optimal solutions, robust optimal solutions close
to the boundary of the search space, and so on. Unfortunately, only a limited number
of these difficulties have been implemented in test functions.
The current test problems are very simple and limited, so they are not ef-
fective in benchmarking the abilities of robust algorithms. They mostly have
few local solutions, symmetric search spaces, and a low number of variables. Al-
though such test problems allow us to observe some of the behaviours of a robust
algorithm, they are readily solvable by most of the algorithms. The robust multi-
objective benchmark problems also suffer from the same drawbacks despite their
use in different studies. The current gaps for both robust single-objective and
multi-objective test functions are the lack of difficulties such as bias, deceptiveness,
flatness, a large number of local solutions (fronts), and a large number of variables.
In addition, there is no framework with alterable parameters to allow a designer
to generate new test functions based on their needs. Therefore:
We must design more challenging test functions, and frameworks to alter their
difficulties.
There have been two main approaches proposed for reducing the compu-
tational costs (true function evaluations) of robustness handling methods:
surrogate-assisted methods and archive-based methods.
A disadvantage of surrogate models is that they mostly use one, or more than
one [162, 151], model for the entire search space, while a real search space
usually has regions with diverse shapes. This is because surrogate models are
constructed from limited (local) information about the search space, which works
well for some regions but may provide deceptive information to the optimiser
about other areas of the search space. Therefore, surrogate-assisted algorithms
may become unreliable because they can be deceived by the surrogate models
in some regions of the search space. Due to inaccuracy and unreliability, these
techniques are not investigated in this thesis and considered out of scope, so
interested readers are referred to the comprehensive review by Jin [95].
The archive-based methods, which are the focus of this work, rely on pre-
viously evaluated solutions during robust optimisation. The main advantage of
these methods compared to surrogate-assisted approaches is the use of the real
search space. Utilising a real search space prevents an optimiser from making
unreliable comparisons between robust and non-robust solutions. It is worth
mentioning here that the reliability of archive-based methods can be improved
by making more true function evaluations around the solution. This process be-
comes computationally cheaper every year with the improvement of hardware.
It seems both surrogate-assisted and archive-based algorithms have their own
advantages and drawbacks. On one hand, surrogate-assisted algorithms do not
solve the real search space, have intrinsic errors, and rely on local information ex-
tracted from the search space. However, they are cheap, so additional function
evaluations have no substantial additional cost. On the other hand, archive-
based algorithms solve the actual search space but suffer from unreliability be-
cause of the stochastic nature of meta-heuristics. The solutions contained in
the archive are the product of the algorithms’ operations, not a systematic sam-
pling of the search space. Due to the importance of solving the actual search
space (and not the surrogate model), the archive-based methods are investigated
and improved in this thesis. The unreliability of such methods is targeted for
improvement as the only disadvantage.
The usefulness of archive-based methods was investigated and confirmed by a
number of studies [18, 94, 58, 143]. They experimentally proved that previously
sampled points could reduce the number of true function evaluations significantly
and are able to provide good information about the robustness of solutions.
In addition, by the law of large numbers, the mean of a large number of samples
from a population approaches the mean of the population itself.
Therefore, the average of a large number of sampled points in the neighbourhood
of a solution gives us the average of the real search landscape to determine the
robustness.
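The idea of re-using archived evaluations can be sketched as follows; the function name, archive layout, and radius are hypothetical, but the estimate is exactly the neighbourhood average described above, costing no new true function evaluations:

```python
import math

def mean_effective_fitness(candidate, archive, radius):
    """Estimate the robustness of `candidate` by averaging the fitness
    of previously evaluated archive entries within `radius` of it in
    parameter space. Returns None when the neighbourhood is empty
    (the unreliable early-stage case discussed in the text)."""
    neighbours = [fit for x, fit in archive
                  if math.dist(x, candidate) <= radius]
    if not neighbours:
        return None
    return sum(neighbours) / len(neighbours)

# Archive entries: (decision vector, true fitness)
archive = [((0.9, 1.0), 0.2), ((1.1, 1.0), 0.4), ((5.0, 5.0), 9.0)]
print(mean_effective_fitness((1.0, 1.0), archive, 0.5))  # average of 0.2 and 0.4
```

Note how the estimate silently depends on how many archive points happen to fall inside the radius, which is precisely the source of unreliability targeted in this thesis.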
Although previously evaluated points can provide very useful information
about the robustness of new solutions without the need for additional evalu-
ation [18], the stochastic nature of meta-heuristics prevents this method from
providing the highest reliability. In effect, the reliability of the archive-based
methods is decreased as the number of archive members and true function eval-
uations are reduced. Deb et al. studied the effects of neighbourhood solutions
in terms of finding the analytical robust front and found that finding the ro-
bust front becomes more challenging when the number of sampled solutions (or
neighbouring solutions) decreases [43]. In addition, archive-based methods that
only use previously sampled points are very unstable in the initial steps of opti-
misation due to fewer sampled points.
All these reasons reduce the confidence of designers and decision makers in
the performance of the archive-based methods and the quality of the robust de-
signs obtained. What makes these methods unreliable is: a lack of sufficient
previously sampled points in the archive, a lack of a good distribution of sampled
points around particular solutions, and a lack of appropriately sampled solutions
within a certain radius around particular solutions. One might think that the un-
reliability of the archive-based approaches is resolved after the initial steps of
meta-heuristic optimisation, but the stochastic nature of meta-heuristics and
the unknown shape of the search space prevent the archive-based methods from
making confident decisions throughout the whole optimisation process.
Fig. 3.1 shows an example of an archive-based method providing misleading
information when relying on previously sampled points. In this figure the solu-
tion S2 is in fact more robust than S1. In the archive, however, there is one sampled
point near S1 and three near S2 with which to estimate robustness. Since the sampled
solution and S1 are very close, the robustness measure indicates high robust-
ness for S1. However, the robustness measure for S2 shows less robustness due to the
distribution of the solutions around S2 in the parameter and objective spaces. In
this case, an archive-based method assumes that S1 is more robust than S2,
while it is not. Such circumstances can happen throughout robust optimisation,
which results in guiding the search agents of meta-heuristics toward misleading regions.
[Figure 3.1: previously sampled points around two solutions S1 and S2, shown in
the parameter space (x1, x2) and the objective space (f1, f2).]
The root cause is the lack of confidence that we have in the sampled points inside the archive. There-
fore, if a method improves our confidence in the values of the archive members,
we can design more reliable algorithms. This is what the literature lacks and
the specific motivation of this research, in which a novel method is proposed to
measure the confidence level of search agents during optimisation to alleviate the
unreliability of the archive-based algorithms. It should be noted here that we
cannot achieve 100% reliability because a method that improves our confidence
is only an estimate, as it is derived from samples.
The current robust test problems suffer from simplicity and are not able to
test robust algorithms efficiently. There is no performance metric for evaluating
the ability of robust algorithms. In addition, the majority of current methods
suffer from significant additional computational costs due to the need for addi-
tional function evaluations to confirm the robustness of solutions, making them
impractical for solving real problems [96].
To fill these gaps, I propose the systematic design process in the following
chapters. This includes designing challenging robust test problems to compare
algorithms, performance metrics to measure by how much one robust algorithm is
better than another, and computationally cheap robust algorithms to find robust
solutions for optimisation problems. As Fig. 3.2 shows, the first two phases of this
systematic process, the development of test functions and performance metrics,
are prerequisite to the third phase, algorithm development.
Figure 3.2: Test functions and performance metrics are essential for systematic
algorithm design (the figure shows three phases in sequence: robust test function
design, robust performance metric design, and robust algorithm design)
This thesis proposes a confidence metric that utilises previously sampled points in the parameter space in order to improve the reliability
of robustness measures. Confidence-based relational operators and confidence-
based Pareto optimality/dominance are then proposed using both robustness
and confidence metrics in order to make confident and reliable comparison be-
tween solutions in both single- and multi-objective search spaces. Two novel
and cheap approaches called Confidence-based Robust optimisation (CRO) and
Confidence-based Robust Multi-objective optimisation (CRMO) are established.
The proposed approach improves the reliability of archive-based methods with-
out additional computational costs and assists designers to confidently rely on
points previously evaluated during optimisation. In addition, the proposed ap-
proach allows designing different confidence-based methods for finding robust
Pareto solutions.
The objectives of this thesis address the gaps identified in the preceding sections.
The gaps that are going to be filled by the above-mentioned objectives and
consequently where the contributions of this thesis fit are illustrated in Fig. 3.3.
[Figure 3.3 maps the contributions of this thesis onto the literature: a confidence
measure and confidence-based relational operators leading to confidence-based
robust optimisation (CRPSO, CRGA) within single-objective robust optimisation;
confidence-based Pareto optimality leading to confidence-based robust
multi-objective optimisation (CRMOPSO); and benchmark problems and
performance metrics, covering both implicit and explicit population-based
stochastic robust optimisation methods.]
This thesis also proposes several challenging frameworks and test functions for
single-objective and multi-objective robust algorithms. Due to the lack of such
difficult test functions in the literature, these contributions can be considered
as one of the seminal attempts in designing standard single- and multi-objective
robust test problems. The thesis also proposes two specific,
novel performance measures for comparing robust multi-objective algorithms.
There are no performance metrics in the field of robust multi-objective optimi-
sation, so the proposed metrics are very important and can be used to quantify
the performance of robust multi-objective algorithms for the first time. In addi-
tion, the proposed systematic robust algorithm design process allows designers
to reliably and confidently propose new algorithms or improve the current ones.
The systems designed by computer scientists usually have very powerful com-
putational foundations, but there is often little focus on real application. The
investigated case studies of this thesis are real problems, so there will be an em-
phasis on real applications in addition to theoretical works. Several propellers
will be optimised by MOPSO for the first time. In addition, this thesis is the
first research that attempts to design a robust propeller.
3.8 Summary
The discussions of this chapter showed that the state-of-the-art in the fields of
single-objective and multi-objective evolutionary optimisation is very mature,
since there is considerable research in these two fields. There are many al-
gorithms, benchmark problems, performance metrics, and constraint handling
techniques. The applications of optimisation techniques in both fields can be
found widely in different branches of science and industry. Although most of
the real problems have multiple objectives, the importance of single-objective
optimisation should not be underestimated. Such techniques are essential
in solving and analysing real problems with one objective. Robust optimisation
is also important in both areas. It does not matter if we look for one solution
or a set of solutions, the presence of uncertainties in real environments is always
a substantial threat to the stability and reliability of the optimal solution(s)
obtained. Robust optimisation in a multi-objective search space seems to be
more challenging and critical than in a single-objective search space, although it is
essential when solving real problems of any type.
Finding optimal solutions that are less sensitive to perturbations requires a
highly systematic robust optimisation algorithm design process. This includes
designing challenging robust test problems to compare algorithms, performance
metrics to measure by how much one robust algorithm is better than another, and
computationally cheap robust algorithms to find robust solutions for optimisa-
tion problems. The first two phases of a systematic algorithm design process,
developing test functions and performance metrics, are prerequisite to the third
phase, algorithm development.
Benchmark functions provide test beds for challenging and testing different
algorithms. They are the foundation of systematic algorithm design, and with-
out them benchmarking of the algorithms is not possible. Despite the large
number of test functions proposed for benchmarking global optimisers, there
are inadequate benchmark functions to effectively test the performance of robust
algorithms.
Also, comparing algorithms on a benchmark function requires qualitative and
quantitative metrics. Despite the significant advancements in multi-objective
performance metrics, it was observed that the literature substantially lacks per-
formance metrics to quantify the performance of robust multi-objective algorithms.
Chapter 4
Benchmark problems
The design process of a test problem includes two goals. On one hand,
a test problem should be simple and modular in order to allow researchers to
observe the behaviour of meta-heuristics and benchmark their performances from
different perspectives. On the other hand, a test function should be difficult to
solve in order to provide challenging environments, similar to those of real
search spaces, for meta-heuristics. These two characteristics are in conflict:
greater simplicity makes a test function readily solvable by meta-heuristics and
the resulting comparison uninformative. In contrast, although a very difficult test
function is able to effectively mimic a real search space, it may be so difficult
to solve that the performance of algorithms cannot be clearly observed and
compared. These two conflicting issues make the development of test problems
challenging.
In this chapter several frameworks are proposed and utilised to design test
functions for benchmarking robust single- and multi-objective meta-heuristics.
For designing the frameworks and test functions, the guidelines suggested by
Whitley et al. [166] are followed for creating standard test suites:
• Standard test sets should contain test problems that are resistant to simple
optimisation methods.
• Standard test sets should include test problems with non-linear, non-separable,
and non-symmetric search spaces.
• Standard test sets should have test problems with scalable evaluation cost.
• Standard test sets should contain test problems that are of canonical form,
meaning that they should be independent of problem representation.
These essentials were extended by Bäck and Michalewicz [8, 10] (e.g. having
few unimodal and highly multi-modal test functions). However, these guidelines
are very generic and mostly applicable to single-objective test problems. In
the literature, Deb et al. suggested specific recommendations for designing multi-
objective test problems, as follows [10, 55]:
• The exact shape and location of the Pareto optimal front should be known
and easy to understand.
• The Pareto optimal set in the parameter space should also be known and
understandable.
• There should be different shapes of the Pareto optimal front, including dis-
continuity, in order to benchmark the ability of an algorithm in finding
well-distributed Pareto optimal solutions.
A suitable test suite is one that provides different test functions with a variety
of the above-mentioned features. However, capturing all the possible combina-
tions of these features is impractical, as discussed by Huband et al. [92]. In the
following subsections, therefore, these features are captured as much as possible
within various frameworks. The focus is on single-objective and multi-objective
test problems due to the difficulty of many-objective test problems and scope of
the thesis. In addition, the proposed frameworks only generate unconstrained
test problems. Due to the standard modularity of the proposed frameworks,
however, any kind of constraints in the multi-objective test problems proposed
in [52] and other works in the literature can easily be integrated in the proposed
test functions.
4.1.1 Framework I
This framework is for creating a bi-modal parameter space with two optima.
One of the optima is robust and the other is not robust. The mathematical
formulation of this test function is as follows:
f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-0.5\left(\frac{x-1.5}{0.5}\right)^2} + \frac{2}{\sqrt{2\pi}}\, e^{-0.5\left(\frac{x-0.5}{\alpha}\right)^2}    (4.1)
where α defines the width (robustness) of the global optimum.
This function is illustrated in Fig. 4.1. This figure shows how parameter α
defines the shape of the global valley without changing the fitness values of both
local and global optima.
[Figure 4.1 plots f(x) over 0 ≤ x ≤ 2 for α = 0.01, 0.05, 0.1, 0.15, 0.2, 0.25, and 0.3.]
Figure 4.1: Proposed function with adjustable local optima robustness parame-
ter. The parameter α changes the landscape significantly
Minimise: f(\vec{x}) = \frac{1}{\sqrt{2\pi}}\, e^{-0.5\left(\frac{\sqrt{\sum_{i=1}^{n}(x_i-1.5)^2}}{0.5}\right)^2} + \frac{2}{\sqrt{2\pi}}\, e^{-0.5\left(\frac{\sqrt{\sum_{i=1}^{n}(x_i-0.5)^2}}{\alpha}\right)^2}    (4.3)

where: 0 ≤ x_i ≤ 2    (4.4)

[Figure 4.2 shows a grid of surface plots f(x, y) of the two-variable version of this
function for different parameter settings.]
It may be seen in Equation 4.3 that this framework is able to generate scalable test functions
with a desirable number of variables. The characteristics of the test functions
generated by this framework are summarised as follows:
• The robustness of the global optimum does not affect the optimal values
of both local and global optima.
• Both local and global optima have the potential to be the robust optimum
based on the value of the parameter α.
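A sketch of the one-variable function of Equation 4.1, with the second Gaussian forming the global optimum of adjustable width α; the function name is mine:

```python
import math

def framework1(x, alpha):
    """Bi-modal test function of Framework I (Equation 4.1): two
    Gaussian components, the second (global) one with width alpha."""
    g1 = (1 / math.sqrt(2 * math.pi)) * math.exp(-0.5 * ((x - 1.5) / 0.5) ** 2)
    g2 = (2 / math.sqrt(2 * math.pi)) * math.exp(-0.5 * ((x - 0.5) / alpha) ** 2)
    return g1 + g2

# alpha controls only the width of the second component; its peak
# value at x = 0.5 is unchanged for any alpha > 0
for alpha in (0.05, 0.1, 0.3):
    print(framework1(0.5, alpha))
```

Narrowing α makes the global optimum less robust without altering the optimal values themselves, which is the property the framework is designed to isolate.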
4.1.2 Framework II
The second framework generates a desirable number of local non-robust optima.
In other words, a multi-modal search space with one global optimum, one robust
optimum, and several local non-robust optima can be created by this framework.
The mathematical formulation of this framework is as follows:
where: H(x) = \frac{e^{-2x^2} \sin\left(\lambda \times 2\pi\left(x + \frac{\pi}{4\lambda}\right)\right) - x^{\beta}}{3} + 0.5    (4.6)

G(\vec{x}) = 1 + 10\,\frac{\sum_{i=3}^{N} x_i}{N}    (4.7)

0 ≤ x_i ≤ 1    (4.8)

λ > 0    (4.9)
Figure 4.3: Shape of the search landscape with controlling parameters con-
structed by framework II
β > 0    (4.10)
As may be seen in Fig. 4.3, this framework allows generating (λ + 1)^2 local
optima in the search space. The effect of this parameter on the shape of the
search space can be observed in Fig. 4.4. This figure shows that the search space
becomes more challenging as λ increases.
Another characteristic of this test framework is its parameter scalability. The
function G(~x) is responsible for supporting three or more variables. Since G(~x)
is a kind of penalty function, an algorithm should find zero values for the variables
x_3 to x_n in order to find the best robust optimum.
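The two components of this framework, as given in Equations 4.6 and 4.7, can be sketched as follows; how they combine into the full objective (Equation 4.5) is not reproduced in this excerpt, so only the components are shown, with illustrative names:

```python
import math

def H(x, lam, beta):
    """Component H (Equation 4.6): a damped sine term minus x**beta,
    rescaled by 1/3 and shifted by 0.5."""
    s = math.exp(-2 * x ** 2) * math.sin(lam * 2 * math.pi * (x + math.pi / (4 * lam)))
    return (s - x ** beta) / 3 + 0.5

def G(x):
    """Penalty component (Equation 4.7) over variables x3..xN:
    equals 1 exactly when x3..xN are all zero."""
    return 1 + 10 * sum(x[2:]) / len(x)

# The penalty vanishes (G = 1) only at x3 = ... = xN = 0
print(G([0.4, 0.7, 0.0, 0.0]))       # 1.0
print(G([0.4, 0.7, 0.5, 0.5]) > 1.0)  # True
```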
The characteristics of the test functions generated by this framework are
summarised as follows:
• The last and worst local optimum is the most robust optimum and has the
greatest distance from the global optimum.
[Figure 4.4 shows six surface plots f(x_1, x_2) of the search landscape for increasing
values of λ.]
It should be noted here that test problems (including those proposed in this
thesis) that are framed using one subset of decision variables to move around a
surface (or front) and another subset that vary distance to that surface (front)
are effectively separable, as these sub-components can be solved separately, and
are therefore biased toward algorithms which propagate these sub-vectors (and, if
these are all at a boundary, those which truncate at boundaries). This is why
rotation matrices are incorporated into the DTLZ problems used in the CEC
multi-objective test suite. A modification with a rotational matrix is required
to convert the resulting problems into non-separable ones.
4.1.3 Framework III
This framework was inspired by some of the current test functions in the field
of global optimisation. It divides the search space into four sections and allows
defining different functions in each section. The mathematical formulation is as
follows:

Minimise: f(x, y) = \begin{cases} f_1(x, y) & (x \leq 0) \wedge (y \geq 0) \\ f_2(x, y) & (x \geq 0) \wedge (y \leq 0) \\ f_3(x, y) & (x > 0) \wedge (y > 0) \\ f_4(x, y) & (x < 0) \wedge (y < 0) \end{cases}    (4.11)
Any type of functions with robust and non-robust optima can be utilised as
f1 to f4 . For instance, Fig. 4.5 shows a search space constructed using spherical,
Ackley, Rastrigin, and pyramid-shaped functions. It is evident from the figure
that the spherical function has the most robust optimum.
[Figure 4.5 shows two surface plots f(x, y) of a landscape combining four
sub-functions, one per quadrant.]
Figure 4.5: An example of the search space that can be constructed by
framework III
In order to provide scalability for this framework, there are two possibilities.
Each of the sub-functions can be chosen with a different number of variables
or the function G(~x), which was integrated with the second proposed framework,
can be multiplied by the results of each function as follows:
Minimise : f(x, y) =
\begin{cases}
f_1(x, y) \times G(\vec{x}) & (x \le 0) \wedge (y \ge 0) \\
f_2(x, y) \times G(\vec{x}) & (x \ge 0) \wedge (y \le 0) \\
f_3(x, y) \times G(\vec{x}) & (x > 0) \wedge (y > 0) \\
f_4(x, y) \times G(\vec{x}) & (x < 0) \wedge (y < 0)
\end{cases}
\quad (4.12)

where : G(\vec{x}) = 1 + 10 \frac{\sum_{i=3}^{N} x_i}{N} \quad (4.13)
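A minimal sketch of this quadrant-wise framework under Equations 4.11 to 4.13 might look as follows (Python; the particular sub-functions mirror the spherical, Rastrigin, Ackley, and pyramid-shaped choices of Fig. 4.5, but any functions with robust and non-robust optima could be substituted):

```python
import numpy as np

# Illustrative instances of the four sub-functions f1..f4.
def sphere(x, y):   return x ** 2 + y ** 2
def rastrigin(x, y):
    return 20 + x**2 - 10*np.cos(2*np.pi*x) + y**2 - 10*np.cos(2*np.pi*y)
def ackley(x, y):
    return (-20*np.exp(-0.2*np.sqrt(0.5*(x**2 + y**2)))
            - np.exp(0.5*(np.cos(2*np.pi*x) + np.cos(2*np.pi*y))) + 20 + np.e)
def pyramid(x, y):  return abs(x) + abs(y)

def g(x_rest):
    """Scalability term of Equation 4.13: G = 1 + 10 * sum(x3..xN) / N."""
    x_rest = np.asarray(x_rest, dtype=float)
    n = x_rest.size + 2                      # total number of variables N
    return 1.0 + 10.0 * np.sum(x_rest) / n

def framework3(x, y, x_rest=()):
    """Quadrant-wise objective of Equations 4.11/4.12."""
    if x <= 0 and y >= 0:
        f = sphere(x, y)
    elif x >= 0 and y <= 0:
        f = rastrigin(x, y)
    elif x > 0 and y > 0:
        f = ackley(x, y)
    else:
        f = pyramid(x, y)
    return f * g(x_rest)
```

With no extra variables, `framework3(0.0, 0.0)` evaluates the spherical quadrant at its optimum and returns 0.
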
The majority of robust test problems are of low dimension. In addition, the
robust optimum moves when the dimensions change (for instance TP1, TP2,
TP3 and TP4 in Fig. 2.14). This sub-section is inspired by the method of adding
multiple variables when designing multi-objective test problems proposed by
Deb et al. [45] and Zitzler et al. [181]. In this method a function called G(\vec{x}) is
employed to handle all the variables except x1 and x2 as follows:
G(\vec{x}) = \left( \sum_{i=3}^{N} 50x_i^2 \right) + 1 \quad (4.14)
This equation allows defining the shape of the search space by f(\vec{x}) and handling
multiple variables by G(\vec{x}). To find the optimum of F(\vec{x}), an optimisation
algorithm should find the optimal values for x1 and x2, because these two parameters
define the search space of the f(\vec{x}) function in Equation 4.15. Then, it has to
find the optimal values for x3 to xN, which are all equal to 0. It should be noted
that this method works for minimisation problems, but negating or inverting the
G(\vec{x}) function makes it applicable to maximisation problems.
In fact, the function G(\vec{x}) defines a search space similar to f(\vec{x}) above the
main landscape. A set of 10,000 random solutions is generated for a 10-variable
version of TP1 using Equation 4.15 and illustrated in Fig. 4.7. It may be seen
that the size of the search space (range of variables) increases in proportion to the
number of variables without changing the shape of the search landscape. In
other words, an unlimited number of parallel layers (surfaces) with a shape similar
to f(\vec{x}) are constructed above it.
Figure 4.7: The search space becomes larger in proportion to N without any
change in the main search landscape
The advantage of this method is its ease of applicability to any test function
without changing its shape.
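As a sketch (Python), the scaling can be applied to any 2-variable base function; the base function below is a hypothetical stand-in, not one of the TP functions:

```python
import numpy as np

def G(x):
    """Equation 4.14: handles all variables except x1 and x2."""
    x = np.asarray(x, dtype=float)
    return np.sum(50.0 * x[2:] ** 2) + 1.0

def scale(f2d):
    """Lift a 2-variable test function f(x1, x2) to N variables as F = f * G.
    The landscape shape in (x1, x2) is preserved; x3..xN must be driven to 0."""
    return lambda x: f2d(x[0], x[1]) * G(x)

# Illustrative base function with its optimum value 1.0 at (0.5, 0.5).
base = lambda x1, x2: (x1 - 0.5) ** 2 + (x2 - 0.5) ** 2 + 1.0
F = scale(base)
x_opt = np.array([0.5, 0.5, 0.0, 0.0, 0.0])   # any N >= 2 works
print(F(x_opt))   # equals the base optimum, since G = 1 when x3..xN = 0
```
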
where p indicates the point toward or away from which the bias is defined, and
θ defines the density.
Equation 4.16 shows that the density of solutions in the search space can be
adjusted by a parameter called θ. The density is uniform when θ = 1, biased
towards the point p when θ > 1, and biased away from the point p when θ < 1.
The effect of θ can be observed in Fig. 4.8.
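The effect of θ can be reproduced numerically (a Python sketch on a domain normalised to [−1, 1]; the sample size and seed are arbitrary choices):

```python
import numpy as np

def bias(x, p=0.0, theta=5.0):
    """Map samples so their density is biased toward p (theta > 1)
    or away from p (theta < 1); theta = 1 leaves the density uniform."""
    x = np.asarray(x, dtype=float)
    return p + np.sign(x - p) * np.abs(x - p) ** theta

rng = np.random.default_rng(1)
u = rng.uniform(-1.0, 1.0, 100_000)
for theta in (1 / 7, 1.0, 5.0):       # the three panels of Fig. 4.8
    b = bias(u, p=0.0, theta=theta)
    # mean distance from p shrinks as theta grows (stronger bias toward p)
    print(f"theta={theta:.5f}  mean |x - p| = {np.abs(b).mean():.3f}")
```
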
[Figure 4.8: the bias curve B(x) for θ = 0.14286 (1/7), θ = 1 (uniform density), and θ = 5]
where B(\vec{x}) = \sum_{i=1}^{2} |x_i|^{\theta}, \; G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2 + 1, \; and \; H(x_i) = \begin{cases} 0 & x_i < 0 \\ 1 & otherwise \end{cases}
Fig. 4.9 illustrates the transformation of the shape of the search space. This
figure shows that the boundaries of the search space are first extended and then
multiplied by the function B(\vec{x}). This maintains the original position and shape
of the robust optimum while biasing the space away from it. It should be noted
here that the range of the search space changes due to the multiplication by the
bias function, as shown in the objective function in Equation 4.17.
In a deceptive search space, a large region of the search space favours the unde-
sirable optimum. In global optimisation, a deceptive search space tends towards
local solutions. In this case, the search agents of meta-heuristics are deceived
and converge toward the local optima [48, 167]. There is no deceptive robust
test function in the literature, so the first one is proposed in this subsection.
For constructing a deceptive function, generally speaking, there should be
at least two optima: a deceptive optimum versus a true optimum [46]. The
shape of the search space should be designed to favour the deceptive optimum.
Figure 4.10: 50,000 randomly generated solutions reveal there is low density
toward the robust optimum in the biased test function, while the density is
uniform in the un-biased test function
G(\vec{x}) = \left( \sum_{i=3}^{N} 50x_i^2 \right) + 1 \quad (4.20)
Multiple local solutions are another type of difficulty for test problems. Real
search spaces often have a massive number of local solutions that make them
very hard to optimise. In the literature of global evolutionary optimisation,
Figure 4.11: Proposed deceptive robust test problem
multi-modal test functions are very popular. There are different test functions
in this field with exponentially increasing numbers of local optima [116, 122, 155,
174, 175, 57]. Since there is no robust test function with many local optima, one
is proposed in this subsection as follows:
where : H(x) = 1.5 - 0.5\, e^{-\left(\frac{x-0.5}{0.04}\right)^2} - \sum_{i=0}^{M} \left[ 0.8\, e^{-\left(\frac{x-0.02i}{0.004}\right)^2} - 0.8\, e^{-\left(\frac{x-(0.6+0.02i)}{0.004}\right)^2} \right] \quad (4.22)

G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2 \quad (4.23)
approximate the single robust optimum. Note that for very small values of δ,
the middle panel in Fig. 4.12 is no longer robust, so δ > 0.05 should be used for
this test function.
[Figure 4.12: the robust optimum, global optima, and a local optimum are labelled]
In non-improving or flat test beds, very little information about the possible
location of the optimum solution can be extracted from the search space. A flat
search space might wrongly be assumed to be very simple because of the very small
number of local solutions. However, it is not, since the majority
of meta-heuristics fail in solving such problems, especially if the first random
individuals are all located on the flat regions. In this case, all the individuals
are assigned equal fitness values, so evolutionary operators become ineffective.
For instance, the PSO algorithm fails to update gBest and pBest effectively for
guiding the particles. This deteriorates when searching for the robust optimum
because of the high and consistent robustness level of the flat regions. Therefore,
a robust algorithm may mistakenly assume the flat regions to be the robust
optimum, while the best robust optimum can be somewhere else in the search
space.
There is no robust test problem with a flat search space, so the first one is
proposed as follows:
where : H(x) = 1.2 - 0.2\, e^{-\left(\frac{x-0.95}{0.03}\right)^2} - 0.2\, e^{-\left(\frac{x-0.05}{0.01}\right)^2} \quad (4.25)
G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2 \quad (4.26)
The shape of the search space constructed by Equations 4.24, 4.25, and 4.26
is illustrated in Fig. 4.13. This figure shows that there are only four optima, near
the corners of the search space. The optimum located at [0.05, 0.05] has the least
robustness, while the optimum located at [0.95, 0.95] has the highest robustness.
The optima positioned at [0.05, 0.95] and [0.95, 0.05] have high robustness along
x2 and x1 respectively. It should be noted that the fitness values of all these
optima are equal to 0. This function is deliberately designed to have such optima
in order to challenge robust algorithms in finding optima with equal fitness but
different degrees of robustness. In addition, the flat search space provides very
little information about the location of the optima.
With the proposed framework and difficulties, desirable test functions can
be constructed. Several test functions are created for the purpose of this thesis,
which can be found in Appendix A. Note that TP1 to TP9 are taken from the
literature and TP10 to TP20 are proposed by the above discussed frameworks
and difficulties.
This set of test functions provides very challenging environments for robust
algorithms. The proposed test functions may be theoretically effective for bench-
marking the performance of robust meta-heuristics due to the following reasons:
• The proposed scaling method does not change the shape of test functions,
so it is readily applicable to any test function.
• Multi-modal test functions are highly suitable for benchmarking the per-
formance of robust algorithms in terms of avoiding local and less robust
solutions. Since the worst local optimum is the most robust, the bench-
mark functions of this class are very challenging.
• Flat test functions provide the search agents of robust algorithms with very
little information about the robust optimum, so meta-heuristics should
search for the robust optimum without relying on the information provided
by the flat regions.
• Several optima with equal fitness but different degrees of robustness can
challenge robust algorithms in finding partial or complete robust optimal
solutions.
4.2.1 Framework 1
This framework is for creating a bi-modal parameter space and a bi-frontal ob-
jective space. The core part of this framework is the following mathematical
function:
f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-0.5\left(\frac{x-1.5}{0.5}\right)^2} + \frac{2}{\sqrt{2\pi}}\, e^{-0.5\left(\frac{x-0.5}{\alpha}\right)^2} \quad (4.27)
where α defines the width (robustness) of the global optimum.
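The role of α can be verified numerically with a mean-effective fitness, i.e. the average of f over a perturbation interval (a Python sketch treating the two peaks as optima to be maximised; the half-width δ = 0.3 is an illustrative assumption):

```python
import numpy as np

SQRT2PI = np.sqrt(2 * np.pi)

def f(x, alpha):
    """Bi-modal function of Equation 4.27: a wide peak at x = 1.5 and a
    higher global peak at x = 0.5 whose width (robustness) is set by alpha."""
    return (np.exp(-0.5 * ((x - 1.5) / 0.5) ** 2) / SQRT2PI
            + 2 * np.exp(-0.5 * ((x - 0.5) / alpha) ** 2) / SQRT2PI)

def effective(x, alpha, delta=0.3, samples=2001):
    """Mean-effective fitness: average of f over [x - delta, x + delta]."""
    grid = np.linspace(x - delta, x + delta, samples)
    return float(np.mean(f(grid, alpha)))

# With a narrow global peak, the wide local peak wins under perturbation.
for alpha in (0.05, 0.5):
    best = 0.5 if effective(0.5, alpha) > effective(1.5, alpha) else 1.5
    print(f"alpha={alpha}: robust optimum near x = {best}")
```
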
This function is identical to the function utilised in the previous section and
may be seen in Fig. 4.1. This function is employed to propose a multi-objective
framework. A similar framework to that of Deb in 1999 [46] is utilised, which
consists of different controllable components. Without loss of generality, the
framework can be formulated as follows:
where : H(x) = \frac{1}{\sqrt{2\pi}}\, e^{-0.5\left(\frac{x-1.5}{0.5}\right)^2} + \frac{2}{\sqrt{2\pi}}\, e^{-0.5\left(\frac{x-0.5}{\alpha}\right)^2} \quad (4.30)

G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2 \quad (4.31)

\alpha > 0 \quad (4.33)
where α and β are two parameters for adjusting the robustness of the global
Pareto optimal front and the shape of the fronts, and ω indicates a threshold for
moving f2(\vec{x}) (Pareto optimal fronts) up and down.
Figure 4.14: Search space and objective space constructed by proposed framework 1
It may be observed in this figure that the proposed method is able to provide
two Pareto optimal fronts: global and robust. The robustness of the global
front can be defined by α in H(x). The interesting characteristic of the function
H(x) is that the Pareto optimal front does not move when the robustness of
the left valley in the search space is varied. Therefore, the specific behaviour
of robust meta-heuristics while changing the robustness of the global Pareto
optimal front can be observed. The other component of this framework, G(~x),
is responsible for providing an unlimited number of variables for this framework.
As the formulation of this function shows, a weighted addition of all variables
except x1 and x2 is calculated by this function and multiplied by H(x) in the
framework. As G(\vec{x}) increases, both Pareto optimal fronts are
converted to local fronts. In other words, the G(\vec{x}) function drives both fronts
away from their optimal positions. A meta-heuristic has to find zeroes for x3 to
xN in order to be able to reach the best trade-offs between f1(\vec{x}) and f2(\vec{x}).
Fig. 4.15 shows the changing shape of the search space with altered values for
α. This figure shows the global valley has the potential to be even more robust
than the local valley with some values of the proposed parameter.
Figure 4.15: Effect of α (0.01, 0.1, 0.2, 0.3, 0.4) on the robustness of the global
Pareto optimal front's valley
Figure 4.16: Shape of parameter space and Pareto optimal fronts when β = 0.5,
β = 1, and β = 1.5. Note that the red curve indicates the robustness of the
robust front and black curves are the fronts.
Figure 4.17: Changing the shape of the global and robust Pareto optimal fronts
with β
The shapes of the fronts change according to the value of β. Fig. 4.17 shows how
the parameter β allows having different shapes with different robustness for both
global and robust Pareto optimal fronts.
With the first proposed framework, a designer is able to create a bi-modal
search space with an unlimited number of parameters that maps to a two-
dimensional objective space with various linear, concave, and convex global and
robust Pareto optimal fronts. However, the proposed framework generates global
and robust Pareto fronts with similar shapes. In order to provide a more flexible
framework, it is necessary to create different shapes for each of the fronts. The
generalised version of the framework is formulated as follows:
Minimise : f_2(\vec{x}) =
\begin{cases}
H(x_2) \times \{G(\vec{x}) + S_1(x_1)\} + \omega & if \; x_2 < 0.8 \\
H(x_2) \times \{G(\vec{x}) + S_2(x_1)\} + \omega & if \; x_2 \ge 0.8
\end{cases}
\quad (4.35)

where : H(x) = \frac{1}{\sqrt{2\pi}}\, e^{-0.5\left(\frac{x-1.5}{0.5}\right)^2} + \frac{2}{\sqrt{2\pi}}\, e^{-0.5\left(\frac{x-0.5}{\alpha}\right)^2} \quad (4.36)

G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2 \quad (4.37)
where α defines the robustness of the global valley, β1 defines the shape of the
global Pareto optimal front, β2 defines the shape of the robust Pareto optimal
front, and ω indicates a threshold for moving f2 (~x) (Pareto optimal front) up
and down.
As can be inferred from these equations, the parameter space is divided into
two parts x2 < 0.8 and x2 ≥ 0.8. Since the global valley is located in x2 < 0.8, the
global Pareto optimal front’s shape follows S1 (x). In addition, the local/robust
valley is in x2 ≥ 0.8 and obeys S2 (x). The parameters β1 and β2 allow adjustment
of the shape of the global and robust Pareto optimal fronts independently. With
this mechanism, nine different combinations of linear, convex and concave global
and robust Pareto optimal fronts can be constructed.
This combination allows designers to investigate the behaviour of different
robust meta-heuristics dealing with different shapes of global and robust fronts.
Benchmark functions with different shapes for robust and global optima would
generally be more difficult to solve because a robust algorithm needs to adapt
to a very different Pareto optimal front with different shape when transferring
from the global/local Pareto optimal front to the robust Pareto optimal front(s).
This was recommended by Deb for multi-objective benchmark problems [46].
4.2.2 Framework 2
In order to provide a more challenging multi-modal test set, another framework is
also proposed in this work. Generally speaking, multi-modal test problems have
a large number of local optima (local Pareto optimal solutions), which make them
suitable for benchmarking the exploration and local optima avoidance ability of
an algorithm. Framework 1 is modified to propose a new framework that
allows designers to construct a search space with a desired number of local
Pareto optimal fronts. The multi-modal framework is formulated as follows:
where : H(x) = \frac{e^{-x^2} \cos(2\pi\lambda x - x)}{\gamma} + 0.5 \quad (4.42)

G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2 \quad (4.43)

\gamma \ge 1.3 \quad (4.45)

\lambda \ge 1 \quad (4.46)
Figure 4.18: Shape of the parameter space and its relation to the objective
space constructed by framework 2 (local fronts versus the global front)
Fig. 4.18 shows that the search space has an incremental wave-shaped cur-
vature along x2 . The proposed exponential-based equation of H(x) allows each
valley to have different robustness. The robustness is proportional to the value
of x2: the first valley is the least robust and the last valley is the most robust.
The objective space illustrated in Fig. 4.18 shows that each
of the valleys corresponds to a front, so the robustness of the fronts also increases
from bottom to top along f2. There are four control parameters for this
framework: β, ω, λ, and γ. The roles of β and ω are identical to those of the first
framework, in which β defines the shape of the Pareto optimal front and ω is
a threshold for moving f2 (~x) (Pareto optimal fronts) up and down and locating
them in a desirable range.
The λ parameter defines the number of valleys in the search space and con-
sequently the number of local Pareto optimal fronts in the objective space. This
control parameter allows designers to provide a multi-modal search space with
λ − 1 local fronts. It should be noted that the robustness of local fronts increases
as they become farther from the global Pareto optimal front. The effect of this
control parameter on parameter and objective spaces is shown in Fig. 4.19.
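This relation between λ and the number of valleys can be checked numerically (a Python sketch; the valley count is read off a dense grid of H with γ = 1.3, the lower bound of Equation 4.45):

```python
import numpy as np

def H(x, lam=3.0, gamma=1.3):
    """Equation 4.42: a damped cosine whose valleys become more robust as
    x grows; lam controls the number of valleys."""
    return np.exp(-x ** 2) * np.cos(2 * np.pi * lam * x - x) / gamma + 0.5

def count_valleys(lam, gamma=1.3, n=10_001):
    x = np.linspace(0.0, 1.0, n)
    y = H(x, lam, gamma)
    # interior points lower than both neighbours are local minima (valleys)
    return int(np.sum((y[1:-1] < y[:-2]) & (y[1:-1] < y[2:])))

for lam in (1, 2, 3, 4):
    print(lam, count_valleys(lam))   # one valley per unit of lam
```

One of the λ valleys maps to the global front, leaving λ − 1 local fronts in the objective space.
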
Figure 4.19: Effect of λ on both parameter and objective spaces. Note that the
red curve indicates the robustness of the robust front and black curves are the
fronts.
The last control parameter, γ, defines the distance of the robust Pareto op-
timal front from the line f2 = 1. The greater the value of γ, the flatter the
shape of the robust Pareto optimal front, and the greater the distance of local
and robust fronts from the global Pareto optimal front. In addition, this control
parameter controls the distance between fronts, as fronts become closer as γ is
increased. The effects of this parameter are illustrated in Fig. 4.20.
Figure 4.20: Effect of γ on the fronts. Note that the red curve indicates the
robustness of the robust front and black curves are the fronts.

The second proposed framework allows control of the multi-modality of benchmark
problems and the number of local Pareto optimal fronts. The shapes of
the fronts are similar in this framework. However, as with the first proposed
framework, there is the possibility of changing the shape of each front using
Equation 4.35. This capability has not been integrated with this framework in
order to maintain its simplicity. It is worth mentioning that cos(λ × 2πx) in
H(x) can be replaced with cos(⌈e^λ⌉ × 2πx) in order to have an exponentially
increasing number of local fronts.
4.2.3 Framework 3
In addition to the shape of Pareto optimal fronts and multi-modality, there is
another issue when solving real engineering problems, called discontinuity. This
refers to possible gaps (dominated regions) in the Pareto optimal fronts of a
problem that provide more complexity compared to continuous Pareto optimal
fronts. In this work the ZDT test functions introduced by Deb et al. are modified
to propose a framework that allows creation of a desired number of discontinuous
robust regions in the main Pareto optimal front. The mathematical formulation
of this framework is as follows:
Minimise : f_2(\vec{x}) = G(\vec{x}) \times \left[ 1 - \sqrt{\frac{x_1}{G(\vec{x})}} - \frac{x_1}{G(\vec{x})} \sin(2\pi\zeta x_1) \right] H(x_2) + \omega \quad (4.48)

where : H(x) = \frac{e^{-2x^2} \sin\!\left(2\pi\lambda\left(x + \frac{\pi}{4\lambda}\right)\right) - x}{\gamma} + 0.5 \quad (4.49)

G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2 \quad (4.50)

\gamma \ge 1 \quad (4.51)

\lambda \ge 1 \quad (4.52)
Figure 4.21: Effect of ζ on the parameter and objective spaces. Note that the
red curve indicates the robustness of the robust front and black curves are the
fronts.
This figure shows that there are ζ discontinuous efficient regions in the main
Pareto optimal front. By adjusting λ and ζ, different test functions can be
constructed. The key point of this framework is that the robustness of discontinuous
regions decreases from left-top to right-bottom. Therefore, the most robust front
is the last local front and its leftmost region is the most robust area. As Fig. 4.22
shows, the search space is very challenging; a robust meta-heuristic should avoid
optimal regions of the search space and move toward the most non-optimal areas,
which are robust.
Figure 4.22: Parameter and objective spaces constructed by the third framework.
The red curve indicates the robustness of the robust front and black curves are
the fronts.
Note that all the functions proposed in this subsection are separable. As
mentioned above, a rotational matrix is required to make them non-separable
similar to those in the DTLZ problems used in the CEC multi-objective test
suite.
Figure 4.23: A non-biased objective space versus a biased objective space (50,000
random solutions). The proposed bias function causes the random points to
cluster away from the Pareto optimal front.
In order to prevent such issues, the following equation is introduced for the
function g(~x) in the test functions to bias the search space away from the robust
front:
g(\vec{x}) = 1 + 10 \frac{\sum_{i=2}^{n} x_i^{\psi}}{n-1} \quad (4.53)
where ψ defines the degree of bias (ψ < 1 causes bias away from the PF) and n
is the maximum number of variables.
In Equation 4.53, ψ is responsible for defining the bias level of the search
space. Fig. 4.23 (b) shows 50,000 random solutions in the same search space as
Fig. 4.23 (a) but with ψ = 0.3. This figure shows that the density of solutions is
very low close to the robust Pareto front and increases further from the front. This
behaviour in a test function effectively assists benchmarking the performance of
robust algorithms in approximating the robust Pareto optimal solutions.
To further observe the effect of ψ on the density of solutions in the search
space, 50,000 random solutions with different values for ψ are illustrated in
Fig. 4.24. This figure shows that the bias of the search space is inversely
proportional to ψ. In other words, the density of solutions near the robust front
decreases as ψ decreases.
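This bias can be verified with a short sketch (Python; n = 10 variables and the sample size are arbitrary choices — a larger mean g means the random solutions sit farther from the robust front):

```python
import numpy as np

def g_biased(X, psi):
    """Equation 4.53: psi < 1 biases random solutions away from the robust
    front (larger g means farther from the front); psi = 1 is unbiased."""
    n = X.shape[1]
    return 1.0 + 10.0 * np.sum(X[:, 1:] ** psi, axis=1) / (n - 1)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50_000, 10))
for psi in (1.0, 0.3):
    print(psi, g_biased(X, psi).mean())  # larger mean g => biased away
```
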
According to Deb [46], there are at least two optima in a deceptive search space:
the deceptive optimum and the true optimum. The search space should be
designed in such a way to entirely favour the deceptive optimum. Such prob-
lems are very challenging for evolutionary algorithms since the search agents
are directed automatically towards the deceptive optimum by the search space
while the global optimum is somewhere else [48, 167]. To date there has been
no deceptive robust multi-objective test problem: the first is proposed in this
subsection.
The proposed mathematical formulation for generating deceptive test functions
is as follows:
G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2 \quad (4.57)

S(x) = 1 - x^{\beta} \quad (4.58)
It may be observed that the framework is similar to those of ZDT [181] and
DTLZ [55, 56]. However, the function H is modified as shown in Fig. 4.25.
This figure shows that the proposed H function has two non-robust deceptive
local optima, two non-robust global optima, and one true robust (which is local)
optimum. The element sin(πx) at the end of this function causes deceptiveness
of the search space, in which the entire search space deceptively favours the
non-robust optima.
Figure 4.25: There are four deceptive non-robust optima and one robust optimum
in the function H(x)
The function S is also modified to define the shape of the non-robust and robust
fronts. The newly added parameter β defines the shape of the fronts. It may be
observed that the fronts are concave when β < 1 and convex when β > 1. Fig. 4.26
also shows that the proposed test function has two overlapped non-robust global
fronts, two non-robust local fronts, and one robust local front. The entire search
space favours the non-robust regions, so the non-robust fronts are highly decep-
tive.
Figure 4.26: Different shapes of Pareto fronts that can be obtained by manipulating β
The deceptive non-robust fronts are very attractive for the search agents of
meta-heuristics. Therefore, these test problems have the potential to challenge
robust algorithms significantly. The ability of an algorithm to avoid deceptive
non-robust regions can be benchmarked. Also, the performance of an algorithm
in approximating robust fronts with convex, linear, and non-convex shapes is
benchmarked.
Although the first two hindrances introduced can mimic the difficulties of real
search spaces and challenge robust algorithms, there is another important charac-
teristic called multi-modality. Real search spaces may have many local solutions
that make them very challenging to solve. In the field of evolutionary single-
objective and multi-objective optimisation, there is a considerable number of
test problems with local optima. However, there is no multi-modal robust multi-
objective test problem in the literature. The following test function is proposed
in order to fill this gap:
where : H(x) = 1.5 - 0.5\, e^{-\left(\frac{x-0.5}{0.04}\right)^2} - \sum_{i=0}^{M} \left[ 0.8\, e^{-\left(\frac{x-0.02i}{0.004}\right)^2} - 0.8\, e^{-\left(\frac{x-(0.6+0.02i)}{0.004}\right)^2} \right] \quad (4.61)

G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2 \quad (4.62)

S(x) = 1 - x^{\beta} \quad (4.63)
Figure 4.27: H(x) creates one robust and 2M global Pareto optimal fronts
Figure 4.28: Parameter space and objective space of the proposed multi-modal
robust multi-objective test problem
The local front is the robust front, which should be approximated by robust
algorithms.
This set of test functions provides non-robust fronts as hindrances for robust
multi-objective test functions. The search agents of robust algorithms should
avoid all the local fronts to entirely approximate the robust front.
Figure 4.29: Different shapes of Pareto fronts that can be obtained by manipulating β
where : H(x) = 1.2 - 0.2\, e^{-\left(\frac{x-0.95}{0.03}\right)^2} - 0.2\, e^{-\left(\frac{x-0.05}{0.01}\right)^2} \quad (4.66)

G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2 \quad (4.67)

S(x) = 1 - x^{\beta} \quad (4.68)
Figure 4.30: H(x) makes two global optima close to the boundaries
Again the test function is equipped with a parameter called β, which is responsible
for defining the shape of the fronts. Three variations of this test function are
constructed as shown in Fig. 4.31. Therefore, different shapes for both fronts are
also another challenge for robust algorithms when solving these test functions.
All the proposed robust multi-objective test functions are provided in Ap-
pendix B.
4.3 Summary
This section tackled the lack of suitable and challenging test functions in the
literature of robust optimisation as the first phase of a systematic robust op-
timisation process. Three frameworks were first proposed to generate different
single-objective test functions. The frameworks allowed creation of test func-
Figure 4.31: Different shapes of Pareto fronts obtained by manipulating β
tions with desired levels of difficulty: optima with alterable robustness level
(Framework I), multiple local non-robust solutions (Framework II), and alter-
able number of variables (Framework III). These frameworks allow us to create
test functions with diverse difficulties to challenge robust algorithms. A framework
is more beneficial than a single test function because it allows creating new
test functions. It is easy to use, reliable, and creates test functions with a single
feature varying in degree of difficulty. In other words, frameworks assist in testing
a specific ability of an algorithm at different levels of difficulty.
In addition, diverse difficulties were integrated with the test functions: de-
sired number of variables, biased search space, deceptive non-robust local solu-
tions, multiple non-robust local solutions, and flat search space. The charac-
teristics of each test function and difficulty were investigated theoretically by
generating random solutions and observing the shape of the search space. A set
of 11 test functions was proposed, including very challenging test functions, as
the first test suite in the literature of robust single-objective optimisation. The
details of these test functions can be found in Appendix A (TP10 to TP20).
The second part of this chapter covered the proposal of three multi-objective
frameworks for creating robust multi-objective benchmark functions and the
integration of several hindrances with the current test functions. There is no such
framework in the literature, so these three frameworks are the first. Framework 1
allowed us to create test functions with a robust front with different degrees of
robustness.
Chapter 5

Performance measures
[Figure 5.1: the bi-objective space (Minimise f1, Minimise f2) divided into segments S1 to S10, showing the Pareto front, its robustness curve, the minimum robustness threshold, and the robust segments]
The objective space is divided into several equal sectors from the origin in the
proposed measure. The sectors are divided into two groups: robust and non-robust
sectors. Needless to say, the robustness of the entire Pareto front should be
known in order to identify the robust and non-robust sectors. After defining the
number of robust segments, the number of segments that contain at least one
obtained Pareto optimal solution should be counted and divided by the total
number of robust segments. The mathematical formulation is as follows (note
that this performance measure only works for bi-objective problems):
\Phi = \frac{1}{N} \sum_{n=1}^{N} \phi_n \quad (5.1)

\phi_n = \begin{cases} 1 & \exists \vec{x} \in PS, \; \alpha_{n-1} \le \tan^{-1}\!\left(\frac{f_1(\vec{x})}{f_2(\vec{x})}\right) \le \alpha_n, \; R(P_n) \le R_{min} \\ 0 & otherwise \end{cases} \quad (5.2)
where P_n indicates the closest true Pareto optimal solution to \vec{x}, and R_{min} is
a minimum robustness value defined by the user. Note that there should be an
exception when there is no robust segment, in order to prevent division by zero.
The accuracy of this measure increases with the number of segments. It
should be noted that some of the segments can be partially robust and partially
non-robust (S8 in Fig. 5.1). A segment is counted as robust only if it is completely
robust, even if the solution obtained lies on the robust part of a partially robust
segment.
In the example of Fig. 5.1, there are 10 segments and 9 Pareto optimal
solutions obtained. The segments S1, S8, S9, and S10 are not robust, whereas
segments S2 to S7 are robust. Among the six robust sectors, three are occupied
by at least one solution. Therefore, the coverage measure is Φ = 3/6 = 0.5,
meaning that 50% of the robust segments (approximately the robust Pareto
optimal front) are covered by the Pareto optimal solutions obtained.
Some comments on the theoretical effectiveness of the proposed measure are
written in the following paragraphs. Note that the grey line determines the ro-
bustness of points on the front; it gives no details regarding points away from the
front. It also assumes one-to-one mappings, whereas in many real-world prob-
lems there are many-to-one mappings, which may have different robustnesses.
In the following figures, a segment bounded on at least one side with a red line
is not robust. By contrast, a segment with two blue lines is robust.
• The greater the number of occupied robust segments, the higher the cov-
erage (see Fig. 5.2):
\sum_{n=1}^{N} \phi_n^1 > \sum_{n=1}^{N} \phi_n^2 \longrightarrow \Phi^1 > \Phi^2 \quad (5.3)
Figure 5.2: Effect of the number of occupied robust segments on the proposed
coverage measure (Φ = 0.33333 versus Φ = 0.58333)
Figure 5.3: Zero effect of occupied non-robust segments on the proposed coverage
measure
• Since the segments with partially robust regions are omitted, they have no
effect on the final value of Φ (see Fig. 5.4).
• Since Φ counts the number of segments (not the solutions in the segments),
a large number of solutions obtained does not necessarily increase Φ. For
example, an algorithm that finds a large set of solutions in a single segment
shows low coverage by the proposed measure.
Figure 5.4: Segments that are partially robust do not count when calculating Φ
(here Φ = 0)
Figure 5.5: The accuracy of the proposed coverage measure increases in
proportion to the number of segments (Φ = 0.33333 versus Φ = 0.090909)
• The coverage measure cannot be calculated when Rmin < min (R(robustness curve)),
and this is considered an exceptional case (see Fig. 5.6).
[Figure omitted: example with Φ = 0, where Rmin lies below the robustness curve]
Figure 5.6: Effect of the minimum robustness on the number of robust segments
and Φ
[Figure omitted: example with Φ = 0.73684, where Rmin lies above the robustness curve]
Figure 5.7: All segments are converted to robust and counted when Rmin >
max (R(robustness curve))
[Figure omitted: the objective space (maximise f1, maximise f2) divided into vertical robust segments S1 to S9 by the Pareto front, the robustness curve, and the minimum robustness line]
The procedure for calculating the success ratio is very similar to that of the
coverage measure. The objective space is divided into vertical segments. The
reason for choosing vertical segments instead of diagonal ones is the importance
of the robustness of each robust optimal solution obtained. In other words, a
vertical line is drawn from each point to intersect the robustness curve. If the
intersection point lies below the minimum desired robustness, the solution is
robust. When calculating the Φ measure, the coverage of a given set of solutions
is important no matter how far they are from the true robust Pareto optimal
solutions, so a solution can itself be non-robust yet lie in a robust segment owing
to its corresponding robust reference point. An example of this phenomenon is
illustrated in Fig. 5.9, which shows a solution that is located in a robust segment
when calculating Φ but in a non-robust segment when defining the Γ measure.
After vertically dividing the objective space into N segments, the success ratio
of an algorithm is calculated by dividing the number of solutions obtained in the
robust sectors by the number of solutions obtained in the non-robust sectors.
The success ratio is mathematically expressed as:
Γ = γ_R / (1 + γ_NR) (5.4)
where: γ_R = Σ_{m=1}^{M} p_m^R (5.5)
130 5. Performance measures
[Figure 5.9 omitted: two panels showing the same solution located in a robust segment when calculating Φ (left) but in a non-robust segment when defining Γ (right)]
p_m^R = { 1 if ∃ x⃗ ∈ PS, β_{m−1} ≤ f_1(x⃗) ≤ β_m, R(P_n) ≤ R_min; 0 otherwise } (5.6)
γ_NR = Σ_{m=1}^{M} p_m^NR (5.7)
p_m^NR = { 1 if ∃ x⃗ ∈ PS, β_{m−1} ≤ f_1(x⃗) ≤ β_m, R(P_n) > R_min; 0 otherwise } (5.8)
• The success ratio of an algorithm equals zero if there are no robust solutions
found (see Fig. 5.10):
γ R = 0 −→ Γ = 0 (5.9)
[Figure omitted: example with Γ = 0]
Figure 5.10: Success ratio is zero if there is no robust solution in the set of
solutions obtained
• The success ratio equals γ_R when all solutions obtained lie in robust segments (see Fig. 5.11):
γ_NR = 0 −→ Γ = γ_R (5.10)
[Figure omitted: example with Γ = 4]
Figure 5.11: Example of the success ratio for a set that contains only robust
solutions
• The greater the number of robust solutions obtained, the higher the success
ratio (see Fig. 5.12):
(γ_1^R > γ_2^R) ∧ (γ_1^NR = γ_2^NR) −→ Γ_1 > Γ_2 (5.11)
[Figure omitted: two example sets with Γ = 1 (left) and Γ = 1.5 (right)]
Figure 5.12: Success ratio increases in proportion to the number of robust solutions
obtained
• The fewer non-robust solutions obtained, the higher the success ratio (see
Fig. 5.13):
(γ1R = γ2R ) ∧ (γ1N R > γ2N R ) −→ Γ1 < Γ2 (5.12)
[Figure 5.13 omitted: two example sets with Γ = 0.6 (left) and Γ = 0.75 (right)]
• The success ratio equals zero when Rmin < min (R(robustness curve)) (see
Fig. 5.14).
[Figure omitted: example with Γ = 0]
Figure 5.14: Effect of minimum robustness on success ratio when Rmin <
min (R(robustness curve))
• The Γ measure counts all Pareto optimal solutions obtained and reduces
to a normal success measure (all segments are assumed to be robust) when
Rmin > max (R(robustness curve)) (see Fig. 5.15).
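The remarks above can be made concrete with a short sketch of Eqs. (5.4) to (5.8), reading γ_R and γ_NR as the counts of occupied robust and non-robust segments per Eqs. (5.5) and (5.7). All names here are assumed for illustration, not taken from the thesis.

```python
def success_ratio(solutions_f1, boundaries, robust_flags):
    """Gamma = gamma_R / (1 + gamma_NR), Eq. (5.4).

    gamma_R (gamma_NR) counts the robust (non-robust) vertical segments
    that contain at least one obtained solution.
    """
    n_segments = len(robust_flags)
    occupied = [False] * n_segments
    for f1 in solutions_f1:
        for n in range(n_segments):
            if boundaries[n] <= f1 <= boundaries[n + 1]:
                occupied[n] = True
    gamma_r = sum(1 for n in range(n_segments) if occupied[n] and robust_flags[n])
    gamma_nr = sum(1 for n in range(n_segments) if occupied[n] and not robust_flags[n])
    return gamma_r / (1 + gamma_nr)

bounds = [0.0, 0.2, 0.4, 0.5, 0.6, 0.8, 1.0]
# All occupied segments robust: Gamma = gamma_R (cf. Fig. 5.11, where Gamma = 4)
print(success_ratio([0.1, 0.3, 0.55, 0.9], bounds, [True] * 6))  # 4.0
```

In line with Eq. (5.9), the sketch returns 0 whenever no occupied segment is robust, and values above 1 when many robust segments are occupied and few non-robust ones are.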
It should be noted that the proposed performance measures are designed for
theoretical studies, in which the robust Pareto fronts of the test functions are
known. However, these measures can also be employed to quantify the performance
of algorithms on real problems, subject to the availability of a known true
Pareto optimal front. If the true Pareto optimal front is unknown (which is
generally the case in real problems), the following steps should be completed
before the proposed performance measures can be used:
• Solve the problem with a robust algorithm using the maximum possible
number of search agents, iterations, and sampling points to find an accurate
approximation of the true robust Pareto optimal front.
[Figure omitted: example with Γ = 16]
Figure 5.15: Effect of minimum robustness on success ratio when Rmin >
max (R(robustness curve))
• Calculate the robustness of each solution obtained in the first step by
resampling n perturbed solutions around it.
5.3 Summary
Due to the lack of performance measures in the field of robust optimisation, we
cannot systematically design a robust algorithm. As the second phase of a sys-
tematic robust algorithm design process, this chapter proposed two performance
measures for quantifying the performance of robust multi-objective algorithms
for the first time. Both proposed measures are able to evaluate the performance
of a robust multi-objective algorithm from different perspectives quantitatively.
The proposed coverage measure quantifies the distribution of the Pareto optimal
solutions obtained by algorithms along the robust front, while the proposed
success ratio allows the number of robust and non-robust solutions obtained to
be calculated.
For each of the proposed performance indicators, several tests were conducted
on manually created robust Pareto optimal fronts. The tests showed that the
coverage and success measures are able to quantify, respectively, the spread of
the robust Pareto optimal solutions across the robust regions and the number
of robust and non-robust solutions obtained.
There is no specific performance measure in the field of robust multi-objective
optimisation, so the proposed measures in this chapter fill this substantial gap.
Without these measures, we can only observe which algorithm is better in a
qualitative sense. However, the proposed measures allow us to reliably investi-
gate and confirm how much better an algorithm is. In addition, they are helpful
in determining the extent to which changes to algorithms are beneficial. With
such measures, therefore, a systematic robust algorithm design process is possible
not only in this thesis but also in other works in the future.
The remarks for each of the measures suggested that the proposed measures
allow designers to benchmark their algorithms effectively and quantitatively.
They will be the main comparison measures in this thesis as well. In Chapter 8,
experimental results will demonstrate the effectiveness of the proposed measures
in practice.
Chapter 6
Improving robust optimisation techniques
[Chapter-opening diagram omitted: the three phases of a systematic robust design process, i.e. robust test function design, robust performance metric design, and robust algorithm design (the subject of this chapter)]
The last phase of a systematic robust design process is algorithm design. Without
benchmark problems and performance metrics, we cannot quantitatively compare
ideas and find out which are better than others. The phases proposed in
Chapters 4 and 5, benchmark problems and performance metrics, allow us to
reliably and confidently compare and evaluate new ideas.
Although algorithm evaluation/verification requires test functions and per-
formance metrics, the algorithm design itself includes several steps. An algorithm
design or improvement process starts with new ideas. An idea might be to hy-
bridise algorithms, to integrate new operators in an algorithm, or to propose a
novel approach.
Currently, the two approaches to robust optimisation in this field, explicit
and implicit, suffer from two main drawbacks: high computational cost and
low reliability. They are therefore less applicable to real problems with
computationally expensive cost function(s). In addition, the unreliability of
implicit methods prevents us from confidently finding robust solution(s), which
is critical for real problems. In order to alleviate such shortcomings, this chapter
proposes several new ideas and establishes novel approaches for reliably finding
robust solution(s) in both single-objective and multi-objective search spaces,
without the need for additional true function evaluations.
Firstly, a confidence measure is proposed to define the degree of robustness of
solutions during optimisation when using meta-heuristics. Secondly, confidence-
based relational operators are proposed to establish Confidence-based Robust
σ = sqrt( Σ_{i=1}^{n} (d̄ − d_i)² / (n − 1) ) (6.2)
[Figure omitted: a current solution with a neighbourhood of radius r, sampled solutions P1, P2, and P3, their average point, and standard-deviation boundaries (±Std.x, ±Std.y)]
Figure 6.1: Confidence measure considers the number, distribution, and distance
of sampled points from the current solution
where n ≥ 2, d̄ is the average distance between the current solution and all the
sampled points within the neighbourhood, and d_i is the Euclidean distance of
the i-th sampled point from the current solution.
Note that due to the stochastic nature of meta-heuristics, we assume that the
distribution of sampled points within r radius around a solution is approximately
uniform. Therefore, if the sampled points are closer to the solution, they give
better confidence about the robustness. The concepts of the proposed metric
and components involved are illustrated in Fig. 6.1.
This figure shows that the proposed confidence measure defines a neighbourhood
with radius r around every solution during optimisation. Defining this radius
allows different levels of perturbation in the parameter space to be investigated.
By assuming a neighbourhood, an algorithm is able to differentiate between the
confidence levels contributed by neighbouring solutions closer to or farther from
the main solution. Obviously, previously sampled solutions closer to the main
solution are better able to assist us in confirming its robustness. The proposed
confidence measure formulation considers this fact by dividing the factors of the
number of solutions and their distribution by r.
Fig. 6.1 also shows that the proposed confidence measure considers the num-
ber of previously sampled points in the neighbourhood as well as their distri-
bution. This figure shows that considering both of these factors is essential
because the number of solutions is not able to show the status of neighbouring
solutions in terms of distribution. One solution may have more sampled points
but without broad distribution. As may be seen in Fig. 6.1, the distance of each
previously sampled solution is first calculated with respect to the main solu-
tion. The standard deviation is then employed to indicate the dispersion of the
previously sampled solutions.
Technically, in order to calculate the confidence measure, the Euclidean distance
between a particular position and each previously sampled point must be
computed to find the points within a desirable distance. Therefore, the
computational complexity of the proposed metric is O(n_s d), where n_s is the
number of previously sampled points and d indicates the dimension. This is the
computational complexity of calculating the Euclidean distance between each
previously sampled solution and the main solution.
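The calculation just described can be sketched as follows. This is an illustrative Python sketch, not thesis code: it computes only the components entering the confidence measure (the neighbour count n within radius r, the mean distance d̄, and the dispersion σ of Eq. (6.2)); the way these components are combined into C is defined earlier in the chapter and is not reproduced here.

```python
import math

def confidence_components(solution, sampled_points, r):
    """Return (n, d_bar, sigma) for one solution: the number of previously
    sampled points inside the radius-r neighbourhood, their mean Euclidean
    distance to the solution, and their dispersion per Eq. (6.2).
    Cost is O(ns * d) for ns sampled points in d dimensions."""
    dists = [math.dist(solution, p) for p in sampled_points]
    dists = [d for d in dists if d <= r]  # keep only the neighbourhood
    n = len(dists)
    if n < 2:  # Eq. (6.2) needs n >= 2; with n = 0 the confidence C is 0
        return n, (dists[0] if n == 1 else 0.0), 0.0
    d_bar = sum(dists) / n
    sigma = math.sqrt(sum((d_bar - d) ** 2 for d in dists) / (n - 1))
    return n, d_bar, sigma
```

The distant point outside r is discarded, so only genuine neighbours contribute to d̄ and σ, as the text requires.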
Some remarks on the theoretical effectiveness of the proposed C metric are as follows:
• The confidence level of a solution with no previously sampled points in its neighbourhood is zero:
n = 0 =⇒ C = 0 (6.3)
• The confidence level of those solutions with the same number of neigh-
bouring samples evaluated within equal radii are differentiated based on
the dispersion:
[Flow chart omitted: start; generate an initial random population; evaluate solutions; calculate the confidence level of solutions; if a solution is confidently better, modify/evolve/combine solutions, otherwise discard non-confident solutions and/or randomly modify solutions; repeat until the termination condition is satisfied; end]
Figure 6.2: Flow chart of the general framework of the proposed confidence-based
robust optimisation
is subject to one condition. Since operating conditions are secondary inputs to
a system and are not usually considered as parameters to be optimised (they
are normally treated as fixed values), they first have to be parametrised and
may then be optimised by a confidence-based robust optimiser. This means that
the operating conditions will be changed and optimised by the optimiser in
addition to the other parameters.
In this thesis, the PSO algorithm is chosen as the first case study. Later,
the CRO method will be applied using GA. With the proposed metric, generally
speaking, there are two metrics for finding robust solutions: the robustness and
confidence metrics. The former defines the robustness of a search agent of the
meta-heuristic, whereas the latter defines how confident we are in the robustness
of the solution.
In the simple robust PSO algorithm, the particles are compared as follows:
where R(.) is the robustness indicator. The gBest and pBests are updated when a
particle finds a better solution in the search space. With the proposed confidence-
based operators, however, two new Confidence-based Robust PSOs (CRPSO) are
proposed as follows:
In CRPSO1, the best particle obtained so far is replaced by a particle if and
only if that particle is confidently better than it, as follows:
For minimisation:
where ~x is a particle.
For maximisation:
where ~x is a particle.
In order to implement this the confidence of the best solution obtained so
far (gBest) is stored in a new variable called cgBest. It is clear that the gBest
is updated if and only if the confidence level and robustness indicator are both
better in CRPSO1.
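The gBest update rule just described can be sketched as follows. This is an illustrative sketch for minimisation, assuming that a lower robustness indicator and a higher confidence level are both "better"; the names are not from the thesis.

```python
def update_gbest(gbest, gbest_r, cgbest, particle, particle_r, particle_c):
    """CRPSO1-style update: replace gBest only when the particle is
    confidently better, i.e. better in BOTH the robustness indicator R
    (lower is better here) and the confidence level C (higher is better).
    cgbest stores the confidence level of the current gBest."""
    if particle_r < gbest_r and particle_c > cgbest:
        return particle, particle_r, particle_c  # confident update
    return gbest, gbest_r, cgbest  # keep the current gBest
```

A particle that improves robustness but not confidence (or vice versa) leaves gBest untouched, which is exactly the restriction that distinguishes CRPSO1 from a simple robust PSO.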
In CRPSO2, the personal best solutions (pBest) found so far are also updated
(in addition to the gBest) based on the confidence level of solutions as follows:
144 6. Improving robust optimisation techniques
For minimisation:
For maximisation:
A new vector called cpBest is defined to store the best confidence level of
particles over the course of iterations.
As may be inferred from these equations, two confidence-based update proce-
dures were designed to perform CRO using CRPSO. The two CRPSOs proposed
show how the proposed confidence-based operators establish a new way of robust
optimisation using meta-heuristics.
E := Ii ⇐⇒ ∀k | Ii ≤c Ik (6.16)
These two components are incorporated into the simple RGA, and the resulting
algorithms are named Confidence-based Robust Genetic Algorithms (CRGA). In the first version,
CRGA1, the best individual in each generation is selected by using Equation 6.16
and moved directly to the next generation based on the confidence-based elitism
component. Note that if two solutions are better than the elite, the first one re-
places the elite. If the second solution has a higher confidence and better fitness,
the elite is updated again.
In the second version, CRGA2, the individuals of each generation are allowed
to move to the next generation subject to Equation 6.17.
Definition 6.3.5 (Confidence-based Pareto front): The set containing the value
of objective functions for confidence-based Pareto solutions:
∀i ∈ (1, 2, ..., o)
Note that Definition 6.3.5. means that it is possible to have a ‘thick’ front
(as in e.g. probabilistic domination).
Some comments on the proposed Pareto optimality concepts are:
• If two solutions are non-dominated with respect to each other, they are
also confidently non-dominated with respect to each other:
• The confidence-based Pareto solution set contains all the confident solu-
tions and none of them can confidently dominate another.
6.4 Summary
This chapter began with the proposal of a novel confidence measure for calcu-
lating the confidence level of robust solutions. Five new confidence-based rela-
tional operators were defined using the confidence measure. In addition, a new
approach of robust optimisation called CRO was established in order to employ
the confidence measure and relational operators for performing confident robust
heuristic optimisation. The novel approach was employed to design two new
variants of robust PSO and GA. Several theoretical comments and discussions
were made about the potential success of the proposed concepts in finding robust
optimal solutions at the end of the first part.
The second part of this chapter was dedicated to the proposal of confidence-
based Pareto optimality concepts using the confidence metric. The confidence-
based Pareto dominance was integrated with the archive update mechanism of
RMOPSO as a case study. The main contribution of the second part was the
proposal and establishment of a novel perspective called CRMO. Similarly to
the first part, several remarks were discussed to investigate and theoretically
prove the effectiveness of the proposed CRMO approach in finding robust Pareto
optimal solutions without extra function evaluations.
In summary, the proposed concepts in this chapter have the potential to as-
sist different algorithms in finding robust solutions in single- and multi-objective
search spaces reliably and without extra function evaluations. In the following
chapters, the CRPSO, CRGA, and CRMOPSO algorithms are employed to
solve the test functions proposed in Chapter 4 as well as a real problem. These
algorithms are compared with a diverse range of algorithms from the literature,
quantitatively and qualitatively, using the benchmark problems and
performance metrics proposed in Chapters 4 and 5.
Chapter 7
Confidence-based robust
optimisation
Algorithm design is the last phase of a systematic design process. It can be
considered the main phase, since the other two phases are essential for starting
it. Although an idea can be expressed in this phase without the other two
phases, test functions and performance metrics are needed to investigate and
prove the usefulness of the idea. To prove the merits of the ideas proposed in
Chapter 6, a number of experiments are systematically undertaken in this
chapter and the next, utilising the tools proposed in Chapters 4 and 5.
with some of the benchmark functions. The experiment was conducted using five
particles over 100 iterations. The other initial parameters were as follows:
• C1 = 2
• C2 = 2
• Initial velocity: 0
• Maximum velocity: 6
The radii of neighbourhoods were considered fixed and equal to the maximum
perturbation in parameters. The maximum perturbation of each benchmark
problem is available in Appendix A. Note that a simple expectation measure,
which is based on the mean of the neighbouring solutions, is calculated as the
robustness indicator.
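As a minimal sketch, this expectation-based robustness indicator averages the objective value of a solution and those of its sampled neighbours. Names are illustrative and the neighbours are assumed to be already available; this is not code from the thesis.

```python
def expectation_measure(f, x, neighbours):
    """Expectation-based robustness indicator: the mean of the objective f
    over the solution x and its sampled neighbouring solutions."""
    values = [f(x)] + [f(p) for p in neighbours]
    return sum(values) / len(values)

# e.g. f = sum of squares; a lower neighbourhood mean suggests a more robust
# solution when minimising
f = lambda p: sum(v * v for v in p)
print(expectation_measure(f, (1.0,), [(2.0,), (3.0,)]))  # (1 + 4 + 9) / 3
```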
The first column of Fig. 7.1 and Fig. 7.2 shows the benchmark function,
the search history, and the optimum finally obtained. The number of sampled
points detected around each particle in each iteration is provided in the second
column. The Cbests, expectation measure (mean of particle and its neighbour-
ing solutions), and convergence curves are provided in the last three columns.
Further results, obtained by CRPSO1 when the maximum number of iterations
was increased to 500, are available in Table 7.1. This table shows the number of
times that the confidence operators and confidence measures were triggered
over the course of the iterations, revealing how many times the proposed
measure and operators assisted CRPSO1 in making confident decisions
(updating gBest) during optimisation.
In Fig. 7.1 and Fig. 7.2, the second columns of Test Problems (TP) show
that the number of sampled points increased over the course of iterations. As
mentioned in the previous chapters, the confidence measure is highly propor-
tional to the number of neighbouring solutions. Moreover, the PSO algorithm
tends to search mainly around the global best solution obtained so far. These
are the reasons for the incremental behaviour in cBest curves of TP1 to TP10.
However, this incremental trend stopped for a period of iterations occasionally.
This shows that there was no confidence to update gBest in those iterations.
[Figure omitted: for each of TP1 to TP5, panels showing the search history, samples in neighbourhoods, Cbest, the robustness measure, and the convergence curve over 100 iterations]
Figure 7.1: Behaviour of CRPSO1 finding the robust optima of TP1, TP2, TP3,
TP4, and TP5
For instance, the Cbest curve of TP1 shows that the confidence of gBest
remains unchanged for almost half of the iterations. This shows that the PSO
algorithm did not confidently find better robust solutions with which to replace
gBest from roughly the 25th to the 75th iteration. The same pattern can be
observed in the figures for TP2, TP3, TP6, TP7, TP8, TP9, and TP10.
The results of Table 7.1 show that gBest was confidently updated 29, 23, and 34
[Figure omitted: for each of TP6 to TP10, panels showing the history of search, samples in neighbourhoods, Cbest, the robustness measure, and the convergence curve over 100 iterations]
Figure 7.2: Behaviour of CRPSO1 finding the robust optima of TP6, TP7, TP8,
TP9, and TP10
times on TP1, TP2, and TP3, respectively. The third column of Table 7.1 shows
that 130, 69, and 180 times the confidence of a particle became better than that
of gBest on the same benchmark functions, but there was no superiority in terms
of the robustness.
It may be noted that the third column of Table 7.1 shows that the confidence
measure assists the CRPSO1 algorithm to confidently guide its particles in
Table 7.1: Number of times that the confidence operators and confidence measure
triggered
Test function Pi ≤c gBest C(Pi ) > C(gBest)
TP1 29/500 130/500
TP2 23/500 69/500
TP3 34/500 180/500
TP4 63/500 272/500
TP5 9/500 434/500
TP6 12/500 166/500
TP7 13/500 243/500
TP8 21/500 85/500
TP9 20/500 217/500
TP10 9/500 95/500
solving all the benchmark functions approximately 190 times (on average). This
is almost 40% of iterations on average for all the benchmark functions, showing
that a normal PSO algorithm was prone to make non-confident decisions on these
iterations when not using a confidence metric. This suggests the merit of the
proposed confidence measure, confidence-based operators, and CRO in guiding
search agents of CRPSO (or any other meta-heuristics) toward robust optima
confidently.
Another interesting point that can be inferred from Table 7.1 is the correla-
tion between making a confident decision and the number of local robust/non-
robust optima. The third column of Table 7.1 shows that TP4, TP6, TP8, TP9,
and TP10 have the least number of confident decisions compared to TP1, TP2,
TP3, and TP5. Other points worth noting can be observed in TP6, TP7, and
TP9 of Fig. 7.2. The convergence curves on these benchmark functions show
that the fitness of CRPSO1 degrades over the course of iterations. In TP6,
TP7, and TP9 this phenomenon happens near iterations 40, 30, and 10,
respectively. This shows that the proposed method assists CRPSO1
to converge to robust optima located in non-global valleys/peaks.
the search agents in CRPSO1 are able to diverge from non-robust solutions by
using the proposed confidence measure.
example, if the best algorithm is CRPSO1, the pairwise comparison is done be-
tween CRPSO1/CRPSO2, CRPSO1/IRPSO, and CRPSO1/ERPSO. The same
approach is followed throughout the thesis.
As the results of Table 7.2 and Table 7.3 suggest, generally speaking, the
proposed CRPSO algorithms successfully outperform IRPSO on the majority
of benchmark functions. The statistical results show that this superiority is
statistically significant in some cases, which indicates that the reliability of
IRPSO can be improved with the proposed confidence measure concepts. The
results of the CRPSO algorithms are also very competitive with those of
ERPSO, the algorithm with the highest reliability, which again shows the
merits of the proposed algorithms.
It is worth discussing the higher performance of CRPSO1 compared to CRPSO2.
Table 7.2 shows that CRPSO1 outperforms CRPSO2 on the majority of test
functions, although the results of the two algorithms are very close on some of
the test problems. This shows that the idea of only confidently updating the global
best obtained so far is better than confidently updating global best and personal
bests found so far. A possible reason for this is that systematic exploration is
weakened when CRPSO2 is required to confidently update the pBests obtained
so far as well as the gBest. However, CRPSO1 allows the particles to update
their pBests normally and only restricts update of the gBest. This assists the
particles to search freely and find promising regions of a search space while they
have to find a highly confident solution in order to be able to update gBest.
However, it is very hard for the particles of CRPSO2 (especially in the initial
iterations) to find confident solutions and update their pBests, so the particles
tend to fly randomly around the search space rather than being somewhat
anchored around their pBests.
In order to provide further analysis in terms of exploration and exploitation,
the benchmark functions can be divided into two groups: uni-modal versus multi-
modal. The uni-modal functions have one global robust optimum and are highly
suitable for benchmarking the exploitation of the robust algorithm. However,
multi-modal functions have many local optima (robust or non-robust) with the
number increasing with dimension, making them suitable for benchmarking the
exploration of the robust algorithm. Note that TP1, TP2, TP5, and TP7 are
uni-modal, whereas the rest of benchmark functions are multi-modal.
Table 7.2: Statistical results of the RPSO algorithms over 30 independent runs:
(ave ± std(median))
Algorithm TP1 TP2 TP3
CRPSO1 0.516 ± 0.011(0.519) 1.01 ± 0.0224(0.999) 0.525 ± 0.126(0.603)
CRPSO2 0.521 ± 0.0138(0.527) 0.992 ± 0.0525(0.967) 0.526 ± 0.127(0.604)
IRPSO 0.52 ± 0.0138(0.523) 1.01 ± 0.0238(1) 0.526 ± 0.126(0.603)
ERPSO 0.251 ± 0.0888(0.24) 1.02 ± 0.0374(1.01) 0.565 ± 0.185(0.63)
Algorithm TP4 TP5 TP6
CRPSO1 0.47 ± 0.0673(0.477) −11.7 ± 1.32(−11.9) −9.29 ± 1.39(−8.87)
CRPSO2 0.504 ± 0.0914(0.569) −6.44 ± 1.41(−6.23) −3.99 ± 0.937(−4.04)
IRPSO 0.525 ± 0.0534(0.522) −6.63 ± 1.28(−6.75) −3.07 ± 1.42(−2.75)
ERPSO 0.485 ± 0.0708(0.476) −6.22 ± 1.24(−6.74) −3.42 ± 1.32(−3.28)
Algorithm TP7 TP8 TP9
CRPSO1 −3.88 ± 1.23(−4.1) −0.652 ± 0.0584(−0.626) −30.4 ± 1.6(−30.3)
CRPSO2 −3.97 ± 0.483(−3.94) −0.838 ± 0.0523(−0.84) −29.7 ± 1.24(−30)
IRPSO −3.65 ± 0.658(−3.55) −0.623 ± 0.00589(−0.623) −30 ± 2.3(−29.8)
ERPSO −9.41 ± 1.13(−9.38) −0.599 ± 0.0806(−0.623) −27 ± 1.6(−27.1)
Algorithm TP10 TP11 TP12
CRPSO1 2.23 ± 0.52(2.48) 6358.8 ± 3553.3(7267.6) 9.5E3 ± 1.9E3(8.8E3)
CRPSO2 2.47 ± 0.63(2.56) 5027.1 ± 2433.2(4720.1) 1.3E4 ± 3.5E3(1.4E4)
IRPSO 1.85 ± 0.26(1.86) 7092.8 ± 2982.7(7447.5) 1.1E4 ± 3.8E3(9.0E3)
ERPSO 2.81 ± 0.65(2.67) 3326.7 ± 3599.5(1082.7) 2.2E5 ± 1.4E5(2.3E5)
Algorithm TP13 TP14 TP15
CRPSO1 35.1 ± 1.27(35) 50.2 ± 19.1(45.1) 434.89 ± 5.006(433.33)
CRPSO2 36.1 ± 2.45(36.1) 43.4 ± 2.02(43.3) 458.70 ± 55.61(433.81)
IRPSO 50.5 ± 49.2(35) 60.3 ± 49.8(44.3) 433.08 ± 0.99(433.22)
ERPSO 36 ± 0.05(36) 44.3 ± 1.11(44.3) 449.44 ± 30.30(435.08)
Algorithm TP16 TP17 TP18
CRPSO1 1.4 ± 0.342(1.31) 0.558 ± 0.333(0.458) 0.107 ± 0.068(0.0858)
CRPSO2 1.55 ± 0.493(1.42) 4.4 ± 11.8(0.768) 1.06 ± 2.9(0.133)
IRPSO 1.56 ± 0.551(1.42) 0.645 ± 0.518(0.72) 0.127 ± 0.0802(0.0863)
ERPSO 1.14 ± 0.139(1.07) 0.86 ± 0.62(0.857) 0.35 ± 0.123(0.403)
Algorithm TP19 TP20
CRPSO1 −0.27 ± 0.088(−0.29) −1.81 ± 0.898(−1.43)
CRPSO2 −0.28 ± 0.060(−0.29) −0.589 ± 2.04(−0.711)
IRPSO −0.26 ± 0.049(−0.23) −0.498 ± 1.76(−0.906)
ERPSO −0.37 ± 0.0021(−0.39) −1.38 ± 2.01(−1.76)
The results on uni-modal benchmark problems show that the proposed algorithms
are not significantly better than IRPSO. This originates from the fact that the
unreliability of the IRPSO is a bonus in a unimodal search space, since this
algorithm quickly converges towards the global optimum, which in most cases
is also the robust optimum. Obviously, this leads the IRPSO algorithm towards
a local solution in a multi-modal search space. However, the proposed CRPSO
algorithms are limited in terms of updating the global best and personal bests,
which slows down their convergence speed. This originates from the proposed
confidence measure, which prevents CRPSO1 and CRPSO2 from premature
exploitation and consequent stagnation in local robust/global optima.
To sum up, firstly, the results demonstrated that the proposed confidence
measure is able to assist optimisation algorithms and improve their reliability.
Secondly, the results of CRPSO1 and CRPSO2 revealed the merits of the
proposed confidence-based relational operators and the new CRO approach in
finding robust optima. Thirdly, the results on the uni-modal benchmark
functions showed that both proposed algorithms exhibit slow convergence
behaviour, preventing them from easily stagnating in local solutions (robust or
non-robust), especially in the initial iterations. Finally, the results on the
multi-modal functions indicate that the proposed methods are able to avoid
local robust and non-robust optima as well.
In the next section the proposed confidence measure and relational operators
are applied to a GA in order to further investigate the applicability of these novel
concepts to different meta-heuristics.
Table 7.4: Statistical results of the RGA algorithms over 30 independent runs:
(ave ± std(median))
Algorithm TP1 TP2 TP3
CRGA1 2.36 ± 0.799(2.15) 2.02 ± 0.256(2.07) 1.43 ± 0.38(1.42)
CRGA2 0.443 ± 0.359(0.278) 1.17 ± 0.162(1.13) 0.682 ± 0.128(0.722)
IRGA 2.35 ± 0.96(2.17) 2.09 ± 0.217(2.04) 1.56 ± 0.317(1.8)
ERGA 1.92 ± 0.332(1.92) 2.05 ± 0.175(2.04) 1.6 ± 0.23(1.67)
Algorithm TP4 TP5 TP6
CRGA1 0.817 ± 0.158(0.788) −9.06 ± 1.39(−8.67) −5.32 ± 1.14(−5.24)
CRGA2 0.468 ± 0.0682(0.443) −12.5 ± 1.01(−12.3) −10.9 ± 1.65(−10.6)
IRGA 0.74 ± 0.167(0.712) −9.44 ± 1.49(−9.3) −6.64 ± 0.67(−6.49)
ERGA 0.712 ± 0.124(0.692) −8.98 ± 1.65(−8.49) −5.29 ± 1.66(−5.23)
Algorithm TP7 TP8 TP9
CRGA1 −5.13 ± 1.15(−5.02) −0.876 ± 0.0323(−0.879) −27.6 ± 1.61(−27.8)
CRGA2 −9.81 ± 0.461(−9.64) −0.615 ± 0.113(−0.617) −23.2 ± 1.72(−23.2)
IRGA −6.62 ± 0.844(−6.23) −0.564 ± 0.137(−0.531) −23.8 ± 2.51(−24.1)
ERGA −5.87 ± 1.33(−5.46) −0.532 ± 0.146(−0.494) −23.3 ± 2.06(−22.3)
Algorithm TP10 TP11 TP12
CRGA1 4.27 ± 1.44(3.98) 1.3E4 ± 7.9E3(1.3E4) 2.2E6 ± 2.8E6(1.3E6)
CRGA2 1.76 ± 0.421(1.61) 1.2E4 ± 7.8E3(1.2E4) 2.2E8 ± 1.8E8(1.6E8)
IRGA 4.17 ± 1.03(4.35) 1.8E4 ± 9.6E3(1.9E4) 4.1E8 ± 2.8E8(3.6E8)
ERGA 3.87 ± 1.77(3.67) 5.1E2 ± 3.9E2(3.3E2) 2.4E8 ± 8.8E7(2.2E8)
Algorithm TP13 TP14 TP15
CRGA1 81.5 ± 10.9(81.7) 254.97 ± 40.81(264.21) 673.37 ± 66.86(671.51)
CRGA2 218 ± 43.2(211) 226.76 ± 51.62(219.80) 665.84 ± 35.17(674.43)
IRGA 259 ± 46.3(270) 239.51 ± 59.23(229.67) 704.03 ± 67.23(708.44)
ERGA 224 ± 49.5(236) 88.15 ± 10.87(91.28) 510.41 ± 25.51(518.36)
Algorithm TP16 TP17 TP18
CRGA1 340.99 ± 89.11(343.56) 341.21 ± 150.36(313.37) 238.01 ± 43.49(260.77)
CRGA2 245.17 ± 65.42(250.17) 257.94 ± 96.35(259.78) 39.53 ± 14.49(38.61)
IRGA 229.27 ± 117.56(208.46) 287.02 ± 97.76(271.76) 284.93 ± 107.78(300.44)
ERGA 36.88 ± 13.41(38.67) 31.01 ± 15.60(28.53) 223.70 ± 42.16(228.25)
Algorithm TP19 TP20
CRGA1 −0.368 ± 0.0608(−0.395) 2.53 ± 1.16(2.49)
CRGA2 −0.21 ± 0.181(−0.228) 2.3 ± 0.958(2.2)
IRGA −0.211 ± 0.151(−0.233) 2.38 ± 0.97(2.25)
ERGA −0.216 ± 0.151(−0.232) −1.32 ± 0.64(−1.45)
uated points may give satisfactory knowledge about the robustness of the search
space, but does not guarantee having reliable, uniformly distributed neighbouring
solutions during optimisation. This is the reason for the poor results of IRGA,
in which individuals might crossover with non-robust individuals and evolve
toward non-robust regions of the search space. An individual might appear very
robust when it has only a small number of neighbouring solutions, since the
average of the neighbourhood is taken as the fitness of the individual in IRGA.
The comparison of individuals based on effective mean fitness becomes even more
unfair when the neighbouring solutions are very close to the individual. These
phenomena happened quite often during optimisation, especially in the initial
generations when there are few evaluated points.
Table 7.4 and Table 7.5 show that the results of CRGA1 and CRGA2 are
better than those of IRGA on the majority of benchmark functions. Generally
speaking, the superiority of the results is due to the confidence measure em-
ployed. The confidence measure assists CRGA1 and CRGA2 to consider not
only the robustness measure but also the number of neighbouring solutions, the
radius of the neighbourhood, and the distribution of neighbouring solutions in
distinguishing robust solutions. Therefore, the possibility of favouring a
non-robust solution due to few and close neighbouring solutions is much lower
than in the previous methods. The role of the confidence measure in driving the
individuals toward robust areas of the search space is significant, especially
in the initial generations when the individuals have very few neighbouring
solutions to prove their robustness.
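The precise formulation of the confidence measure is given in Chapter 6. As an illustration only (the specific weighting below is an assumption of this sketch, not the thesis formulation), a score that grows with the number of neighbouring solutions, shrinks with the neighbourhood radius, and rewards an even spread of neighbours could be written as:

```python
import math

def confidence(neighbours, centre, radius):
    """Illustrative confidence score for a candidate solution, combining the
    three ingredients discussed in the text: how many previously evaluated
    points fall in the neighbourhood, how large the neighbourhood radius is,
    and how evenly the neighbours are spread around the centre."""
    if not neighbours:
        return 0.0
    n = len(neighbours)
    dists = [math.dist(p, centre) for p in neighbours]
    mean_d = sum(dists) / n
    # Spread term: 1 when all neighbours sit at the same distance from the
    # centre, smaller when the distances (hence the distribution) are uneven.
    var = sum((d - mean_d) ** 2 for d in dists) / n
    spread = 1.0 / (1.0 + var)
    # More neighbours and a tighter radius both raise confidence.
    return (n / (n + 1.0)) * spread / (1.0 + radius)

centre = (0.0, 0.0)
close_uniform = [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)]
print(confidence(close_uniform, centre, radius=0.1))
print(confidence([], centre, radius=0.1))  # no neighbours: zero confidence
```

With no evaluated neighbours the score is zero, so an apparently robust but unsampled point cannot be trusted, which is the behaviour the text argues for.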
As may be observed in Table 7.4 and Table 7.5, the CRGA2 algorithm out-
performs CRGA1 on the majority of the benchmark problems. The CRGA1
algorithm has a confidence-based elitism component, in which the most robust
and confident individual obtained so far is saved and allowed to move without
any modification to the next generation(s). The reason this algorithm
outperforms IRGA is that the confidence-based component assists CRGA1 to favour
robust solutions. The confidence-based elitism also prevents the best individual
(elite) from being corrupted by the mutation operator. The advantage of this
method is that there is no significant loss of exploration, since all the
individuals except one are selected without considering the confidence measure.
However, the confident and robust elite is only one individual in the
population, so it is able to crossover with only one of the other individuals. There are
Table 7.6: Number of times that CRGA2 makes confident and risky decisions
over 100 generations
Function Child < Parent Confident decision Risky decision
TP1 125 78 47
TP2 120 54 66
TP3 130 59 71
TP4 133 58 75
TP5 145 82 63
TP6 64 58 6
TP7 133 57 76
TP8 96 45 51
TP9 156 59 97
TP10 77 73 4
Figure 7.3: Search history of GA and IRGA. GA converges towards the global
non-robust optimum, while IRGA fails to locate the robust optimum
Figure 7.4: Search history of CRGA1 and CRGA2. The exploration of CRGA2
is much less than that of CRGA1
provided. Note that the fourth benchmark function is chosen as the test bed, so
there are one global optimum, two local optima, and one robust optimum. The
history of evaluated points in the GA shows that this algorithm is able to find
the global optimum with satisfactory exploration of the search space. Fig. 7.3
shows that the individuals of IRGA tend to move toward the robust region of the
fourth benchmark function. However, the best robust solutions found are near the
local optimum on the right. This figure shows that there is no guarantee of
finding robust optima in IRGA despite the tendency of individuals to move toward
the robust regions of the search space. The search histories of CRGA1 and CRGA2
are shown in Fig. 7.4. It may be seen that the exploration of CRGA2 is much less
than that of CRGA1.
7.5 Summary
This chapter experimentally investigated the performance of the proposed
confidence-based robust optimisation. The CRPSO and CRGA were employed to solve
the challenging test beds proposed in Chapter 4. The results demonstrated the
value of the proposed concepts.
The results of this chapter also showed the merits of the proposed challenging
test problems when comparing different algorithms. Due to the similar
characteristics of the proposed test functions to real search spaces, the
performance of the CRO algorithms was verified confidently. It can therefore be
stated that the CRO techniques are able to find robust optima of real problems.
Chapter 8
Confidence-based robust
multi-objective optimisation
The previous chapter proved the merits of the confidence measure and confidence-
based robust optimisation techniques systematically. This chapter presents and
discusses the results of CRMOPSO as the first CRMO technique when solving
the current and proposed test functions in Chapter 4. The performance metrics
proposed in Chapter 5 are employed to compare the algorithms. Therefore, this
chapter systematically investigates and proves the merits of the confidence mea-
sure and confidence-based robust optimisation in multi-objective search spaces.
8. Confidence-based robust multi-objective optimisation 169
in Fig. 8.1, 8.2 and 8.3. Note that only the results on some of the test
functions are illustrated in this chapter; the complete results are presented in
Appendix C. The results are not compared with other meta-heuristics since the
different mechanisms of the algorithms would prevent us from distinguishing
whether the superior results of one algorithm were due to CRMO or to the
algorithm's underlying mechanism.
Figure 8.1: Robust fronts obtained for RMTP1, RMTP7, RMTP9, and
RMTP27, one test case per row.
Table 8.2: P-values of Wilcoxon ranksum test for the RMOPSO algorithms in
Table 8.1
Test function CRMOPSO IRMOPSO ERMOPSO
RMTP1 0.032983954 N/A 0.000246128
RMTP2 N/A 0.212293836 0.000182672
RMTP3 0.021133928 N/A 0.000329839
RMTP6 0.001706249 N/A 0.005795359
RMTP7 0.000439639 0.000439639 N/A
RMTP8 0.000182672 0.27303634 N/A
RMTP9 0.000246128 N/A 0.005390255
RMTP10 0.000246128 N/A 0.185876732
RMTP11 0.520522883 N/A 0.185876732
RMTP12 N/A 0.003447042 0.002827272
RMTP13 N/A 0.025748081 0.000246128
RMTP14 0.850106739 N/A 0.007284557
RMTP15 N/A 0.04515457 0.001706249
RMTP16 N/A 0.010410989 0.001402210
RMTP17 N/A 0.031209013 0.021133928
RMTP18 N/A 0.427355314 0.307489457
RMTP19 N/A 0.79133678 0.002827272
RMTP20 0.001706249 N/A 0.000850106
RMTP21 0.005707503 N/A 0.002202220
RMTP22 0.121224503 N/A 0.677584958
RMTP23 N/A 0.000768539 0.04515457
RMTP24 N/A 0.00058284 0.021224503
RMTP25 N/A 0.000439639 0.012293836
RMTP26 0.000182672 1.0000 N/A
RMTP27 N/A 0.049721889 0.000246128
RMTP33 0.000182672 N/A 0.000182672
RMTP34 0.000182672 N/A 0.000182672
RMTP35 0.000182672 N/A 0.000182672
RMTP36 N/A 0.049849977 0.017257456
RMTP37 N/A 0.020522883 0.003610514
RMTP38 N/A 0.79133678 0.009108496
RMTP39 N/A 0.017257456 0.000472675
RMTP40 N/A 0.001858767 0.000246128
RMTP41 N/A 0.677584958 0.04515457
RMTP42 0.384673063 N/A 0.000182672
RMTP43 0.427355314 N/A 0.001706249
RMTP44 0.472675594 N/A 0.000182672
Table 8.4: P-values of Wilcoxon ranksum test for the RMOPSO algorithms in
Table 8.3
Test function CRMOPSO IRMOPSO ERMOPSO
RMTP1 0.000329839 N/A 0.000246128
RMTP2 0.212293836 N/A 0.000182672
RMTP3 0.313298369 N/A 0.4621133928
RMTP6 0.317062499 N/A 0.4635795359
RMTP7 0.104109894 0.000439639 N/A
RMTP8 0.231826742 0.27303634 N/A
RMTP9 0.053902557 0.121224503 N/A
RMTP10 0.011329697 0.185876732 N/A
RMTP11 0.520522883 N/A 0.185876732
RMTP12 0.344704222 0.909721889 N/A
RMTP13 N/A 0.025748081 0.022886128
RMTP14 0.850106739 N/A 0.007284557
RMTP15 N/A 0.04515457 0.001706249
RMTP16 0.969849977 0.10410989 N/A
RMTP17 N/A 0.031209013 0.003763531
RMTP18 N/A 0.047584958 0.027355314
RMTP19 N/A 0.03133678 0.05108496
RMTP20 0.890465048 0.850106739 N/A
RMTP21 0.522022235 0.570750388 N/A
RMTP22 0.520522883 0.677584958 N/A
RMTP23 N/A 0.000768539 0.04515457
RMTP24 N/A 0.00058284 0.121224503
RMTP25 0.212293836 0.000439639 N/A
RMTP26 N/A 0.10410989 1.00000
RMTP27 0.909721889 0.16197241 N/A
RMTP33 0.018267235 N/A 0.0063344
RMTP34 0.001526336 N/A 0.003843352
RMTP35 0.000182672 N/A 0.000182672
RMTP36 0.969849977 N/A 0.911329697
RMTP37 N/A 0.520522883 0.003610514
RMTP38 0.009108496 N/A 0.79133678
RMTP39 0.017257456 N/A 0.472675594
RMTP40 0.053902557 0.185876732 N/A
RMTP41 0.05515457 N/A 0.677584958
RMTP42 0.384673063 N/A 0.000182672
RMTP43 0.427355314 N/A 0.001706249
RMTP44 0.046721824 N/A 0.472675594
Table 8.6: P-values of Wilcoxon ranksum test for the RMOPSO algorithms in
Table 8.5
Test function CRMOPSO IRMOPSO ERMOPSO
RMTP1 N/A 0.000329839 0.005795359
RMTP2 0.212293836 N/A 0.000182672
RMTP3 0.000329839 N/A 0.021133928
RMTP6 0.001706249 N/A 0.005795359
RMTP7 N/A 0.000439639 0.000182672
RMTP8 0.27303634 0.000182672 N/A
RMTP9 0.000246128 N/A 0.121224503
RMTP10 0.000246128 0.011329697 N/A
RMTP11 0.241321593 0.185876732 N/A
RMTP12 0.909721889 0.344704222 N/A
RMTP13 0.025748081 N/A 0.10410989
RMTP14 0.007284557 N/A 0.850106739
RMTP15 0.427355314 N/A 0.001706249
RMTP16 N/A 0.044022101 0.001041098
RMTP17 0.021133928 0.031209013 N/A
RMTP18 N/A 0.04273553 0.003074894
RMTP19 0.79133678 N/A 0.002827272
RMTP20 0.140465048 0.850106739 N/A
RMTP21 N/A 0.00220222 0.001750388
RMTP22 0.677584958 0.520522883 N/A
RMTP23 0.000768539 N/A 0.00220222
RMTP24 0.121224503 0.00058284 N/A
RMTP25 0.212293836 0.000439639 N/A
RMTP26 N/A 0.000182672 0.000104109
RMTP27 N/A 0.000246128 0.16197241
RMTP33 N/A 0.000182672 0.000768539
RMTP34 0.000182672 N/A 0.000182672
RMTP35 N/A 0.000273036 0.000182672
RMTP36 N/A 0.069849977 0.000017257
RMTP37 N/A 0.003610514 0.520522883
RMTP38 0.009108496 0.03108496 N/A
RMTP39 0.472675594 N/A 0.017257456
RMTP40 0.000246128 N/A 0.185876732
RMTP41 0.04515457 N/A 0.067758495
RMTP42 0.384673063 N/A 0.000182672
RMTP43 0.021133928 0.041706249 N/A
RMTP44 N/A 0.000182672 0.042675594
this algorithm had a greater ability to guide its solutions toward robust regions
of the Pareto optimal front. There is, however, a gap in the solutions obtained
on the least robust region of the Pareto optimal front.
The behaviour of algorithms in terms of finding robust regions of the Pareto
optimal front can be observed more clearly with RMTP27. In fact, this test
function is arranged deliberately to have a stair-shaped Pareto optimal front
in order to benchmark the ability of algorithms in terms of converging toward
robust regions of the front and refraining from finding non-robust solutions.
The quantitative results of the algorithms on RMTP27 show that the results of
CRMOPSO are significantly better than IRMOPSO and ERMOPSO. The CR-
MOPSO algorithm only approximates the first two stairs, which are considered
the most robust regions of the Pareto optimal front, as shown in Fig. 8.1. How-
ever, the robust solutions obtained by IRMOPSO and ERMOPSO tend to be
distributed on other, less robust regions as well.
Test functions RMTP11 to RMTP19 are bi-modal. The local front is the ro-
bust front, so the behaviour of algorithms in favouring a robust front instead of a
global front can be investigated. These test functions are designed with different
shapes of robust and global fronts in order to extensively test the performance
of the algorithms. In RMTP13 the shape of both local and global fronts is
convex. In contrast to the quantitative results in Fig. 8.1, the CRMOPSO
algorithm showed the highest convergence on this test function. This shows that
although the convergence of the proposed method is in general slightly lower, this may be
favourable when solving multi-modal test functions. The coverage and success
ratio of CRMOPSO were also very good on this test function. The best robust
solutions obtained for all algorithms are again illustrated in Fig. 8.2. It may be
seen that the best front obtained was from CRMOPSO. This algorithm did not
approximate even one non-robust solution for most of the test functions, proving
the merits of the proposed method. IRMOPSO was not able to approximate the
robust front over 30 runs for this problem, showing the potential unreliability
of using previous samples without a confidence measure. The ERMOPSO algorithm
also mostly tended to approximate the global front, but some solutions were
found on the robust front.
As Fig. 8.2 shows, RMTP15 and RMTP16 have identical linear global fronts that
make the approximation of the global front easy. These two test functions have
been deliberately arranged to have linear global fronts in order to observe what
Figure 8.2: Robust fronts obtained for RMTP13 to RMTP16 and RMTP19, one
test case per row. Note that the dominated (local) front is robust and considered
as reference for the performance measures.
times during the optimisation that fewer neighbouring solutions mislead an
algorithm toward non-robust regions. The archive-based mechanism of IRMOPSO
deteriorated on these problems, with cases in which a non-robust solution
entered the archive and was never dominated by robust solutions outside the
archive. In contrast, ERMOPSO, which has an explicit averaging mechanism, never
favoured a non-robust solution. This produced the well-distributed robust front
shown in Fig. 8.2 for ERMOPSO on RMTP15. The robust solutions obtained by
CRMOPSO are very competitive with those found by ERMOPSO, and Tables 8.1 and 8.3
show that CRMOPSO had superior convergence and coverage on RMTP15.
The approximate solutions of the algorithms on RMTP16 were somewhat
different; the coverage of solutions obtained by ERMOPSO was very low and
the CRMOPSO algorithm also found non-robust solutions. Generally speaking,
approximation of a concave front is more challenging, and this phenomenon can
be seen for the RMTP16 test function. The IRMOPSO algorithm again failed
to approximate the robust front, whereas CRMOPSO and ERMOPSO tended
to find robust solutions.
RMTP19 has two fronts with opposite shapes: a convex global front and a concave
robust/local front. The results of the algorithms on this test function show the
difficulty of RMTP19, as none of the algorithms approximates the entire robust
front. In Fig. 8.2, some resolution of the robust front can be observed in the
solutions obtained by both CRMOPSO and ERMOPSO. The CRMOPSO algorithm shows a
slightly greater tendency toward correct selection in this case. Generally
speaking, benchmark functions with different shapes for robust and global optima
are more difficult to solve, because a robust algorithm needs to adapt to a very
different Pareto optimal front with a different level of robustness when
transferring from the global/local Pareto optimal front to the robust Pareto
optimal front(s). Deb asserted this for multi-objective benchmark problems [46],
and it is the reason for the poor performance of all algorithms on RMTP19.
The last two test functions, RMTP24 and RMTP25, have multiple discontinuous
local fronts. They provide the most challenging test cases for the algorithms.
It should be noted that the robustness increases from right to left and from
bottom to top in Fig. 8.3. The results of Tables 8.1, 8.3, and 8.5 and
Fig. 8.3 show that the proposed CRMOPSO algorithm was the best in terms
of favouring robust regions of the fronts. The Pareto optimal solutions obtained
by CRMOPSO for RMTP24 indicate that there is more coverage on the most
Figure 8.3: Robust fronts obtained for RMTP21 to RMTP25 one test case per
row. Note that the worst front is the most robust and considered as reference
for the performance measures.
robust region of the local fronts. However, IRMOPSO and ERMOPSO do not
display such behaviours. The results of algorithms on RMTP25, which has more
discontinuous robust regions, were also consistent with those of RMTP24: the
CRMOPSO algorithm was able to find more robust solutions.
It should be noted that the quantitative results of RMTP26 to RMTP44 (biased
test functions) in Tables 8.1, 8.3, and 8.5, as well as the robust Pareto
optimal fronts in Appendix C, also show that the CRMOPSO algorithm outperforms
IRMOPSO and ERMOPSO on the majority of the biased test functions. These results
are consistent with those of the other test functions and confirm the merits of
the proposed CRMO approach in solving difficult problems.
Overall, CRMOPSO and IRMOPSO show better results on 18 and 16 test functions
respectively, considering the IGD values. As the p-values show, however, the
superiority was statistically significant in 13 cases for CRMOPSO but only 10
cases for IRMOPSO. Another observation is that IRMOPSO shows better results
mostly on unimodal test functions, which is due to the unreliably high
exploitation and convergence speed of this algorithm. The proposed confidence
measure prevents CRMOPSO from converging towards the global front, which is
advantageous on multi-modal test functions.
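The IGD values referred to here measure how far a reference (true robust) front is from the obtained front. A generic sketch of the standard inverted generational distance follows; the exact variant used in Chapter 5 may differ:

```python
import math

def igd(reference_front, obtained_front):
    """Inverted generational distance: the mean distance from each reference
    point to its nearest obtained solution (lower is better)."""
    total = 0.0
    for r in reference_front:
        total += min(math.dist(r, s) for s in obtained_front)
    return total / len(reference_front)

ref = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(igd(ref, ref))           # perfect approximation -> 0.0
print(igd(ref, [(0.0, 1.0)]))  # partial coverage -> larger IGD
```

Because every reference point contributes, IGD penalises both poor convergence and poor coverage of the reference front at once.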
Considering the results of the coverage measure, it may be seen that CRMOPSO
shows statistically better results in 7 out of 9 test functions. However, the
results of IRMOPSO are statistically better in 7 out of 16 test functions. This
is again due to the impact of the confidence measure on the movement of
particles in CRMOPSO. It seems that the reduced movement of particles decreases
the coverage of solutions, but the results show that CRMOPSO is still
competitive compared to IRMOPSO.
The results on the success ratio measure show that CRMOPSO yields statistically
better results in 12 out of 37 functions. IRMOPSO is better in 14 out of 37 test
functions; however, its results were statistically significant in only 9 cases.
This shows that CRMOPSO is slightly more reliable in terms of finding robust
solutions and avoiding non-robust ones.
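The statistical significance claims above rest on the Wilcoxon rank-sum test over the independent runs. A self-contained sketch using the normal approximation is shown below; the thesis presumably used a standard statistics package, so this is illustrative only:

```python
import math

def ranksum_pvalue(a, b):
    """Two-sided Wilcoxon rank-sum test (normal approximation, average ranks
    for ties). Small p-values indicate the two samples likely differ."""
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    values = [v for v, _ in pooled]
    n, rank_sum_a = len(pooled), 0.0
    pos = 0
    while pos < n:                       # walk groups of tied values
        j = pos
        while j < n and values[j] == values[pos]:
            j += 1
        avg_rank = (pos + 1 + j) / 2.0   # average of ranks pos+1 .. j
        rank_sum_a += avg_rank * sum(1 for k in range(pos, j) if pooled[k][1] == 0)
        pos = j
    n1, n2 = len(a), len(b)
    mean = n1 * (n + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n + 1) / 12.0)
    z = (rank_sum_a - mean) / sd
    # Two-sided p-value from the standard normal CDF via erf.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

print(ranksum_pvalue([1, 2, 3, 4, 5], [10, 11, 12, 13, 14]))  # well below 0.05
```

A p-value below 0.05 is the significance threshold used in the tables; the N/A entries mark the best-performing algorithm against which the others are tested.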
To further investigate the merits of the proposed method, Table 8.7 reports the
number of times that a solution dominated an archive member but was not allowed
to enter the archive under the proposed confidence measure on some of the test
functions. This table shows that a significant number of times during
optimisation (approximately 19% of attempts) the proposed confidence-based
Pareto optimality concepts prevented solutions with low confidence from being
added to the archive. Firstly, these results demonstrate that IRMOPSO was often
likely to make unreliable decisions and consequently favour non-robust
solutions. Secondly, the proposed method is able to prevent unreliable decisions
throughout optimisation, providing more reliability than the current
archive-based robustness-handling methods.
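Table 8.7 counts exactly these rejections: a solution that Pareto-dominates an archive member but is refused entry because its robustness estimate is not trusted. A minimal sketch of such an admission rule follows; the threshold and the separate confidence score are illustrative assumptions, not the formulation of Chapter 6:

```python
def dominates(u, v):
    """Standard Pareto dominance for minimisation objectives."""
    return all(x <= y for x, y in zip(u, v)) and any(x < y for x, y in zip(u, v))

def confidently_dominates(u, v, conf_u, threshold=0.5):
    """Illustrative confidence-based dominance: u must both Pareto-dominate v
    and carry enough confidence in its own (sampled) robustness estimate."""
    return dominates(u, v) and conf_u >= threshold

archive_member = (0.4, 0.6)
candidate = (0.3, 0.5)  # better on both objectives
print(dominates(candidate, archive_member))                   # plain dominance holds
print(confidently_dominates(candidate, archive_member, 0.2))  # rejected: low confidence
```

Under this rule, a dominating but poorly sampled solution cannot displace a well-sampled archive member, which is the behaviour counted in Table 8.7.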
In summary, the results show that the convergence of CRMOPSO is slower than that
of IRMOPSO because less confident solutions were ignored over the course of
iterations. Although this prevents CRMOPSO from providing superior results on
uni-modal test functions, it can be very helpful in optimising real problems,
since CRMOPSO has a lower probability of premature convergence toward non-robust
solutions.
Table 8.7: Number of times that the proposed confidence-based Pareto domi-
nance prevented a solution entering the archive
Test function (Pi ≺ Archivem) ∧ (Pi ⊀c Archivem)
RMTP1 18,918 / 100,000
RMTP7 28,210 / 100,000
RMTP9 19,240 / 100,000
RMTP27 17,157 / 100,000
RMTP13 13,938 / 100,000
RMTP14 14,703 / 100,000
RMTP15 15,768 / 100,000
RMTP16 17,054 / 100,000
RMTP24 26,566 / 100,000
RMTP25 17,570 / 100,000
8.3 Summary
This chapter was dedicated to the results and discussion of the proposed CRMO
approach. The CRMOPSO algorithm was employed to solve the proposed chal-
lenging test functions in Chapter 4 and compared to other algorithms in the
literature qualitatively and quantitatively (using the performance measures
proposed in Chapter 5). The results showed that the novel approach proposed is
able to deliver very promising results in terms of convergence, coverage, and
proportion of robust Pareto optimal solutions obtained. The findings
demonstrated the value of the proposed concepts.
The results of this chapter also showed the merits of the proposed challenging
robust multi-objective test problems when comparing different algorithms. Due
to the different characteristics of the proposed test functions, the performance
of the CRMOPSO algorithm was observed and investigated in detail. It is also
worth mentioning that the proposed performance measures allowed different
algorithms to be compared quantitatively in this chapter for the first time in
the literature.
Chapter 9
Real world applications
In the previous two chapters, the merits of the ideas proposed in Chapter 6 have
been investigated systematically. This chapter demonstrates the application of
the proposed CRMOPSO to the design of propellers using a reduced-order model.
The case study is a propeller design problem. In the following sections, this prob-
lem is first solved by a MOPSO algorithm. The results are then analysed mostly
in terms of the effects of uncertainties on structural parameters and operating
conditions. Finally, the CRMOPSO algorithm is employed to determine the
robust Pareto optimal front for this problem.
186 9. Real world applications
propeller and the second half behind the propeller. As the propeller rotates, it
swirls the outflow. The amount of this swirl depends on the rotation speed of
the motor and the energy loss. Efficient propellers lose 1% to 5% of their power
because of swirl. The thrust of a propeller is calculated as follows [26]:
T = \frac{\pi}{4} D^2 \left( v + \frac{\Delta v}{2} \right) \rho \Delta v \qquad (9.1)
where T is thrust, D is the propeller diameter, v is the velocity of the incoming
flow, ∆v is the additional velocity which is created by the propeller, and ρ is the
density of the fluid.
It may be seen in Equation 9.1 that the final thrust depends on the volume
of the incoming stream which has been accelerated per unit of time, the amount
of this acceleration, and the density of the medium.
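Equation 9.1 is straightforward to evaluate. The sketch below uses illustrative figures for a 2 m propeller in seawater; the numbers are not taken from the thesis:

```python
import math

def thrust(D, v, dv, rho):
    """Propeller thrust from Equation 9.1 (momentum theory): disk area
    (pi/4)*D^2 times the mid-disk velocity (v + dv/2) times rho*dv."""
    return (math.pi / 4.0) * D**2 * (v + dv / 2.0) * rho * dv

# Illustrative values only: 2 m propeller, seawater (rho ~ 1025 kg/m^3),
# 5 m/s inflow, 2 m/s velocity added by the propeller.
print(thrust(D=2.0, v=5.0, dv=2.0, rho=1025.0))  # thrust in newtons
```

As the text notes, thrust grows with the accelerated mass flow, the amount of acceleration, and the density of the medium, which the formula makes explicit.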
Power is defined as force times distance per time. The required power to
drive a vehicle with a velocity of v using the available thrust is calculated as
follows:
P_a = T v \qquad (9.2)

\eta = \frac{P_a}{P_{engine}} = \frac{T v}{P_{engine}} \qquad (9.3)

\eta(x) = \frac{J K_T(x)}{2\pi K_Q(x)} \qquad (9.4)

J = \frac{V_a}{n D} \qquad (9.5)
where Va is the axial velocity, n is rotational velocity, and D is the diameter of
the propeller.
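Equations 9.4 and 9.5 combine directly: given the advance ratio and the thrust and torque coefficients, the open-water efficiency follows. A sketch with illustrative coefficient values, not thesis data:

```python
import math

def advance_ratio(Va, n, D):
    """J = Va / (n * D), Equation 9.5 (n in revolutions per second)."""
    return Va / (n * D)

def open_water_efficiency(J, KT, KQ):
    """eta = J * KT / (2 * pi * KQ), Equation 9.4."""
    return J * KT / (2.0 * math.pi * KQ)

# Illustrative numbers only: 5 m/s axial velocity, 3 rev/s, 2 m diameter,
# plausible thrust/torque coefficients.
J = advance_ratio(Va=5.0, n=3.0, D=2.0)  # about 0.83
print(open_water_efficiency(J, KT=0.18, KQ=0.03))
```

The result is a dimensionless efficiency between 0 and 1, consistent with Equation 9.3.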
K_Q = \sum_{n=1}^{47} C_{Qn} (J)^{s_n} \left( \frac{P}{D} \right)^{t_n} \left( \frac{A_e}{A_o} \right)^{u_n} (Z)^{v_n} \qquad (9.8)
where P/D is the pitch ratio, A_e/A_o is the disk ratio of the propeller, Z is
the number of blades, and C_{Tn}, C_{Qn}, s_n, t_n, u_n, and v_n are the
corresponding regression coefficients.
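With a coefficient table in hand, Equation 9.8 is a plain sum over its 47 terms. The sketch below evaluates a polynomial of this form; the two-term coefficient table is a made-up placeholder, not the actual regression coefficients:

```python
def kq(J, pitch_ratio, disk_ratio, Z, coeffs):
    """Evaluate a regression polynomial of the form in Equation 9.8.
    coeffs is a list of (C, s, t, u, v) tuples, one per term."""
    return sum(C * J**s * pitch_ratio**t * disk_ratio**u * Z**v
               for C, s, t, u, v in coeffs)

# Placeholder two-term table (illustrative only; the real series has 47 terms):
demo_coeffs = [(0.00379, 0, 0, 0, 0), (0.00886, 2, 0, 1, 0)]
print(kq(J=0.8, pitch_ratio=1.0, disk_ratio=0.55, Z=5, coeffs=demo_coeffs))
```

K_T is evaluated the same way with its own coefficient table, after which Equation 9.4 gives the efficiency.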
Another issue in propellers is cavitation. When the blades of a propeller move
through water at high speed, low pressure regions form as the water accelerates
past the blades. This can cause bubbles to form, whose collapse produces strong
local shockwaves that erode the propeller. The sensitivity of the propeller to
cavitation is calculated as follows:
\sigma_{n,0.8} = \frac{p_a + \rho g h_{0.8} - p_v}{0.5 \rho (\pi n D)^2} \qquad (9.9)
where p_a is the atmospheric pressure, p_v is the vapour pressure of water, g is
the acceleration due to gravity, and h_{0.8} is the immersion depth at 0.8 of
the blade radius when the blade is at the 12 o'clock position.
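Equation 9.9 can be evaluated directly. The sketch below uses illustrative values (atmospheric pressure, seawater density, 2 m immersion) rather than figures from the thesis:

```python
import math

def cavitation_number(pa, rho, g, h08, pv, n, D):
    """sigma_{n,0.8} from Equation 9.9: the static pressure margin over the
    vapour pressure, normalised by the dynamic pressure at the blade."""
    return (pa + rho * g * h08 - pv) / (0.5 * rho * (math.pi * n * D) ** 2)

# Illustrative values only: standard atmosphere, seawater, 2 m immersion at
# 0.8 blade radius, 3 rev/s rotation, 2 m diameter.
print(cavitation_number(pa=101325.0, rho=1025.0, g=9.81, h08=2.0,
                        pv=2340.0, n=3.0, D=2.0))
```

A lower cavitation number means less pressure margin before vapour bubbles form, so designs with higher sigma are less sensitive to cavitation.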
The ultimate goal here is to design a propeller with the highest efficiency and
the lowest cavitation sensitivity.
In order to find the final geometrical shape of the blade, standard National
Advisory Committee for Aeronautics (NACA) airfoils are selected as shown in
Fig. 9.1. It may be seen in this figure that two parameters define the shape of
the airfoil: maximum thickness and chord length. In this chapter ten airfoils are
considered along the blade, so the total number of parameters is 20.
The final parameter vector is as follows:
\vec{X} = (T_1, C_1, T_2, C_2, \ldots, T_{10}, C_{10}) \qquad (9.10)
Figure 9.1: Airfoils along the blade define the shape of the propeller (NACA
a = 0.8 meanline and NACA 65A010 thickness)
where T_i and C_i indicate the thickness and chord length of the i-th airfoil
along the blade.
Finally, the problem can be formulated as follows:
Suppose: \vec{X} = (T_i, C_i), \quad i = 1, 2, \ldots, 10 \qquad (9.11)

Maximise: \eta(\vec{X}) \qquad (9.12)

Minimise: V_c(\vec{X}) \qquad (9.13)
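In the minimisation form most optimisers expect, the formulation in Equations 9.11 to 9.13 can be sketched as below; eta and vc stand in for the reduced-order model evaluations and are hypothetical callables, not part of the thesis code:

```python
def objectives(x, eta, vc):
    """Map the 20-dimensional design vector x = (T1, C1, ..., T10, C10) to a
    minimisation pair: maximising efficiency eta becomes minimising -eta."""
    assert len(x) == 20, "ten (thickness, chord) airfoil pairs"
    return (-eta(x), vc(x))

# Toy stand-ins for the real reduced-order model (illustrative only):
toy_eta = lambda x: sum(x) / len(x)
toy_vc = lambda x: max(x) - min(x)
print(objectives([0.5] * 20, toy_eta, toy_vc))  # -> (-0.5, 0.0)
```

Negating the efficiency is the usual trick for feeding a mixed maximise/minimise problem to a pure minimisation multi-objective optimiser such as MOPSO.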
2 meters
2. Observing the effect of the number of blades on the efficiency and cavitation
of the propeller
6. Post analysis of the result to extract the possible physical behaviour and
impacts of the parameters on the efficiency and cavitation of the propeller
The following subsections present and discuss the results for each of these
experiments.
Figure 9.3: (left) Pareto optimal front obtained by the MOPSO algorithm (6
blades), (right) Pareto optimal fronts for different numbers of blades
gorithm was able to find a set of highly distributed estimations for the Pareto
optimal solutions across both objectives. The search history of points sampled
by particles during optimisation is illustrated by black points. The search history
also shows that the MOPSO algorithm explored and exploited the search space
efficiently, which results in obtaining this highly distributed and accurately con-
verged estimation for the Pareto optimal front. The accurate convergence of the
solutions obtained is due to the intrinsic high exploitation of the MOPSO
algorithm around the selected leaders, gBest and pBests, in each iteration. The
uniform distribution originates from the leader selection mechanism of MOPSO:
since particle guides were selected from the less populated parts of the
archive, there was always a high tendency toward finding Pareto optimal
solutions along the sparser regions of the Pareto optimal front.
1. Finding the Pareto optimal front for the propeller at RPM increments of
10.
2. Parametrising the RPM and finding the optimal front for it using MOPSO.
The MOPSO algorithm was employed to find the Pareto optimal front for the
propeller at each of 11 RPM values varying from 150 to 250. The algorithm was
run 4 times on each case, and the best Pareto optimal fronts obtained are
illustrated in Fig. 9.4, at left. This figure first shows that there is no
feasible Pareto optimal solution when RPM = 150, 160, or 250. For the remaining
RPMs, it may be observed that increasing RPM generally results in decreasing
efficiency and increasing cavitation. Although increasing the RPM seems to
increase the thrust, these results show that high RPM is not very effective and
risks increased damage to the propeller in long-term use due to the high
cavitation. The peak of high efficiency and low cavitation occurred between
RPM = 170 and RPM = 180. Therefore, such RPM rates can be recommended when using
a 5-blade version of the ship propeller investigated.
Figure 9.4: (left) Best Pareto optimal fronts obtained for different RPM values,
(right) optimal RPM
To find the optimal values for the RPM, this operating condition was
parametrised and optimised by MOPSO as well. The number of parameters increases
to 21 when considering RPM as a parameter, but the same number of particles and
iterations was used to approximate the Pareto optimal front. The best Pareto
optimal front is illustrated in Fig. 9.4, at right.
The estimated Pareto optimal front shows that the approximations of Pareto
optimal solutions mostly tend toward the best Pareto optimal front found for
RPM = 170. Almost 20% of the solutions are distributed between the Pareto
optimal fronts for RPM = 170 and RPM = 180. The search history of the MOPSO
algorithm is also illustrated in Fig. 9.4 to verify that all of the Pareto
fronts obtained in the previous experiment have been explored. The search
history clearly illustrates that the fronts have been found by MOPSO, but all of
them are dominated by the Pareto optimal front for RPM = 170 and the solutions
between RPM = 170 and RPM = 180. This can be seen in Fig. 9.5.
Figure 9.5: PF obtained when varying RPM compared to PFs obtained with
different RPM values
Figure 9.6: Parallel coordinates of the Pareto optimal solutions: normalised
values of parameters P1–P20 and RPM against the efficiency and cavitation
objectives
Since the Pareto optimal front obtained by MOPSO contains the best trade-
offs between cavitation and efficiency, some of the characteristics and physical
rules applied to the propeller can possibly be inferred. One of the best tools for
identifying and observing such behaviours is parallel coordinates (Fig. 9.6).
The first pattern that can be seen in the parallel coordinates is the relatively
uniform distribution of solutions over P1–P6. These first three pairs of
parameters define the shape of the first three airfoils starting from the shaft
of the propeller. This shows that the first three airfoils do not play very
important roles in determining the final efficiency and cavitation. In contrast,
parameters P7–P10 are not distributed uniformly across the vertical lines, which
shows that the shape of the fourth and fifth airfoils is very critical for
designing the propeller. This is also consistent with the fact that the middle
part of a blade has the greatest width and is consequently significantly
involved in generating thrust and cavitation. Although similar behaviour can be
seen in the rest of the parameters, the spread of P7–P10 is much smaller than
that of the others, showing again the importance of these structural parameters.
Other features evident in Fig. 9.6 also bear further investigation, a topic for
future work.
Figure 9.7: Pareto optimal solutions in the case of (left) δRPM = +1 and (right)
δRPM = −1 fluctuations in RPM. Original values are shown in blue, perturbed
results in red.
As Fig. 9.7 (left) shows, the efficiencies of all the Pareto optimal solutions
obtained decrease when δRPM = +1 perturbations occur. The cavitation of the
Pareto optimal solutions also increases. A similar behaviour of the efficiency
can be observed in Fig. 9.7 (right): the efficiencies of the Pareto optimal
solutions decrease when δRPM = −1. However, the cavitation decreases, which is
obviously due to the lower RPM. These results show that perturbations in RPM can
have significant negative impacts on the expected and desired efficiencies. The
cavitation can also vary substantially with
uncertainties in RPM.
The trend is similar to the results of the preceding subsection, in that the
uncertainties in parameters also degrade the expected efficiency significantly. In
addition, the results show that the cavitation can vary dramatically in case of
uncertainties in parameters.
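The perturbation analyses above amount to re-evaluating each design under small deviations of RPM (or of the structural parameters) and recording how far the objectives drift from their nominal values. A minimal sketch, with `evaluate` standing in for the hydrodynamic simulation (a hypothetical placeholder, not part of the thesis code):

```python
import statistics
from typing import Callable, Sequence

def discrepancy_stats(evaluate: Callable[[float], float],
                      rpm: float,
                      deltas: Sequence[float]):
    """Summarise how far the objective drifts from its nominal value when
    the RPM is perturbed by each delta: returns (average, minimum, maximum)."""
    nominal = evaluate(rpm)
    diffs = [abs(evaluate(rpm + d) - nominal) for d in deltas]
    return statistics.mean(diffs), min(diffs), max(diffs)
```

The same pattern applies to structural parameters by perturbing each coordinate of the design vector instead of the RPM.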
In summary, these results strongly show the remarkably negative impacts
of perturbations on the performance of marine propellers, and emphasise the
need to account for such uncertainties when optimising propeller designs.
Figure 9.9: Robust front obtained by CRMOPSO versus global front obtained
by MOPSO (-cavitation against efficiency)
This table shows that the discrepancy of fuel consumption is much lower
for the designs obtained by the CRMOPSO algorithm. The average, minimum,
and maximum of the excessive fuel consumption are almost halved for the
solutions obtained by the CRMOPSO algorithm. Due to the difficulty of the propeller
design problem and importance of uncertainties, these results strongly evidence
the merits of the proposed confidence-based robust optimisation. It is worth
highlighting here that the robust Pareto optimal solutions are obtained without
even a single extra function evaluation.
To further investigate the effectiveness of the proposed CRMOPSO, this al-
gorithm is employed to find the robust optimal values for RPM as well. In the
previous subsections, the global Pareto optimal front for this problem was found
by parameterising the RPM. An equal number of particles and iterations are
employed to determine the robust front and robust optimal values for RPM. The
results are illustrated in Fig. 9.10. This figure also shows the Pareto optimal
front obtained by the MOPSO algorithm for comparison.
Figure 9.10: Global and robust Pareto optimal fronts obtained by MOPSO and
CRMOPSO when RPM is also a variable
As may be seen in Fig. 9.10, the major part of the robust Pareto front is
dominated by the Pareto optimal front. The distributions of both fronts are
almost identical, and a small portion of the robust front overlaps with the global
front. Therefore, the robust front is of type C, which means that a part of the
Pareto front is robust, but there are other robust solutions as well. To observe
the range of RPMs in both fronts, Fig. 9.11 is provided.
This figure shows that the ranges of RPM obtained by the two algorithms are
evidently different. It may be seen that the RPM values tend to be higher in
the Pareto optimal solutions obtained by CRMOPSO. Another interesting
pattern is that half of the Pareto optimal solutions obtained by MOPSO have
an RPM of 170. However, the results of CRMOPSO in Fig. 9.11 show that 170
is not a robust RPM, since only a few of the solutions have this value for their
RPMs. The range of RPMs in CRMOPSO is wider than that of MOPSO. This
is due to the intrinsically higher exploration of CRMOPSO. Since fewer
non-dominated solutions
Figure 9.11: Optimal and robust optimal values for RPM (note that there are
98 robust Pareto optimal solutions and 100 global optimal solutions)
are allowed to enter the archive, there is less exploitation and more exploration
than in the normal MOPSO, which results in finding a wider range of RPMs.
To observe the impacts of the perturbation on the Pareto optimal solutions
obtained by both algorithms, Table 9.2 is provided. Note that only the RPM
is perturbed in this experiment; the other parameters are fixed.
This table shows that the average, maximum, and minimum fuel consumption
discrepancies of the robust Pareto optimal solutions obtained by the CRMOPSO
algorithm are better than those of MOPSO. These results again strongly prove the
merits of the proposed confidence-based robust optimisation approach in finding
robust solutions that are less sensitive to perturbations in parameters.
9.4 Summary
In this chapter, the shapes of several ship propellers were optimised considering
two objectives: efficiency and cavitation. MOPSO was first employed to find
the best approximation of the true Pareto optimal front for the propeller. It
was observed that the MOPSO algorithm showed very good convergence and
was able to find a uniformly distributed Pareto optimal front. The MOPSO
algorithm was then employed to undertake several experiments, investigating
the effect of the number of blades, RPM, and uncertainties in manufacturing
and operating parameters. The results of MOPSO were also analysed to identify
the possible physical behaviour of the propeller.
The results showed that the best efficiency and cavitation can be achieved by
having five or six blades, since any other number of blades significantly degrades
one of the objectives. The best RPM for the propeller was also found by the
MOPSO algorithm. It was observed that the best Pareto optimal front can be
obtained when the propeller is operating at RPM = 170 to 180. However, the
results of the impact of uncertainties on RPM show that the optimal RPM is very
sensitive to perturbation: efficiency and cavitation can be degraded significantly
by a small amount of uncertainty. Simulation of manufacturing perturbations
also revealed that both of the objectives for the Pareto optimal solutions obtained
also vary dramatically. In addition, post analysis of the results showed that the
most important parameters of the propeller are the maximum thickness and
chord length of the fourth and fifth NACA airfoils along the blade.
The results of CRMOPSO on the propeller design problem showed that this
algorithm is able to find robust optimal values for parameters and RPM that
are not sensitive to perturbations. It was observed that the fuel consumption
discrepancy is much less for robust designs obtained by CRMOPSO in case of
perturbations in parameters and RPM. Since the propeller design problem is a
challenging real problem with a large number of constraints, this chapter strongly
demonstrates and supports the practicality of the proposed confidence-based
robust multi-objective optimisation approach in finding robust solutions for real
problems.
Chapter 10
Conclusion
10. Conclusion 203
[Figure: overview of the contributions of the thesis: robust performance
metric design; robust algorithm design; proposal of the confidence measure;
establishment of confidence-based robust optimisation and of confidence-based
robust multi-objective optimisation; proposal of CRGA, CRPSO, and CRMOPSO;
and finding robust designs for marine propellers reliably without extra function
evaluations.]
The confidence-based Pareto dominance operator was then incorporated into
MOPSO to propose CRMOPSO, a confidence-based robust MOPSO. In this
algorithm the particles updated their positions normally, but they were added
to the archive if and only if they confidently dominated one of the archive
members or were confidently non-dominated compared to the archive members.
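The archive update rule can be sketched as follows. The confidence test is abstracted behind a `confident` predicate, since the exact confidence measure depends on the sampling scheme defined earlier in the thesis; everything else is standard Pareto dominance for minimisation. This is an illustrative sketch, not the thesis implementation.

```python
from typing import Callable, List, Sequence

Objectives = Sequence[float]

def dominates(a: Objectives, b: Objectives) -> bool:
    """Standard Pareto dominance for minimisation: a is no worse everywhere
    and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive: List[Objectives],
                   candidate: Objectives,
                   confident: Callable[[Objectives, Objectives], bool]) -> List[Objectives]:
    """Admit `candidate` only if it confidently dominates an archive member,
    or is confidently non-dominated with respect to every member."""
    beats_someone = any(dominates(candidate, m) and confident(candidate, m)
                        for m in archive)
    non_dominated = all(not dominates(m, candidate) for m in archive)
    if beats_someone or (non_dominated and all(confident(candidate, m) for m in archive)):
        # drop members the candidate dominates, then insert it
        archive = [m for m in archive if not dominates(candidate, m)]
        archive.append(candidate)
    return archive
```

With `confident` always returning True, the rule degenerates to the usual archive update of MOPSO; stricter predicates admit fewer solutions into the archive.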
The shape of a ship propeller was optimised by the proposed approach to find
its robust designs. The thesis first considered the investigation of this problem
in terms of the shape of Pareto optimal front, effect of number of blades, effect
of RPM, and negative impacts of uncertainties on efficiency and cavitation. The
proposed CRMOPSO was then employed to find the robust designs for the case
study.
The results of the proposed CRPSO and CRGA on the proposed test func-
tions first revealed the merits of the proposed confidence measure. It was ob-
served that the confidence measure prevents the algorithms from favouring non-
confident solutions and making risky decisions during optimisation. The results
showed that the confidence-based algorithms were able to outperform other ro-
bustness handling techniques in the literature. The results also proved the merits
of the proposed test functions, in that they provide very challenging test beds
and allow benchmarking the performance of different algorithms effectively.
The results of the proposed CRMOPSO proved that the confidence measure
and confidence-based robust multi-objective optimisation approach can provide
very promising results. It was observed that confidence-based Pareto domi-
nance prevents non-confident particles from entering the archive. This assisted
CRMOPSO to always have confident solutions in the archive as the leaders to
guide other particles toward robust regions of the search space. The qualita-
tive and quantitative results indicated that CRMOPSO was able to outperform
other robust algorithms in the literature. In addition, the comparative results
of algorithms on the proposed benchmark functions showed that the proposed
robust multi-objective test functions provide very challenging test beds with di-
verse characteristics and allow designers to benchmark and compare different
algorithms efficiently.
The results of the MOPSO algorithm on the real case study first showed
the optimal values for structural parameters and operating conditions. The
importance of uncertainties in the parameters was also revealed, in which both
efficiency and cavitation fluctuated noticeably. The results of CRMOPSO on
the case study proved that this algorithm is able to find robust solutions for
• The proposed test functions provide very challenging test beds for
benchmarking the performance of robust algorithms.
[Figure: conceptual map of the thesis: the confidence measure yields
confidence-based relational operators and confidence-based Pareto optimality;
building on implicit and explicit methods, benchmark problems, and performance
metrics from single-objective and multi-objective robust optimisation, these lead
to the confidence-based robust algorithms CRPSO and CRGA, the confidence-based
robust multi-objective algorithm CRMOPSO, and a real application (propeller
efficiency, thrust, and the impacts of uncertainties).]
• Due to the lack of standard test functions in the literature of robust
single-objective optimisation, the thesis also considered the collection and
investigation of the current robust test functions and the proposal of more
challenging ones.
• Several test functions were proposed, which can be considered as the first
standard set of test functions in the literature of single-objective robust
optimisation.
• Two standard test suites were proposed for the first time, to be used by
other researchers.
• With the proposal of the confidence measure, the relational operators and
Pareto dominance have been re-defined to confidently compare candidate
solutions in search spaces with single and multiple objectives respectively.
The proposed operators were also a fresh contribution to the literature.
• The most significant contribution and achievement of this research was the
proposal of two novel robust optimisation approaches: confidence-based ro-
bust optimisation and confidence-based robust multi-objective optimisation.
These two approaches established two new research branches in the fields
of robust single-objective and multi-objective optimisation. The experi-
mental results presented in the thesis proved that these two concepts are
able to find robust solutions with a very high level of reliability without
additional function evaluations.
• Two novel robust algorithms based on PSO and GA were proposed, which
can be considered the first two confidence-based robust algorithms in the
field of single-objective optimisation.
• Robust optimal values for propeller design parameters and RPM were
found by the proposed CRMOPSO for the first time.
[2] Hussein A Abbass, Ruhul Sarker, and Charles Newton. Pde: a pareto-
frontier differential evolution approach for multi-objective optimization
problems. In Evolutionary Computation, 2001. Proceedings of the 2001
Congress on, volume 2, pages 971–978. IEEE, 2001.
[5] M. Asafuddoula, H.K. Singh, and T. Ray. Six-sigma robust design opti-
mization using a many-objective decomposition-based evolutionary algo-
rithm. Evolutionary Computation, IEEE Transactions on, 19(4):490–507,
Aug 2015.
212 BIBLIOGRAPHY
[9] Thomas Bäck, Ulrich Hammel, and Dirk Wiesmann. Robust design of
multilayer optical coatings by means of evolution strategies. HT014601767,
1998.
[15] Leonora Bianchi, Marco Dorigo, Luca Maria Gambardella, and Walter J
Gutjahr. A survey on metaheuristics for stochastic combinatorial optimiza-
tion. Natural Computing: an international journal, 8(2):239–287, 2009.
[16] Christian Blum, Jakob Puchinger, Günther R Raidl, and Andrea Roli.
Hybrid metaheuristics in combinatorial optimization: A survey. Applied
Soft Computing, 11(6):4135–4151, 2011.
[17] Ilhem Boussaı̈d, Julien Lepagnot, and Patrick Siarry. A survey on opti-
mization metaheuristics. Information Sciences, 237:82–117, 2013.
[20] Jürgen Branke. Reducing the sampling variance when searching for robust
solutions. In Genetic and Evolutionary Computation Conference (GECCO
2001), pages 235–242, 2001.
[21] Jürgen Branke and Kalyanmoy Deb. Integrating user preferences into
evolutionary multi-objective optimization. In Knowledge incorporation in
evolutionary computation, pages 461–477. Springer, 2005.
[22] Jürgen Branke, Thomas Kaußler, and Harmut Schmeck. Guidance in evo-
lutionary multi-objective optimization. Advances in Engineering Software,
32(6):499–507, 2001.
[28] C.A.C. Coello, G.T. Pulido, and M.S. Lechuga. Handling multiple objec-
tives with particle swarm optimization. Evolutionary Computation, IEEE
Transactions on, 8(3):256–279, 2004.
[31] Carlos A Coello Coello, David A Van Veldhuizen, and Gary B Lamont.
Evolutionary algorithms for solving multi-objective problems, volume 242.
Springer, 2002.
[33] Carlos A Coello Coello and Maximino Salazar Lechuga. Mopso: A proposal
for multiple objective particle swarm optimization. In Evolutionary Com-
putation, 2002. CEC’02. Proceedings of the 2002 Congress on, volume 2,
pages 1051–1056. IEEE, 2002.
[34] Yann Collette and Patrick Siarry. Three new metrics to measure the con-
vergence of metaheuristics towards the pareto frontier and the aesthetic
of a set of solutions in biobjective optimization. Computers & operations
research, 32(4):773–792, 2005.
[36] Gérard Cornuéjols. Valid inequalities for mixed integer linear programs.
Mathematical Programming, 112(1):3–44, 2008.
[37] Carlos Cruz, Juan R González, and David A Pelta. Optimization in dy-
namic environments: a survey on problems, methods and measures. Soft
Computing, 15(7):1427–1448, 2011.
[41] Lawrence Davis. Bit-climbing, representational bias, and test suite design.
In ICGA, pages 18–23, 1991.
[45] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist mul-
tiobjective genetic algorithm: Nsga-ii. Evolutionary Computation, IEEE
Transactions on, 6(2):182–197, 2002.
[47] Kalyanmoy Deb, Shubham Gupta, David Daum, Jürgen Branke, Ab-
hishek Kumar Mall, and Dhanesh Padmanabhan. Reliability-based opti-
mization using evolutionary algorithms. Evolutionary Computation, IEEE
Transactions on, 13(5):1054–1074, 2009.
[48] Kalyanmoy Deb, Jeffrey Horn, and David E Goldberg. Multimodal decep-
tive functions. Complex Systems, 7(2):131–154, 1993.
[52] Kalyanmoy Deb, Amrit Pratap, and T Meyarivan. Constrained test prob-
lems for multi-objective evolutionary optimization. In Evolutionary Multi-
Criterion Optimization, pages 284–298. Springer, 2001.
[53] Kalyanmoy Deb, Ankur Sinha, Pekka J Korhonen, and Jyrki Wallenius.
An interactive evolutionary multiobjective optimization method based on
progressively approximated value functions. Evolutionary Computation,
IEEE Transactions on, 14(5):723–739, 2010.
[54] Kalyanmoy Deb, Ankur Sinha, and Saku Kukkonen. Multi-objective test
problems, linkages, and evolutionary methodologies. In Proceedings of the
8th annual conference on Genetic and evolutionary computation, pages
1141–1148. ACM, 2006.
[55] Kalyanmoy Deb, Lothar Thiele, Marco Laumanns, and Eckart Zitzler.
Scalable multi-objective optimization test problems. In Proceedings of
the Congress on Evolutionary Computation (CEC-2002),(Honolulu, USA),
pages 825–830. Proceedings of the Congress on Evolutionary Computation
(CEC-2002),(Honolulu, USA), 2002.
[56] Kalyanmoy Deb, Lothar Thiele, Marco Laumanns, and Eckart Zit-
zler. Scalable test problems for evolutionary multiobjective optimization.
Springer, 2005.
[58] C.E.J. Dippel. Using particle swarm optimization for finding robust op-
tima.
[59] Michael Dodson and Geoffrey T Parks. Robust aerodynamic design opti-
mization using polynomial chaos. Journal of Aircraft, 46(2):635–646, 2009.
[60] Russ C Eberhart and James Kennedy. A new optimizer using particle
swarm theory. In Proceedings of the sixth international symposium on
micro machine and human science, volume 1, pages 39–43. New York,
NY, 1995.
[65] Brenden Epps, Julie Chalfant, Richard Kimball, Alexandra Techet, Kevin
Flood, and Chryssostomos Chryssostomidis. Openprop: an open-source
parametric design and analysis tool for propellers. In Proceedings of the
2009 Grand Challenges in Modeling & Simulation Conference, pages 104–
111. Society for Modeling & Simulation International, 2009.
[67] Marco Farina, Kalyanmoy Deb, and Paolo Amato. Dynamic multiobjective
optimization problems: Test cases, approximation, and applications. In
Evolutionary Multi-Criterion Optimization, pages 311–326. Springer, 2003.
[68] Marco Farina, Kalyanmoy Deb, and Paolo Amato. Dynamic multiobjec-
tive optimization problems: test cases, approximations, and applications.
Evolutionary Computation, IEEE Transactions on, 8(5):425–442, 2004.
[70] Lawrence J Fogel, Alvin J Owens, and Michael J Walsh. Artificial intelli-
gence through simulated evolution. 1966.
[72] Carlos M Fonseca, Peter J Fleming, et al. Genetic algorithms for multiob-
jective optimization: Formulationdiscussion and generalization. In ICGA,
volume 93, pages 416–423, 1993.
[73] Carlos M Fonseca, Joshua D Knowles, Lothar Thiele, and Eckart Zitzler.
A tutorial on the performance assessment of stochastic multiobjective opti-
mizers. In Third International Conference on Evolutionary Multi-Criterion
Optimization (EMO 2005), volume 216, page 240, 2005.
[76] Tiziano Ghisu, Geoffrey T Parks, Jerome P Jarrett, and P John Clarkson.
Robust design optimization of gas turbine compression systems. Journal
of Propulsion and Power, 27(2):282–295, 2011.
[78] Fred Glover. Tabu search-part ii. ORSA Journal on computing, 2(1):4–32,
1990.
[79] Anupriya Gogna and Akash Tayal. Metaheuristics: review and application.
Journal of Experimental & Theoretical Artificial Intelligence, 25(4):503–
526, 2013.
[80] Chi-Keong Goh and Kay Chen Tan. Robust evolutionary multi-objective
optimization. In Evolutionary Multi-objective Optimization in Uncertain
Environments, pages 189–211. Springer, 2009.
[81] Chi Keong Goh, Kay Chen Tan, Chun Yew Cheong, and Yew-Soon Ong.
An investigation on noise-induced features in robust evolutionary multi-
objective optimization. Expert Systems with Applications, 37(8):5960–
5980, 2010.
[82] D.E. Goldberg and J.H. Holland. Genetic algorithms and machine learning.
Machine Learning, 3(2):95–99, 1988.
[84] Michael Pilegaard Hansen and Andrzej Jaszkiewicz. Evaluating the quality
of approximations to the non-dominated set. IMM, Department of Math-
ematical Modelling, Technical Universityof Denmark, 1998.
[85] Marde Helbig and Andries P Engelbrecht. Benchmarks for dynamic multi-
objective optimisation. In Computational Intelligence in Dynamic and
Uncertain Environments (CIDUE), 2013 IEEE Symposium on, pages 84–
91. IEEE, 2013.
[86] Marde Helbig and Andries P Engelbrecht. Issues with performance mea-
sures for dynamic multi-objective optimisation. In Computational Intel-
ligence in Dynamic and Uncertain Environments (CIDUE), 2013 IEEE
Symposium on, pages 17–24. IEEE, 2013.
[90] John H Holland and Judith S Reitman. Cognitive systems based on adap-
tive algorithms. ACM SIGART Bulletin, (63):49–49, 1977.
[91] Holger H Hoos and Thomas Stützle. Stochastic local search: Foundations
& applications. Elsevier, 2004.
[92] Simon Huband, Philip Hingston, Luigi Barone, and Lyndon While. A
review of multiobjective test problems and a scalable test problem toolkit.
Evolutionary Computation, IEEE Transactions on, 10(5):477–506, 2006.
[93] Jonas Ide and Anita Schöbel. Robustness for uncertain multi-objective
optimization: a survey and analysis of different concepts. OR Spectrum,
38(1):235–271, 2016.
[96] Yaochu Jin, Markus Olhofer, and Bernhard Sendhoff. A framework for
evolutionary optimization with approximate fitness functions. Evolution-
ary Computation, IEEE Transactions on, 6(5):481–494, 2002.
[99] I.Y. Kim and OL De Weck. Adaptive weighted-sum method for bi-objective
optimization: Pareto front generation. Structural and Multidisciplinary
Optimization, 29(2):149–158, 2005.
[103] Jurgen Klockgether and Hans-Paul Schwefel. Two-phase nozzle and hollow
core jet experiments. In Proc. 11th Symp. Engineering Aspects of Magne-
tohydrodynamics, pages 141–148. Pasadena, CA: California Institute of
Technology, 1970.
[104] J.D. Knowles and D.W. Corne. Approximating the nondominated front
using the pareto archived evolution strategy. Evolutionary computation,
8(2):149–172, 2000.
[107] Apurva Kumar, Andy J Keane, Prasanth B Nair, and Shahrokh Shah-
par. Robust design of compressor fan blades against erosion. Journal of
Mechanical Design, 128(4):864–873, 2006.
[109] Ailsa H Land and Alison G Doig. An automatic method for solving discrete
programming problems. In 50 Years of Integer Programming 1958-2008,
pages 105–132. Springer, 2010.
[113] C Li, S Yang, TT Nguyen, EL Yu, X Yao, Y Jin, HG Beyer, and PN Sug-
anthan. Benchmark generator for cec 2009 competition on dynamic op-
timization. University of Leicester, University of Birmingham, Nanyang
Technological University, Tech. Rep, 2008.
[115] Lingbo Li, Mark Harman, Emmanuel Letier, and Yuanyuan Zhang. Robust
next release problem: handling uncertainty during optimization. In Pro-
ceedings of the 2014 conference on Genetic and evolutionary computation,
pages 1247–1254. ACM, 2014.
[116] JJ Liang, PN Suganthan, and K Deb. Novel composition test functions for
numerical global optimization. In Swarm Intelligence Symposium, 2005.
SIS 2005. Proceedings 2005 IEEE, pages 68–75. IEEE, 2005.
[117] Helena R Lourenço, Olivier C Martin, and Thomas Stützle. Iterated local
search. Springer, 2003.
[120] Jörn Mehnen, Günter Rudolph, and Tobias Wagner. Evolutionary opti-
mization of dynamic multiobjective functions. 2006.
[122] Marcin Molga and Czeslaw Smutnicki. Test functions for optimization
needs. Test functions for optimization needs, 2005.
[124] Tadahiko Murata and Hisao Ishibuchi. Moga: Multi-objective genetic al-
gorithms. In Evolutionary Computation, 1995., IEEE International Con-
ference on, volume 1, page 289. IEEE, 1995.
[125] Patrick Ngatchou, Anahita Zarei, and MA El-Sharkawi. Pareto multi ob-
jective optimization. In Intelligent Systems Application to Power Systems,
2005. Proceedings of the 13th International Conference on, pages 84–91.
IEEE, 2005.
[126] Tatsuya Okabe, Yaochu Jin, Markus Olhofer, and Bernhard Sendhoff. On
test functions for evolutionary multi-objective optimization. In Parallel
Problem Solving from Nature-PPSN VIII, pages 792–802. Springer, 2004.
[127] Yew-Soon Ong, Prasanth B Nair, and Kai Yew Lum. Max-min surrogate-
assisted evolutionary algorithm for robust design. Evolutionary Computa-
tion, IEEE Transactions on, 10(4):392–404, 2006.
[128] Andrzej Osyczka and Sourav Kundu. A new method to solve generalized
multicriteria optimization problems using the simple genetic algorithm.
Structural optimization, 10(2):94–99, 1995.
[129] Ingo Paenke, Jürgen Branke, and Yaochu Jin. Efficient search for robust
solutions by means of evolutionary algorithms and fitness approximation.
Evolutionary Computation, IEEE Transactions on, 10(4):405–420, 2006.
[137] Tapabrata Ray and Warren Smith. A surrogate assisted parallel multiob-
jective evolutionary algorithm for robust engineering design. Engineering
Optimization, 38(8):997–1011, 2006.
[144] Jason R Schott. Fault tolerant design using single and multicriteria genetic
algorithm optimization. Technical report, DTIC Document, 1995.
[145] Ian Scriven, David Ireland, Andrew Lewis, Sanaz Mostaghim, and Jürgen
Branke. Asynchronous multiple objective particle swarm optimisation in
unreliable distributed environments. In Evolutionary Computation, 2008.
CEC 2008.(IEEE World Congress on Computational Intelligence). IEEE
Congress on, pages 2481–2486. IEEE, 2008.
[146] Pranay Seshadri, Shahrokh Shahpar, and Geoffrey T Parks. Robust com-
pressor blades for desensitizing operational tip clearance variations. In
ASME Turbo Expo 2014: Turbine Technical Conference and Exposition,
pages V02AT37A043–V02AT37A043. American Society of Mechanical En-
gineers, 2014.
[148] Margarita Reyes Sierra and Carlos A Coello Coello. Improving pso-based
multi-objective optimization using crowding, mutation and-dominance. In
Evolutionary multi-criterion optimization, pages 505–519. Springer, 2005.
[149] Vinicius LS Silva, Elizabeth F Wanner, Sérgio AAG Cerqueira, and Ri-
cardo HC Takahashi. A new performance metric for multiobjective opti-
mization: The integrated sphere counting. In Evolutionary Computation,
2007. CEC 2007. IEEE Congress on, pages 3625–3630. IEEE, 2007.
[154] R. Storn and K. Price. Differential evolution–a simple and efficient heuristic
for global optimization over continuous spaces. Journal of global optimiza-
tion, 11(4):341–359, 1997.
[157] Kay Chen Tan, Tong Heng Lee, and Eik Fun Khor. Evolutionary al-
gorithms for multi-objective optimization: performance assessments and
comparisons. Artificial intelligence review, 17(4):251–290, 2002.
[159] Shigeyoshi Tsutsui and Ashish Ghosh. Genetic algorithms with a robust
solution searching scheme. Evolutionary Computation, IEEE Transactions
on, 1(3):201–208, 1997.
[163] JK Vis. Particle swarm optimizer for finding robust optima. 2009.
[165] Robert W Walters and Luc Huyse. Uncertainty analysis for fluid mechanics
with applications. Technical report, DTIC Document, 2002.
[166] Darrell Whitley, Soraya Rana, John Dzubera, and Keith E Mathias. Evalu-
ating evolutionary algorithms. Artificial intelligence, 85(1):245–276, 1996.
[168] Dirk Wiesmann, Ulrich Hammel, and Thomas Back. Robust design of mul-
tilayer optical coatings by means of evolutionary algorithms. Evolutionary
Computation, IEEE Transactions on, 2(4):162–167, 1998.
[169] D.H. Wolpert and W.G. Macready. No free lunch theorems for optimiza-
tion. Evolutionary Computation, IEEE Transactions on, 1(1):67–82, 1997.
[170] Jin Wu and Shapour Azarm. Metrics for quality assessment of a multi-
objective design optimization solution set. Journal of Mechanical Design,
123(1):18–25, 2001.
[172] Dongbin Xiu and Jan S Hesthaven. High-order collocation methods for
differential equations with random inputs. SIAM Journal on Scientific
Computing, 27(3):1118–1139, 2005.
[175] Xin Yao, Yong Liu, and Guangming Lin. Evolutionary programming made
faster. Evolutionary Computation, IEEE Transactions on, 3(2):82–102,
1999.
[177] Qingfu Zhang and Hui Li. Moea/d: A multiobjective evolutionary algo-
rithm based on decomposition. Evolutionary Computation, IEEE Trans-
actions on, 11(6):712–731, 2007.
[178] Qingfu Zhang, Aimin Zhou, Shizheng Zhao, Ponnuthurai Nagaratnam Sug-
anthan, Wudong Liu, and Santosh Tiwari. Multiobjective optimization test
instances for the cec 2009 special session and competition. University of
Essex, Colchester, UK and Nanyang Technological University, Singapore,
Special Session on Performance Assessment of Multi-Objective Optimiza-
tion Algorithms, Technical Report, 2008.
[179] Aimin Zhou, Bo-Yang Qu, Hui Li, Shi-Zheng Zhao, Ponnuthurai Nagarat-
nam Suganthan, and Qingfu Zhang. Multiobjective evolutionary algo-
rithms: A survey of the state of the art. Swarm and Evolutionary Com-
putation, 1(1):32–49, 2011.
[181] Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele. Comparison of mul-
tiobjective evolutionary algorithms: Empirical results. Evolutionary com-
putation, 8(2):173–195, 2000.
[182] Eckart Zitzler and Lothar Thiele. Multiobjective optimization using evo-
lutionary algorithms-a comparative case study. In Parallel problem solving
from nature-PPSN V, pages 292–301. Springer, 1998.
[184] Eckart Zitzler, Lothar Thiele, Marco Laumanns, Carlos M Fonseca, and
Viviane Grunert Da Fonseca. Performance assessment of multiobjective
optimizers: An analysis and review. Evolutionary Computation, IEEE
Transactions on, 7(2):117–132, 2003.
Appendix A
Some of the test problems of this work have been adopted from [18, 58, 163, 105].
The full descriptions of the benchmark functions utilised and proposed are as
follows:
A.1.1 TP1
f(\vec{x}) = 1 - \prod_{i=1}^{N} H(x_i) + \frac{1}{100} \sum_{i=1}^{N} x_i^2    (A.1)

H(x_i) = \begin{cases} 0 & x_i < 0 \\ 1 & \text{otherwise} \end{cases}    (A.2)
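TP1 can be implemented in a few lines; the following sketch (one possible implementation, not code from the thesis) follows Eqs. (A.1)-(A.2) directly.

```python
import numpy as np

def tp1(x) -> float:
    """TP1: f(x) = 1 - prod_i H(x_i) + (1/100) * sum_i x_i^2,
    where H(x_i) = 0 for x_i < 0 and 1 otherwise."""
    x = np.asarray(x, dtype=float)
    h = np.where(x < 0.0, 0.0, 1.0)
    return 1.0 - np.prod(h) + np.sum(x**2) / 100.0
```

At the origin all H terms equal 1, so the nominal value is 0; any coordinate crossing below zero collapses the product and raises the value by 1.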
A. Single-objective robust test functions 231
A.1.2 TP2
f(\vec{x}) = \sqrt{\|\vec{x}\|} + e^{-5\|\vec{x}\|^2}    (A.3)
A.1.3 TP3
f(\vec{x}) = \frac{5}{5 - \sqrt{5}} - \max\{f_0(\vec{x}), f_1(\vec{x}), f_2(\vec{x}), f_3(\vec{x})\}    (A.4)

f_0(\vec{x}) = \frac{1}{10} e^{0.5\|\vec{x}\|}    (A.5)

f_1(\vec{x}) = \frac{5}{5 - \sqrt{5}} \left( 1 - \sqrt{\frac{\|\vec{x} + 5\|}{5\sqrt{N}}} \right)    (A.6)

f_2(\vec{x}) = c_1 \left( 1 - \left( \frac{\|\vec{x} + 5\|}{5\sqrt{N}} \right)^4 \right)    (A.7)

f_3(\vec{x}) = c_2 \left( 1 - \left( \frac{\|\vec{x} - 5\|}{5\sqrt{N}} \right)^{d_2} \right)    (A.8)

c_1 = \frac{625}{624}, \quad c_2 = 1.5975, \quad d_2 = 1.1513    (A.9)
A.1.4 TP4
f(\vec{x}) = c - \frac{1}{N} \sum_{i=1}^{N} f_1(x_i)    (A.10)

f_1(x_i) = \begin{cases} -(x_i + 1)^2 + 1 & -2 \le x_i < 0 \\ c \cdot 2^{-8|x_i - 1|} & 0 \le x_i < 2 \end{cases}    (A.11)

c = 1.3    (A.12)
A.1.5 TP5
f(\vec{x}) = -\sum_{i=1}^{N} f_1(x_i)    (A.13)

f_1(x_i) = \begin{cases} 1 & -0.5 \le x_i < 0.5 \\ 0 & \text{otherwise} \end{cases}    (A.14)

Input noise: \vec{\delta} \sim \vec{U}(-0.2, 0.2)
Robust optimum fitness (nominal value) ≈ −2.0 (2D), −20.0 (20D)
Robust optimum fitness (expected value) ≈ −2.0 (2D), −20.0 (20D)
Robust optimum location: (0, 0) in 2D; (0, 0, ..., 0) in 20D
The 20D version of this function has been used in the experimental results.
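Because the plateau of TP5 is wider than the noise range, the expected fitness at the origin equals the nominal one. The sketch below (an illustration, not thesis code) writes the function with a negative sign so that the stated robust optimum fitness of −2.0 in 2D is reproduced, and estimates the expected fitness by Monte Carlo sampling of δ ∼ U(−0.2, 0.2):

```python
import random

def tp5(x) -> float:
    """TP5: negative count of coordinates on the plateau [-0.5, 0.5);
    the sign convention matches the stated robust optimum of -2.0 in 2D."""
    return -sum(1.0 for xi in x if -0.5 <= xi < 0.5)

def expected_fitness(x, samples: int = 2000, seed: int = 1) -> float:
    """Estimate E[f(x + delta)] under delta ~ U(-0.2, 0.2) by Monte Carlo."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        total += tp5([xi + rng.uniform(-0.2, 0.2) for xi in x])
    return total / samples
```

At the origin every perturbed coordinate stays inside the plateau, so each sample evaluates to exactly −2.0 in 2D.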
A.1.6 TP6
f(\vec{x}) = -\sum_{i=1}^{N} f_1(x_i)    (A.15)

f_1(x_i) = \begin{cases} 1 & -0.6 \le x_i < -0.2 \\ 1.25 & (0.2 \le x_i < 0.36) \vee (0.44 \le x_i < 0.6) \\ 0 & \text{otherwise} \end{cases}    (A.16)
A.1.7 TP7
f(\vec{x}) = 1 - \frac{1}{N} \sum_{i=1}^{N} f_1(x_i)    (A.17)

f_1(x_i) = \begin{cases} x_i + 0.8 & -0.8 \le x_i < 0.2 \\ 0 & \text{otherwise} \end{cases}    (A.18)
A.1.8 TP8
f(\vec{x}) = -\frac{1}{N} \sum_{i=1}^{N} g(x_i)    (A.19)

g(x_i) = \begin{cases} e^{-2\ln 2 \left(\frac{x_i - 0.1}{0.8}\right)^2} \sqrt{|\sin(5\pi x_i)|} & 0.4 < x_i \le 0.6 \\ e^{-2\ln 2 \left(\frac{x_i - 0.1}{0.8}\right)^2} \sin^6(5\pi x_i) & \text{otherwise} \end{cases}    (A.20)
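A possible implementation of TP8, again a sketch rather than the thesis code, following Eqs. (A.19)-(A.20):

```python
import math

def tp8(x) -> float:
    """TP8: f(x) = -(1/N) * sum_i g(x_i), where g is a Gaussian envelope
    times a sin^6 multimodal term (square-rooted |sin| on (0.4, 0.6])."""
    def g(xi: float) -> float:
        envelope = math.exp(-2.0 * math.log(2.0) * ((xi - 0.1) / 0.8) ** 2)
        if 0.4 < xi <= 0.6:
            return envelope * math.sqrt(abs(math.sin(5 * math.pi * xi)))
        return envelope * math.sin(5 * math.pi * xi) ** 6
    return -sum(g(xi) for xi in x) / len(x)
```

At x = 0.1 the envelope equals 1 and sin(5πx) peaks, so the one-dimensional value is −1, the sharpest (and least robust) optimum of the landscape.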
A.1.9 TP9
f(\vec{x}) = \begin{cases} \cos\left(\frac{1}{2} x_i\right) + 1 & -2\pi \le x_i < 2\pi \\ 1.1\cos(x_i + \pi) + 1.1 & 2\pi \le x_i < 4\pi \\ 0 & \text{otherwise} \end{cases}    (A.21)
f_1(\vec{x}) = \sum_{i=1}^{N} x_i^2    (A.23)

f_3(\vec{x}) = \sum_{i=1}^{N} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]    (A.25)

f_4(\vec{x}) = -20 e^{-0.2\sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^2}} - e^{\frac{1}{N}\sum_{i=1}^{N} \cos(2\pi x_i)} + 20 + e    (A.26)
Y(\vec{x}) = 1 - \prod_{i=1}^{N} H(x_i) + \frac{1}{100} \sum_{i=1}^{N} x_i^2    (A.28)

H(x_i) = \begin{cases} 0 & x_i < 0 \\ 1 & \text{otherwise} \end{cases}    (A.29)

G(\vec{x}) = \sum_{i=3}^{N} 50 x_i^2    (A.30)

B(\vec{x}) = \left( \sum_{i=1}^{N} |x_i| \right)^{\theta}, \quad \theta = 0.1    (A.31)

Search space: \vec{x} \in [-10, 10]^N
Input noise: \vec{\delta} \sim \vec{U}(-1, 1)
Robust optimum fitness (nominal value): 0.02
Robust optimum fitness (expected value): 3.58
Robust optimum location: (1, 1, 0, 0, ..., 0)
The 10D version of this function has been used in the experimental results.
y(x_i) = \begin{cases} -(x_i + 1)^2 + 1 & -2 \le x_i < 0 \\ c \times 2^{x_i - 1} & 0 \le x_i < 2 \end{cases}    (A.33)

G(\vec{x}) = \sum_{i=3}^{N} 50 x_i^2    (A.34)

B(\vec{x}) = \left( \sum_{i=1}^{N} |x_i| \right)^{\theta}    (A.35)
θ = 0.1
c = 1.3
Search space: \vec{x} \in [-10, 10]^N
Input noise: \vec{\delta} \sim \vec{U}(-1, 1)
Robust optimum fitness (nominal value): 0.2663
Robust optimum fitness (expected value): 8720
Robust optimum location (5, 5, 0, 0, ..., 0)
The 10D version of this function has been used in the experimental results.
G(\vec{x}) = \left( \sum_{i=3}^{N} 50 x_i^2 \right) + 1    (A.38)
A.4.2 TP14
f(\vec{x}) = (H(x_1) + H(x_2)) \times G(\vec{x}) - 1    (A.39)

H(x) = \frac{1}{2} - 0.5 e^{-\left(\frac{x - 0.5}{0.05}\right)^2} - \sum_{i=1}^{11} \left( 0.3 e^{-\left(\frac{x - 0.04i}{0.004}\right)^2} + 0.3 e^{-\left(\frac{x - (1 - 0.04i)}{0.004}\right)^2} + \sin(\pi x) \right)    (A.40)
G(\vec{x}) = \left( \sum_{i=3}^{N} 50x_i^2 \right) + 1    (A.41)
A.4.3 TP15
f(\vec{x}) = \begin{cases}
(H(x_1) + H(x_2)) \times G(\vec{x}) - 1 & (x_1 \le 1) \land (x_2 \le 1) \\
(H(2 - x_1) + H(2 - x_2)) \times G(\vec{x}) - 1 & (x_1 > 1) \land (x_2 > 1) \\
(H(2 - x_1) + H(x_2)) \times G(\vec{x}) - 1 & (x_1 > 1) \land (x_2 \le 1) \\
(H(x_1) + H(2 - x_2)) \times G(\vec{x}) - 1 & (x_1 \le 1) \land (x_2 > 1)
\end{cases}    (A.42)

G(\vec{x}) = \left( \sum_{i=3}^{N} 50x_i^2 \right) + 1    (A.44)
H(x) = \frac{3}{2} - 0.5e^{-\left(\frac{x-0.5}{0.04}\right)^2} - \sum_{i=0}^{16} \left( 0.8e^{-\left(\frac{x-0.0063i}{0.004}\right)^2} + 0.8e^{-\left(\frac{x-(1-0.0063i)}{0.004}\right)^2} \right)    (A.46)

G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2    (A.47)
A.5.2 TP17
f(\vec{x}) = (H(x_1) + H(x_2)) \times G(\vec{x}) - 1.399    (A.48)

H(x) = \frac{3}{2} - 0.8e^{-\left(\frac{x-0.5}{0.04}\right)^2} - \sum_{i=0}^{16} \left( 0.5e^{-\left(\frac{x-0.0063i}{0.004}\right)^2} + 0.5e^{-\left(\frac{x-(1-0.0063i)}{0.004}\right)^2} \right)    (A.49)

G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2    (A.50)
H(x) = \frac{1}{2} - \left( 0.2e^{-\left(\frac{x-0.95}{0.03}\right)^2} + 0.2e^{-\left(\frac{x-0.05}{0.01}\right)^2} \right)    (A.52)

G(\vec{x}) = \left( \sum_{i=3}^{N} 50x_i^2 \right) + 1    (A.53)
A.7.2 TP20
f(\vec{x}) = -H(x_1) \times H(x_2) + G(\vec{x})    (A.55)

H(x) = \frac{e^{-2x^2}\sin\!\left(8\pi\left(x + \frac{\pi}{16}\right)\right) - x}{3} + 0.5    (A.56)

G(\vec{x}) = 10\,\frac{\sum_{i=3}^{N} x_i}{N}    (A.57)
g(\vec{x}) = \sum_{i=2}^{n} \left( 10 + x_i^2 - 10\cos(4\pi x_i) \right)    (B.5)

S(x_1) = \frac{1}{0.2 + x_1} + x_1^2    (B.6)
B. Multi-objective robust test functions 243
B.1.2 RMTP2
f_1(\vec{x}) = x_1    (B.7)

g(\vec{x}) = \sum_{i=2}^{n} \left( 10 + x_i^2 - 10\cos(4\pi x_i) \right)    (B.11)

S(x_1) = \frac{1}{0.2 + x_1} + 10x_1^2    (B.12)
B.1.3 RMTP3
f_1(\vec{x}) = x_1    (B.13)

where: h(x_2) = 2 - 0.8e^{-\left(\frac{x_2-0.35}{0.25}\right)^2} - e^{-\left(\frac{x_2-0.85}{0.03}\right)^2}    (B.16)

g(\vec{x}) = \sum_{i=3}^{n} 50x_i^2    (B.17)

S(x_1) = 1 - \sqrt{x_1}    (B.18)
B.1.4 RMTP4
f_1(\vec{x}) = x_1    (B.19)

f_2(\vec{x}) = x_2    (B.20)

g(\vec{x}) = \sum_{i=3}^{n} \left( 10 + x_i^2 - 10\cos(4\pi x_i) \right)    (B.24)

S(x_1, x_2) = \frac{0.75}{0.2 + x_1} + 10x_1^8 + \frac{0.75}{0.2 + x_2} + 10x_2^8    (B.25)
B.1.5 RMTP5
f_1(\vec{x}) = x_1    (B.26)

f_2(\vec{x}) = x_2    (B.27)

where: h(x_3) = 2 - 0.8e^{-\left(\frac{x_3-0.35}{0.25}\right)^2} - e^{-\left(\frac{x_3-0.85}{0.03}\right)^2}    (B.30)

g(\vec{x}) = \sum_{i=3}^{n} \left( 10 + x_i^2 - 10\cos(4\pi x_i) \right)    (B.31)

S(x_1, x_2) = 1 - \sqrt{x_1} - \sqrt{x_2}    (B.32)
B.2.1 RMTP6
f_1(\vec{x}) = x_1    (B.33)

g(\vec{x}) = \frac{\sum_{i=2}^{n} x_i}{n-1}    (B.37)

S(x_1) = \frac{1}{0.2 + x_1}    (B.38)
B.2.2 RMTP7
f_1(\vec{x}) = \cos\!\left(\frac{\pi x_1}{2}\right)    (B.39)

f_2(\vec{x}) = g(\vec{x})\sin\!\left(\frac{\pi x_1}{2}\right)    (B.40)

g(\vec{x}) = 1 + 10\,\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.42)
B.2.3 RMTP8
f_1(\vec{x}) = 1 - x_1^2    (B.43)

f_2(\vec{x}) = g(\vec{x})\sin\!\left(\frac{\pi x_1}{2}\right)    (B.44)

g(\vec{x}) = 1 + 10\,\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.46)
B.2.4 RMTP9
f_1(\vec{x}) = \frac{e^{x_1} - 1}{e - 1}    (B.47)

f_2(\vec{x}) = g(\vec{x})\left[\frac{\sin(4\pi x_1) - 15x_1}{15} + 1\right]    (B.48)

g(\vec{x}) = 1 + 10\,\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.50)
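Given the reconstructed definitions of Eqs. (B.47), (B.48), and (B.50), RMTP9's objective vector maps directly to code. The sketch below is illustrative; the function name is mine, not the thesis's.

```python
import math

def rmtp9(x):
    """RMTP9 objectives (Eqs. B.47, B.48, B.50).

    x[0] plays the role of x1; the remaining n-1 variables enter
    only through the ZDT-style distance function g."""
    n = len(x)
    g = 1.0 + 10.0 * sum(x[1:]) / (n - 1)
    f1 = (math.exp(x[0]) - 1.0) / (math.e - 1.0)
    f2 = g * ((math.sin(4.0 * math.pi * x[0]) - 15.0 * x[0]) / 15.0 + 1.0)
    return f1, f2

f1, f2 = rmtp9([0.0, 0.0, 0.0])
print(f1, f2)  # 0.0 1.0 -- g = 1 on the surface where x_i = 0 for i >= 2
```

On the g = 1 surface the trade-off between f1 and f2 is governed entirely by x1, which is what makes the remaining variables pure distance variables.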
B.2.5 RMTP10
f_1(\vec{x}) = x_1    (B.51)

f_2(\vec{x}) = g(\vec{x})\left[\frac{\sin(4\pi x_1) - 15x_1}{15} + 1\right]    (B.52)

g(\vec{x}) = 1 + 10\,\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.54)
B.3.2 RMTP12
Framework 1
α = 0.1
ω = 1.5
β1 = 1.5
β2 = 1.5
B.3.3 RMTP13
Framework 1
α = 0.1
ω = 1.5
β1 = 0.5
β2 = 0.5
B.3.4 RMTP14
Framework 1
α = 0.1
ω = 1.5
β1 = 0.5
β2 = 1
B.3.5 RMTP15
Framework 1
α = 0.1
ω = 1.5
β1 = 1
β2 = 0.5
B.3.6 RMTP16
Framework 1
α = 0.1
ω = 1.5
β1 = 1
β2 = 1.5
B.3.7 RMTP17
Framework 1
α = 0.1
ω = 1.5
β1 = 1.5
β2 = 1
B.3.8 RMTP18
Framework 1
α = 0.1
ω = 1.5
β1 = 1.5
β2 = 0.5
B.3.9 RMTP19
Framework 1
α = 0.1
ω = 1.5
β1 = 0.5
β2 = 1.5
B.3.10 RMTP20
Framework 2
γ=3
λ=4
ω=1
β=1
B.3.11 RMTP21
Framework 2
γ=3
λ=4
ω=1
β = 0.5
B.3.12 RMTP22
Framework 2
γ=3
λ=4
ω=1
β = 1.5
B.3.13 RMTP23
Framework 2
ζ=2
λ=6
γ=3
ω = 0.5
B.3.14 RMTP24
Framework 2
ζ=4
λ=6
γ=3
ω = 0.5
B.3.15 RMTP25
Framework 2
ζ=8
λ=6
γ=3
ω = 0.5
B.4.1 RMTP26
f_1(\vec{x}) = x_1    (B.55)

g(\vec{x}) = 1 + 10\,\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.58)

C(x_1) = \begin{cases}
0 & 0 \le x_1 \le 0.2 \\
0.25 & 0.2 < x_1 \le 0.4 \\
0.5 & 0.4 < x_1 \le 0.6 \\
0.75 & 0.6 < x_1 \le 0.8 \\
1 & 0.8 < x_1 \le 1
\end{cases}    (B.59)
B.4.2 RMTP27
f_1(\vec{x}) = \frac{e^{x_1} - 1}{e - 1}    (B.60)

g(\vec{x}) = 1 + 10\,\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.63)

C(x_1) = \begin{cases}
0 & 0 \le x_1 \le 0.2 \\
0.25 & 0.2 < x_1 \le 0.4 \\
0.5 & 0.4 < x_1 \le 0.6 \\
0.75 & 0.6 < x_1 \le 0.8 \\
1 & 0.8 < x_1 \le 1
\end{cases}    (B.64)
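The discontinuity function C(x1) shared by RMTP26 and RMTP27 (Eqs. B.59 and B.64) is a five-level staircase on [0, 1]; a direct Python sketch (the function name is mine):

```python
def c_step(x1):
    """Staircase C(x1) of Eqs. (B.59)/(B.64): five flat levels of width
    0.2 on the domain [0, 1], rising from 0 to 1 in steps of 0.25."""
    if x1 <= 0.2:
        return 0.0
    if x1 <= 0.4:
        return 0.25
    if x1 <= 0.6:
        return 0.5
    if x1 <= 0.8:
        return 0.75
    return 1.0

print([c_step(v) for v in (0.1, 0.3, 0.5, 0.7, 0.9)])
# [0.0, 0.25, 0.5, 0.75, 1.0]
```

Each flat level is closed on its right end, matching the half-open intervals of the definition (e.g. C(0.2) = 0 but C(0.21) = 0.25).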
B.4.3 RMTP28
f_1(\vec{x}) = x_1    (B.65)

f_2(\vec{x}) = x_2    (B.66)

f_3(\vec{x}) = g(\vec{x}) \times \left[ 1 - \sqrt{\frac{x_1}{g(\vec{x})}} - \frac{x_1}{g(\vec{x})}\sin(\zeta \cdot 2\pi x_1) + H(x_1)\left( 1 - \sqrt{\frac{x_2}{g(\vec{x})}} - \frac{x_2}{g(\vec{x})}\sin(\zeta \cdot 2\pi x_2) \right) + H(x_3) + \omega \right]    (B.67)

where: H(x) = \frac{e^{-2x^2}\sin\!\left(\lambda \cdot 2\pi\left(x + \frac{\pi}{4\lambda}\right)\right) - x}{\gamma} + 0.5    (B.69)

g(\vec{x}) = 1 + 10\,\frac{\sum_{i=2}^{N} x_i}{N}    (B.70)

\gamma, \lambda \ge 1    (B.71)
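The peak-generating function H(x) of Eq. (B.69), as reconstructed above, maps directly to code: a Gaussian-damped sinusoid, shifted down by x, scaled by γ, and offset by 0.5. The default arguments below are illustrative only (γ and λ are free parameters of the problem).

```python
import math

def H(x, gamma=3.0, lam=4.0):
    """H(x) of Eq. (B.69): gamma scales the amplitude, lam sets the
    oscillation frequency, and the exp(-2x^2) envelope damps the peaks."""
    return (math.exp(-2.0 * x * x)
            * math.sin(lam * 2.0 * math.pi * (x + math.pi / (4.0 * lam)))
            - x) / gamma + 0.5

# At x = 0 the envelope equals 1 and the argument of sin reduces to pi^2/2,
# so H(0) = sin(pi^2/2)/gamma + 0.5, independently of lam.
print(abs(H(0.0) - (math.sin(math.pi ** 2 / 2.0) / 3.0 + 0.5)) < 1e-9)  # True
```

With γ = 3 and λ = 4 this H coincides with the H used by TP20 in Eq. (A.56), which supports the shared reconstruction.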
B.4.4 RMTP29
f_1(\vec{x}) = x_1    (B.72)

f_2(\vec{x}) = x_2    (B.73)

g(\vec{x}) = 1 + 10\,\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.76)

C(x_1, x_2) = \begin{cases}
0 & (0 \le x_1 \le 0.2) \lor (0 \le x_2 \le 0.2) \\
0.25 & (0.2 < x_1 \le 0.4) \lor (0.2 < x_2 \le 0.4) \\
0.5 & (0.4 < x_1 \le 0.6) \lor (0.4 < x_2 \le 0.6) \\
0.75 & (0.6 < x_1 \le 0.8) \lor (0.6 < x_2 \le 0.8) \\
1 & (0.8 < x_1 \le 1) \lor (0.8 < x_2 \le 1)
\end{cases}    (B.77)
B.4.5 RMTP30
f_1(\vec{x}) = x_1    (B.78)

f_2(\vec{x}) = x_2    (B.79)

f_3(\vec{x}) = g(\vec{x})\left[\frac{\sin(4\pi x_1) - 15x_1}{15} + 1\right]    (B.80)

g(\vec{x}) = 1 + 10\,\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.82)
B.4.6 RMTP31
f_1(\vec{x}) = \frac{e^{x_1} - 1}{e - 1}    (B.83)

f_2(\vec{x}) = \frac{e^{x_2} - 1}{e - 1}    (B.84)

f_3(\vec{x}) = g(\vec{x})\left[\frac{\sin(4\pi x_1) - 15x_1}{15} + 1\right]    (B.85)

g(\vec{x}) = 1 + 10\,\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.87)
B.4.7 RMTP32
f_1(\vec{x}) = x_1    (B.88)

f_2(\vec{x}) = x_2    (B.89)

f_3(\vec{x}) = g(\vec{x})\left[\frac{\sin(4\pi x_1 x_2) - 15x_1 x_2}{15} + 1\right]    (B.90)

g(\vec{x}) = 1 + 10\,\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.92)
B.4.8 RMTP33
f_1(\vec{x}) = \cos\!\left(\frac{\pi x_1}{2}\right)    (B.93)

f_2(\vec{x}) = g(\vec{x})\sin\!\left(\frac{\pi x_1}{2}\right)    (B.94)

g(\vec{x}) = 1 + 10\,\frac{\sum_{i=2}^{n} x_i^{\psi}}{n-1}    (B.96)

ψ = 1/3
B.4.9 RMTP34
f_1(\vec{x}) = x_1    (B.97)

g(\vec{x}) = 1 + \frac{\sum_{i=2}^{n} x_i^{\psi}}{n-1}    (B.101)

S(x_1) = \frac{1}{0.2 + x_1}    (B.102)

ψ = 1/3
B.4.10 RMTP35
f_1(\vec{x}) = \frac{e^{x_1} - 1}{e - 1}    (B.103)

f_2(\vec{x}) = g(\vec{x})\left[\frac{\sin(4\pi x_1) - 15x_1}{15} + 1\right]    (B.104)

g(\vec{x}) = 1 + 10\,\frac{\sum_{i=2}^{n} x_i^{\psi}}{n-1}    (B.106)

ψ = 1/3
g(\vec{x}) = \sum_{i=3}^{N} 50x_i^2    (B.110)

S(x) = 1 - x^{\beta}    (B.111)

β = 1/2
B.5.2 RMTP37
Same as RMTP36 but with β = 1.
B.5.3 RMTP38
Same as RMTP36 but with β = 2.
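The only difference among RMTP36, RMTP37, and RMTP38 is the exponent β in S(x) = 1 − x^β. A quick sketch of how β reshapes S on [0, 1] (illustrative code, not from the thesis):

```python
def s_map(x, beta):
    """S(x) = 1 - x**beta of Eqs. (B.111)/(B.116)/(B.121)."""
    return 1.0 - x ** beta

# beta = 0.5, 1.0, 2.0 correspond to RMTP36, RMTP37, RMTP38 respectively.
for beta in (0.5, 1.0, 2.0):
    print(beta, [round(s_map(v, beta), 4) for v in (0.0, 0.25, 1.0)])
```

β < 1 makes S drop steeply near x = 0 and flatten toward x = 1, β = 1 is linear, and β > 1 keeps S near 1 for most of the domain, which changes how strongly the front is bent by this scaling term.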
H(x) = \frac{3}{2} - 0.5e^{-\left(\frac{x-0.5}{0.04}\right)^2} - \sum_{i=0}^{M} \left( 0.8e^{-\left(\frac{x-0.02i}{0.004}\right)^2} + 0.8e^{-\left(\frac{x-(0.6+0.02i)}{0.004}\right)^2} \right)    (B.114)

g(\vec{x}) = \sum_{i=3}^{N} 50x_i^2    (B.115)

S(x) = 1 - x^{\beta}    (B.116)

M = 20
β = 1/2
B.6.2 RMTP40
Same as RMTP39 but with β = 1.
B.6.3 RMTP41
Same as RMTP39 but with β = 2.
H(x) = \frac{1}{2} - 0.2e^{-\left(\frac{x-0.95}{0.03}\right)^2} - 0.2e^{-\left(\frac{x-0.05}{0.01}\right)^2}    (B.119)

g(\vec{x}) = \left( \sum_{i=3}^{N} 50x_i^2 \right) + 1    (B.120)

S(x) = 1 - x^{\beta}    (B.121)

β = 1/2
B.7.2 RMTP43
Same as RMTP42 but with β = 1.
B.7.3 RMTP44
Same as RMTP42 but with β = 2.
Appendix C
Complete results
[Figure C.1: robust fronts obtained by CRMOPSO, IRMOPSO, and ERMOPSO for RMTP1 to RMTP5.]

[Figure C.2: robust fronts obtained by CRMOPSO, IRMOPSO, and ERMOPSO for RMTP6 to RMTP10.]
Figure C.3: Robust fronts obtained for RMTP11 to RMTP15. Note that the
dominated (local) front is robust and considered as reference for the performance
measures.
Figure C.4: Robust fronts obtained for RMTP16 to RMTP19. Note that the
dominated (local) front is robust and considered as reference for the performance
measures.
Figure C.5: Robust fronts obtained for RMTP20 to RMTP22. Note that the
worst front is the most robust and considered as reference for the performance
measures.
Figure C.6: Robust fronts obtained for RMTP23 to RMTP25. Note that the
worst front is the most robust and considered as reference for the performance
measures.
[Figures C.7 and C.8: robust fronts obtained by CRMOPSO, IRMOPSO, and ERMOPSO for RMTP26 to RMTP32.]
Figure C.9: Robust fronts obtained for RMTP33 to RMTP38. Note that
in RMTP36, RMTP37, and RMTP38, the worst front is the most robust and
considered as the reference for the performance measures.
[Figure C.10: robust fronts obtained by CRMOPSO, IRMOPSO, and ERMOPSO for RMTP39 to RMTP44.]