
Confidence-based Robust Optimisation of

Engineering Design Problems

Seyedali Mirjalili
MS, BS

School of Information and Communication Technology


Faculty of Engineering and Information Technology
Griffith University

Submitted in fulfilment of the requirements of the degree of


Doctor of Philosophy

November 2015

Abstract
Robust optimisation refers to the process of combining good performance with
low sensitivity to possible perturbations. Due to the presence of different un-
certainties when optimising real problems, failure to employ robust optimisa-
tion techniques may result in finding unreliable solutions. Robust optimisation
techniques play key roles in finding reliable solutions when considering possible
uncertainties during optimisation.
Evolutionary optimisation algorithms have become very popular for solving
real problems in science and industry mainly due to simplicity, gradient-free
mechanism, and flexibility. Such techniques have been employed widely as very
reliable alternatives to mathematical optimisation approaches for tackling diffi-
culties of real search spaces such as constraints, local optima, multiple objectives,
and uncertainties. Despite the advances in considering the first three difficulties
in the literature, there is significant room for further improvements in the area
of robust optimisation, especially combined with multi-objective approaches.
Finding optimal solutions that are less sensitive to perturbations requires a
highly systematic robust optimisation algorithm design process. This includes
designing challenging robust test problems to compare algorithms, performance
metrics to measure how much better one robust algorithm is than another,
and computationally cheap robust algorithms to find robust solutions for optimi-
sation problems. The first two phases of a systematic algorithm design process,
developing test functions and performance metrics, are prerequisite to the third
phase, algorithm development. Firstly, this thesis identifies the current gaps in
the literature relating to each of these phases to establish a systematic robust
algorithm design process as follows:

• The need for more standard and challenging robust test functions for both
single- and multi-objective algorithms.

• The need for more standard performance metrics for quantifying the per-
formance of robust multi-objective algorithms.

• The need for more investigation and analysis of the current robustness
metrics.

• High computational cost of the current robust optimisation techniques that rely on additional function evaluations.

• Low reliability of the current robust optimisation techniques that rely on the search history (sampled points during optimisation).

Secondly, the current robustness metrics are investigated and analysed in detail. Thirdly, several test functions and performance metrics are proposed to fill the first two above-mentioned gaps in the literature. Fourthly, a novel met-
ric called the confidence measure is proposed to reduce the computational cost
and increase the reliability of the current robust optimisation methods. Lastly
but most importantly, the proposed confidence metric is employed to establish
novel and cheap approaches for finding robust optimal solutions in single- and
multi-objective search spaces called confidence-based robust optimisation and
confidence-based robust multi-objective optimisation. The most well-regarded
evolutionary population-based algorithms such as Genetic Algorithm (GA), Par-
ticle Swarm Optimisation (PSO), and Multi-Objective Particle Swarm Optimi-
sation (MOPSO) are modified to create the first confidence-based robust optimisation
algorithms.
Several experiments are conducted using the proposed benchmark problems
and performance metrics to evaluate the proposed confidence-based robust algo-
rithms qualitatively and quantitatively. The thesis also considers the application
of the proposed techniques to a marine propeller design problem to demonstrate the applicability of confidence-based optimisation in practice. The results
show that the proposed confidence-based algorithms mainly benefit from high
reliability and low computational cost when solving the benchmark problems.
The merits of the proposed benchmark problems and performance metrics in
comparing different algorithms are evidenced by the results of the test beds as
well. The results of real applications demonstrate that the proposed method is
able to confidently and reliably find robust optimal solutions without significant
extra computational burden for real problems with unknown search spaces.

Certificate of Originality

This work has not previously been submitted for a degree or diploma in any
university. To the best of my knowledge and belief, the thesis contains no material
previously published or written by another person except where due reference is
made in the thesis itself.

Approval
Name: Seyedali Mirjalili
Title: Confidence-based Robust Optimisation of Engineer-
ing Design Problems
Submission Date: 7 November, 2015

Supervisor: Dr Andrew Lewis


Griffith University
Australia

Co-supervisor: Dr René Hexel


Griffith University
Australia

Acknowledgements
Commencing, pursuing and completing this dissertation, like any other project,
required an abundance of resources as well as strong motivation, which would not have been possible without the guidance and support of a group of people. Therefore,
I would like to express my gratitude to the people below.
I should first like to thank my principal supervisor and mentor, Dr. Andrew Lewis, whose advice, guidance, patience and support have always been available to me throughout my academic journey in my Ph.D. candidature. I should also like to thank Dr. René Hexel for his generous support and guidance as my associate supervisor, and Dr. Seyed Ali Mohammad Mirjalili for his invaluable advice.

To my mother and father



List of Outcomes Arising from this Thesis


From Chapter 4:

1. S. Mirjalili, A. Lewis, Novel Frameworks for Creating Robust Multi-objective Benchmark Problems, Information Sciences 300 (2015): 158-192.
http://dx.doi.org/10.1016/j.ins.2014.12.037

2. S. Mirjalili, A. Lewis, Obstacles and Difficulties for Robust Benchmark Problems: A Novel Penalty-based Robust Optimization Method, Information Sciences 328 (2016): 485-509.
http://dx.doi.org/10.1016/j.ins.2015.08.041

3. S. Mirjalili, A. Lewis, Hindrances for Robust Multi-objective Test Problems, Applied Soft Computing 35 (2015): 333-348.
http://dx.doi.org/10.1016/j.asoc.2015.05.037

4. S. Mirjalili, Shifted Robust Multi-objective Test Problems, Structural and Multidisciplinary Optimization 52 (2015): 217-226.
http://dx.doi.org/10.1007/s00158-014-1221-9

From Chapter 5:

5. S. Mirjalili, A. Lewis, Novel Performance Metrics for Robust Multi-objective Optimization Algorithms, Swarm and Evolutionary Computation 21 (2015): 1-23.
http://dx.doi.org/10.1016/j.swevo.2014.10.005

From Chapters 6 and 7:

6. S. Mirjalili, A. Lewis, and S. Mostaghim, Confidence measure: A novel metric for robust meta-heuristic optimisation algorithms, Information Sciences 317 (2015): 114-142.
http://dx.doi.org/10.1016/j.ins.2015.04.010

7. S. Mirjalili, A. Lewis, A Reliable and Computationally Cheap Approach for Finding Robust Optimal Solutions, GECCO 2015: 1439-1440.
http://dx.doi.org/10.1145/2739482.2764640

From Chapter 9:

8. S. Mirjalili, A. Lewis, Multi-objective Optimization of Marine Propellers, Procedia Computer Science 51 (2015): 2247-2256.
http://dx.doi.org/10.1016/j.procs.2015.05.504

9. S. Mirjalili, T. Rawlings, J. Hettenhausen, A. Lewis, A comparison of multi-objective optimisation metaheuristics on the 2D airfoil design problem, ANZIAM Journal 54 (2013): C345-C360.
http://dx.doi.org/10.0000/anziamj.v54i0.6154

Copyright and permission notice:

Reprinted from Information Sciences, Vol. 317, Seyedali Mirjalili, Andrew Lewis,
and Sanaz Mostaghim, Confidence measure: A novel metric for robust meta-
heuristic optimisation algorithms, Pages No. 114-142, Copyright © 2015, with
permission from Elsevier.

Reprinted from Information Sciences, Vol. 300, Seyedali Mirjalili and Andrew
Lewis, Novel frameworks for creating robust multi-objective benchmark prob-
lems, Pages No. 158-192, Copyright © 2015, with permission from Elsevier.

Reprinted from Information Sciences, Vol. 328, Seyedali Mirjalili and Andrew
Lewis, Obstacles and difficulties for robust benchmark problems: A novel penalty-
based robust optimisation method, Pages No. 485-509, Copyright © 2016, with
permission from Elsevier.

Reprinted from Applied Soft Computing, Vol. 35, Seyedali Mirjalili and Andrew
Lewis, Hindrances for robust multi-objective test problems, Pages No. 333-348,
Copyright © 2015, with permission from Elsevier.

Reprinted from Swarm and Evolutionary Computation, Vol. 21, Seyedali Mir-
jalili and Andrew Lewis, Novel performance metrics for robust multi-objective
optimization algorithms, Pages No. 1-23, Copyright © 2015, with permission
from Elsevier.

Structural and Multidisciplinary Optimization, Shifted robust multi-objective test problems, Vol. 52, Pages No. 217-226, Seyedali Mirjalili, Copyright © 2015 Springer-Verlag Berlin Heidelberg, with permission of Springer.
Contents

1 Introduction 1
1.1 Problem Background . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Problem Statement and Objectives . . . . . . . . . . . . . . . . . 6
1.3 Scope and Significance . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Organisation of the thesis . . . . . . . . . . . . . . . . . . . . . . 8

2 Related work 11
2.1 Evolutionary single-objective optimisation . . . . . . . . . . . . . 13
2.2 Evolutionary Multi-objective optimisation . . . . . . . . . . . . . 21
2.3 Robust single-objective optimisation . . . . . . . . . . . . . . . . 29
2.3.1 Preliminaries and definitions . . . . . . . . . . . . . . . . . 30
2.3.2 Expectation measure . . . . . . . . . . . . . . . . . . . . . 33
2.3.3 Variance measure . . . . . . . . . . . . . . . . . . . . . . . 35
2.4 Robust multi-objective optimisation . . . . . . . . . . . . . . . . . 38
2.4.1 Preliminaries and definitions . . . . . . . . . . . . . . . . . 38
2.4.2 Current expectation and variance measures . . . . . . . . . 44
2.5 Benchmark problems . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.5.1 Benchmark problems for single-objective robust optimisation 54
2.5.2 Benchmark problems for multi-objective robust optimisation 56
2.6 Performance metrics . . . . . . . . . . . . . . . . . . . . . . . . . 60
2.6.1 Convergence performance indicators: . . . . . . . . . . . . 62
2.6.1.1 Generational Distance (GD): . . . . . . . . . . . 62
2.6.1.2 Inverted Generational Distance (IGD): . . . . . . 62
2.6.1.3 Delta Measure: . . . . . . . . . . . . . . . . . . . 63
2.6.1.4 Hypervolume metric: . . . . . . . . . . . . . . . . 63
2.6.1.5 Inverse hypervolume metric: . . . . . . . . . . . . 63
2.6.2 Coverage performance indicators: . . . . . . . . . . . . . . 63
2.6.2.1 Spacing (SP): . . . . . . . . . . . . . . . . . . . 63
2.6.2.2 Radial coverage metric: . . . . . . . . . . . . . . 64
2.6.2.3 Maximum Spread (M ): . . . . . . . . . . . . . . 64
2.6.3 Success performance indicators: . . . . . . . . . . . . . . . 65
2.6.3.1 Error Ratio (ER): . . . . . . . . . . . . . . . . . 65
2.6.3.2 Success counting (SCC): . . . . . . . . . . . . . . 65


2.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

3 Analysis 68
3.1 Benchmark problems . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.2 Performance metrics . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.3 Robust algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.4 Systematic robust optimisation algorithm design process . . . . . 74
3.5 Objectives and plan . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.6 Contributions and scope . . . . . . . . . . . . . . . . . . . . . . . 77
3.7 Significance of Study . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

4 Benchmark problems 84
4.1 Benchmarks for robust single-objective optimisation . . . . . . . . 86
4.1.1 Framework I . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.1.2 Framework II . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.1.3 Framework III . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.1.4 Obstacles and difficulties for single-objective robust bench-
mark problems . . . . . . . . . . . . . . . . . . . . . . . . 93
4.1.4.1 Desired number of variables . . . . . . . . . . . . 93
4.1.4.2 Biased search space . . . . . . . . . . . . . . . . . 95
4.1.4.3 Deceptive search space . . . . . . . . . . . . . . . 96
4.1.4.4 Multi-modal search space . . . . . . . . . . . . . 97
4.1.4.5 Flat search space . . . . . . . . . . . . . . . . . . 99
4.2 Benchmarks for robust multi-objective optimisation . . . . . . . . 101
4.2.1 Framework 1 . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.2.2 Framework 2 . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.2.3 Framework 3 . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.2.4 Hindrances for robust multi-objective test problems . . . . 111
4.2.4.1 Biased search space . . . . . . . . . . . . . . . . . 112
4.2.4.2 Deceptive search space . . . . . . . . . . . . . . . 113
4.2.4.3 Multi-modal search space . . . . . . . . . . . . . 115
4.2.4.4 Flat (non-improving) search space . . . . . . . . 118
4.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

5 Performance measures 122


5.1 Robust coverage measure (Φ) . . . . . . . . . . . . . . . . . . . . 124
5.2 Robust success ratio (Γ) . . . . . . . . . . . . . . . . . . . . . . . 128
5.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

6 Improving robust optimisation techniques 136


6.1 Confidence measure . . . . . . . . . . . . . . . . . . . . . . . . . . 137
6.2 Confidence-based robust optimisation . . . . . . . . . . . . . . . . 140

6.2.1 Confidence-based relational operators . . . . . . . . . . . . 140


6.2.2 Confidence-based Particle Swarm Optimisation . . . . . . 141
6.2.3 Confidence-based Robust Genetic Algorithms . . . . . . . 144
6.3 Confidence-based robust multi-objective optimisation . . . . . . . 145
6.3.1 Confidence-based Pareto optimality . . . . . . . . . . . . . 145
6.3.2 Confidence-based Robust Multi-Objective Particle Swarm
Optimisation . . . . . . . . . . . . . . . . . . . . . . . . . 148
6.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

7 Confidence-based robust optimisation 150


7.1 Behaviour of CRPSO on benchmark problems . . . . . . . . . . . 150
7.2 Comparative Results for Confidence-based Robust PSO . . . . . . 155
7.3 Comparative Results for Confidence-based Robust GA . . . . . . 160
7.4 Comparison of CRPSO and CRGA . . . . . . . . . . . . . . . . . 166
7.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167

8 Confidence-based robust multi-objective optimisation 168


8.1 Behaviour of CRMOPSO on benchmark problems . . . . . . . . . 168
8.2 Discussion of results . . . . . . . . . . . . . . . . . . . . . . . . . 170
8.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183

9 Real world applications 185


9.1 Marine Propeller design and related works . . . . . . . . . . . . . 185
9.1.1 Propeller design . . . . . . . . . . . . . . . . . . . . . . . . 185
9.1.2 Related work . . . . . . . . . . . . . . . . . . . . . . . . . 188
9.2 Results and discussion . . . . . . . . . . . . . . . . . . . . . . . . 189
9.2.1 Approximating the Pareto Front using the algorithm . . . 190
9.2.2 Number of blades . . . . . . . . . . . . . . . . . . . . . . . 191
9.2.3 Revolutions Per Minute (RPM) . . . . . . . . . . . . . . . 191
9.2.4 Post analysis of the results . . . . . . . . . . . . . . . . . . 194
9.2.5 Effects of uncertainties in operating conditions on the ob-
jectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
9.2.6 Uncertainties in the structural parameters . . . . . . . . . 196
9.3 Confidence-based Robust optimisation of marine propellers . . . . 197
9.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200

10 Conclusion 202
10.1 Summary and conclusions . . . . . . . . . . . . . . . . . . . . . . 202
10.2 Achievements and significance . . . . . . . . . . . . . . . . . . . . 207
10.3 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210

Index 210

Bibliography 210

Appendices 229

A Single-objective robust test functions 230


A.1 Test functions in the literature . . . . . . . . . . . . . . . . . . . 230
A.1.1 TP1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
A.1.2 TP2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
A.1.3 TP3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
A.1.4 TP4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
A.1.5 TP5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
A.1.6 TP6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
A.1.7 TP7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
A.1.8 TP8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
A.1.9 TP9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
A.2 Test functions generated by the proposed framework I . . . . . . . 235
A.2.1 TP10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
A.3 Proposed biased test functions . . . . . . . . . . . . . . . . . . . 235
A.3.1 TP11 - biased 1 . . . . . . . . . . . . . . . . . . . . . . . . 235
A.3.2 TP12 - biased 2 . . . . . . . . . . . . . . . . . . . . . . . . 236
A.4 Proposed deceptive test functions . . . . . . . . . . . . . . . . . . 237
A.4.1 TP13 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
A.4.2 TP14 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
A.4.3 TP15 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
A.5 Proposed multi-modal robust test functions . . . . . . . . . . . . 238
A.5.1 TP16 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
A.5.2 TP17 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
A.6 Proposed flat robust test function . . . . . . . . . . . . . . . . . . 239
A.6.1 TP18 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
A.7 Test functions generated by the proposed frameworks II and III . 240
A.7.1 TP19 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
A.7.2 TP20 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240

B Multi-objective robust test functions 242


B.1 Deb’s test functions . . . . . . . . . . . . . . . . . . . . . . . . . . 242
B.1.1 RMTP1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
B.1.2 RMTP2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
B.1.3 RMTP3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
B.1.4 RMTP4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
B.1.5 RMTP5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
B.2 Gaspar Cunha’s functions . . . . . . . . . . . . . . . . . . . . . . 245
B.2.1 RMTP6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
B.2.2 RMTP7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
B.2.3 RMTP8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
B.2.4 RMTP9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246

B.2.5 RMTP10 . . . . . . . . . . . . . . . . . . . . . . . . . . . 247


B.3 Test functions generated by the proposed frameworks 1, 2, and 3 . 247
B.3.1 RMTP11 . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
B.3.2 RMTP12 . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
B.3.3 RMTP13 . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
B.3.4 RMTP14 . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
B.3.5 RMTP15 . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
B.3.6 RMTP16 . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
B.3.7 RMTP17 . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
B.3.8 RMTP18 . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
B.3.9 RMTP19 . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
B.3.10 RMTP20 . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
B.3.11 RMTP21 . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
B.3.12 RMTP22 . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
B.3.13 RMTP23 . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
B.3.14 RMTP24 . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
B.3.15 RMTP25 . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
B.4 Extended version of current test functions . . . . . . . . . . . . . 250
B.4.1 RMTP26 . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
B.4.2 RMTP27 . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
B.4.3 RMTP28 . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
B.4.4 RMTP29 . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
B.4.5 RMTP30 . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
B.4.6 RMTP31 . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
B.4.7 RMTP32 . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
B.4.8 RMTP33 . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
B.4.9 RMTP34 . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
B.4.10 RMTP35 . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
B.5 Proposed deceptive test functions . . . . . . . . . . . . . . . . . . 256
B.5.1 RMTP36 . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
B.5.2 RMTP37 . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
B.5.3 RMTP38 . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
B.6 Proposed multi-modal robust test functions . . . . . . . . . . . . 256
B.6.1 RMTP39 . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
B.6.2 RMTP40 . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
B.6.3 RMTP41 . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
B.7 Proposed flat robust test functions . . . . . . . . . . . . . . . . . 257
B.7.1 RMTP42 . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
B.7.2 RMTP43 . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
B.7.3 RMTP44 . . . . . . . . . . . . . . . . . . . . . . . . . . . 258

C Complete results 259


C.1 Robust Pareto optimal fronts obtained by CRMOPSO, IRMOPSO,
and ERMOPSO . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
List of Figures

1.1 Organisation of the thesis (Purple: literature review and related


works, Red: analysis of the literature and current gaps, Green:
proposed systematic robust algorithm design process, Blue: re-
sults on the test beds and real case study, and Orange: conclusion
and future works) . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.1 Different components of an optimisation system: inputs, outputs,


operating conditions, and constraints . . . . . . . . . . . . . . . . 14
2.2 Example of a search landscape with two variables and several
constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3 Stochastic population-based optimisers consider the system as
black box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4 Individual-based versus population-based stochastic optimisation
algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.5 Pareto dominance . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.6 Pareto optimal set versus Pareto optimal front . . . . . . . . . . . 24
2.7 A priori method versus a posteriori methods [42] . . . . . . . . . 26
2.8 Different categories of uncertainties and their effects on a system:
Type A, Type B, and Type C . . . . . . . . . . . . . . . . . . . . 31
2.9 Conceptual model of a robust optimum versus a non-robust op-
timum. The same perturbation level (δ) in the parameter (p)
results in different changes (∆ and ∆′) in the objective (f ) . . . . 32
2.10 Search space of an expectation measure versus its objective function 34
2.11 Conceptual model of infeasible regions when employing a variance
measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.12 Concepts of robustness and a robust solution in multi-objective
search space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.13 Four possible robust Pareto optimal fronts with respect to the
main Pareto optimal front . . . . . . . . . . . . . . . . . . . . . . 42
2.14 Collected current test functions in the literature for robust single-
objective optimisation. The details can be found in Appendix
A. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
2.15 Test problems proposed by Deb and Gupta in 2006 [44] . . . . . . 57


2.16 Test problem proposed by Gaspar-Cunha et al. in 2013 . . . . . . 58

3.1 An example of the failure of archive-based methods in distin-


guishing robust and non-robust solutions. Note that the variances
shown are the actual variances, not those detected by the sampling. 74
3.2 Test functions and performance metrics are essential for system-
atic algorithm design . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.3 Gaps targeted by the thesis . . . . . . . . . . . . . . . . . . . . . 78
3.4 Scope and contributions of the thesis . . . . . . . . . . . . . . . . 79

4.1 Proposed function with adjustable local optima robustness pa-


rameter. The parameter α changes the landscape significantly
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.2 Effect of α on the robustness of the global optimum . . . . . . 88
4.3 Shape of the search landscape with controlling parameters con-
structed by framework II . . . . . . . . . . . . . . . . . . . . . . . 90
4.4 Effect of parameter λ on the shape of search landscape . . . . . . 91
4.5 An example of the search space that can be constructed by the
framework III . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4.6 Search space of the function G(~x) . . . . . . . . . . . . . . . . . . 94
4.7 Search space becomes larger proportional to N and without any
change in the main search landscape . . . . . . . . . . . . . . . . 94
4.8 Density of solutions when θ < 1, θ = 1, θ > 1 (p = 0) . . . . . . . 95
4.9 Conversion of an un-biased search space to a biased search space . 96
4.10 50,000 randomly generated solutions reveal there is low density
toward the robust optimum in the biased test function, while the
density is uniform in the un-biased test function . . . . . . . . . . 97
4.11 Proposed deceptive robust test problem . . . . . . . . . . . . . . . 98
4.12 Proposed multi-modal robust test function (M = 10) . . . . . . . 99
4.13 Proposed flat robust test function . . . . . . . . . . . . . . . . . . 100
4.14 Search space and objective space constructed by proposed frame-
work 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
4.15 Effect of α on the robustness of the global Pareto optimal front’s
valley . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
4.16 Shape of parameter space and Pareto optimal fronts when: β =
0.5, β = 1, and β = 1.5. Note that the red curve indicates the
robustness of the robust front and black curves are the fronts. . . 104
4.17 Changing the shape of global and robust Pareto optimal front with β 105
4.18 Shape of the parameter space and its relation with the objective
space constructed by the framework 2 . . . . . . . . . . . . . . . . 107
4.19 Effect of λ on both parameter and objective spaces. Note that the
red curve indicates the robustness of the robust front and black
curves are the fronts. . . . . . . . . . . . . . . . . . . . . . . . . . 108

4.20 Effect of γ on the fronts. Note that the red curve indicates the
robustness of the robust front and black curves are the fronts. . . 109
4.21 Effect of ζ on the parameter and objective spaces. Note that the
red curve indicates the robustness of the robust front and black
curves are the fronts. . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.22 Parameter and objective spaces constructed by the third frame-
work. The red curve indicates the robustness of the robust front
and black curves are the fronts. . . . . . . . . . . . . . . . . . . . 111
4.23 A non-biased objective space versus a biased objective space (50,000
random solutions). The proposed bias function requires the ran-
dom points to cluster away from the Pareto optimal front. . . . . 112
4.24 Bias of the search space is increased inversely proportional to ψ . 113
4.25 There are four deceptive non-robust optima and one robust opti-
mum in the function H(x) . . . . . . . . . . . . . . . . . . . . . . 114
4.26 Different shapes of Pareto fronts that can be obtained by manip-
ulating β . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.27 H(x) creates one robust and 2M global Pareto optimal fronts . . 117
4.28 Parameter space and objective space of the proposed multi-modal
robust multi-objective test problem . . . . . . . . . . . . . . . . . 117
4.29 Different shapes of Pareto fronts that can be obtained by manip-
ulating β . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
4.30 H(x) makes two global optima close to the boundaries . . . . . . 119
4.31 H(x) makes two global optima close to the boundaries . . . . . . 120

5.1 Schematic of the proposed coverage measure (Φ) . . . . . . . . . . 124


5.2 Effect of the number of occupied robust segments on the proposed
coverage measure . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
5.3 Zero effect of occupied non-robust segments on the proposed cov-
erage measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
5.4 Segments that are partially robust do not count when calculating Φ . . 127
5.5 The accuracy of the proposed coverage measure is increased pro-
portional to the number of segments . . . . . . . . . . . . . . . . 127
5.6 Effect of the minimum robustness on the number of robust seg-
ments and Φ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
5.7 All segments are converted to robust and counted when Rmin >
max (R(robustness curve)) . . . . . . . . . . . . . . . . . . . . . . 128
5.8 Conceptual model of the proposed success ratio measure . . . . . 129
5.9 An example of a probable problem in case of using diagonal seg-
ments when calculating Γ . . . . . . . . . . . . . . . . . . . . . . 130
5.10 Success ratio is zero if there is no robust solution in the set of
solutions obtained . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
5.11 Example of the success ratio for a set that contains only robust
solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

5.12 Success ratio increases proportional to the number of robust so-


lutions obtained . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.13 Success ratio is inversely proportional to the number of non-robust
solutions obtained . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.14 Effect of minimum robustness on success ratio when Rmin <
min (R(robustness curve)) . . . . . . . . . . . . . . . . . . . . . . 133
5.15 Effect of minimum robustness on success ratio when Rmin >
max (R(robustness curve)) . . . . . . . . . . . . . . . . . . . . . . 134

6.1 Confidence measure considers the number, distribution, and dis-


tance of sampled points from the current solution . . . . . . . . . . 138
6.2 Flow chart of the general framework of the proposed confidence-
based robust optimisation . . . . . . . . . . . . . . . . . . . . . . 142

7.1 Behaviour of CRPSO1 finding the robust optima of TP1, TP2,


TP3, TP4, and TP5 . . . . . . . . . . . . . . . . . . . . . . . . . 152
7.2 Behaviour of CRPSO1 finding the robust optima of TP6, TP7,
TP8, TP9, and TP10 . . . . . . . . . . . . . . . . . . . . . . . . . 153
7.3 Search history of GA and IRGA. GA converges towards the global
non-robust optimum, while IRGA failed to determine the robust
optimum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
7.4 Search history of CRGA1 and CRGA2. The exploration of CRGA2
is much less than that of CRGA1 . . . . . . . . . . . . . . . . . . 165

8.1 Robust fronts obtained for RMTP1, RMTP7, RMTP9, and RMTP27,
one test case per row. . . . . . . . . . . . . . . . . . . . . . . . . 170
8.2 Robust fronts obtained for RMTP13 to RMTP16 and RMTP19,
one test case per row. Note that the dominated (local) front is
robust and considered as reference for the performance measures. 179
8.3 Robust fronts obtained for RMTP21 to RMTP25 one test case per
row. Note that the worst front is the most robust and considered
as reference for the performance measures. . . . . . . . . . . . . 181

9.1 Airfoils along the blade define the shape of the propeller (NACA
a = 0.8 meanline and NACA 65A010 thickness) . . . . . . . . . . 188
9.2 Propeller used as the case study . . . . . . . . . . . . . . . . . . . 189
9.3 (left) Pareto optimal front obtained by the MOPSO algorithm (6
blades), (right) Pareto optimal fronts for different numbers of blades . . 190
9.4 (left) Best Pareto optimal fronts obtained for different RPM (right)
Optimal RPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
9.5 PF obtained when varying RPM compared to PFs obtained with
different RPM values . . . . . . . . . . . . . . . . . . . . . . . . . 193
9.6 Optimal RPM coordinates . . . . . . . . . . . . . . . . . . . . . . 194

9.7 Pareto optimal solutions in case of (left) δRPM = +1 and (right)
δRPM = −1 fluctuations in RPM. Original values are
shown in blue, perturbed results in red. . . . . . . . . . . . . . . . 195
9.8 Pareto optimal solutions in case of (left) δ = +1.5% (right) δ =
−1.5% perturbations in parameters. Original values are shown in
blue, perturbed results in red. . . . . . . . . . . . . . . . . . . . . 196
9.9 Robust front obtained by CRMOPSO versus global front obtained
by MOPSO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
9.10 Global and robust Pareto optimal fronts obtained by MOPSO and
CRMOPSO when RPM is also a variable . . . . . . . . . . . . . . 199
9.11 Optimal and robust optimal values for RPM (note that there are
98 robust Pareto optimal solutions and 100 global optimal solutions) . . 200

10.1 Gaps filled by the thesis . . . . . . . . . . . . . . . . . . . . . . . 203


10.2 Contributions of the thesis . . . . . . . . . . . . . . . . . . . . . . 207

C.1 Robust fronts obtained for RMTP1 to RMTP5. . . . . . . . . . . 260


C.2 Robust fronts obtained for RMTP6 to RMTP10. . . . . . . . . . . 261
C.3 Robust fronts obtained for RMTP11 to RMTP15. Note that the
dominated (local) front is robust and considered as reference for
the performance measures. . . . . . . . . . . . . . . . . . . . . . . 262
C.4 Robust fronts obtained for RMTP16 to RMTP19. Note that the
dominated (local) front is robust and considered as reference for
the performance measures. . . . . . . . . . . . . . . . . . . . . . . 263
C.5 Robust fronts obtained for RMTP20 to RMTP22. Note that the
worst front is the most robust and considered as reference for the
performance measures. . . . . . . . . . . . . . . . . . . . . . . . . 264
C.6 Robust fronts obtained for RMTP23 to RMTP25. Note that the
worst front is the most robust and considered as reference for the
performance measures. . . . . . . . . . . . . . . . . . . . . . . . . 265
C.7 Robust fronts obtained for RMTP26 and RMTP27. . . . . . . . . 265
C.8 Robust fronts obtained for RMTP28 and RMTP32. . . . . . . . . 266
C.9 Robust fronts obtained for RMTP33 and RMTP38. Note that in
RMTP36, RMTP37, and RMTP38, the worst front is the most
robust and considered as the reference for the performance measures.267
C.10 Robust fronts obtained for RMTP39 and RMTP44. . . . . . . . . 268
List of Tables

7.1 Number of times that the confidence operators and confidence


measure triggered . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
7.2 Statistical results of the RPSO algorithms over 30 independent
runs: (ave ± std(median)) . . . . . . . . . . . . . . . . . . . . . . 157
7.3 Results of Wilcoxon ranksum test for RPSO algorithms . . . . . . 158
7.4 Statistical results of the RGA algorithms over 30 independent
runs: (ave ± std(median)) . . . . . . . . . . . . . . . . . . . . . . 161
7.5 Results of Wilcoxon ranksum test for RGA algorithms . . . . . . 162
7.6 Number of times that CRGA2 makes confident and risky decisions
over 100 generations . . . . . . . . . . . . . . . . . . . . . . . . . 164

8.1 Statistical results of RMOPSO algorithms using IGD . . . . . . . 171


8.2 P-values of Wilcoxon ranksum test for the RMOPSO algorithms
in Table 8.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
8.3 Statistical results of RMOPSO algorithms using Φ . . . . . . . . . 173
8.4 P-values of Wilcoxon ranksum test for the RMOPSO algorithms
in Table 8.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
8.5 Statistical results of RMOPSO algorithms using Γ . . . . . . . . . 175
8.6 P-values of Wilcoxon ranksum test for the RMOPSO algorithms
in Table 8.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
8.7 Number of times that the proposed confidence-based Pareto dom-
inance prevented a solution entering the archive . . . . . . . . . . 183

9.1 Fuel consumption discrepancy in case of perturbation in all of the


structural parameters for both PS obtained by MOPSO and RPS
obtained by CRMOPSO . . . . . . . . . . . . . . . . . . . . . . . 198
9.2 Fuel consumption discrepancy in case of perturbation in RPM for
both PS obtained by MOPSO and RPS obtained by CRMOPSO . 200

Nomenclature

Γ Robust success ratio

Φ Robust coverage measure

ACO Ant Colony Optimisation

BWA Bang-Bang Weighted Aggregation

C Confidence Measure

CFD Computational Fluid Dynamics

CRGA Confidence-based Robust Genetic Algorithm

CRMO Confidence-based Robust Multi-objective Optimisation

CRMOPSO Confidence-based Robust Multi-Objective Particle Swarm Opti-


misation

CRO Chemical Reaction Optimisation

CRO Confidence-based Robust Optimisation

CRPSO Confidence-based Robust Particle Swarm Optimisation

DE Differential Evolution

DMOO Dynamic Multi-Objective Optimisation

DWA Dynamic Weighted Aggregation

EA Evolutionary Algorithm

EMA Evolutionary Multi-objective Algorithm

EMOO Evolutionary Multi-Objective Optimisation

EP Evolutionary Programming

ER Error Ratio


ERGA Explicit Averaging Robust Genetic Algorithm

ERMOPSO Explicit Averaging Robust Multi-Objective Particle Swarm Op-


timisation

ERPO Explicit Averaging Robust Particle Swarm Optimisation

ES Evolution Strategy

GA Genetic Algorithm

gBest Global Best

GD Generational Distance

GSA Gravitational Search Algorithm

IGD Inverted Generational Distance

IMOO Interactive Multi-Objective Optimisation

IRGA Implicit Averaging Robust Genetic Algorithm

IRMOPSO Implicit Averaging Robust Multi-Objective Particle Swarm Op-


timisation

IRPO Implicit Averaging Robust Particle Swarm Optimisation

LHS Latin Hypercube Sampling

MOEA/D Multi-Objective Evolutionary Algorithm based on Decomposition

MOPSO Multi-Objective Particle Swarm Optimisation

N/A Not Applicable

NACA National Advisory Committee for Aeronautics

NSGA Non-dominated Sorting Genetic Algorithm

O Big O

PAES Pareto Archived Evolution Strategy

pBest Personal Best

PDE Pareto-frontier Differential Evolution

PF Pareto Optimal Front



PS Pareto Optimal Set

PSO Particle Swarm Optimisation

R Robustness Measure

RMOO Robust Multi-Objective Optimisation

RMTP Robust Multi-objective Test Problem

RNSGA Robust Non-dominated Sorting Genetic Algorithm

RPF Robust Pareto Optimal Front

RPM Revolutions Per Minute

RPS Robust Pareto Optimal Set

RPSGA Reduced Pareto Set Genetic Algorithm

SA Simulated Annealing

SCC Success Counting

SP Spacing

SPEA Strength-Pareto Evolutionary Algorithm

TP Test Problem

TS Tabu Search

ZDT Zitzler-Deb-Thiele
Chapter 1

Introduction

In the past, the computational engineering design process used to be mostly ex-
perimentally based [103]. This meant that a real system first had to be designed
and constructed to be able to do experiments. In other words, the design model
was an actual physical model. For instance, an actual airplane or prototype
would have to be put in a massive wind tunnel to investigate the aerodynamics of
the aircraft [1]. Obviously, the process of design was very tedious, expensive,
and slow.
After the development of computers, engineers started to simulate models in
computers to compute and investigate different aspects of real systems. This was
a revolutionary idea since there was no need for an actual model in the design
phase anymore. Another advantage of modelling problems in computers was the
reduced time and cost. It was no longer necessary to build a wind tunnel and
real model to compute and investigate the aerodynamics of an aircraft. The next
step was to investigate not only the known characteristics of the problem but also
explore and discover new features. Exploring the search space of the simulated
model in a computer allowed designers to better understand the problem and find
optimal values for design parameters. Despite the use of computers in modelling,
a designer still had to manipulate the parameters of the problem manually.
After the first two steps, people started to develop and utilise computa-
tional/optimisation algorithms to use the computer itself to find optimal solu-
tions of the simulated model for a given problem. Thus, the computer manipu-
lated and chose the parameters with minimum human involvement. This was the
birth of automated and computer-aided design fields. Evolutionary Algorithms
(EA) also became popular tools in finding the optimal solutions for optimisation problems.
Generally speaking, EAs mostly have very similar frameworks. They first
start the optimisation process by creating an initial set of random, trial solutions
for a given problem. This random set is then iteratively evaluated by the objective
function(s) of the problem and evolved to minimise or maximise the objective(s).
Although this framework is very simple, optimisation of real world problems
requires considering and addressing several issues of which the most important
ones are: local optima, expensive computational cost of function evaluations,
constraints, multiple objectives, and uncertainties.
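As an illustrative sketch only (not the specific GA, PSO, or MOPSO variants developed later in this thesis; the population size, iteration budget, and the toy keep-and-mutate step are assumptions standing in for real selection, crossover, mutation, or velocity updates), the general framework described above can be written as follows.

```python
import random

def evolutionary_optimise(objective, lower, upper, dim, pop_size=30, iterations=100):
    """Generic EA skeleton: random initial population, iterative evaluation,
    and evolution towards better objective values (minimisation assumed)."""
    # 1) create an initial set of random trial solutions
    population = [[random.uniform(lower, upper) for _ in range(dim)]
                  for _ in range(pop_size)]
    for _ in range(iterations):
        # 2) evaluate every candidate with the objective function
        fitness = [objective(x) for x in population]
        # 3) evolve: keep the better half and mutate copies of it
        #    (a placeholder for GA crossover/mutation or PSO velocity updates)
        ranked = [x for _, x in sorted(zip(fitness, population))]
        survivors = ranked[:pop_size // 2]
        children = [[min(max(v + random.gauss(0, 0.1 * (upper - lower)), lower), upper)
                     for v in x] for x in survivors]
        population = survivors + children
    return min(population, key=objective)

# usage: minimise the sphere function in five dimensions
best = evolutionary_optimise(lambda x: sum(v * v for v in x), -5.0, 5.0, dim=5)
```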
Real problems have mostly unknown search spaces that may contain many
sub-optimal solutions. Stagnation in local optima is a very common phenomenon
when using EAs. In this case, the algorithm is trapped in one of the local solu-
tions and assumes it to be the global solution. Although the stochastic operators
of EAs improve the local optima avoidance ability compared to deterministic
mathematical optimisation approaches, local optima stagnation may occur in
any EA as well.
EAs are also mostly population-based paradigms. This means they iteratively
evaluate and improve a set of solutions instead of a single solution. Although
this improves the local optima avoidance as well, solving expensive problems with
EAs is sometimes not feasible due to the need for a large number of function
evaluations. In this case, different mechanisms should be designed to decrease
the required number of function evaluations. Constraints are another difficulty
of real problems, in which the search space may be divided into two regions: fea-
sible and infeasible. The search agents of EAs should be equipped with suitable
mechanisms to avoid all the infeasible regions and explore the feasible areas to
find the feasible global optimum. Handling constraints requires specific mecha-
nisms and has been a popular topic among researchers.
Real engineering problems often also have multiple objectives. Optimisation
in a multi-objective search space is quite different and needs special considera-
tions compared to a single-objective search space. In a single-objective problem,
there is only one objective function to be optimised and only one global solution
to be found. However, in multi-objective problems there is no longer a single so-
lution for the problem, and a set of solutions representing the trade-offs between
the multiple objectives, the Pareto optimal set, must be found.
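For concreteness, a minimal sketch of the Pareto-dominance test that underlies this trade-off set, assuming every objective is to be minimised (the helper names are illustrative and not taken from the thesis):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Filter a list of objective vectors down to its non-dominated subset."""
    return [a for a in points if not any(dominates(b, a) for b in points if b is not a)]

# usage: (3.0, 3.0) is dominated by (2.0, 2.0) and is removed
print(non_dominated([(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]))
```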
Last but not least, another key concept in the optimisation of real engineer-
ing problems is robustness. Robust optimisation refers to the process of finding
optimal solutions for a particular problem that have the least variability in response to probable uncertainties. Uncertainties are unavoidable in the real world and can be classified into three categories: those affecting parameters, operating con-
ditions, and outputs.
One of the most common uncertainties is perturbation of parameters, in
which the design parameters may vary. Such uncertainties mostly occur dur-
ing the manufacturing process due to the resolution and imprecision of devices.
These inaccuracies always exist, so a design that does not consider them is prone
to be unreliable. An example is the length of a bar in a truss. If the length goes
over a certain threshold due to manufacturing perturbations, it may cause col-
lapse of the entire structure. Since parameters are primary inputs of a problem,
this type of uncertainty is of the highest importance.
Uncertainties may also occur in the secondary inputs of a system: the
operating conditions. An example is the change in fuel consumption of a car
when varying speed. There is an optimal speed for a car to have the least fuel
consumption: the car consumes more fuel if it goes slower or faster. Of course,
the primary parameters and uncertainties in the design of the car play the key
role in fuel consumption. However, varying speed also has significant impact on
the consumption. This type of uncertainty has been a major cause of airplane crashes throughout history, for example due to icing and wind.
The last type of uncertainty occurs in the outputs themselves. Such uncertainties usually stem from the approximate, simulated models in the computer and are unavoidable. Systems with time-varying outputs (dynamic systems) also fall
into this category. Once again, failure to consider such perturbation during the
design process may result in failure of the entire system to produce the desired
output.
Although uncertainties are small perturbations in different components of a
system, they usually have substantial impacts on the desired outputs. Without
considering uncertainties during the design process, a system has the potential
to show undesirable outputs. This is critical for systems on which humans rely.
For instance, a small perturbation in the shape of an aircraft’s wing, operating
conditions, or simulated model may result in a crash. Uncertainties are undesir-
able inputs that always exist when the system is operating in a real environment.
Therefore, it is essential to consider and handle them using robust techniques
during the design process to avoid or minimise their negative consequences on
the output(s) of the entire system. Robust optimisation is essential when solving
real problems since failure to consider uncertainty can eclipse all the efforts a
team put into designing and implementing a system. In addition, it may waste substantial amounts of stakeholders' money and time.
Similar to conventional optimisation, robust optimisation in a single-objective
search space is also different from that in a multi-objective search space. In a
single-objective search space, there is one robust solution with the best perfor-
mance and least variations. In a multi-objective search space, however, robust
optimal solutions belong to a set of optimal solutions called the robust Pareto
optimal set representing the robust trade-offs between the multiple objectives.
There are several works in the literature that employed multi-objective meta-
heuristics to perform robust optimisation [27, 88, 136, 44]. In 2006, Deb and
Gupta [44] investigated two different approaches for robust optimisation in
multi-objective search spaces: expectation-based and variance-based methods.
In the former method an expectation measure, which is calculated by averaging
a representative set of neighbouring solutions, is optimised instead of the main
objective function. In the latter method, however, the main objective functions
are optimised with an additional constraint (variance measure) which limits the
optimisation process in terms of the robustness of search agents.
In contrast to other branches of multi-objective optimisation, unfortunately, robust multi-objective optimisation has not gained the attention it deserves [81, 80]. As evidence, a publication survey covering 1994 to 2014 was conducted in ISI Web of Knowledge with the keywords “multi-objective optimisation” and “robust optimisation”. It was found that only about 0.5% of publications on these topics over that period contained both keywords. Among current works, a considerable number of studies have focused on robustness in single-objective search spaces (e.g. [176]). However, there are fewer works on the investigation
of robustness in multi-objective search spaces [69, 75].

1.1 Problem Background


In the literature, robust meta-heuristic optimisation is performed with a wide
range of robust measures [93], of which the most well-regarded are expec-
tation and variance. Generally speaking, these measures are used for observing
the behaviour in objective space in the neighbourhood of a particular solution
to confirm robustness.
Robust optimisation using expectation measures was named Type I robust
optimisation by Deb and Gupta [44]. In this kind of optimisation, the objective
function(s) are replaced by the expectation measure(s). Then, the expectation
measure(s) are optimised. Technically speaking, a finite set of H solutions are
chosen randomly or with a structured procedure in the hypervolume ([−δ, δ])
around the solution ~x, and then the expectation measure(s) of all samples are
optimised by heuristic optimisation algorithms.
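For concreteness, one common form of such an expectation (mean effective) measure can be written as below; this is a sketch consistent with the description above rather than the exact definition adopted later in the thesis:

\[
f^{\mathrm{eff}}(\vec{x}) = \frac{1}{H}\sum_{i=1}^{H} f\!\left(\vec{x} + \vec{\delta}_i\right), \qquad \vec{\delta}_i \in [-\delta, \delta]^n,
\]

where the \(\vec{\delta}_i\) are the \(H\) random or structured samples taken in the neighbourhood of \(\vec{x}\), \(n\) is the number of variables, and \(f^{\mathrm{eff}}\) replaces \(f\) as the function to be optimised.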
Since the proposal of this kind of optimisation, several researchers proposed
different expectation measures and tried to perform robust optimisation [74, 12,
13, 168, 94, 159]. Another way of robust optimisation using expectation mea-
sures is to consider them as separate objective functions [135]. In this case
the Pareto front finally obtained would indicate trade-offs between objective(s)
and robustness. The production of a wide range of solutions with different degrees of robustness for decision makers could thus be considered an advantage of this method. As a drawback, however, considering expectation measures as
additional objectives increases the computational complexity of a problem, as
discussed by Brockhoff et al. [24, 23].
The second method of robust optimisation, using a variance measure, does
not replace the main objective functions. An additional constraint is added to the
problem in order to handle uncertainties. This constraint controls the variance
of objectives of solutions in the objective space based on the local perturba-
tions around the solution in the parameter space. A set of random solutions
is generated around the solutions in the parameter space and the variance of
their corresponding objective values is limited by a pre-defined threshold. Viola-
tion of this constraint assists us in distinguishing between a robust solution and
a non-robust solution. Deb and Gupta named this method Type II robustness
handling [44]. There are also other, different variance measures proposed in the
literature [69, 74].
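As a hedged illustration in the spirit of this Type II formulation [44] (the symbols \(f^{\,p}\) and \(\eta\) are used here for the neighbourhood-averaged objective and the user-defined threshold, and the normalised deviation below stands in for the variance-style measure; the notation may differ from that used later in the thesis), the constrained problem can be written as:

\[
\text{minimise } f(\vec{x}) \quad \text{subject to} \quad \frac{\left\lVert f^{\,p}(\vec{x}) - f(\vec{x}) \right\rVert}{\left\lVert f(\vec{x}) \right\rVert} \le \eta,
\]

where \(f^{\,p}(\vec{x})\) is computed from the set of random solutions generated around \(\vec{x}\) in the parameter space, and solutions that violate the constraint are treated as non-robust.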
Robust optimisation using expectation measures and variance measures are
both able to consider uncertainties during optimisation and prevent an algorithm
from finding solutions that are sensitive to possible perturbations in real envi-
ronments. The question here is how effective each of these methods is and whether
both of them are worthy of further improvements. Robust optimisation using
an expectation measure (Type I) benefits from the separation of the robustness
measure from the algorithm’s structure. This method directly integrates the
robustness measure in the objective function, so there is no need to modify the
structure of an algorithm or employ specific operators to handle uncertainties.
However, this method changes the shape of the search landscape and can increase the computational time considerably, as a single function evaluation is usually replaced by several.
The variance measures (Type II) do not change the shape of the search space,
which can be considered as an advantage because the optimiser searches the
actual search space. However, they add another difficulty to the search space,
namely infeasible regions. This means that an algorithm should be equipped
with a constraint handling technique to be able to work with a variance measure.
A variance measure has the potential to increase the extent of infeasible regions, which definitely needs special consideration. A search space with a large
infeasible portion is very likely to result in having many infeasible search agents
in each iteration. The problem here is that, by default, most meta-
heuristics discard infeasible solutions and only rely on feasible solutions to drive
the search agents towards optimal solution(s). Therefore, a powerful constraint
handling method should be utilised when solving such problems.
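As a minimal sketch of one generic way of folding such a robustness constraint into an unconstrained metaheuristic (a simple penalty approach, not the constraint handling or the confidence-based method developed later in this thesis; delta, H, eta, and the penalty weight are assumed, illustrative parameters):

```python
import random

def penalised_objective(f, x, delta=0.05, H=10, eta=0.1, penalty=1e6):
    """Keep optimising f(x) directly, but add a large penalty whenever the
    robustness constraint (deviation of the neighbourhood mean from f(x))
    exceeds the threshold eta."""
    fx = f(x)
    # H random samples in the [-delta, delta] neighbourhood of x
    neighbours = [[xi + random.uniform(-delta, delta) for xi in x] for _ in range(H)]
    mean_fp = sum(f(p) for p in neighbours) / H
    # relative violation of the robustness constraint (zero when satisfied)
    violation = max(0.0, abs(mean_fp - fx) / (abs(fx) + 1e-12) - eta)
    return fx + penalty * violation
```

Any unconstrained single-objective optimiser could then minimise penalised_objective in place of f, at the cost of the extra H evaluations per candidate, which is exactly the kind of computational burden this thesis sets out to reduce.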

1.2 Problem Statement and Objectives


The disadvantages of type I and type II robust optimisation methods are related
to the additional difficulty that needs to be considered as well as the large number of function evaluations required. However, they are able to find robust solutions for a
given problem, subject to creating enough sampled points or additional function
evaluations. Both types of measure quantify the robustness of solutions, which
is fruitful when comparing the solutions during optimisation. No matter how
expensive this is, these measures can confirm the robustness of solutions. With
this confirmation, an algorithm can then reliably favour robust solutions and
discard non-robust solutions to drive the whole optimisation process towards
better robust solutions.
In order to alleviate the disadvantages of both methods, several steps need
to be taken systematically. All of these steps are essential and will be discussed
in detail in the following chapters. Obviously, a solution to the problem of
both methods starts with hypotheses. However, any idea needs to be tested,
evaluated, compared, and verified to reliably and confidently prove that it is
beneficial. This process is also carried out in other branches of optimisation, including global optimisation, dynamic optimisation, and interactive optimisation.
Despite the advances in all of the above-mentioned branches and the importance
of considering uncertainties during optimisation of real problems, there is signifi-
cant room for further improvements in the area of robust optimisation, especially
combined with multi-objective approaches. There are few works in this field as
I will show in the next chapter. Finding optimal solutions that are less sensitive
to perturbations needs a systematic design approach. Therefore:

A highly systematic robust optimisation algorithm design process is essential to


design reliable robust algorithms.

There is a need for a systematic design process in the field of robust optimi-
sation to more effectively and conveniently alleviate the drawbacks of the current robust
optimisation techniques and/or propose new ones.
Consequently, the aim of this study is to investigate and fill one of the most
substantial current gaps in robust heuristic optimisation techniques, with em-
phasis on development of tools that could assist in real-world applications. The
main research question can be stated as follows:

How can a systematic design process be established to systematically test,


evaluate, and propose computationally cheap robust optimisation techniques?

Associated research questions, objectives, and plan are discussed in detail in


Chapter 3.

1.3 Scope and Significance


The presence of a systematic design process will allow designers in the field of
robust optimisation to reliably and confidently test, evaluate, and propose new
algorithms or improve the current ones.
The first phase of a systematic design process will provide a test environment for designers. A set of suitable test beds is essential in any kind of experimental study and can be used to benchmark different ideas. I will only concentrate on
8 1. Introduction

unconstrained single- and multi-objective optimisation in this phase since they


are the main foundation of most of research branches in this field.
After the first phase, a systematic design process needs to evaluate the performance
of a given idea in order to compare it with others. Qualitative and quantitative
evaluation criteria are very important in this phase because they show
how, and by how much, one algorithm is better than another. Since quantitative
performance evaluators are more accurate than qualitative ones, I will contribute to
the quantitative evaluators in the second phase. In addition, the focus will be on
multi-objective performance indicators due to the importance and complexity of
performance evaluation in multi-objective optimisation.
In the third phase of the systematic design process, a series of ideas is proposed
to alleviate the drawbacks of the current robust optimisation techniques.
The contributions are in both robust single- and multi-objective optimisation.
A set of algorithms will be proposed that are reliable and do not
need additional function evaluations, which makes them highly suitable for solving
expensive real-world problems. All of the methods are formulated for unconstrained
problems, but they can readily be applied to constrained problems as well.

1.4 Organisation of the thesis


The organisation of the thesis is illustrated in Fig. 1.1. It may be seen in
this figure that Chapter 2 investigates the state-of-the-art of heuristic optimi-
sation algorithms, multi-objective optimisation techniques, robust optimisation
in single-objective search spaces, robust optimisation in multi-objective search
spaces, current single- and multi-objective robust test functions, current per-
formance metrics, explicit and implicit methods, and relevant criticism of the
current works. The state-of-the-art is analysed and current gaps are identified in
detail in Chapter 3. Chapter 4 proposes the first phase of the systematic design
process for testing robust single- and multi-objective optimisation techniques.
Chapter 5 proposes the second phase of the systematic optimisation process for
evaluating and comparing the performance of robust multi-objective optimisa-
tion algorithms. The confidence measure, confidence-based relational operators,
confidence-based robust optimisation, confidence-based Particle Swarm Opti-
misation (PSO), confidence-based Genetic Algorithms (GA), confidence-based
Pareto optimality, confidence-based robust multi-objective optimisation, and
confidence-based Multi-Objective Particle Swarm Optimisation (MOPSO) are
proposed and discussed theoretically in Chapter 6 as the last phase of the sys-
tematic robust algorithm design process. The results of the confidence-based
single-objective robust optimisation techniques on the robust single objective
test functions are presented and discussed in Chapter 7. Chapter 8 demonstrates
and analyses the results of confidence-based multi-objective robust optimisation
techniques on the robust multi-objective test functions. The real application
of the proposed confidence-based robust optimisation perspective is presented
and discussed in Chapter 9. Finally, Chapter 10 concludes the thesis, describes
the achievements of the work, and suggests several research directions for future
studies.

[Figure 1.1 appears here as a flow diagram of the thesis structure: Chapter 2 (related works: single-objective, multi-objective, robust, and robust multi-objective optimisation), Chapter 3 (analysis), Chapter 4 (benchmark problems: robust single- and multi-objective test functions), Chapter 5 (performance measures: coverage measure and success ratio), Chapter 6 (improving robust optimisation techniques: the confidence measure at the core, CRPSO, CRGA, and CRMOPSO), Chapters 7 and 8 (results of confidence-based robust single- and multi-objective optimisation), Chapter 9 (real application: multi-objective and robust multi-objective optimisation of marine propellers), and Chapter 10 (conclusion, achievements and significance, and future works).]

Figure 1.1: Organisation of the thesis (Purple: literature review and related
works, Red: analysis of the literature and current gaps, Green: proposed sys-
tematic robust algorithm design process, Blue: results on the test beds and real
case study, and Orange: conclusion and future works)
Chapter 2

Related work

In recent years meta-heuristic algorithms have been used as primary techniques


for obtaining the estimated optimal solutions of real engineering design opti-
misation problems [16, 17, 79]. Such algorithms mostly benefit from stochastic
operators [15] that make them distinct from deterministic approaches. A deter-
ministic algorithm [36, 6, 109] reliably determines the same answer for a given
problem with the same initial starting point. However, this behaviour results
in local optima entrapment, which can be considered as a disadvantage for de-
terministic optimisation techniques [150]. Local optima stagnation refers to the
entrapment of an algorithm in local solutions and consequent failure in finding
the true global optimum. Since real problems may have a large number of lo-
cal solutions, deterministic algorithms lose their reliability in finding the global
optimum.
Stochastic optimisation (meta-heuristic) algorithms [152] refer to the family
of algorithms with stochastic operators including evolutionary algorithms [7].
Randomness is the main characteristic of stochastic algorithms [91]. This means
that they utilise random operators when looking for global optima in search
spaces. Although the randomised nature of such techniques might make them
unreliable in obtaining the same solution in each run, they are able to avoid
local solutions much more easily than deterministic algorithms. The stochastic
behaviour also results in obtaining different solutions for a given problem in each
run [101].
Meta-heuristic and evolutionary algorithms search for the global optimum
in a search space by creating one or more random solutions for a given prob-
lem [156]. This set is called the set of candidate solutions. The set of candidates


is then improved iteratively until satisfaction of a terminating condition. The


improvement can be considered as finding a more accurate approximation of
the global optimum than the initial random guesses. This mechanism brings
evolutionary algorithms several intrinsic advantages: problem independence,
derivative independence, local optima avoidance, and simplicity.
Problem and derivative independence originate from treating the problem
as a black box. Evolutionary algorithms only utilise the objective
function for evaluating the set of candidate solutions. The main process of
optimisation is independent of the problem and based on the inputs provided
and outputs received. Therefore, the nature of the problem is not a concern, yet
the representation is the key step when utilising evolutionary algorithms. This
is the same reason why evolutionary algorithms do not need to find derivatives
of functions in the problem to obtain its estimated global optimum.
As another advantage, local optima avoidance is high due to the stochastic
nature of evolutionary algorithms. If an evolutionary algorithm is trapped in
a local optimum, stochastic operators lead to random changes in the solution
and eventual escape from the local optimum. Although there is no guarantee
of resolving this issue, stochastic algorithms have much higher probability to
escape from local optima compared to deterministic methods. Very accurate
approximation of the global optimum also is not guaranteed, but running an
evolutionary algorithm several times increases the probability of obtaining a
better solution.
Lastly, simplicity is another characteristic of evolutionary algorithms. Nat-
ural evolutionary or collective behaviours are the main inspirations for the ma-
jority of algorithms in this field. While exhibiting sophisticated behaviour, their
basic mechanics are often inherently quite simple. In addition, evolutionary al-
gorithms follow a general and common framework, in which a set of randomly
created solutions is enhanced or evolved iteratively. What makes algorithms
different in this field is the method of improving this set.
Some of the most popular algorithms in this field are: Genetic Algorithms
(GA) [89, 90], Particle Swarm Optimisation (PSO) [60], Ant Colony Optimisa-
tion (ACO) [35], Differential Evolution (DE) [154], Evolutionary Programming
(EP) [70, 175], and Evolution Strategy (ES) [139, 138]. Although these algo-
rithms are able to solve many real and challenging problems, the so-called No
Free Lunch theorem [169] allows researchers to propose new algorithms. Ac-
cording to this theorem, all algorithms perform equally when averaged across all
possible optimisation problems. Therefore, one algorithm can be very effective
in solving one set of problems and not effective on a different set of problems.
This is the foundation of many works in this field.
Despite the popularity and simplicity of evolutionary algorithms, optimi-
sation using these techniques requires several considerations and has its own
challenges. There are also different types of optimisation in this field, of which
the most important ones are single-objective, multi-objective, unconstrained,
constrained, dynamic, robust, and interactive optimisation. Single-objective op-
timisation is the simplest and most fundamental form of the optimisation pro-
cess. It deals with varying parameters, seeking to satisfy an objective. As such,
this kind of optimisation is the foundation for consideration of new, generally
applicable methods and ideas. This thesis concentrates on development of effec-
tive robust optimisation and as a starting point, must consider single-objective
approaches.
In the real world, most optimisation problems have multiple, often compet-
ing objectives. For the methods proposed to be useful, widely applicable and
effective, they must take into consideration this multi-objective nature of the
majority of problems to be addressed. Multi-objective optimisation deals with
extending and developing approaches to solve these kinds of problems. So, for the
contributions of this thesis to be broadly applicable, methods must be developed
and tested for single-objective optimisation and multi-objective optimisation.
This chapter reviews the literature of single-objective optimisation, multi-
objective optimisation, robust single-objective optimisation, and robust multi-
objective optimisation. Due to the scope of the thesis, a large portion of this
chapter covers robust optimisation.

2.1 Evolutionary single-objective optimisation


This section first covers the preliminaries and definitions of optimisation. The
mechanisms and challenges of stochastic/heuristic optimisation techniques are
then discussed.
As its name implies, single-objective optimisation deals with optimising only
one objective. Handling multiple objectives requires special considerations and
mechanisms [179] and will be discussed in the next section.

In addition to the objective, other elements involved in the single-objective


optimisation process are parameters and constraints. Parameters are the vari-
ables of optimisation problems (systems) that have to be optimised. As Fig. 2.1
shows, variables can also be considered as inputs of systems and constraints are
the limitations applied to the system. In fact, the constraints define the feasibil-
ity of the obtained objective value. Examples of constraints are stress constraints
when designing aerodynamic systems or the range of variables.
[Figure 2.1 appears here: a block diagram of a system with variables (inputs), an objective (output), operating/environmental conditions, and constraints defining feasibility.]

Figure 2.1: Different components of an optimisation system: inputs, outputs, operating conditions, and constraints

Other inputs of a system that may affect its output are operating (environ-
mental) conditions. Such inputs are considered as secondary inputs that are
defined when a system is operating in the simulated/final environment. Exam-
ples of such conditions are: temperature/density of fluid when a propeller is
turning or the angle of attack when an aircraft is flying. These types of inputs
are not optimised by the optimisers but definitely have to be considered during
optimisation since they may have significant impacts on the outputs.
Without loss of generality, a single-objective optimisation can be formulated
as a minimisation problem as follows:

Minimise: f(x_1, x_2, x_3, \ldots, x_{n-1}, x_n)   (2.1)

Subject to: g_i(x_1, x_2, x_3, \ldots, x_{n-1}, x_n) \ge 0,  i = 1, 2, \ldots, m   (2.2)

h_i(x_1, x_2, x_3, \ldots, x_{n-1}, x_n) = 0,  i = 1, 2, \ldots, p   (2.3)

lb_i \le x_i \le ub_i,  i = 1, 2, \ldots, n   (2.4)

where n is the number of variables, m indicates the number of inequality constraints, p shows the number of equality constraints, lb_i is the lower bound of the i-th variable, and ub_i is the upper bound of the i-th variable.
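For concreteness, a small hypothetical instance of this formulation (invented here purely for illustration; it is not one of the benchmark problems used later in the thesis) could be:

\begin{aligned}
\text{Minimise}: \quad & f(x_1, x_2) = (x_1 - 1)^2 + (x_2 - 2)^2 \\
\text{Subject to}: \quad & g_1(x_1, x_2) = x_1 + x_2 - 1 \ge 0 \\
& h_1(x_1, x_2) = x_1 - x_2 = 0 \\
& 0 \le x_1 \le 5, \quad 0 \le x_2 \le 5
\end{aligned}

Here n = 2, m = 1, and p = 1, and the feasible region is the part of the box [0, 5]^2 lying on the line x_1 = x_2 with x_1 + x_2 \ge 1.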
As can be seen in Equations 2.2 and 2.3, there are two types of constraints:
inequality and equality. The set of variables and constraints construct a search
space, and the objectives define the search landscape for a given problem. Un-
fortunately, it is usually impossible to draw the search space due to the high-
dimensionality of the variables. However, an example of a search space con-
structed by two variables and several constraints is shown in Fig. 2.2.

Figure 2.2: Example of a search landscape with two variables and several con-
straints

It may be observed in Fig. 2.2 that the search space can have multiple local
optima, but one of them is the global optimum (or some of them in case of a
flat landscape). The constraints create gaps in the search space and occasionally
split it into various separated regions. In the literature, infeasible regions refer to
the areas of the search space that violate constraints. The search space of a real

problem can be very challenging. Some of the difficulties of the real search spaces
are discontinuity, a massive number of local optima, high infeasibility, global
optimum located on the boundaries of constraints, deceptive valleys toward local
optima, and isolation of the global optimum.
When formulating a problem, an optimiser would be able to tune its variables
based on the outputs and constraints. As mentioned in the introduction of this
chapter, one of the advantages of evolutionary algorithms is that they consider a
system as a black box. Fig. 2.3 shows that the optimisers only provide the system
with variables and observe the outputs. The optimisers then iteratively and
stochastically change the inputs of the system based on the feedback (output)
obtained so far until satisfaction of an end criterion. The process of changing
the variables based on the history of outputs is defined by the mechanism of
an algorithm. For instance, PSO saves the best solutions obtained so far and
encourages new solutions to relocate around them.
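As a minimal illustration of this black-box view (a sketch only, with an invented toy system; it is not the interface used elsewhere in this thesis), the following Python fragment treats the system as a function that maps a vector of variables to an objective value and a feasibility flag, and stochastically tunes the inputs based only on that feedback:

import random

def system(x):
    # Hypothetical black-box system: the optimiser only sees the returned
    # objective value and feasibility flag, never the internal model.
    objective = sum((xi - 1.0) ** 2 for xi in x)
    feasible = sum(x) >= 1.0          # an assumed constraint
    return objective, feasible

def random_search(n_vars, lb, ub, iterations=1000):
    # Start from a random guess and iteratively perturb the best inputs found
    # so far, using only the feedback (output) received from the black box.
    best_x = [random.uniform(lb, ub) for _ in range(n_vars)]
    best_f, best_ok = system(best_x)
    for _ in range(iterations):
        candidate = [min(ub, max(lb, xi + random.gauss(0, 0.1))) for xi in best_x]
        f, ok = system(candidate)
        if ok and (not best_ok or f < best_f):   # keep feasible improvements
            best_x, best_f, best_ok = candidate, f, ok
    return best_x, best_f

print(random_search(n_vars=2, lb=0.0, ub=5.0))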

[Figure 2.3 appears here: the optimiser feeds variables to the system (black box) and observes the objective and feasibility, with operating/environmental conditions acting on the system.]

Figure 2.3: Stochastic population-based optimisers consider the system as a black box

A general classification of the algorithms in this field is based on the number


of candidate solutions that are improved during optimisation. An algorithm may
start and perform the optimisation process by single or multiple random solu-
tions. In the former case the optimisation process begins with a single random

solution, and it is iteratively improved over the iterations. In the latter case, a
set of solutions (more than one) is created and improved during optimisation.
These two families are called individual-based and population-based algorithms
and illustrated in Fig. 2.4.

[Figure 2.4 appears here with two panels: (a) individual-based stochastic optimisation and (b) population-based stochastic optimisation.]

Figure 2.4: Individual-based versus population-based stochastic optimisation algorithms

There are several advantages and disadvantages for each of these families.
Individual-based algorithms need less computational cost and fewer function evaluations
but can suffer from premature convergence. Premature convergence refers
to the stagnation of an optimisation technique in local optima, which prevents
it from converging towards the global optimum. Fig. 2.4 shows that the single
candidate solution becomes entrapped in a local optimum which is very close
to the global optimum. In contrast, population-based algorithms have a greater
ability to avoid local optima since a set of solutions is involved during opti-
misation. Fig. 2.4 illustrates how the collection of candidate solutions results in
finding the global optimum. In addition, information can be exchanged between
the candidate solutions and assist them to overcome the above-mentioned diffi-
culties of search spaces. However, high computational cost and the need for more
function evaluations are two major drawbacks of population-based algorithms.
The well-known algorithms in the individual-based family are: Tabu Search
(TS) [70, 77, 78], hill climbing [41], Iterated Local Search (ILS) [117], and Sim-

ulated Annealing (SA) [102, 25]. TS is an improved local search technique that
utilises short-term, intermediate-term, and long-term memories to ban and trun-
cate unpromising/repeated solutions. Hill climbing is also another local search
and individual-based technique that starts optimisation from a single solution.
This algorithm then iteratively attempts to improve the solution by changing its
variables. ILS is an improved hill climbing algorithm to decrease the probability
of entrapment in local optima. In this algorithm, the optimum obtained at the
end of each run is retained and considered as the starting point in the next iter-
ation. Initially, the SA algorithm tends to accept worse solutions with a probability
controlled by a variable called the cooling factor. This assists SA in exploring
the search space and prevents it from becoming trapped in the local optima it
visits.
Although different improvements of individual-based algorithms promote lo-
cal optima avoidance, the literature shows that population-based algorithms
are better in handling and alleviating this problem. Regardless of the differ-
ences between population-based algorithms, the common characteristic is the
separation of the optimisation process into two conflicting goals: exploration
versus exploitation [62]. Exploration encourages candidate solutions to change
abruptly and stochastically. This mechanism improves the diversity of solutions
and causes greater exploration of the search space. In PSO, for instance, the
inertia weight maintains the tendency of particles toward their previous direc-
tions and emphasises exploration. In GA, a high probability of cross-over causes
more combination of individuals and is the main mechanism for exploration.
In contrast, exploitation aims at improving the quality of solutions by lo-
cally searching around the promising solutions obtained in the exploration. In
exploitation, candidate solutions are obliged to change less suddenly and search
locally. In PSO, for instance, a low inertia weight causes low exploration and
a higher tendency toward the best personal/global solutions obtained. There-
fore, the particles converge toward best points instead of churning around the
search space. The mechanism that brings GA exploitation is mutation. Muta-
tion causes slight random changes in the individuals and local search around the
candidate solutions.
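To make the role of the inertia weight concrete, a minimal sketch of the standard PSO velocity and position update is given below (the coefficient values are assumptions chosen for illustration); a large w preserves the previous direction and favours exploration, whereas a small w pulls the particle towards its personal and global best positions and favours exploitation:

import random

def pso_update(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # One velocity/position update for a single particle (all lists of floats).
    new_v = [w * vi
             + c1 * random.random() * (pb - xi)   # cognitive pull towards pbest
             + c2 * random.random() * (gb - xi)   # social pull towards gbest
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v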
Exploration and exploitation are two conflicting goals where promoting one
generally results in degrading the other [4]. A correct balance between these
two goals can guarantee a very accurate approximation of the global optimum

using population-based algorithms. On the one hand, mere exploration of the


search space prevents an algorithm from finding an accurate approximation of
the global optimum. On the other hand, mere exploitation results in local optima
stagnation and low quality of the approximated optimum. In addition, due to the
unknown shape of the search landscape of optimisation problems, there is
no clear and accurate timing for the transition between these two goals. Therefore,
population-based algorithms balance exploration and exploitation to firstly find
a rough approximation of the global optimum, and then improve its accuracy.
The general framework of population-based algorithms is almost identical.
The first step is to generate a set of random initial solutions \vec{X} = \{\vec{X}_1, \vec{X}_2, \ldots, \vec{X}_n\}.
Each of these solutions is considered as a candidate solution for a given
problem, assessed by the objective function, and assigned an objective value
\vec{O} = \{O_1, O_2, \ldots, O_n\}. The algorithm then combines/moves/updates the can-
didate solutions based on their fitness values with the hope of improving them.
The solutions created are again assessed by the objective function and assigned
their relevant fitness values. This process is iterated until satisfaction of an end
condition. At the end of this process, the best solution obtained is reported as
the best approximation for the global optimum.
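The following Python sketch (illustrative only; the Gaussian perturbation stands in for the algorithm-specific operators, and all names are my own) captures this common framework: create a random population, evaluate it, improve it iteratively, and report the best solution found:

import random

def evolve(objective, n_vars, lb, ub, pop_size=30, iterations=100):
    # Generic population-based framework: only the update step differs
    # between GA, PSO, DE, and similar algorithms.
    population = [[random.uniform(lb, ub) for _ in range(n_vars)]
                  for _ in range(pop_size)]
    fitness = [objective(ind) for ind in population]
    for _ in range(iterations):
        # Algorithm-specific step: here a simple Gaussian perturbation of each
        # individual stands in for crossover/mutation/velocity updates.
        offspring = [[min(ub, max(lb, xi + random.gauss(0, 0.1))) for xi in ind]
                     for ind in population]
        off_fitness = [objective(ind) for ind in offspring]
        # Greedy replacement: keep whichever of parent/offspring is better.
        for i in range(pop_size):
            if off_fitness[i] < fitness[i]:
                population[i], fitness[i] = offspring[i], off_fitness[i]
    best = min(range(pop_size), key=lambda i: fitness[i])
    return population[best], fitness[best]

# Example: minimise the sphere function in five dimensions.
print(evolve(lambda x: sum(xi ** 2 for xi in x), n_vars=5, lb=-5.0, ub=5.0))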
Recently, many population-based algorithms have been proposed. They can
be classified into three main categories based on the source of inspiration: evo-
lution, physical phenomena, or swarm behaviour. Evolutionary algorithms are those that mimic the
evolutionary processes in nature. Some of the most popular proposed evolution-
ary algorithms are GA [90, 89], DE [154], ES [139, 138], and EP [70, 175].
There are also many swarm-based algorithms. Two of the most popular
ones are ACO [35] and PSO [60]. The third class of algorithms is inspired
from physical phenomena in nature. The most recent algorithms in this cat-
egory are: Gravitational Search Algorithm (GSA) [134] and Chemical Reac-
tion Optimisation (CRO) [108]. In addition to the above-mentioned algorithms,
there are also other population-based algorithms with different sources of inspi-
ration [16, 17, 79].
As the above paragraphs show, there are many algorithms in this field, which
indicates the popularity of these techniques in the literature. If we consider the
hybrid, multi-objective, discrete, and constrained methods, the number of pub-
lications will increase dramatically. The reputation of these algorithms is due to
several reasons. Firstly, simplicity is the main advantage of the population-based
algorithms. The majority of algorithms in this field follow a simple framework


and have been inspired from simple concepts. Secondly, these algorithms con-
sider problems as black boxes, so they do not need derivative information of the
search space in contrast to mathematical optimisation algorithms. Thirdly, local
optima avoidance of population-based stochastic optimisation algorithms is very
high, making them suitable for practical applications. Lastly, population-based
algorithms are highly flexible, meaning that they are readily applicable for solv-
ing different optimisation problems without structural modifications. In fact,
the problem representation becomes more important than the optimiser when
using population-based algorithms. The application of these algorithms can be
found in science and industry as well [39, 30].

Despite the merits of these optimisers, there is a fundamental question here


as to whether there is any optimiser for solving all optimisation problems. Accord-
ing to the NFL theorem [169], there is no algorithm for solving all optimisation
problems. This means that an optimiser may perform well in a set of prob-
lems and fail to solve a different set of problems. In other words, the average
performance of optimisers is equal when considering all optimisation problems.
Therefore, there are still problems that can be solved by new optimisers better
than the current optimisers.

Although meta-heuristics are very efficient, optimisation of real problems is


not as easy as just applying algorithms and involves many difficulties that should
be considered: expensive computational cost of function evaluations, constraints,
multiple objectives, and uncertainties. Since most of the population-based algo-
rithms search for a single solution that maximises or minimises an objective, they
cannot directly be used for solving problems with multiple objectives.

However, most real problems have more than one objective to be optimised.
A single-objective algorithm can be applied to such problems, but multiple ob-
jectives must first be aggregated to a single objective, and then a single objective
optimisation algorithm needs to be run multiple times to find the best trade-offs
between objectives. Another drawback of this method is that it cannot solve all
types of multi-objective problems as will be discussed and explained in detail in
the following sections.

2.2 Evolutionary multi-objective optimisation

There are different challenges in solving real engineering problems, which need
specific tools to handle them. One of the most important characteristics of real
problems is multi-objectivity. A problem is called multi-objective if there is
more than one objective to be optimised. There are two common approaches for
handling multiple objectives: a priori versus a posteriori [119, 22].
The former class of optimisers combines the objectives of a multi-objective
problem into a single objective using a set of weights (provided by decision makers),
which defines the importance of each objective, and employs a single-objective
optimiser to solve it. The unary-objective nature of the combined search spaces
allows finding a single solution as the optimum. In contrast, a posteriori meth-
ods maintain the multi-objective formulation of the multi-objective problems,
allowing exploration of the behaviour of the problems across a range of design
parameters and operating conditions compared to a priori approaches [42]. In
this case, decision makers will eventually choose one of the solutions obtained
based on their needs. There is also another way of handling multiple objectives
called the progressive method, in which decision makers’ preferences about the
objectives are considered during optimisation [21].
In contrast to single-objective optimisation, there is typically no single best
solution when multiple objectives are considered. In this case, a set of optimal
solutions, which represents various trade-
offs between the objectives, is the “solution” of a multi-objective problem [31].
Before 1984, mathematical multi-objective optimisation techniques were popu-
lar among researchers in different fields of study such as applied mathematics,
operation research, and computer science. Since the majority of the conven-
tional approaches (including deterministic methods) suffered from stagnation in
local optima, however, such techniques were not as widely applicable as they are
nowadays.
In 1984, a revolutionary idea was proposed by David Schaffer [32]. He intro-
duced the concept of multi-objective optimisation in stochastic (including evolu-
tionary and heuristic) optimisation techniques. Since then, a significant amount
of research has been dedicated to developing and evaluating multi-objective
evolutionary/heuristic algorithms. The advantages of stochastic optimisation
techniques such as their gradient-free mechanism and local optima avoidance

made them readily applicable to real problems as well. Nowadays, the applica-
tion of multi-objective optimisation techniques can be found in many different
fields of studies: e.g. mechanical engineering [100], civil engineering [118], chem-
istry [133], and other fields [30].
Without loss of generality, multi-objective optimisation can be formulated as
a minimisation problem as follows:

Minimise: F(\vec{x}) = \{f_1(\vec{x}), f_2(\vec{x}), \ldots, f_o(\vec{x})\}   (2.5)

Subject to: g_i(\vec{x}) \ge 0,  i = 1, 2, \ldots, m   (2.6)

h_i(\vec{x}) = 0,  i = 1, 2, \ldots, p   (2.7)

lb_i \le x_i \le ub_i,  i = 1, 2, \ldots, n   (2.8)

where o is the number of objective functions, m is the number of inequality


constraints, p is the number of equality constraints, g shows the inequality con-
straints, h indicates the equality constraints, and [lb_i, ub_i] are the boundaries of
the i-th variable.
In single-objective optimisation, solutions can be compared easily due to the
unary objective function. For minimisation problems, solution x is better than
y if and only if f(x) < f(y). However, the solutions in a multi-objective space cannot
be compared by the inequality relational operators due to multiple comparison
metrics. In this case, a solution is better than (dominates) another solution if
and only if it shows better or equal objective value on all of the objectives and
provides a better value in at least one of the objective functions. The concepts
of comparison of two solutions in multi-objective problems were first proposed
by Francis Ysidro Edgeworth [61] and then extended by Vilfredo Pareto [130]. Without loss of
generality, the mathematical definition of Pareto dominance for a minimisation
problem is as follows [29]

Definition 2.2.1 (Pareto Dominance): Suppose that there are two vectors \vec{x} = (x_1, x_2, \ldots, x_k) and \vec{y} = (y_1, y_2, \ldots, y_k).
Vector \vec{x} dominates vector \vec{y} (denoted as \vec{x} \prec \vec{y}) iff:

\forall i \in \{1, 2, \ldots, o\}: f_i(\vec{x}) \le f_i(\vec{y}) \ \wedge \ \exists i \in \{1, 2, \ldots, o\}: f_i(\vec{x}) < f_i(\vec{y})

Fig. 2.5 illustrates the concept of Pareto dominance.

[Figure 2.5 appears here: a two-objective space (f1 and f2, both minimised) with non-dominated solutions shown as circles and dominated solutions shown as squares.]

Figure 2.5: Pareto dominance

It can be seen in this figure that the circles dominate some of the other solutions (squares) since they
show lesser values on both of the objectives. However, a circle shows a lesser
value on one objective and greater value on another, compared to other circles,
meaning that it cannot dominate them. The definition of Pareto optimality is
as follows [125]

Definition 2.2.2 (Pareto Optimality): A solution \vec{x} \in X is called Pareto-optimal iff:

\nexists \vec{y} \in X : \vec{y} \prec \vec{x}

Definition 2.2.3 (Pareto optimal set): The set of all Pareto-optimal solutions:

PS := \{\vec{x} \in X \mid \nexists \vec{y} \in X : \vec{y} \prec \vec{x}\}

A set containing the corresponding objective values of the Pareto optimal solutions in the Pareto optimal set is called the Pareto optimal front. The definition of the Pareto optimal front is as follows:

Definition 2.2.4 (Pareto optimal front): A set containing the values of the objective functions for the Pareto optimal set:

PF := \{f_i(\vec{x}) \mid \vec{x} \in PS, \ \forall i \in \{1, 2, \ldots, o\}\}
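As a small illustration of Definitions 2.2.1 and 2.2.3 (a minimal sketch in Python; the helper names are mine and not notation used elsewhere in this thesis), Pareto dominance and the extraction of the non-dominated subset of a finite set of objective vectors can be coded as follows:

def dominates(fx, fy):
    # True if objective vector fx Pareto-dominates fy (minimisation):
    # no worse in every objective and strictly better in at least one.
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

def non_dominated(front):
    # Return the subset of objective vectors not dominated by any other member.
    return [f for f in front
            if not any(dominates(g, f) for g in front if g is not f)]

# Example with three bi-objective solutions: (1, 5) and (3, 2) are mutually
# non-dominated, while (4, 6) is dominated by both.
print(non_dominated([(1, 5), (3, 2), (4, 6)]))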

The Pareto optimal set and Pareto optimal front of Fig. 2.5 are shown in
Fig. 2.6.

[Figure 2.6 appears here with two panels: the Pareto optimal set in the parameter space (x1, x2) and the corresponding Pareto optimal front in the objective space (f1, f2, both minimised).]

Figure 2.6: Pareto optimal set versus Pareto optimal front

The ultimate goal of multi-objective optimisation algorithms (via a posteriori


methods) is to find a very accurate approximation of the true Pareto optimal
solutions with the highest diversity. This allows decision makers to have a di-
verse range of design options. As mentioned above, in the past, the solution
of multi-objective problems would have been undertaken by a priori aggrega-
tion of objectives into a single objective. However, this method has two main
drawbacks [38, 99, 121]:

• In order to find the Pareto optimal front, different weights with a proper
distribution should be employed. However, an even distribution of the
weights does not necessarily guarantee finding Pareto optimal solutions
with an even distribution.

• This method is not able to find the non-convex regions of the Pareto op-
timal front because negative weights are not allowed and the sum of all

the weights should be constant. In other words, the convex sum of the
objectives is usually used in conventional aggregation methods.

There are some works in the literature that have tried to improve this method. For
example, Parsopoulos and Vrahatis used two dynamic weighted aggregation schemes [131]:

• Dynamic Weighted Aggregation (DWA): In this method, the weights are


changed gradually over the course of iterations.

• Bang-Bang Weighted Aggregation (BWA): The weights are abruptly changed


as the iteration index increases.
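As a rough sketch of how such an aggregation can be implemented (the sinusoidal weight schedule below is a common choice in the literature and is assumed here for illustration; it is not necessarily the exact scheme of [131]), a two-objective problem is collapsed into a single scalar whose weights drift over the iterations:

import math

def dwa_weights(t, period=200):
    # Dynamic Weighted Aggregation: w1 oscillates slowly between 0 and 1,
    # so the single-objective search sweeps across different trade-offs.
    w1 = abs(math.sin(2.0 * math.pi * t / period))
    return w1, 1.0 - w1

def aggregated_objective(f1, f2, x, t):
    w1, w2 = dwa_weights(t)
    return w1 * f1(x) + w2 * f2(x)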

It should be noted here that Chebyshev decomposition solves the non-convex


issue and is used in many modern decomposition-based approaches. However,
aggregation methods still need to be run many times to approximate the whole
set of Pareto optimal solutions because there is only one best solution obtained
in each run. This method is illustrated in Fig. 2.7 (left). According to Deb [42],
the multi-objective optimisation process utilising meta-heuristics deals with over-
coming many difficulties such as infeasible areas, local fronts, diversity of solu-
tions, and isolation of the optimum. It can be seen in Fig. 2.7 (left) that an
a priori method should deal with all these difficulties in each run. However,
maintaining the multi-objective formulation of problems brings some advantages.
First, information about the search space is exchanged between the search agents,
as illustrated in Fig. 2.7 (right). It can be seen that the information exchanges
bring quick movement towards the true Pareto optimal front.
Second, the multi-objective approaches assist in approximating the whole
true Pareto optimal front in a single run. Finally, maintaining the multi-objective
formulation of a problem allows the exploration of the behaviour of the problems
across a range of design parameters and operating conditions, but requires the
use of more complex meta-heuristics and a need to address conflicting objectives.
In general, the majority of the most well-known heuristic algorithms have been
extended to solve multi-objective problems. In the following paragraphs the most
popular and recent ones are briefly presented.
Early years of multi-objective stochastic optimisation saw conversion of differ-
ent single-objective optimisation techniques to multi-objective algorithms. Some
of the most well-known multi-objective stochastic optimisation techniques proposed so far are:

• Strength-Pareto Evolutionary Algorithm (SPEA) [180, 183]


[Figure 2.7 appears here with two panels comparing the two methods on a bi-objective problem (f1 and f2, both maximised) containing a local front, an infeasible area, and the true Pareto front: the a priori panel shows separate initial start points in each run, while the a posteriori panel shows an initial population whose information exchange gives quick movement towards the Pareto optimal solutions.]

Figure 2.7: A priori method versus a posteriori methods [42]

• Non-dominated Sorting Genetic Algorithm [153]

• Non-dominated sorting Genetic Algorithm version 2 (NSGA-II) [45]

• Multi-Objective Particle Swarm Optimisation (MOPSO) [28]

• Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) [177]

• Pareto Archived Evolution Strategy (PAES) [104]

• Pareto-frontier Differential Evolution (PDE) [2]

The literature shows that the most popular multi-objective meta-heuristic is


Non-dominated Sorting GA (NSGA-II) [45], which is a multi-objective version of
the well-known GA algorithm [89, 82]. This algorithm was proposed to alleviate
the three problems of the first version [153]. These problems are: high compu-
tational cost of non-dominated sorting, lack of elitism, and the need to specify
a sharing parameter for niching. In order to alleviate the aforemen-
tioned problems, NSGA-II utilises a fast non-dominated sorting technique, an
elite-keeping technique, and a new niching operator which is parameterless as
follows:

• Fast non-dominated sorting: The non-dominated sort of NSGA is of order


O(MN^3), where N and M are the number of individuals in the population
and the number of objective functions, respectively. This order is due to the comparison
of all individuals on all the objective functions (O(MN^2))
and the sorting into non-dominated levels (O(MN^2) \times N). In the new method, a hierarchical
model of non-dominated levels has been proposed to reduce this
computational cost to O(MN^2), whereby the dominated
individuals do not all need to be compared again after the first non-dominated level. There are two
counters for each individual which show how many individuals dominate it
and how many individuals it dominates. These counters help to build the
domination levels (a sketch of this bookkeeping is given after this list).

• Elite-keeping technique: Elitism is automatically achieved due to the com-


parison of the current population with the previously found best non-
dominated solutions.

• New niching operator (crowding-distance): The nearest neighbour density


estimates the perimeter of a rectangle (cube or hypercube) neighbourhood,
which is formed by the nearest neighbours. The individuals with a higher
value for this measure are selected as the leaders. In the niching technique
a diameter (\sigma_{share}) should be defined, and the results are highly dependent
on the diameter. However, there is no parameter to define in the proposed
operator.
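The domination-counter bookkeeping referred to above can be sketched as follows (an illustrative Python fragment under the assumption of minimisation; it is not the authors' implementation):

def fast_non_dominated_sort(objs):
    # objs: list of objective vectors (minimisation). Returns a list of fronts,
    # each front being a list of indices, built with the two NSGA-II counters:
    # n[i] = how many solutions dominate i, S[i] = the solutions i dominates.
    dominates = lambda a, b: (all(x <= y for x, y in zip(a, b))
                              and any(x < y for x, y in zip(a, b)))
    n = [0] * len(objs)
    S = [[] for _ in objs]
    fronts = [[]]
    for i, a in enumerate(objs):
        for j, b in enumerate(objs):
            if dominates(a, b):
                S[i].append(j)
            elif dominates(b, a):
                n[i] += 1
        if n[i] == 0:            # nobody dominates i: first front
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                n[j] -= 1
                if n[j] == 0:    # all its dominators already placed in fronts
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]           # drop the trailing empty front

# Example: indices 0 and 1 form the first front, index 2 the second.
print(fast_non_dominated_sort([(1, 5), (3, 2), (4, 6)]))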

The NSGA-II algorithm starts with a random population. The individuals


are grouped based on the non-dominated sorting method. The fitness of each
individual is defined based on its non-domination level. The second population is
created by selection, recombination, and mutation operators. Both populations
create a large new population. This new population is then sorted again by
the non-dominated sorting approach. The better the non-domination level of an
individual, the higher its priority to be selected for the final population.
The process of selecting the non-dominated individuals should be repeated until
a population with the same size as the initial population is constructed. Finally,
these steps are run until the satisfaction of an end criterion.
The second most popular multi-objective meta-heuristic is Multi-Objective
Particle Swarm Optimisation (MOPSO). The MOPSO algorithm was proposed
by Coello Coello [28, 33]. Following the same concepts as PSO, it employs a
number of particles, which fly around in the search space to find the best solution.
Meanwhile, they all trace the best location (best solution) in their paths [147].

In contrast to PSO, there is, of course, no single “best” solution to track. In


other words, particles must consider their own non-dominated solutions (pbest)
as well as one of the non-dominated solutions the swarm has obtained so far
(gbest) when updating position. An external archive is generally used for storing
and retrieving the Pareto-optimal solutions obtained. In addition, a mutation
operator called turbulence is also embedded in MOPSO in some cases to increase
randomness and promote diversity of trial solutions. A comprehensive survey of
the PSO-based multi-objective optimisers can be found in [140].
The external archive of MOPSO is similar to the adaptive grid in Pareto
Archived Evolution Strategy (PAES) [104] as it has been designed to save the
non-dominated solutions obtained so far. It has two main components: an
archive controller and a grid. The former component is responsible for deciding
if a solution should be added to the archive or not. If a new solution is domi-
nated by one of the archive members it should be omitted immediately. If the
new solution is not dominated by the archive members, it should be added to
the archive. If a member of the archive is dominated by a new solution, it has to
be replaced by the new solution. Finally, if the archive is full the adaptive grid
mechanism is triggered.
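A simplified sketch of this archive-controller logic is given below (illustrative Python; the adaptive-grid truncation that fires when the archive is full is only stubbed out, and all names are assumptions):

def update_archive(archive, candidate, max_size):
    # Dominance-based insertion rules of an external archive (minimisation).
    dominates = lambda a, b: (all(x <= y for x, y in zip(a, b))
                              and any(x < y for x, y in zip(a, b)))
    if any(dominates(member, candidate) for member in archive):
        return archive                      # candidate dominated: reject it
    # Remove any archive members dominated by the new solution.
    archive = [m for m in archive if not dominates(candidate, m)]
    archive.append(candidate)
    if len(archive) > max_size:
        # Placeholder for the adaptive-grid truncation: the grid would drop a
        # member from the most crowded region; here one is dropped arbitrarily.
        archive.pop(0)
    return archive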
The grid component is responsible for keeping the archive solutions as diverse as
possible. In this method the objective space is divided into several regions. If a
newly obtained solution lies outside the grid, all the grid locations should be re-
calculated to cover it. If a new solution lies within the grid, it is directed to the
portion of the grid with the lowest number of particles. The main advantage of
this grid is its low computational cost compared to niching (in the worst case, when
the grid must be updated in each iteration, it is the same as niching, O(N^2)).
MOPSO has a very fast convergence speed, which could make it prone to
premature convergence to a false Pareto optimal front in multi-objective opti-
misation. The mutation strategy is helpful in this case. The mutation strategy
randomly affects not only particles in the swarm but also the design variables
of problems. The mutation rate decreases over the course of iterations. The
MOPSO algorithm handles constraints whenever two solutions are being com-
pared. In comparing two feasible solutions the non-dominance comparison is
applied directly. In comparing a feasible and an infeasible solution the feasi-
ble solution is selected. Among two infeasible solutions the solution with less
constraint violation is chosen.

The MOPSO algorithm starts by randomly placing the particles in a problem


space. Over the course of iterations, the velocities of particles are calculated.
After defining the velocities, the position of particles can be updated. All the
non-dominated solutions are added to the archive. Finally, the search process is
terminated by satisfaction of a stopping criterion.
In summary, the discussions in this subsection showed that optimisation in
a multi-objective search space is much more challenging than in a single-objective
search space. Multi-objective optimisation has its own difficulties that might
not exist in single-objective optimisation: local fronts, a high number of non-
dominated solutions, coverage of solutions across the front, and so on.
However, the two types of optimisation have several difficulties in common
as well. One of the most important challenges is the existence of uncertainties in
both search spaces. Uncertainties do exist as undesirable inputs in real problems
with any number of objectives and have to be considered during the optimisa-
tion process to prevent the optimal solutions obtained from showing undesirable
behaviours in real environments. Although uncertainties are a common difficulty
in both single- and multi-objective problems, they require different approaches
in order to be handled due to the different nature of these two types of problems.
Therefore, the following two sections discuss single-objective and multi-objective
robust optimisation methods respectively.

2.3 Robust single-objective optimisation


One of the key concepts in the optimisation of real problems is robustness. Ro-
bust optimisation refers to the process of finding optimal solutions for a partic-
ular problem that have the least sensitivity to probable uncertainties. Uncertainties
are unavoidable in the real world and occur in different aspects of a system:
operating/environmental conditions, parameters, outputs, and constraints.
Generally speaking, robust optimisation refers to the process of considering
any types of uncertainties during optimisation. In the literature, this term has
mostly referred to handling uncertainties in the parameters of a problem. In
this thesis, however, the term “robust optimisation” is used for considering any
type of uncertainties. There are different classifications in the literature for
categorising uncertainties [14, 176]. The classification provided by Beyer and
Sendhoff [14] is utilised, in which the uncertainties are categorised based on

their sources, as follows:

1. Type A: this uncertainty occurs in the environmental and operating con-


ditions. Perturbation in speed, temperature, moisture, angle of attack in
airfoil design, and speed of the vehicle in propeller design are some exam-
ples of this type of uncertainty.

2. Type B: in this case the parameters of the problem may change. One of
the major sources of this kind of uncertainty is manufacturing tolerance.

3. Type C: in this case the system itself produces noises. The uncertainty
of the outputs of a system is caused by Type A and Type C uncertainties.
It might be due to sensory measurement errors or randomised simulations.
Time-varying (dynamic) systems are also considered as having type C un-
certainty.

It should be noted that computer models (e.g. CFD) do produce errors, but,
being deterministic, they do not produce noisy outputs. The source for these
errors can be found either in the failure to consider uncertain parameters of type
A or B during simulation, or in errors in the models themselves, caused by a number of
issues. Real world systems do produce noisy outputs, but these again are the
effects of type A and B uncertainties. Fig. 2.8 shows where these three types of
uncertainties happen during and after optimisation.
Another very important classification is between aleatory (i.e. random) and
epistemic uncertainty (i.e. due to lack of knowledge) [165].

2.3.1 Preliminaries and definitions


There are different types of uncertainties, but some of the most common types
are the manufacturing errors and production perturbations. In this thesis the
focus is on this type of uncertainty. In this type, the variables of a particular
problem may vary after finding the optimum. This undesired fluctuation might
degrade the output of the cost function significantly. Without loss of generality,
a robust optimisation problem with respect to the perturbation in the variables
(parameters) is formulated as follows:

Minimise: f(\vec{x} + \vec{\delta})   (2.9)

[Figure 2.8 appears here: a diagram of a system in its environment showing where each uncertainty type acts: Type A on the operating conditions, Type B on the parameters, Type C on the outputs, and Type D on the constraints.]

Figure 2.8: Different categories of uncertainties and their effects on a system: Type A, Type B, and Type C

Subject to: g_i(\vec{x} + \vec{\delta}) \ge 0,  i = 1, 2, \ldots, m   (2.10)

h_i(\vec{x} + \vec{\delta}) = 0,  i = 1, 2, \ldots, p   (2.11)

lb_i \le x_i + \delta_i \le ub_i,  i = 1, 2, \ldots, n   (2.12)

where \vec{x} is the vector of parameters, \vec{\delta} indicates the uncertainty vector corresponding to each variable in \vec{x}, m is the number of inequality constraints, p is the number of equality constraints, and [lb_i, ub_i] are the boundaries of the i-th variable.
It should be noted that ~δ in Equation 2.9 is a stochastic (random) variable
with a given (known or unknown) probability density function, and not a deter-
ministic variable. The general concepts of a robust solution that is not sensitive
to uncertainties are illustrated in Fig. 2.9. In this figure, the horizontal axis
shows a parameter and the vertical axis is the objective function. There are two
valleys (one global and one local) in this figure and the objective is the minimi-
sation of f . The left valley is the global optimum, whereas the right valley is
the robust optimum. The reason for the greater robustness of the right valley
is its lesser sensitivity to a δ error in the variable x compared to the left valley.
Note that a robust solution should be an acceptable solution as well. Fig. 2.9
clearly shows that δ in the parameter causes ∆ and ∆' in the left and right
valleys, respectively. What makes the right valley robust is that ∆ > ∆'. In robust
optimisation such solutions are fruitful. The same concepts are valid when
considering uncertainties in operating conditions. In such circumstances, perturbations
in the operating conditions might cause a greater change (∆) or a lesser
change (∆') in the output of the system.

[Figure 2.9 appears here: an objective function f over a parameter with two valleys; the same perturbation δ in the parameter produces a large change ∆ in the left (non-robust) valley and a small change ∆' in the right (robust) valley.]

Figure 2.9: Conceptual model of a robust optimum versus a non-robust optimum. The same perturbation level (δ) in the parameter (p) results in different changes (∆ and ∆') in the objective (f)
Handling uncertainties in parameters is mostly undertaken, in the literature,
by investigating the behaviour of a neighbourhood of solutions in the the objec-
tive space. In the literature of population-based stochastic optimisation tech-
niques, the robustness of each individual should be verified in every iteration.
There are two main approaches proposed so far in order to require population-
based stochastic algorithm to handle uncertainties in parameters as follows [44]

• Replacing objective functions with an expectation measure



• Adding a new constraint called the variance measure

• Formulating a multi-objective problem considering both expectation and


variance measures

The first two approaches are discussed with their recent developments in the
following subsections. It should be noted that there are numerous examples of
optimisation in the presence of uncertainty being dealt with by solving multi-
objective (expectation vs. variance) optimisation problems as well [98, 76, 146,
59, 107].

2.3.2 Expectation measure


As discussed above, an expectation measure defines the robustness of search
agents during optimisation for stochastic optimisation algorithms. So, search
agents are no longer evaluated by the main objective function, and the evaluator
is the expectation measure. The mathematical formulation of robust optimisa-
tion using an expectation measure is as follows [44]
Minimise: E(\vec{x}) = \frac{1}{|B_\delta(\vec{x})|} \int_{\vec{y} \in B_\delta(\vec{x})} f(\vec{y}) \, d\vec{y}   (2.13)

where B_\delta(\vec{x}) shows the δ-radius neighbourhood of the solution \vec{x} and |B_\delta(\vec{x})| indicates the hypervolume of the neighbourhood.
It should be noted here that in the literature, the uncertainty sets and sce-
nario sets are referred to in order to explicitly acknowledge the potentially asym-
metric, and location dependent, nature of the uncertainty at any design location.
Due to the existence of Bδ (~x) in Equation 2.13, however, this definition does not
allow for this and is therefore limited to a subset of design problems.
It may be inferred from Equation 2.13 that the expectation measure is the
analytical integration of the main objective function over the maximum possible
perturbation in the parameters. This equation is applicable to problems whose
search space can be integrated analytically. For real problems with an unknown search
space, however, the analytical integration of the search space is impossible to
calculate. In this case, the integration is approximated by the Monte Carlo
method as follows:
E(\vec{x}) = \frac{1}{H} \sum_{i=1}^{H} f(\vec{x} + \vec{\delta}_i)   (2.14)

where H is the number of samples.


This method approximates the integration by perturbing the variables and
calculating the average of the objective values. Fig. 2.10 illustrates the search
space of an objective function and its corresponding expectation measure. This
figure shows how a non-robust global optimum is considered as a local optimum
when using the expectation measure. In contrast, a robust optimum has the
potential to behave as the global optimum using an expectation measure as the
right valley in Fig. 2.10 shows.
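A minimal Python sketch of this Monte Carlo approximation (Equation 2.14) is shown below; sampling each \delta_i uniformly within ±δ_max per variable is an assumption made purely for illustration, since the perturbation distribution is problem dependent:

import random

def expectation_measure(f, x, delta_max, H=20):
    # Monte Carlo approximation of E(x): average f over H randomly perturbed
    # copies of x, each variable perturbed within [-delta_max, +delta_max].
    total = 0.0
    for _ in range(H):
        perturbed = [xi + random.uniform(-delta_max, delta_max) for xi in x]
        total += f(perturbed)
    return total / H

# Example: the expectation measure of the sphere function around the origin.
print(expectation_measure(lambda x: sum(xi ** 2 for xi in x), [0.0, 0.0], delta_max=0.5))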

[Figure 2.10 appears here: the main objective function (f) plotted together with its expectation measure (E) over the same parameter range, showing the non-robust global optimum smoothed out while the robust valley becomes the global optimum of E.]

Figure 2.10: Search space of an expectation measure versus its objective function

Deb and Gupta named this method “Type I” robust optimisation [44]. In
this case, the robust optimisation starts by creating a set of random candidate
solutions for a particular problem. Every candidate solution is evaluated by the
average of H generated random solutions around it. The random solutions are
created in the hypervolume of δ_max around the solutions, where δ_max indicates
the maximum possible perturbation.
In the literature, there are also other different expectation measures for im-
proving the performance of robust meta-heuristics [43, 74, 94, 106, 159, 9]. Ex-
pectation measures have been optimised mostly instead of the main objective
function. Some studies [135, 115, 5], however, consider the expectation measure

(or other similar robustness indicators) as an additional objective and convert


the problem to a multi- or many-objective problem. In this case, the primary
objective is to minimise/maximise a cost function, and the secondary objective
is to minimise the perturbation using the expectation measure. The Pareto op-
timal front obtained represents trade-offs between the objective function and
expectation measure.

2.3.3 Variance measure


In this method a constraint is employed to confirm the robustness of the solu-
tions during optimisation. A variance measure that indicates the variance of H
generated random solutions in the neighbourhood of a search agent is utilised
mostly in the literature. In this case, a robust algorithm optimises the origi-
nal objective function, but it is subject to satisfying the robustness constraint
calculated by the variance measure. Technically speaking, H random solutions
are generated in the hypervolume of δ around the solutions in the parameter
space where δ indicates the maximum possible perturbation. Then, the variance
of the corresponding objective values should not exceed a threshold (η). The
mathematical formulation of robust optimisation using a variance measure is as
follows [44]:

Minimise: f(\vec{x})   (2.15)

Subject to: V(\vec{x}) = \frac{\|F(\vec{x}) - f(\vec{x})\|}{\|f(\vec{x})\|} \le \eta   (2.16)

where F(\vec{x}) can be selected as the effective mean or the worst function value among the H selected solutions and η is a vector of thresholds in [0, 1]. Note that \|\cdot\| in this equation can be any norm measure.
It may be seen in Equation 2.16 that the normalised fluctuation of the
objective is considered as the constraint. The robust solutions are favoured as η
decreases. Fig. 2.11 illustrates the effects of the variance measure on the search
space as a constraint. This figure shows that the regions of the search space
that show greater fluctuations are considered as infeasible. Therefore, a solution
becomes infeasible if it enters such regions.
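A corresponding sketch of the variance measure used as a feasibility check is given below (again illustrative Python; uniform sampling and the choice of F(\vec{x}) as the worst sampled value are assumptions for this example):

import random

def variance_measure(f, x, delta_max, H=20):
    # Normalised fluctuation of the objective in the delta-neighbourhood of x,
    # using the worst sampled value as F(x) (the mean is another common choice).
    f_x = f(x)
    samples = []
    for _ in range(H):
        perturbed = [xi + random.uniform(-delta_max, delta_max) for xi in x]
        samples.append(f(perturbed))
    F_x = max(samples)                          # worst-case neighbourhood value
    return abs(F_x - f_x) / max(abs(f_x), 1e-12)   # guard against division by zero

def is_robust(f, x, delta_max, eta):
    # The solution is treated as feasible (robust) only if V(x) <= eta.
    return variance_measure(f, x, delta_max) <= eta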
[Figure 2.11 appears here: the search landscape over a parameter p with the highly fluctuating regions marked as non-robust (infeasible) regions.]

Figure 2.11: Conceptual model of infeasible regions when employing a variance measure

Deb and Gupta first named methods that utilise variance measures as “Type
II” robust optimisation. After the proposal of this type of robust optimisa-
tion [44], different variance measures were proposed in order to improve the
performance of robust meta-heuristics [74, 94]. The disadvantage of this method
is that a robust meta-heuristic should be equipped with a constraint handling
method to be able to find the robust optimum.
In summary, both measures quantify the robustness of solutions during op-
timisation. This assists optimisation techniques to quantitatively compare the
solutions and favour robust ones. The expectation measures change the shape
of the search space and smooth out the global optimum. Some of them use
Monte Carlo approximation to investigate the landscape around a solution. The
advantage of these methods is replacement of the main objective by the expecta-
tion measure and the separation of the robustness measure from the optimisation algorithm.
However, they may change the shape of the search landscape and affect the
expected behaviour of an algorithm. Considering the expectation measure as an
additional objective gives us solutions with different levels of robustness but in-
creases the difficulty of optimisation due to the additional objective. Note that

there are also other, cheaper approximations for expectation than Monte Carlo in
the literature such as Polynomial Chaos [173, 59], Collocation Methods [172, 63],
etc.
Variance measures do not change the search space but bring an additional
constraint. This means that an algorithm should be equipped with a constraint
handling technique to be able to work with a variance measure. A variance
measure has the potential to produce a search space dominated by infeasible
regions, which will definitely need special considerations. A search space with a
large infeasible portion is very likely to result in many infeasible search
agents in each iteration. The problem here is that, by default, most of the meta-
heuristics discard infeasible solutions and only rely on feasible solutions to drive
the search agents towards optimal solution(s). Therefore, a powerful constraint
handling method should be utilised when solving such problems.
Despite the success of both expectation and variance measures in assisting
algorithms to find robust solutions, they both suffer from the need for additional
function evaluations. All of the current measures become unreliable as
the number of additional sampled points decreases. This is the main gap in the
literature at present. In addition, most of the current work only concentrates
on single-objective search spaces and there should be more work in the litera-
ture about computationally cheap robust optimisation in multi-objective search
spaces.
Comparison of solutions with these two metrics is different when considering
single or multiple objectives. In a single-objective search space, there is only one
objective, so the solutions can be compared easily with the value of robustness
indicators. Due to the nature of single-objective problems, there is one global
robust optimum.
In a multi-objective problem, however, the solutions cannot be compared with
the robustness indicators across only one objective due to the presence of multiple
objectives. In this case robustness should be calculated across all objectives and
then the solutions can be compared using Pareto dominance operators. Due to
the nature of such problems, there is a set of robust solutions (robust Pareto
optimal solutions) as the robust designs. Considering robustness and multiple
objectives makes the whole optimisation process very challenging. The following
section presents the preliminaries and reviews the literature of robust multi-
objective optimisation approaches.
Robust optimisation is important in both single- and multi-objective search
spaces. There are still many single-objective problems that are used widely in science and industry. Needless to say, failure to consider uncertainties in such problems may result in undesired output(s). Considering multiple objectives is also essential, but requires special considerations. A robust multi-objective optimiser should look for the best trade-offs between objectives while considering their robustness. The fact that finding robust solutions in a multi-objective search space is more challenging does not mean it is less important than doing so in a single-objective search space.

2.4 Robust multi-objective optimisation


This section reviews the literature of Robust Multi-Objective Optimisation (RMOO).

2.4.1 Preliminaries and definitions


As mentioned above, there are three main types of uncertainty in a system, of
which perturbation in the parameters can be considered as the most important
one. In the following paragraphs, this type of uncertainty (Type B) is discussed
in the context of multi-objective search spaces.
There is a set of robust solutions for a multi-objective problem because of its
nature. Without loss of generality, the robust multi-objective optimisation con-
sidering uncertainties in the parameters is formulated as a minimisation problem
as follows:

\[ \text{Minimise:} \quad F(\vec{x}+\vec{\delta}) = \{f_1(\vec{x}+\vec{\delta}), f_2(\vec{x}+\vec{\delta}), \ldots, f_o(\vec{x}+\vec{\delta})\} \qquad (2.17) \]

\[ \text{Subject to:} \quad g_i(\vec{x}+\vec{\delta}) \ge 0, \quad i = 1, 2, \ldots, m \qquad (2.18) \]

\[ h_i(\vec{x}+\vec{\delta}) = 0, \quad i = 1, 2, \ldots, p \qquad (2.19) \]

\[ lb_i \le x_i \le ub_i, \quad i = 1, 2, \ldots, n \qquad (2.20) \]


where ~x is the set of parameters, ~δ indicates the uncertainty vector corresponding
to each variable in ~x, which is a stochastic (random) variable with a given (known
or unknown) probability density function, o is the number of objective functions,
m is the number of inequality constraints, p is the number of equality constraints, and [lbi, ubi] are the boundaries of the i-th variable.
In robust single-objective optimisation, there is a single robust solution that
might be either the global or a local optimum. The ultimate goal is to find
the best solution that is not sensitive to the probable uncertainties. Since there
is one comparison criterion (the objective function), solutions can be compared
easily with inequality/equality operators. In robust multi-objective optimisa-
tion, however, two solutions cannot be compared with similar operators as in
robust single-objective optimisation. This is due to the fact that two solutions
in a multi-objective search space might be incomparable (non-dominated) with
respect to each other. In this case, there is a new concept of comparison called ro-
bust Pareto dominance. Without loss of generality, the mathematical definition
of robust Pareto dominance for a minimisation problem is as follows:

Definition 2.4.1 (Robust Pareto Dominance): Suppose that there are two vectors ~x = (x1, x2, ..., xk) and ~y = (y1, y2, ..., yk). Vector ~x dominates vector ~y (denoted as ~x ≺ ~y) iff:

\[ \forall i \in \{1, 2, \ldots, o\}: \; f_i(\vec{x}+\vec{\delta}) \le f_i(\vec{y}+\vec{\delta}) \;\;\wedge\;\; \exists i \in \{1, 2, \ldots, o\}: \; f_i(\vec{x}+\vec{\delta}) < f_i(\vec{y}+\vec{\delta}) \]

where k is the number of variables, o is the number of objectives, and ~δ = {δ1, δ2, ..., δk}. It should be noted that tolerances are not constant across the design space in Definition 2.4.1. This definition shows that a solution dominates another if and only if it shows better or equal values on all objectives and a strictly better value in at least one of them, considering perturbations in the parameters. With
this definition, robust Pareto optimality can be defined as follows:

Definition 2.4.2 (Robust Pareto Optimality): A solution ~x ∈ X is called robust Pareto-optimal iff:

\[ \nexists\, \vec{y} \in X : \vec{y} \prec \vec{x} \]


A set containing all the non-dominated robust solutions (robust Pareto opti-
mal solutions) is the robust answer to a multi-objective problem and is defined as follows [81]:

Definition 2.4.3 (Robust Pareto optimal set): The set of all robust Pareto-optimal solutions:

\[ RPS := \{\vec{x} \in X \;|\; \nexists\, \vec{y} \in X : \vec{y} \prec \vec{x}\} \]

The projection of the robust Pareto optimal set in the objective space is called the robust Pareto optimal front and is defined as follows:

Definition 2.4.4 (Robust Pareto optimal front): A set containing the values of the objective functions for the robust Pareto optimal set:

\[ RPF := \{ (f_1(\vec{x}), f_2(\vec{x}), \ldots, f_o(\vec{x})) \;|\; \vec{x} \in RPS \} \]

As may be inferred from the first two definitions, a solution is able to robustly
dominate another if and only if it is compared to all perturbations and found
to be better or equal under all of them. The concepts of robustness in a multi-
objective search space are illustrated in Fig. 2.12.
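As a concrete illustration of Definition 2.4.1, the following minimal Python sketch checks robust dominance by evaluating both solutions under the same set of sampled perturbations. The sampling scheme, the number of samples H, and all names are illustrative assumptions rather than the method proposed later in this thesis.

```python
import numpy as np

def robust_dominates(x, y, objectives, delta, H=20, rng=None):
    """Sketch of a robust Pareto dominance check (Definition 2.4.1):
    under every sampled perturbation, x must be no worse than y in all
    objectives and strictly better in at least one (minimisation assumed).

    objectives : list of callables, each mapping a 1-D array to a float
    delta      : per-variable perturbation radius
    """
    rng = np.random.default_rng() if rng is None else rng
    x, y = np.asarray(x, float), np.asarray(y, float)
    at_least_one_better = False
    for _ in range(H):
        d = rng.uniform(-delta, delta, x.size)   # shared perturbation vector
        fx = np.array([f(x + d) for f in objectives])
        fy = np.array([f(y + d) for f in objectives])
        if np.any(fx > fy):                      # worse in some objective
            return False
        if np.any(fx < fy):
            at_least_one_better = True
    return at_least_one_better
```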
Figure 2.12: Concepts of robustness and a robust solution in multi-objective search space
This figure shows the mapping from a 2D parameter space to a 2D objective
space. It may be observed that equal perturbations in the parameters x1 and
x2 result in different variations in the objective space. As shown in Fig. 2.12,
solution S2 has the least sensitivity to perturbations, meaning that this solution
is the most robust solution. However, there are other robust solutions (S1, S3,
and S4 ) with different sensitivities to the perturbations. This figure shows that
the Robust Pareto optimal Front (RPF) may consist of both non-dominated
and/or dominated solutions. Depending on the robustness of solutions, the RPF
may also consist of dominated solutions completely. Generally speaking, there
are four possible scenarios for the RPF in comparison with the Pareto optimal
front [44, 69, 75, 83]:

(a) The Pareto optimal front is totally robust

(b) A part of the Pareto optimal front is robust

(c) A part of the Pareto optimal front is robust, but there are other robust solutions outside it

(d) The RPF is completely dominated by the Pareto optimal front (the robust front consists of local front(s))

These four situations are illustrated in Fig. 2.13.

Figure 2.13: Four possible robust Pareto optimal fronts with respect to the main Pareto optimal front

In order to handle uncertainties in parameters, three main methods are used
in the literature. Firstly, the average of fitness functions in a pre-defined neigh-
bourhood around a solution is calculated and optimised as the expected fitness
function instead of the original objective functions. Secondly, the variation of the
fitness function is investigated in a pre-defined neighbourhood around a solution
and considered an additional constraint for the problem. This assists robust op-
timisers to ignore non-robust solutions with high variation in objectives (which
become infeasible) during optimisation. Thirdly, the expected fitness function
or any other robustness indicators are added to a problem as new objectives to
be optimised. In this case the final Pareto optimal front will represent different
trade-offs between other objectives and robustness.
The first type of robustness handling (type I) is formulated as follows [44]:

\[ \text{Minimise:} \quad f_1^{eff}(\vec{x}), f_2^{eff}(\vec{x}), \ldots, f_o^{eff}(\vec{x}) \qquad (2.21) \]

\[ \text{Subject to:} \quad \vec{x} \in S \qquad (2.22) \]

\[ g_i(\vec{x}) \ge 0, \quad i = 1, 2, \ldots, m \qquad (2.23) \]

\[ h_i(\vec{x}) = 0, \quad i = 1, 2, \ldots, p \qquad (2.24) \]

\[ lb_i \le x_i \le ub_i, \quad i = 1, 2, \ldots, n \qquad (2.25) \]

\[ \text{where:} \quad f_i^{eff}(\vec{x}) = \frac{1}{H} \sum_{j=1}^{H} f_i(\vec{x} + \vec{\delta}_j) \qquad (2.26) \]

where S is the feasible search space, H is the number of samples, o is the number of objectives, ~x is the set of parameters, ~δ indicates the uncertainty vector corresponding to each variable in ~x,
m is the number of inequality constraints, p is the number of equality constraints,
[lbi , ubi ] are the boundaries of the i-th variable.
As may be seen in these equations, H random solutions are generated in the
neighbourhood of a solution in order to investigate its robustness. These ran-
dom solutions can be created systematically or obtained from previously sam-
pled points during optimisation. In fact, this method tries to approximate the
integration of the main objective(s) along the perturbations (δ) by a Monte
Carlo sampling approach. If the mathematical formulation of the search space
is known, the robustness can be confirmed by analytical integration as follows:
\[ \text{Minimise:} \quad f^{eff}(\vec{x}) = \frac{1}{|B_\delta(\vec{x})|} \int_{\vec{y} \in B_\delta(\vec{x})} f(\vec{y})\, d\vec{y} \qquad (2.27) \]

\[ \text{Subject to:} \quad \vec{x} \in S \qquad (2.28) \]

where S is the feasible search space, Bδ (~x) is the neighbourhood of the solution
~x within δ radius, and |Bδ (~x)| is the hypervolume of the neighbourhood.
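As an illustration of the expectation measure, the following is a minimal Python sketch of the Monte Carlo approximation in Equation 2.26 for a single objective, assuming a uniform perturbation distribution over [−δ, δ]; the function and parameter names are hypothetical.

```python
import numpy as np

def effective_mean(f, x, delta, H=50, rng=None):
    """Monte Carlo estimate of the effective (expected) objective f^eff(x).

    f     : objective function accepting a 1-D numpy array
    x     : candidate solution (1-D numpy array)
    delta : per-variable perturbation radius (scalar or array)
    H     : number of sampled neighbours
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    # Draw H perturbation vectors uniformly from [-delta, delta]^n
    perturbations = rng.uniform(-delta, delta, size=(H, x.size))
    # Average the objective over the perturbed copies of x
    return np.mean([f(x + d) for d in perturbations])

# Example: expected value of the sphere function around the origin
f = lambda x: float(np.sum(x**2))
print(effective_mean(f, np.zeros(2), delta=0.1))
```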
The second method of handling type B uncertainties adds an extra constraint
(variance measure) to a problem as follows:

\[ \text{Minimise:} \quad f_1(\vec{x}), f_2(\vec{x}), \ldots, f_o(\vec{x}) \qquad (2.29) \]

\[ \text{Subject to:} \quad \vec{x} \in S \qquad (2.30) \]

\[ \frac{\|f^p(\vec{x}) - f(\vec{x})\|}{\|f(\vec{x})\|} \le \eta \qquad (2.31) \]

\[ g_i(\vec{x}) \ge 0, \quad i = 1, 2, \ldots, m \qquad (2.32) \]

\[ h_i(\vec{x}) = 0, \quad i = 1, 2, \ldots, p \qquad (2.33) \]

\[ lb_i \le x_i \le ub_i, \quad i = 1, 2, \ldots, n \qquad (2.34) \]

where S is the feasible search space, f^p(~x) can be selected as the effective mean or worst function value among H selected solutions in the neighbourhood, η is a threshold that defines the level of robustness for solutions, o is the number of objectives, ~x is the set of parameters, m is the number of inequality constraints, p is the number of equality constraints, and [lbi, ubi] are the boundaries of the i-th variable.
In these equations, robust solutions are favoured as η decreases. Equa-
tion 2.31 is called the variance measure because it defines the normalised varia-
tion of the objectives in a neighbourhood. The η threshold can be chosen as a
single value for all objectives or a different value for each objective according to
decision makers’ preferences.
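The following is a minimal Python sketch of how the variance-measure constraint of Equation 2.31 could be checked for a single objective by re-sampling the neighbourhood; the choice between the mean and the worst neighbouring value, the uniform sampling, and the small constant guarding the division are assumptions made for illustration.

```python
import numpy as np

def variance_measure(f, x, delta, eta, H=50, worst_case=False, rng=None):
    """Illustrative check of the normalised variation constraint (Eq. 2.31).

    Returns True when the solution x is considered robust, i.e. the
    normalised deviation of the perturbed objective stays below eta.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    samples = np.array([f(x + rng.uniform(-delta, delta, x.size)) for _ in range(H)])
    # f^p(x): either the mean or the worst value among the sampled neighbours
    f_p = samples.max() if worst_case else samples.mean()
    nominal = f(x)
    deviation = abs(f_p - nominal) / (abs(nominal) + 1e-12)  # avoid division by zero
    return deviation <= eta
```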

2.4.2 Current expectation and variance measures


The expectation measure can be any effective mean of the objective function(s).
Technically speaking, a finite set of H solutions are chosen randomly or with a
structured procedure in the hypervolume ([−δi , δi ]) around the solution x, and
then the effective mean objectives of all samples are optimised by optimisation
algorithms.
Another similar method is called degree of robustness, proposed by Barrico et
al. [12, 13]. Basically, the degree of robustness refers to the analysis of a solution
with respect to its neighbours to find the robust solution(s). A set of solutions
in the neighbourhood (kδ) around a solution (~x) is selected. This selection is
subject to keeping the images of these solutions in a predefined neighbourhood
∆ around f (~x) in objective space. A hyperbox (neighbourhood) of radius δ is
assumed around the solution ~x, and the radius of this hyperbox is increased
in steps of 2δ, 3δ, 4δ, etc. Random solutions (of number H) are generated and
analysed inside these hyperboxes. This process continues until the percentage
of solutions that lies in the projection in objective space (hypersphere with the
radius of ∆) is not greater than a pre-defined threshold. The degree of robustness
is proportional to the number of hyperbox enlargements required.
In 2002, Ray handled perturbations in parameters by adding two external ob-
jective functions to the main objective function: 1- mean performance of neigh-
bouring solutions and 2- the standard deviation of neighbouring solutions [135]. A new constraint-handling method based on considering an individual's feasibility together with the feasibility of its neighbours was proposed as well [135]. The average and standard deviation were calculated by re-sampling.
The first variance measure was also proposed by Deb and Gupta [44]. In
this measure, a set of randomly created solutions in the δ−neighbourhood of x
should not exceed a certain pre-defined neighbourhood ∆ around f (x) in ob-
jective space. Violation of this constraint assists us in distinguishing between a
robust solution and a non-robust solution. The mathematical model is as follows:

\[ V(\vec{x}) = \frac{\|F(\vec{x}) - f(\vec{x})\|}{\|f(\vec{x})\|} \le \eta, \quad \vec{x} \in S \qquad (2.35) \]

where η is a threshold in [0,1], S shows the feasible search space, and F (~x) can
be selected as the effective mean or worst function value among the H selected
solutions.
This constraint is called the variance measure because it defines the deviation
of the objective function(s) in the neighbourhood of a solution.
In 2003, Jin and Sendhoff used the current information of individuals in each
iteration to estimate the robustness of the individuals of the next iteration, so
there were no additional fitness evaluations (re-sampling) [94]. Their work is
formulated to permit different uncertainty levels in design space. After estimat-
ing the robustness, they used it as another objective function to be optimised
in addition to the main objective function(s). Eventually the Pareto optimal
front provided the trade-offs between the performance and robustness. They
proposed two methods to optimise the original function and minimise the vari-
ance. The difference between this method and other methods is that it considers
the variances of design variables in addition to the variance of objective func-
tions. The variances were calculated on the members of a neighbourhood around
the individuals within distance δ. In both methods, the robustness of an individual was calculated by dividing the standard deviation of the neighbouring individuals' fitness values by the standard deviation of their variables as follows:

\[ V_i(\vec{x}) = \frac{1}{N} \sum_{j=1}^{N} \frac{\sigma_{f_j}}{\sigma_{x_j}} \qquad (2.36) \]

where fj indicates the objective of the j-th neighbouring solution, N is the num-
ber of neighbouring solutions within the desired radius from the solution xi , σfj
is the standard deviation of objectives of the j-th neighbouring solution, and σxj
shows the standard deviation of the variables of the j-th neighbouring solution.
In order to decrease the computational cost, the neighbourhood members
were selected from the current population. Each of these estimated functions was used as another objective function to be optimised by multi-objective algo-
rithms. The non-dominated front obtained contained the trade-offs between the
performance and robustness.
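The following rough Python sketch conveys the spirit of Equation 2.36 using only the current population: for each individual, the spread of neighbouring fitness values is related to the spread of the neighbouring decision variables, so no additional function evaluations are required. It is a simplified, single-objective illustration with hypothetical names, not the exact formulation of [94].

```python
import numpy as np

def population_robustness(positions, fitnesses, radius):
    """Estimate robustness from the current population only (sketch).

    positions : (P, n) array of decision vectors in the current population
    fitnesses : (P,) array of their objective values
    radius    : neighbourhood radius delta
    """
    positions = np.asarray(positions, dtype=float)
    fitnesses = np.asarray(fitnesses, dtype=float)
    estimates = np.full(len(positions), np.inf)
    for i, x in enumerate(positions):
        mask = np.linalg.norm(positions - x, axis=1) <= radius
        if mask.sum() > 1:
            sigma_f = np.std(fitnesses[mask])
            sigma_x = np.std(positions[mask]) + 1e-12  # guard against zero spread
            estimates[i] = sigma_f / sigma_x
    return estimates  # smaller values indicate flatter, more robust regions
```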
There are also some studies in the literature which propose hybrids of type I
and type II in order to handle uncertainties as follows:
A study of different approaches to handling parameter uncertainties in canonical PSO, fully-informed PSO, multi-swarm PSO, and charged PSO was performed by Dippel [58]. The effects of different topologies such as
ring and fully-connected were also investigated. Both re-sampling and archiving
(previously sampled points) approaches were used in this study. In the latter
case, the expectation measure was as follows:

\[ E(\vec{x}) = \frac{\sum_{j=1}^{N} w(\vec{x}_j)\, f(\vec{x}_j)}{\sum_{j=1}^{N} w(\vec{x}_j)} \qquad (2.37) \]

where x~j shows the j-th solution, N is the number of desirable solutions in
the archive within radius δ from the solution ~x, w(x~j ) ∼ pdf (~δ) is a weighting
function that weights the importance of the previously sampled points in terms
of their distributions within δ.
The sampled points that had not been used for a pre-defined period of time
should be removed from the archive in this method.
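A minimal Python sketch of the archive-based expectation of Equation 2.37 is given below; the Gaussian kernel is only one example of a weighting function derived from the perturbation distribution, and all names are illustrative.

```python
import numpy as np

def archived_expectation(x, archive_x, archive_f, delta):
    """Archive-based weighted expectation (illustrative sketch of Eq. 2.37).

    Previously evaluated points within radius delta of x are re-used and
    weighted instead of spending new function evaluations.
    """
    x = np.asarray(x, dtype=float)
    archive_x = np.asarray(archive_x, dtype=float)
    archive_f = np.asarray(archive_f, dtype=float)
    dist = np.linalg.norm(archive_x - x, axis=1)
    mask = dist <= delta
    if not np.any(mask):
        return None  # no archived neighbours; fall back to a true evaluation
    w = np.exp(-0.5 * (dist[mask] / delta) ** 2)  # assumed weighting function
    return float(np.sum(w * archive_f[mask]) / np.sum(w))
```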
In 2011, Saha et al. improved the handling of type B uncertainties proposed
by Deb in terms of reducing the number of function evaluations [143]. In this
method the vicinity of a point in the search space was defined based on the
previously evaluated solutions and an archive of size P × 100 was employed to
keep the most recent unique solutions where P is the maximum number of search
agents. In other words, for every search agent in the current population, there
were 100 points in the archive to decide its robustness. For every search agent,
H neighbourhood points were created by the Latin Hypercube Sampling (LHS)
method. This set was called the reference set. Then, the closest solutions in the
archive to the reference points were chosen. If H solutions close to the reference
points could not be found, they had to be created by true function evaluations.
After building the full neighbourhood, the robust measure could be calculated.
There was also a simple modification in the variance measure as follows:

\[ V(\vec{x}) = \max_{m=1,2,\ldots,M} \left( \frac{\|F_m(\vec{x}) - f_m(\vec{x})\|}{\|f_m(\vec{x})\|} \right) \le \eta \qquad (2.38) \]

where M is the number of objectives, Fm(~x) is the mean value of the m-th objective of the neighbouring solutions, and η is a threshold in [0, 1].
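The following short Python sketch evaluates the modified variance measure of Equation 2.38, assuming the neighbourhood means have already been computed; the small constant avoiding division by zero is an added assumption.

```python
import numpy as np

def max_normalised_deviation(F_neigh_mean, f_nominal, eta):
    """Worst normalised deviation over all M objectives (sketch of Eq. 2.38).

    F_neigh_mean : (M,) mean objective values of the neighbouring solutions
    f_nominal    : (M,) objective values of the solution itself
    """
    F_neigh_mean = np.asarray(F_neigh_mean, dtype=float)
    f_nominal = np.asarray(f_nominal, dtype=float)
    deviation = np.abs(F_neigh_mean - f_nominal) / (np.abs(f_nominal) + 1e-12)
    return float(deviation.max()) <= eta
```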
In 2008, Gaspar-Cunha and Covas proposed two measures for handling type
B uncertainties [74]. The efficiency of combining both previously proposed met-
rics of type I and II robustness handling was also investigated. The measures
proposed were [74, 69]:
Type I:

\[ E(\vec{x}_i) = \left(1 - \frac{\sum_{j=0}^{N} |\tilde{f}(\vec{x}_j) - \tilde{f}(\vec{x}_i)|}{N}\right) f(\vec{x}_i) \qquad (2.39) \]

Type II:

\[ V(\vec{x}_i) = \frac{1}{N} \sum_{j=0}^{N} \left| \frac{\tilde{f}(\vec{x}_j) - \tilde{f}(\vec{x}_i)}{\vec{x}_j - \vec{x}_i} \right|, \quad d_{i,j} < d_{max} \qquad (2.40) \]

where di,j is the Euclidean distance between agents i and j, N is the number of those agents having distances less than dmax, f̃(~xi) = (f(~xi) − fmin)/(fmax − fmin) for maximisation, and f̃(~xi) = 1 − (f(~xi) − fmin)/(fmax − fmin) for minimisation.

The authors also investigated the efficiency of these robust metrics in finding
robust frontiers for multi-objective problems. In order to do this, it was suggested
that the metrics for each of objective functions be calculated one by one. Two
methods for calculating the final robustness measurement in this case are effective
mean and worst function value:
\[ V1(\vec{x}_i) = \frac{1}{M} \sum_{m=1}^{M} V_m(\vec{x}_i) \qquad (2.41) \]

\[ V2(\vec{x}_i) = \max_{m=1,\ldots,M} V_m(\vec{x}_i) \qquad (2.42) \]

where x~i is the i-th solution, Vm is the variance of the neighbouring solutions in the m-th objective, and M is the number of objectives.
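The two aggregation schemes of Equations 2.41 and 2.42 reduce the per-objective measures to a single value, as in the following illustrative Python sketch.

```python
import numpy as np

def aggregate_robustness(V_per_objective, worst_case=False):
    """Aggregate per-objective variance measures (sketch of Eqs. 2.41-2.42).

    V_per_objective : (M,) per-objective variance measures V_m of a solution
    worst_case      : False -> V1 (mean over objectives), True -> V2 (max)
    """
    V = np.asarray(V_per_objective, dtype=float)
    return float(V.max()) if worst_case else float(V.mean())
```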
The authors applied these concepts to the Reduced Pareto Set Genetic Al-
gorithm (RPSGA) and examined it over five test problems. The combinations
of both methods were: c1 = f + V where f is the fitness function and V is
the variance measure in Equation 2.40 (this combination duplicates the number
of objectives), c2 = f + V 1, c3 = f + V 2, E, and f . The evaluation criteria
were the percentage of peaks detected, the ability to find the fittest and more
robust solutions, and the accuracy of results. The results show that both robust
measures were better than previous measures. The best results of combinations
were those of c1 = f + V . However, this method increases the complexity of the
problem, which is a factor that should be considered. The authors recommended
it for moderately sized problems. For bigger problems, the authors recommended c2 = f + V1, based on the results. The superior results of this combination were
due to the effective mean of robustness measures which included good knowledge
of the landscape around the current position of particles. The measure for type II
robustness introduced showed good results because the authors considered both
the distance between objective values in the objective space (f̃(x~j) − f̃(x~i)) and
variables in the parameter space (x~j − x~i ). This helped to differentiate between
the samples around a particle based on the ratio of the distances in both spaces.
One of the best comparative studies in hybrid methods was made by Branke
in 1998 [18]. Branke introduced 10 different methods for handling uncertainties
of design parameters (type B) divided into 4 different groups. These methods
were applied to an EA with the island model and compared over 2 benchmark
functions. The four groups of robust handling approaches were:

1. Average of several evaluations: In this approach several random points


in a pre-defined neighbourhood of a particular solution were created and
evaluated. The average of their fitnesses would indicate the robustness.
However, this approach increased the computational time due to evaluating
random points. The distribution of the random points was also important.
A suggestion could be to use the same distribution as the expected noise
in the real environment. Note that this method is called explicit averaging
in some references.

2. Single distributed evaluation: In this method the disturbed input was eval-
uated instead of evaluating multiple samples.

3. Re-evaluating just the best solutions: In this method all the solutions
were evaluated once without perturbations. The robustness of some of
the best solutions was then evaluated using re-sampling several times. So
the computational time was less in this method, and time was not wasted
computing robustness of useless solutions.

4. Using sampled points over past iterations: In this method the weighted
mean of the previously sampled points around a solution over the course
of iterations was considered as the robustness measure. This method did
not entail additional computational cost, but needed memory to save the
sampled points. Note that this method is called implicit averaging in some
references.
Branke concluded that previously sampled points are able to provide very use-
ful information about the robustness of solutions during optimisation. Therefore,
an algorithm is able to find the robust optimum without the need for extra func-
tion evaluations subject to proper use of previously sampled points. However,
this method is not reliable due to the stochastic nature of such algorithms. To improve the reliability of such techniques, one of the current mechanisms in the literature is to generate new neighbouring solutions and evaluate them by true function evaluations. However, true function evaluations directly increase
the computational cost of an algorithm, which is a vital issue when solving real
expensive problems.
This section showed that the process of considering and handling uncertain-
ties in a multi-objective search space is very challenging. This difficulty is perhaps one of the reasons for the lesser popularity of this field.
The literature review in this section and the preceding one indicated that there
are two main robustness measures in the literature: expectation and variance.
On the one hand, expectation measures do not impose additional boundaries (constraints) on the optimiser, but they may change the shape of the search space. In addition, considering the expectation measure as an additional objective would increase the difficulty of the problem. On the other hand, a variance measure maintains the original shape of
the search space, but requires a suitable constraint handling method. It should
be noted that a variance measure can be considered as an objective as well.
Despite the effectiveness of both methods, they suffer from unreliability in the case of using previously sampled points and high computational cost in the case of using newly sampled points. Currently, the literature lacks robust optimisation approaches that are simultaneously reliable and cheap in both single- and multi-objective search spaces.
In addition, the literature lacks specific test functions and performance met-
rics for robust optimisation. Suitable benchmark problems allow us to compare
different algorithms effectively. Performance metrics are useful for quantifying the performance of algorithms. Without performance metrics, all the analysis can only be made from qualitative results, which are not as accurate as quantitative results. In the field of robust optimisation, there is a negligible number of test functions and literally no specific performance metrics. This is the motivation for
proposing several test problems and performance metrics in this thesis. The next
two sections review the current benchmark functions and performance metrics
in both global and robust optimisation fields.

2.5 Benchmark problems


Generally speaking, benchmark problems are essential for testing and challenging
algorithms from different perspectives. They are involved directly in the design
process of optimisation techniques. A set of suitable benchmark problems can
verify the performance of an algorithm confidently and reliably. Although solving
a real problem can prove the applicability of an algorithm better, a real search
space is usually expensive and has a mix of characteristics which prevents us from
observing different abilities of an algorithm conveniently and independently.
Therefore, researchers in this field should be aware of the types and charac-
teristics of the benchmark problems. Due to the known search space of such test
problems, the behaviour of optimisation algorithms can be clearly observed and
confirmed. Needless to say, it is not possible to simulate all the difficulties of real
search spaces in one benchmark problem. This section reviews the benchmark
functions in the fields of single- and multi-objective optimisation with a focus
on those for robust optimisation.
Generally speaking, the design process of a test problem has two goals.
Firstly, a test problem should be simple and modular in order to allow researchers
to observe the behaviour of meta-heuristics and benchmark their performance
from different perspectives. Secondly, a test function should be difficult to solve in order to provide a challenging environment similar to that of real search spaces for meta-heuristics. These two characteristics are in conflict: over-simplification makes a test function readily solvable for meta-heuristics and the
relevant comparison inefficient. In contrast, although a very difficult test func-
tion is able to effectively mimic real search spaces, it may be very difficult to
solve so that the performance of algorithms cannot be clearly observed and com-
pared. These two conflicting issues make the development of test problems very
challenging.
In single-objective problems there are several important characteristics for an
algorithm: exploration, exploitation, local optima avoidance, and convergence
speed. Unimodal [57], multimodal [122], and composite [116] test functions have
been designed in order to benchmark these abilities. These three types of test
functions have been extensively utilised in the literature.
Basically, the conceptual approach of benchmark design is creating different
difficulties of real search spaces to challenge an algorithm. The single-objective
test problems are relatively simple due to the existence of only one global opti-
mum. They are mostly equipped with obstacles to benchmark the accuracy of an
algorithm in finding the global optimum, and its convergence speed. Challenging
test functions with multiple local solutions are able to benchmark the accuracy
of an algorithm due to the high likelihood of local optima entrapment. However,
test functions with no local solutions but different slopes and saturations are
able to test the convergence speed of a single-objective algorithm.
In multi-objective optimisation, however, there are different performance
characteristics for an algorithm. In addition to the above-mentioned charac-
teristics for single-objective optimisation, other important performance features
are convergence towards the global front, and diversity (coverage) of the Pareto
optimal solutions obtained. In order to benchmark the first characteristic, a test
problem should have a Pareto optimal set located in a unimodal, multi-modal, or
composite search space. In addition, a test function should have different shapes
of Pareto optimal front such as linear, convex, concave, and discontinuous in
order to benchmark the coverage capability of a multi-objective meta-heuristic.
Due to the concepts of Pareto optimality and multi-optimal nature of multi-
objective search spaces, developing multi-objective test problems is significantly
more challenging than single-objective test functions.
Since the proposal of evolutionary multi-objective optimisation by David
Schaffer in 1984 [32], a significant number of test functions were developed up
to 1998. Some of them are as follows (classified based on the shape of the true
Pareto optimal front):

• Convex Pareto optimal front: Osyczka et al. in 1995 [128], Valenzuela-Rendón et al. in 1997 [160], and Laumanns et al. in 1998 [110]

• Concave Pareto optimal front: Fonseca and Fleming in 1993 [72] and in
1995 [71], and Murata and Ishibuchi in 1995 [124]

• Discontinuous Pareto optimal front: Osyczka et al. in 1995 [128] and Viennet et al. in 1996 [164]

In 1998, Van Veldhuizen and Lamont argued that the majority of the test
functions proposed until then could not be considered as standard test prob-
lems [161]. They sifted the test functions and chose three of them as standard
test functions due to their large search space, high dimensionality, multiple ob-
jectives, and global optimum composed of a shape of bounded complexity. A
generic framework for creating a two-objective test function was first proposed
by Deb in 1999 [46]. The mathematical model of this framework is as follows:

\[ \text{Minimise:} \quad f_1(\vec{x}) = f_1(x_1, x_2, \ldots, x_m) \qquad (2.43) \]

\[ \text{Minimise:} \quad f_2(\vec{x}) = g(x_{m+1}, x_{m+2}, \ldots, x_N) \times h\big(f_1(x_1, x_2, \ldots, x_m),\; (x_{m+1}, x_{m+2}, \ldots, x_N)\big) \qquad (2.44) \]

The main idea of this framework was to break up a test function into differ-
ent controllable components to systematically benchmark multi-objective algo-
rithms. In this framework, f1 (~x) controls the distribution of true Pareto optimal
solutions and benchmarks the coverage ability of an Evolutionary Multi-objective
Algorithm (EMA). The g function was proposed to provide multi-modal, decep-
tive, and isolated search spaces, in which different challenging test beds can be
constructed in order to benchmark the convergence of EMAs. The last compo-
nent of this framework, h, is to define the shape of true/local Pareto optimal
fronts. Deb showed that convex, non-convex, and discontinuous Pareto optimal
fronts could easily be achieved by modifying this component [46]. As can be seen
in this framework, different characteristics of a real search space can be simulated
by the proposed components. These components may be modified individually
or simultaneously in order to mimic different characteristics of real problems and
eventually benchmark the performance of EMAs from different perspectives.
In 1999, Zitzler et al. proposed the first so-called Zitzler-Deb-Thiele (ZDT)
standard set of test functions [181] using the proposed framework of Deb [46] and
mimicked six difficulties in real search spaces (convexity, concavity, discontinu-
ity, multi-modality, deceptiveness, and non-uniformity) within six test problems.
The authors compared 8 different algorithms on the ZDT test problems and ob-
served their behaviours from two perspectives: convergence and coverage.
Almost none of the proposed test functions available up to 2001 were ex-
tendable to a desired number of objectives. In 2001, Deb et al. proposed three
systematic methods for creating scalable multi-objective test problems [56, 55].
The three proposed methods were: multiple single-objective functions, bottom-
up, and constraint surface. The first method was to combine different single-
objective test problems as a multi-objective test problem. The advantage of this
method is the ease of construction of a multi-objective test problem. In the sec-
ond method, a Pareto optimal front is first created in an n-dimensional space,
and then the search space beyond that is constructed. The advantage of this
method is that the shape of the Pareto optimal front is known and controlled by
the designer. Finally, the last method starts by creating a simple search space
(hyperbox) and applying a set of constraints to define the shape of the Pareto
optimal front. The advantage of this method is its simplicity but the mathemat-
ical formulation of the final Pareto optimal front is difficult to express. Deb et
al. employed the last two approaches to propose the DTLZ test problems.
In 2006, Huband et al. provided a review of multi-objective test problems
and proposed a scalable test problem toolkit [92]. They also provided similar
recommendations to those of Deb et al. [56, 55]. The recommended features for
the multi-objective test problems were: scalable number of parameters, scalable
number of objectives, dissimilar parameter domains, dissimilar trade-off ranges,
known Pareto optimal set, known Pareto optimal front, different shapes of Pareto
optimal front, parameter dependencies, bias, many-to-one mapping, and multi-
modality.
Another key issue in multi-objective test problems that was first argued by
Okabe et al. in 2004 is the shape and complexity of the Pareto optimal set [126].
Before 2004, the majority of researchers had tried to construct test functions
concentrating on the shape of Pareto optimal front. Okabe et al., however,
suggested a method for controlling the shape of the Pareto optimal set and
created several test problems. Despite the merits of their work, the proposed test
functions were too simple, mapping a 2-D search space to a 2-D objective space.
There is also another method of generating complicated Pareto optimal sets called variable linkage [123], which creates dependencies among the variables. Linkage was also investigated by Deb et al. [54] and Huband
et al. [92].
Despite providing complicated search spaces using linkage methods, in 2009,
Li and Zhang identified that the shape of the Pareto optimal set and linkage
are two different aspects of multi-objective problems [114]. According to them,
a test function with linkage properties may have a very simple Pareto optimal
set. Therefore, they proposed a general class of multi-objective test problems
with known, complicated Pareto optimal sets. Finally, a set of test functions
considering all the above-mentioned characteristics was prepared in a CEC 2009
special session [178].

2.5.1 Benchmark problems for single-objective robust optimisation
There are not many robust benchmark functions in the literature [106, 105].
Some studies utilised the dynamic benchmark problems in addition to the current
robust benchmark functions as discussed in [18, 58]. This thesis collects the
majority of the robust test functions from [18, 19, 20, 58, 105, 106, 129, 163] and
analyses them. These test functions are illustrated in Fig. 2.14.
Generally speaking, the benchmark functions are divided into four groups in
terms of the location of robust and global optima [58], as follows:

• Identical global and robust optima: in this case the robust and global
optima are the same.

• Neighbouring global and robust optima: The global and robust optima are
at the same peak (valley).

• Local-global robust and global optima: The global and robust optima are
at different peaks (valleys), and the robust optimum is a local optimum.

• Max-min robust and global optima: The robust minimum/maximum lies at a non-robust maximum/minimum

Figure 2.14: Collected current test functions in the literature for robust single-objective optimisation. The details can be found in Appendix A.

As may be seen in Fig. 2.14, the test functions are very simple. For instance,
TP8 and TP11 have a stair-shaped search space and the robust optimum of TP10
is very wide. Simplicity can also be observed in the other test functions. Another
major drawback is lack of scalability. The majority of these test functions cannot
be scaled to more than 2 or 5 dimensions. The number of variables is one of
the key factors for increasing the difficulty and effectively benchmarking the
performance of meta-heuristics. The majority of the current test functions have
few non-robust local optima as well. It may also be noticed that there are
no deceptive or flat test functions. The last gap here is the lack of specific test
functions with alterable parameters for defining the degree of difficulty. All these
drawbacks make the current test functions inefficient and readily solvable by
robust meta-heuristics. Therefore, the performance of the robust meta-heuristics
cannot be benchmarked effectively.

2.5.2 Benchmark problems for multi-objective robust optimisation
The literature shows that different branches of multi-objective optimisation need
specific or adapted test functions in order to observe the performance of al-
gorithms in detail. For instance, there are specific sets of constrained test
functions for constrained multi-objective meta-heuristics [52, 128, 158, 160],
reliability-based optimisation [40, 47, 51], and dynamic multi-objective test prob-
lems [3, 37, 67, 68, 85, 97, 113, 120]. In the field of robust multi-objective opti-
misation, however, there is little in the literature on the development of robust
multi-objective problems.
The first robust multi-objective test problems were proposed by Deb and
Gupta in 2006 [44]. These test functions are illustrated in Fig. 2.15. In this
figure, the Pareto front as well as robust fronts with different perturbation levels
in parameters are provided for each test function. The left subplots show the
search landscape made by both objectives for each test problem. The right
subplots include the Pareto optimal front and expectation of Pareto optimal
fronts (robust fronts) considering different levels of perturbations. In RMTP1,
the local fronts are the robust front when δ = 0.007, 0.008, 0.009, 0.01 from the
bottom to the top. In RMTP2, the local fronts become the robust front when
δ = 0.004, 0.005, 0.006, 0.007. The fronts for RMTP3, RMTP4, RMTP5, and
RMTP6 show the nominal value and expectation of both global and local fronts.
Deb and Gupta proposed this set of test functions in order to simulate the
four possible different situations (as illustrated in Fig. 2.13) of a robust Pareto
optimal front with respect to the main Pareto optimal front. As may be seen in
Fig. 2.15, the robust and global Pareto optimal fronts are identical in RMTP1.
The RMTP2 test function simulates the second type of robust front, in which
a part of the global Pareto optimal front is robust. The robust Pareto optimal
front of RMTP3 is completely dominated by the global Pareto optimal front.
Finally, RMTP4 has a robust Pareto optimal front that is partially identical to
the main Pareto optimal front and local fronts. The rest of the test functions
are the extended three-objective version of RMTP1 and RMTP3. These test
functions provide very challenging test beds, as investigated in [43].

Figure 2.15: Test problems proposed by Deb and Gupta in 2006 [44]

Deb and Gupta noted that the analytical robust front of each of these bench-
mark functions is known, so they can be utilised to benchmark both type I
and II robustness handling methods. Although this set of test functions is able
to simulate different types of robust Pareto optimal front with respect to the
global Pareto optimal front, there are other issues when solving real problems,
such as discontinuous robust/global Pareto optimal fronts, convex/concave ro-
bust/global Pareto optimal fronts, and multi-modality.
These issues have been discussed and addressed to some extent by Gaspar-
Cunha et al. in 2013 [75]. Their five new benchmark problems are illustrated
in Fig. 2.16. Note that the robustness curves in this figure (red lines) are the
cumulative value for the robustness of f 1 and f 2 and that it is plotted for given
values of f 1 (i.e. a point on the Pareto front has its robustness plotted at the
same x position).
Figure 2.16: Test problem proposed by Gaspar-Cunha et al. in 2013

As can be seen in this figure, the shapes of the test functions are very different from those of Deb and Gupta [44]. The robust regions of the main Pareto optimal
front are on the convex section of RMTP7, whereas the robust areas lie on
concave regions of the Pareto optimal front in RMTP9. RMTP10 and RMTP11
were proposed in order to design separated robust regions in the robust Pareto
optimal fronts. As the robustness curves in Fig. 2.16 suggest (the red lines), the
robustness of the separated regions decreases from left to right in RMTP10, while
the robustness is equal for the three discontinuous parts in RMTP11. Note that
the robustness of the Pareto optimal front is calculated by averaging re-sampled
points in the neighbourhood, so a low value in the robustness curve shows low
fluctuation and sensitivity to perturbations in the corresponding region of the
Pareto optimal set. For instance, the middle region of RMTP7’s Pareto front is
the most robust area because the robustness curve shows the lowest fluctuation
in both objectives in case of perturbations in the parameters.
As discussed by Gaspar-Cunha et al., RMTP10 is able to test convergence of
an algorithm toward more robust regions of a search space. In addition, RMTP11
is suitable for benchmarking the ability of an algorithm in terms of converging
to distinct regions of the Pareto optimal front.
Another set of test functions was proposed by Goh et al. in 2010 [81]. They
extended three single-objective robust test functions proposed by Branke [18, 19]
and Paenke et al. [129] and compared them with those of Deb et al. [44]. They
argued that the search spaces of these earlier test functions have a bias toward
the robust Pareto front, so it is hard to distinguish whether the robust measure
assists the EMA to find the robust Pareto front or the robust Pareto front is
obtained because of the failure of the algorithm in finding the Pareto optimal
front. Therefore, they proposed a Gaussian landscape generator to integrate
different parametric sensitivities in deterministic search spaces. They designed
five test functions (GTCO) which show different characteristics based on the
Gaussian landscape generator.
There is also a recent set of test functions (BZ) proposed by Bader and
Zitzler [11]. The lack of specific multi-objective test problems in the literature
was reported and an early attempt made to design a standard set. Bader and
Zitzler proposed six test functions with different robust characteristics. The
focus was mostly on the proposal of a multi-modal parameter space and multi-
frontal objective space. The shapes of the robust Pareto optimal fronts were
linear, concave, or convex in the proposed test functions.
Although the proposed test functions in the literature provide different test
beds for benchmarking the performance of robust meta-heuristics, there is a
lack of general-purpose test functions (or frameworks) with control variables for
adjusting the complexity. In addition, there are few multi-modal benchmark
problems, which can provide very similar test beds to the real search spaces.
Another gap in the literature is the lack of test functions with disconnected,
biased, and flat robust Pareto optimal fronts.
In summary, this section showed that the test functions in the field of global
optimisation are not suitable for benchmarking the performance of robust al-
gorithms. This is because such test functions are not seeking to test the robustness of the solutions obtained. There might not even be a robust optimum in a test function that has been designed for testing a global optimiser. It was also observed
that what specific test functions there are for testing robust algorithms are very
simple and limited. They mostly have few local solutions, symmetric search
spaces, and low numbers of variables. On one hand, they allow us to observe
some of the behaviours of a robust algorithm. On the other hand, they are read-
ily solvable by most of the algorithms. The robust multi-objective benchmark
problems also suffer from the same drawbacks despite their use in different stud-
ies. The current gaps for both robust single-objective and multi-objective test functions are the lack of other difficulties such as bias, deceptiveness, flatness, large numbers of local solutions (fronts), and large numbers of variables.

2.6 Performance metrics


Benchmark problems are the main tools for testing the ability of different algo-
rithms in this field. Performance metrics quantify and measure the performance
of algorithms on benchmark and other problems. Without such tools, we can
only compare different optimisation techniques qualitatively, which is not an ac-
curate analysis. We ask ourselves, “How do we determine the extent to which
changes in algorithms are beneficial”. In other words, a qualitative analysis can
show us which method is better, while a quantitative analysis shows how much
better a technique is. Similarly to benchmark functions, there should be differ-
ent performance metrics for indicating the performance of a robust algorithm
since the goal is to find the robust optimal solution(s) and not necessarily global
optimal solution(s). This section only covers the current performance metrics in
the fields of single- and multi-objective optimisation.
The literature shows that the performance of single-objective algorithms is
quantified mostly by two main metrics: an accuracy indicator and convergence
rate. As its name implies, the former performance indicator measures the dis-
crepancy of the approximated global optimum from the true optimum. The
latter performance indicator measures the convergence speed of an algorithm in
approximating the global optimum. These two metrics are the main performance
measures in the literature of single-objective optimisation.
For the single-objective robust algorithms, the above-mentioned performance
indicators (accuracy and convergence speed) can be utilised. This includes the
accuracy of the robust optimum obtained and convergence speeds towards the
robust optimum instead of the global optimum. In addition, the sensitivity of the
solution obtained to the possible uncertainties could be considered as a specific
metric of robustness. This can be highlighted by solving the problem with both
global and robust optimisers to show the robustness of both solutions obtained.
The main purpose of a performance indicator in Evolutionary Multi-Objective
Optimisation (EMOO) is to quantify the performance from a specific point of
view. Generally speaking, the ultimate goal in EMOO is to find a very accurate
approximation and large number of the true Pareto optimal solutions with uni-
form distribution across all objectives [181]. Therefore, the current performance
measures can be classified into three main categories: convergence, coverage, and
success metrics. The first class of performance measures quantifies the closeness
of the solutions obtained to the true Pareto front [142, 141], and the second class
of metrics defines how well the solutions obtained “cover” the range of each of
the objectives [66]. In addition, the number of Pareto optimal solutions obtained
is important [184] (the success ratio), which provides decision makers with more
designs from which to choose.
Another classification in the literature is between unary [160, 170] and bi-
nary [182, 84] performance indicators. The former class of metrics only accepts
one input and provides a real value, whereas the latter metrics have two in-
puts and one output. According to Zitzler et al. [184], each of these types has its own disadvantages. The drawback of the unary performance indicators is
that there should be more than one measure to assess the performance of the
algorithms, and it has been proven by Zitzler et al. that designing an effective
general-purpose unary performance measure to evaluate the overall performance
of an algorithm (convergence, coverage, and success ratio) is impossible [184]. In
addition, binary performance measures provide n(n − 1) different values when
comparing n algorithms, whereas unary metrics provide n values. As the main
drawback, this makes the interpretation, analysis, and presentation of the binary
measures more challenging. It should be noted that an important characteris-
tic of a performance metric is Pareto-compliance [73]. A performance metric
is Pareto-compliant if it does not contradict the order enforced by the Pareto
dominance relation.
Despite the limitations of the unary performance indicators, there is no doubt
that they have been the most popular performance assessors in the literature. This is probably due to their simplicity and ease of analysis. This thesis concentrates on the unary performance measures, but addresses the three major
aspects of performance already specified. In the following subsections a review of
the current convergence, coverage, and success ratio (number of Pareto optimal
solutions obtained) metrics is provided.

2.6.1 Convergence performance indicators:


This subsection covers the most widely-used convergence metrics in the litera-
ture.

2.6.1.1 Generational Distance (GD):

This metric was proposed by Veldhuizen in 1998 [161]. GD calculates the dis-
tance of Pareto optimal solutions obtained from a selected reference set in the
Pareto optimal front. The mathematical formulation is as follows:
\[ GD = \frac{\sqrt{\sum_{i=1}^{n_o} d_i^2}}{n_o} \qquad (2.45) \]
where no is the number of obtained Pareto optimal solutions and di indicates the
Euclidean distance between the i-th Pareto optimal solution obtained and the
closest true Pareto optimal solution in the reference set. Note that the Euclidean
distance is calculated in the objective space.
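A minimal Python sketch of GD (Equation 2.45) is shown below; the reference set is assumed to be a sampled approximation of the true Pareto optimal front, and all names are illustrative.

```python
import numpy as np

def generational_distance(obtained, reference):
    """Generational Distance (sketch of Eq. 2.45): for each obtained objective
    vector, take the Euclidean distance to its closest point in the reference
    set, then combine the distances.

    obtained  : (n_o, M) objective vectors found by the algorithm
    reference : (n_t, M) sampled points of the true Pareto optimal front
    """
    obtained = np.asarray(obtained, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # Pairwise distances: one row per obtained point, one column per reference point
    d = np.linalg.norm(obtained[:, None, :] - reference[None, :, :], axis=2)
    closest = d.min(axis=1)
    return float(np.sqrt(np.sum(closest**2)) / len(obtained))
```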

2.6.1.2 Inverted Generational Distance (IGD):

The mathematical formulation of IGD is similar to that of GD. This modified


measure was proposed by Sierra and Coello Coello in 2005 [148].
\[ IGD = \frac{\sqrt{\sum_{i=1}^{n_t} (d_i')^2}}{n_t} \qquad (2.46) \]
where nt is the number of true Pareto optimal solutions and di′ indicates the
Euclidean distance between the i-th true Pareto optimal solution and the closest
Pareto optimal solution obtained in the reference set.
The Euclidean distance between solutions obtained and the reference set is
different here. In IGD, the Euclidean distance is calculated for every true solution
with respect to the nearest Pareto optimal solution obtained, in the objective space.
2.6.1.3 Delta Measure:

A similar metric was proposed by Deb et al. in 2002 [45] as follows:


\[ \Upsilon = \frac{\sum_{i=1}^{N} \sum_{j=1}^{H} d_{i,j}}{N H} \qquad (2.47) \]
where N is the number of Pareto optimal solutions obtained, H is the number
of solutions selected in the reference set which is different for each solution, and
d(i,j) shows the Euclidean distance from the i-th solution obtained to the j-th
reference point.
A lower value of this metric indicates closeness of the Pareto optimal front
obtained to the true Pareto optimal front.

2.6.1.4 Hypervolume metric:

This metric is for quantifying the convergence behaviour of MOEAs, designed by


Zitzler [183, 180]. The idea is to calculate the area/volume of the objective space
that is dominated by the non-dominated Pareto optimal solutions obtained. Note
that this performance indicator is called Size of Space Covered (SCC) in some
references [157].
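For a bi-objective minimisation problem, the dominated area can be computed with a simple sweep, as in the following illustrative Python sketch; the reference point that bounds the area is an assumption commonly used with this indicator, and the names are hypothetical.

```python
import numpy as np

def hypervolume_2d(front, ref_point):
    """Hypervolume (sketch) for a bi-objective minimisation problem: the area
    dominated by the non-dominated front and bounded by a reference point.

    front     : (n, 2) non-dominated objective vectors
    ref_point : (2,) reference point dominated by every front member
    """
    front = np.asarray(front, dtype=float)
    # Sort by the first objective; for a non-dominated front the second then decreases
    front = front[np.argsort(front[:, 0])]
    hv, prev_f1 = 0.0, ref_point[0]
    for f1, f2 in front[::-1]:          # sweep from the largest f1 downwards
        hv += (prev_f1 - f1) * (ref_point[1] - f2)
        prev_f1 = f1
    return hv

# Example usage with an assumed reference point
print(hypervolume_2d([[1, 4], [2, 2], [4, 1]], ref_point=[5, 5]))
```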

2.6.1.5 Inverse hypervolume metric:

As the name implies, this metric is the inverse of the hypervolume metric, in
which the area/volume of the objective space that is not dominated by the Pareto
optimal front obtained is calculated with respect to a reference set [112].

2.6.2 Coverage performance indicators:


This subsection presents the most widely-used coverage metrics in the literature.

2.6.2.1 Spacing (SP):

The spacing metric was first proposed by Schott in 1995 [144]. The main idea of
this metric was to calculate the variance of the Pareto optimal solutions obtained.
The mathematical expression of SP is as follows [28]:
\[ SP \triangleq \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(\bar{d} - d_i)^2} \qquad (2.48) \]

where d̄ is the average of all di, n is the number of Pareto optimal solutions obtained, and di = minj(|f1(~xi) − f1(~xj)| + |f2(~xi) − f2(~xj)|) for all i, j = 1, 2, ..., n.
A low value for this measure shows a greater number of, and more equally
spread solutions along the Pareto optimal front obtained.
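A minimal Python sketch of Schott's spacing metric (Equation 2.48) is given below; it assumes the nearest-neighbour distance excludes the solution itself, which is the usual reading of the definition, and the names are illustrative.

```python
import numpy as np

def spacing(front):
    """Schott's spacing metric (sketch of Eq. 2.48): the standard deviation of
    the Manhattan distances from each obtained solution to its nearest
    neighbour in objective space.

    front : (n, M) objective vectors of the obtained Pareto optimal solutions
    """
    front = np.asarray(front, dtype=float)
    n = len(front)
    d = np.full(n, np.inf)
    for i in range(n):
        for j in range(n):
            if i != j:
                d[i] = min(d[i], np.sum(np.abs(front[i] - front[j])))
    d_bar = d.mean()
    return float(np.sqrt(np.sum((d_bar - d) ** 2) / (n - 1)))
```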
Another similar metric was proposed by Deb et al. in 2002 [45]. This method
averages the Euclidean distance between the neighbouring Pareto optimal solu-
tions obtained as the spread of the solutions. Note that this metric is calculated
with respect to at least two extreme solutions that define the maximum extent of
each objective based on the true Pareto optimal front. This metric is as follows:

\[ \Delta = \frac{d_f + d_l + \sum_{i=1}^{N-1} |d_i - \bar{d}|}{d_f + d_l + (N-1)\bar{d}} \qquad (2.49) \]
where N is the number of Pareto optimal solutions obtained, d¯ is the average
of Euclidean distances, di is the Euclidean distance between the i-th solution and its consecutive solution in the Pareto optimal set obtained, and df , dl are the Euclidean distances between the boundary solutions obtained and the extreme
solutions.

2.6.2.2 Radial coverage metric:

This metric was proposed by Lewis et al. in 2009 [112] where the objective space
is divided into radial sectors originating from the origin. Then the number of
segments that are occupied by at least one Pareto optimal solution obtained is
calculated as the coverage of an algorithm. The mathematical expression of this
metric is as follows:
\[ \Psi = \frac{\sum_{i=1}^{n} \psi_i}{N} \qquad (2.50) \]

\[ \psi_i = \begin{cases} 1 & (P_i \in PF^*) \;\wedge\; \alpha_{i-1} \le \tan\!\left(\frac{f_1(x)}{f_2(x)}\right) \le \alpha_n \\ 0 & \text{otherwise} \end{cases} \qquad (2.51) \]

2.6.2.3 Maximum Spread (M ):

This method was proposed by Zitzler and is as follows [180]:


\[ M = \sqrt{\sum_{i=1}^{o} \max\big(d(a_i, b_i)\big)} \qquad (2.52) \]
where o is the number of objectives, d() calculates the Euclidean distance, ai is the maximum value in the i-th objective, and bi is the minimum value in the i-th objective.
As this equation shows, this metric defines a hyperbox/hypercube using the
Pareto optimal front obtained and finds the maximum diagonal distance.
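The following illustrative Python sketch computes the metric under the interpretation described above, namely the maximum diagonal of the hyperbox spanned by the obtained front; the names are hypothetical.

```python
import numpy as np

def maximum_spread(front):
    """Maximum spread (sketch): the diagonal length of the hyperbox spanned
    by the obtained front in objective space (cf. Eq. 2.52).

    front : (n, o) objective vectors of the obtained Pareto optimal solutions
    """
    front = np.asarray(front, dtype=float)
    extent = front.max(axis=0) - front.min(axis=0)  # per-objective range
    return float(np.sqrt(np.sum(extent ** 2)))
```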

2.6.3 Success performance indicators:


The number of success metrics is substantially smaller than that of convergence and coverage measures in the literature. This subsection discusses two of the most popular
ones.

2.6.3.1 Error Ratio (ER):

This metric counts the number of Pareto optimal solutions obtained that do not belong to the set of true Pareto optimal solutions and divides it by the total number of solutions found. The formulation of this metric was proposed by Veldhuizen in
1999 [160] as follows:
\[ ER = \frac{\sum_{i=1}^{n} e_i}{n} \qquad (2.53) \]

\[ e_i = \begin{cases} 0 & P_i \in PF^* \\ 1 & \text{otherwise} \end{cases} \qquad (2.54) \]

where n is the number of Pareto optimal solutions obtained and Pi is the i-th
Pareto optimal solution obtained.
A lower value of this measure shows a better approximation of the true Pareto optimal solutions.

2.6.3.2 Success counting (SCC):

This measure was proposed by Sierra and Coello Coello [148]. It counts the number of solutions obtained that are members of the true Pareto
optimal set. The mathematical formula proposed is as follows:
SCC = \sum_{i=1}^{n} s_i   (2.55)

s_i = \begin{cases} 1 & P_i \in PF^* \\ 0 & \text{otherwise} \end{cases}   (2.56)

where n is the number of Pareto optimal solutions obtained and Pi is the i-th
Pareto optimal solution obtained.
In contrast to ER, a high value for this measure shows better performance.
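Since ER and SCC are complementary counts over the same membership test, both are sketched together below; testing membership in the true front with a numerical tolerance (np.allclose) is an implementation choice, not part of either definition.

```python
import numpy as np

def _in_true_front(p, true_front, tol=1e-8):
    """True if point p matches some member of the true Pareto front within tol."""
    return any(np.allclose(p, q, atol=tol) for q in true_front)

def error_ratio(front, true_front):
    """ER: fraction of obtained solutions NOT in the true front (lower is better)."""
    e = [0 if _in_true_front(p, true_front) else 1 for p in front]
    return sum(e) / len(e)

def success_count(front, true_front):
    """SCC: number of obtained solutions that ARE in the true front (higher is better)."""
    return sum(1 for p in front if _in_true_front(p, true_front))
```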
The above-mentioned multi-objective performance indicators have been widely
used in the literature to compare different algorithms. Regardless of the advan-
tages and disadvantages of unary and binary measures, they are all highly suit-
able for the quantitative analysis of results from different perspectives. However,
none of them is able to measure the performance of robust multi-objective al-
gorithms effectively: there is no specific performance metric to measure
the robustness of the Pareto optimal solutions obtained. Therefore, comparisons
of the current robust multi-objective algorithms are qualitative and not accurate
enough, although quantitative accuracy is essential when evaluating the performance of
algorithms. This is the motivation of the work in this thesis, in which several spe-
cific performance metrics will be proposed to quantify the performance of robust
multi-objective algorithms for the first time.

2.7 Summary
This chapter first provided a comprehensive review of the evolutionary single-
objective optimisation methods. Different types of such optimisation techniques,
drawbacks, advantages, and the state-of-the-art were discussed in detail. After
that, multi-objective optimisation using evolutionary algorithms was discussed
as one of the most practical and popular branches in this field. The essential
definitions, recent advances, different techniques, difficulties, most popular al-
gorithms, benchmark problems, and performance metrics in this field were the
main discussions.
The chapter also included two further sections: single-objective robust opti-
misation and multi-objective robust optimisation. The former section was dedi-
cated to the literature review of the current robust single-objective optimisation
techniques, benchmark problems, and robustness measures in single-objective
search spaces. This section also identified diverse types of uncertainties and

their impacts on real systems and problems. In the latter section, the literature
of robust multi-objective optimisation was reviewed in detail. Similarly to the
robust single-objective optimisation section, preliminaries, essential definitions, current
robust multi-objective optimisation methods, performance metrics, benchmark
problems, and robustness measures were covered.
Chapter 3

Analysis

The preceding chapter reviewed the literature of robust optimisation and re-
lated fields. Similarly to global optimisation, a robust optimisation process in-
cludes four main phases: benchmark development or preparation, development
or preparation of performance metrics, proposing and testing an algorithm or
improvement, and applying the algorithm to an actual industrial problem to be
optimised. This chapter analyses each of these phases and identifies their current
gaps.

3.1 Benchmark problems


Benchmark problems are essential for testing different algorithms. They provide
test beds with different difficulties to challenge optimisation algorithms. With-
out benchmark problems, we would have to compare algorithms on real problems
that are usually expensive and have unknown search spaces. The computation-
ally expensive nature of real problems makes the design process of an algorithm
significantly longer. However, benchmark problems are computationally cheap
and allow us to compare algorithms conveniently.
Real problems also have unknown search spaces and true optimal solution(s).
This prevents an algorithm designer from testing the different abilities of an
algorithm independently. With an unknown optimum, we are never sure if we
have reached it, and there is no way to measure how close we are to it. A real
search space is a mixture of difficulties, so an algorithm has to overcome all of
them simultaneously to estimate the optimal solution(s). Benchmark problems
isolate one of a set of difficulties that an algorithm may face in a real search


space. This assists us to test the abilities of an algorithm dealing with isolated
difficulties.
In the literature of global optimisation, there is a significant number of test
suites, test cases, and frameworks for testing algorithms. The number, quality,
and popularity of the test functions in this field assure a designer that compar-
isons on them are reliable. In order to design a robust algorithm, there is a need
for adequate and proper test functions as well. Some of the challenges that an
algorithm may face when searching for robust solutions in a real search space
are: non-robust local optimal solutions, robust local optimal solutions, slow con-
vergence, deceptive non-robust optimal solutions, robust optimal solutions close
to the boundary of the search space, and so on. Unfortunately, a limited number
of these difficulties have been implemented in test functions.
The current test problems are very simple and limited, so they are not ef-
ficient in benchmarking the ability of robust algorithms. They mostly have
few local solutions, symmetric search spaces, and a low number of variables. Al-
though such test problems allow us to observe some of the behaviours of a robust
algorithm, they are readily solvable by most of the algorithms. The robust multi-
objective benchmark problems also suffer from the same drawbacks despite their
use in different studies. The current gaps for both robust single-objective and
multi-objective test functions are the lack of difficulties such as bias, deceptiveness,
flatness, a large number of local solutions (fronts), and a large number of variables.
In addition, there is no framework with alterable parameters to allow a designer
to generate new test functions based on their needs. Therefore:

We must design more challenging test functions, and frameworks to alter their
difficulties.

3.2 Performance metrics


After finding or designing a suitable set of test functions in the systematic design
process, the next step is to find appropriate performance metrics. Test functions
allow us to test and observe the behaviour of an algorithm. In order to compare
different algorithms, however, we need to use performance metrics. There are
two types of performance metrics in the literature: qualitative versus quantita-
tive. Qualitative metrics are usually illustrative in this field and determine if an
algorithm is better than another qualitatively. Although these metrics determine

the superiority of an algorithm, they cannot measure by how much an algorithm is
better than another. In other words, a quantitative metric defines the degree of
superiority.
Similarly to benchmark problems, there is a substantial number of works for
evaluating and proposing performance metrics specifically for multi-objective
global optimisers. However, there is no work in the literature to investigate and
propose performance metrics for evaluating robust algorithms. All of the current
works borrow the qualitative metrics from the field of global optimisation and
there are no specific quantitative performance metrics to measure how robust the
solutions obtained are. Therefore:

We must propose and utilise performance metrics to determine the extent to


which minor or major changes in algorithms are beneficial.

3.3 Robust algorithms


The above-discussed two phases of a systematic algorithm design process, devel-
oping benchmark problems and performance metrics, are prerequisite to the last
phase, algorithm invention or improvement. Due to the existence of standard
tests function and performance metrics in the field of global optimisation, there
is again a remarkable number of works in the literature on improving, hybridis-
ing, and proposing different algorithms. However, much more work needs to be done
in the literature of robust optimisation; the number of studies is dramatically smaller
than in the field of global optimisation. A global optimiser cannot determine
robust solutions, so designers usually use additional constraints to guarantee
the robustness of solutions. The advantage of this method is that there is no
need to develop robust algorithms and a global optimiser can be applied to the
problem directly. The disadvantage is that adding constraints increases the dif-
ficulty of the problem and changes the boundaries of the search space. A better
way is to use specific operators to look for the robust optimal solution(s) in-
stead of global solution(s). On one hand, this method does not require adding
additional constraints and the optimiser is searching the main search space. On
the other hand, suitable mechanisms should be integrated into the robust algo-
rithm to avoid non-robust, often global optimal solution(s) and search for robust
solutions.

The majority of current methods suffer from significant additional computa-


tional costs due to the need for additional function evaluations to confirm the ro-
bustness of solutions, making them impractical for solving real problems [96, 75].
As an example, antenna design problems are usually very expensive and may
easily take up to 60,000 CPU hours (roughly 357 weeks, or almost 7 years) to optimise
them [111]. If we want to find the robust solutions using an algorithm that
needs 4 sampled points for every solution, the robust optimisation would take
up to 240,000 CPU hours (roughly 1,429 weeks, or about 27 years). This example shows how
impractical a robust optimisation that relies on additional sampled points is.
Therefore:

We must find a way to reduce the computation time of robust optimisation


algorithms.

There have been two main approaches proposed for reducing the compu-
tational costs (true function evaluations) of robustness handling methods, as
follows:

1. Archive-based methods, in which previously sampled solutions are saved


and re-used during the optimisation in order to define the robustness of
meta-heuristics’ search agents [143, 18]

2. Surrogate-based techniques where a meta-model is employed to approxi-


mate the search space in the neighbourhood of solutions [127, 137]

Generally speaking, surrogate models are approximations of the real search


spaces and computationally cheaper than the real model. They allow designers
to have a rough image of the search landscape and make the design process
faster. However, one of the major problems is that a surrogate model might
not be accurate enough. This means that an optimiser (or a designer) always
investigates the approximated model of the search space instead of the actual
one. Therefore, utilising surrogate models reduces the reliability of the whole
design process although sometimes they can lead us to the real solution faster.
This deteriorates when searching for robust solutions due to the intrinsic
uncertainties involved in meta-models. The errors from meta-models can be
considered mistakenly as the sensitivity of a solution to other uncertainties. In
this case, a solution that is not sensitive to the probable uncertainties can be
discarded by an optimiser due to the noise from the meta-model.

Another disadvantage of surrogate models is that they mostly use one or more
than one [162, 151] model for the entire search space, while a real search space
usually has regions with diverse shapes. This is because surrogate models are
constructed from limited (local) information about the search space, which works
well for some regions but may provide deceptive information to the optimiser
about other areas of the search space. Therefore, surrogate-assisted algorithms
may become unreliable because they can be deceived by the surrogate models
in some regions of the search space. Due to inaccuracy and unreliability, these
techniques are not investigated in this thesis and considered out of scope, so
interested readers are referred to the comprehensive review by Jin [95].
The archive-based methods, which are the focus of this work, rely on pre-
viously evaluated solutions during robust optimisation. The main advantage of
these methods compared to surrogate-assisted approaches is the use of the real
search space. Utilising a real search space prevents an optimiser from making
unreliable comparisons between robust and non-robust solutions. It is worth
mentioning here that the reliability of archive-based methods can be improved
by making more true function evaluations around the solution. This process be-
comes computationally cheaper every year with the improvement of hardware.
It seems both surrogate-assisted and archive-based algorithms have their own
advantages and drawbacks. On one hand, surrogate-assisted algorithms do not
solve the real search space, have intrinsic errors, and rely on local information ex-
tracted from the search space. However, they are cheap, so additional function
evaluations have no substantial additional cost. On the other hand, archive-
based algorithms solve the actual search space but suffer from unreliability be-
cause of the stochastic nature of meta-heuristics. The solutions contained in
the archive are the product of the algorithms’ operations, not a systematic sam-
pling of the search space. Due to the importance of solving the actual search
space (and not the surrogate model), the archive-based methods are investigated
and improved in this thesis. The unreliability of such methods is targeted for
improvement as the only disadvantage.
The usefulness of archive-based methods was investigated and confirmed by a
number of studies [18, 94, 58, 143]. They experimentally proved that previously
sampled points could reduce the number of true function evaluations significantly
and are able to provide good information about the robustness of solutions.
In addition, according to the law of large numbers, the mean of a large number of
samples from a population approaches the mean of the population itself.
Therefore, the average of a large number of sampled points in the neighbourhood
of a solution gives us the average of the real search landscape to determine the
robustness.
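To make the idea concrete, the sketch below estimates the robustness of a candidate solution as the mean objective value of archived points lying within a radius δ of it in parameter space. The radius, the archive layout, and the fallback to the raw fitness when no archived neighbours exist are illustrative assumptions, not the specific operators proposed later in this thesis.

```python
import numpy as np

def archived_robustness(x, f_x, archive_x, archive_f, delta):
    """Estimate the robustness (mean effective fitness) of solution `x` from
    previously evaluated points, without any new function evaluations.

    archive_x : (m, d) array of previously sampled decision vectors
    archive_f : (m,)  array of their objective values
    delta     : neighbourhood radius in parameter space
    """
    dist = np.linalg.norm(archive_x - x, axis=1)
    neighbours = archive_f[dist <= delta]
    if neighbours.size == 0:
        return f_x                   # no archived neighbours: fall back to the raw fitness
    return neighbours.mean()         # mean-effective estimate over the delta-neighbourhood
```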
Although previously evaluated points can provide very useful information
about the robustness of new solutions without the need for additional evalu-
ation [18], the stochastic nature of meta-heuristics prevents this method from
providing the highest reliability. In effect, the reliability of the archive-based
methods is decreased as the number of archive members and true function eval-
uations are reduced. Deb et al. studied the effects of neighbourhood solutions
in terms of finding the analytical robust front and found that finding the ro-
bust front becomes more challenging when the number of sampled solutions (or
neighbouring solutions) decreases [43]. In addition, archive-based methods that
only use previously sampled points are very unstable in the initial steps of opti-
misation due to fewer sampled points.
All these reasons reduce the confidence of designers and decision makers in
the performance of the archive-based methods and the quality of robust de-
signs obtained. What makes these methods unreliable is the lack of sufficient
previously sampled points in the archive, the lack of a good distribution of sampled
points around particular solutions, and the lack of appropriate sampled solutions
in a certain radius around particular solutions. One might think that the un-
reliability of the archive-based approaches is resolved after the initial steps of
meta-heuristic optimisation, but the stochastic nature of meta-heuristics and
the unknown shape of the search space prevent the archive-based methods from
making confident decisions throughout the whole optimisation process.
Fig. 3.1 shows an example of an archive-based method providing misleading
information when relying on previously sampled points. In this figure the solu-
tion S2 is more robust than S1. In the archive, however, there is one sampled
point for S1 and three for S2 to confirm the robustness. Since the sampled
solution and S1 are very close, the robustness measure indicates high robust-
ness. However, the robustness measure for S2 shows less robustness due to the
distribution of the solutions around S2 in parameter and objective spaces. In
this case, an archive-based method assumes that S1 is more robust than S2,
while it is not. Such circumstances can happen throughout robust optimisation,
which results in guiding the search agent of meta-heuristics toward misleading,


Figure 3.1: An example of the failure of archive-based methods in distinguishing


robust and non-robust solutions. Note that the variances shown are the actual
variances, not those detected by the sampling.

non-robust regions of search spaces.


Therefore, it seems these robustness measures are not particularly reliable
metrics for confirming the robustness of solutions when using previously sam-
pled points. In multi-objective robust optimisation, Pareto dominance based on
the robustness measure is also unreliable, since a non-robust solution may dominate
a robust one purely because of the unreliability of the archive-based methods. The
reason for this unreliability of the robustness measures proposed so far is that the
status of previously sampled points in parameter space is not considered; only the
magnitude of changes in the objective space is considered.

3.4 Systematic robust optimisation algorithm design process
As discussed above, the main gap is the lack of a standard and systematic design process
including suitable robust test functions, robust performance metrics, and robust
algorithms. This thesis attempts to propose the first systematic robust optimisa-
tion process in the literature. It proposes frameworks for generating challenging
test functions with different levels of difficulty, performance metrics to quantify
the performance of algorithms and compare them, and techniques to improve
the reliability of archive-based methods. The unreliability originates from the

lack of confidence that we have in the sampled points inside the archive. There-
fore, if a method improves our confidence in the values of the archive members,
we can design more reliable algorithms. This is what the literature lacks and
the specific motivation of this research, in which a novel method is proposed to
measure the confidence level of search agents during optimisation to alleviate the
unreliability of the archive-based algorithms. It should be noted here that we
cannot achieve 100% reliability because a method that improves our confidence
is only an estimate, as it is derived from samples.
The current robust test problems suffer from simplicity and are not able to
test robust algorithms efficiently. There is no performance metric for evaluating
the ability of robust algorithms. In addition, the majority of current methods
suffer from significant additional computational costs due to the need for addi-
tional function evaluations to confirm the robustness of solutions, making them
impractical for solving real problems [96].
To fill these gaps, I propose the systematic design process in the following
chapters. This includes designing challenging robust test problems to compare
algorithms, performance metrics to measure how much a robust algorithm is
better than another, and computationally cheap robust algorithms to find robust
solutions for optimisation problems. As Fig. 3.2 shows, the first two phases of this
systematic process, the development of test functions and performance metrics,
are prerequisite to the third phase, algorithm development.

3.5 Objectives and plan


To answer the main research question, this thesis proposes robust test func-
tions, robust performance metrics, and computationally cheap robust optimisa-
tion techniques. These three components are the foundation of the systematic
process and can be used by other researchers.
For creating challenging test problems, several frameworks are proposed that
allow creating test functions not only for the purpose of this thesis but also for
other research in future. Two performance metrics are also proposed to quantify
the performance of robust multi-objective algorithms. Again, the performance
metrics can be used to compare any two algorithms in the literature.
In addition to benchmark functions and performance metrics, a new metric
called the Confidence (C ) measure is proposed to consider the status of previ-

[Diagram: robust test function design → robust performance metric design → robust algorithm design]

Figure 3.2: Test functions and performance metrics are essential for systematic
algorithm design

ously sampled points in the parameter space in order to improve the reliability
of robustness measures. Confidence-based relational operators and confidence-
based Pareto optimality/dominance are then proposed using both robustness
and confidence metrics in order to make confident and reliable comparison be-
tween solutions in both single- and multi-objective search spaces. Two novel
and cheap approaches called Confidence-based Robust optimisation (CRO) and
Confidence-based Robust Multi-objective optimisation (CRMO) are established.
The proposed approach improves the reliability of archive-based methods with-
out additional computational costs and assists designers to confidently rely on
points previously evaluated during optimisation. In addition, the proposed ap-
proach allows designing different confidence-based methods for finding robust
Pareto solutions.
The objectives of this thesis are:

1. To propose a confidence measure for quantifying and measuring the confidence level of robust solutions

2. To evaluate the current robust single-objective test functions and propose frameworks to generate more challenging ones

3. To evaluate the current robust multi-objective test functions and propose frameworks to generate more challenging ones

4. To propose novel performance metrics for evaluating and comparing robust multi-objective algorithms

5. To propose a confidence-based robust optimisation approach for finding the robust solutions of single-objective problems reliably and without extra computational cost

6. To propose a confidence-based robust multi-objective optimisation approach for finding the robust solutions of multi-objective problems reliably and without extra computational cost

7. To propose the first systematic robust design process including standard test functions, performance metrics, and computationally cheap robust algorithms

8. To investigate the application of the proposed confidence-based robust optimisation approaches in finding robust solutions of real-world problems

The gaps that are going to be filled by the above-mentioned objectives and
consequently where the contributions of this thesis fit are illustrated in Fig. 3.3.

3.6 Contributions and scope


In addition and as a result of this analysis, further contributions and scope of this
thesis are shown in Fig. 3.4. The first contribution is the development of robust
single-objective and multi-objective test functions. The focus will be on the
proposal of frameworks with alterable parameters for generating test functions
with different difficulties. In addition, various test functions will be proposed
with diverse characteristics to benchmark the performance of robust algorithms
from different perspectives. All the test functions will be unconstrained, but can
easily be equipped with constraints.
Proposed robust multi-objective performance measures can be used to com-
pare any two sets of robust Pareto optimal solutions. However, the robustness
of the true front should be known a priori. Therefore, all the robustness curves
are defined for the test functions. The performance metrics are illustrated by
using them to quantify the performance of algorithms on bi-objective problems.

[Diagram: objectives 1–7 positioned across the overlaps of population-based stochastic optimisation methods, single-objective optimisation, multi-objective optimisation, and robust optimisation, together with the engineering application of marine propeller design]
Figure 3.3: Gaps targeted by the thesis

The confidence measure will be proposed for handling type B uncertainties


only (in parameters). However, perturbations in operating conditions can be
handled in case of parameterising them as well. Such cases will be considered
when solving the real-world test case.
Confidence-based relational operators and confidence-based operators are
only applicable to robust single-objective optimisation algorithms. The focus
will be on unconstrained robust optimisation. The proposed confidence-based
robust optimisation will be demonstrated only on PSO and GA.
The proposed confidence-based Pareto optimality is applicable to compare
two sets of solutions for problems with two or more objectives. Confidence-based
operators can also be integrated into any multi-objective optimisation
algorithm, but this thesis only proposes a confidence-based MOPSO algorithm
called Confidence-based Robust MOPSO (CRMOPSO).

[Diagram: the confidence measure underpins confidence-based relational operators (used within CRPSO and CRGA for single-objective robust optimisation) and confidence-based Pareto optimality (used within CRMOPSO for robust multi-objective optimisation), situated among implicit and explicit population-based stochastic robust optimisation methods, benchmark problems, and performance metrics]

Figure 3.4: Scope and contributions of the thesis
A real problem is chosen from the field of propeller design. To be exact, the
shape of several marine/submarine propellers is optimised using MOPSO and
CRMOPSO.

3.7 Significance of Study


The confidence measure allows us to define the confidence that we have when
relying on previously sampled points. The proposed confidence measure will
consider the status of the previously sampled points from different perspectives:
the number of sampled points, the distance from the main solutions, and their
distribution. Integrating the confidence measure into the relational operators
(second objective) allows us to perform confidence-based comparison between
the search agents of single-objective algorithms. In addition, confidence-based
relational operators are integrable into different operators of the meta-heuristics,

so specific mechanisms can be constructed for different meta-heuristics for doing


confidence-based robust optimisation.

The confidence-based Pareto dominance concept allows confidence-based com-


parison of solutions in multi-objective search spaces. The proposed confidence-
based Pareto dominance is integrable into different components of multi-objective
meta-heuristics as well. Therefore, it can be utilised to design specific operators
for different meta-heuristics and consequently perform confidence-based robust
multi-objective optimisation.

This thesis also proposes several challenging frameworks and test functions for
single-objective and multi-objective robust algorithms. Due to the lack of such
difficult test functions in the literature, these contributions can be considered
as one of the seminal attempts in designing standard single- and multi-objective
robust test problems. The thesis also considers the proposal of two specific
novel performance measures for comparing the robust multi-objective algorithms.
There are no performance metrics in the field of robust multi-objective optimi-
sation, so the proposed metrics are very important and can be used to quantify
the performance of robust multi-objective algorithms for the first time. In addi-
tion, the proposed systematic robust algorithm design process allows designers
to reliably and confidently propose new algorithms or improve the current ones.

The confidence measure and confidence-based robust optimisation perspec-


tives only utilise previously sampled points during the optimisation process.
Therefore, they would not increase the computational burden of the algorithms.
For example, a real problem, which needs 1 month to be optimised by an algo-
rithm, would need 6 months to be optimised robustly by a re-sampling method
(supposing 5 new points were re-sampled around each solution). However, the
proposed confidence-based robust optimisation process requires the same period
of time (1 month) to determine the robust solution(s).

The systems designed by computer scientists usually have very powerful com-
putational foundations, but there is often little focus on real application. The
investigated case studies of this thesis are real problems, so there will be an em-
phasis on real applications in addition to theoretical works. Several propellers
will be optimised by MOPSO for the first time. In addition, this thesis is the
first research that attempts to design a robust propeller.

3.8 Summary

The discussions of this chapter showed that the fields of single-objective and
multi-objective evolutionary optimisation are very mature
since there is considerable research in these two fields. There are many al-
gorithms, benchmark problems, performance metrics, and constraint handling
techniques. The applications of optimisation techniques in both fields can be
found widely in different branches of science and industry. Although most of
the real problems have multiple objectives, the importance of single-objective
optimisation should not be underestimated. Such techniques are essential
in solving and analysing real problems with one objective. Robust optimisation
is also important in both areas. It does not matter if we look for one solution
or a set of solutions, the presence of uncertainties in real environments is always
a substantial threat for the stability and reliability of the optimal solution(s)
obtained. Robust optimisation in a multi-objective search space seems to be
more challenging and critical than in a single-objective search space, although it is
essential when solving real problems of any type.
Finding optimal solutions that are less sensitive to perturbations requires a
highly systematic robust optimisation algorithm design process. This includes
designing challenging robust test problems to compare algorithms, performance
metrics to measure how much a robust algorithm is better than another, and
computationally cheap robust algorithms to find robust solutions for optimisa-
tion problems. The first two phases of a systematic algorithm design process,
developing test functions and performance metrics, are prerequisite to the third
phase, algorithm development.
Benchmark functions provide test beds for challenging and testing different
algorithms. They are the foundation of a systematic algorithm design and with-
out them benchmarking of the algorithms is not possible. Despite the large
number of test functions proposed for benchmarking global optimisers, there
are inadequate benchmark functions to effectively test the performance of robust
algorithms.
Also, comparing algorithms on a benchmark function requires qualitative and
quantitative metrics. Despite the significant advancements in multi-objective
performance metrics, it was observed that the literature substantially lacks per-
formance metrics to quantify the performance of robust multi-objective algo-

rithms. For robust single-objective algorithms, the current performance indi-


cators can be employed easily due to similar performance measures: accuracy
and convergence speed. For robust multi-objective algorithms, however, there
are no performance metrics at all. All of the current works in the literature are
qualitative, and not as accurate as quantitative research. Therefore, this gap is
even more significant than the lack of test problems in the literature of robust
optimisation.
With suitable benchmark problems and performance metrics, different ideas
can be implemented and evaluated to propose new/improved techniques as the
main phase of systematic algorithm design. Due to the existence of diverse and
standard test problems and performance metrics, there are a substantial num-
ber of algorithms in the field of global multi-objective optimisation. However,
robust optimisation techniques are in the minority in this field. In addition,
the literature review of this thesis revealed that the current robust optimisation
techniques suffer from high computational cost and low reliability.
The lack of systematic robust algorithm design is a main motivation for this
thesis. This gap is the main reason why the robust optimisation field lags behind
the field of global optimisation and even some relevant fields such as dynamic
optimisation or noisy optimisation. As discussed in the preceding paragraphs,
the main components of a systematic design framework are: test function design,
performance metric design, and algorithm design. This thesis contributes to
these areas to establish the first systematic robust optimisation algorithm design
process.
In the next chapters, firstly, a variety of challenging test problems will be
proposed to mimic the challenges in real search space when searching for robust
solutions. The test functions are proposed in both single- and multi-objective
optimisation fields. With these challenging test problems, obviously, the perfor-
mance of a robust algorithm can be benchmarked effectively and reliably from
different perspectives.
Secondly, a set of performance indicators will be proposed for the first time in
the literature of robust multi-objective optimisation. The performance measures
are proposed only for robust multi-objective algorithms since the current single-
objective metrics can be utilised for quantifying the performance of robust single-
objective algorithms. The proposed performance measures allow efficient and
reliable quantitative comparison of algorithms for the first time.

Lastly, a novel indicator called confidence measure is proposed to improve


the reliability of the archive-based methods when searching for robust solutions.
The proposed confidence metric is designed for both single- and multi-objective
algorithms. Therefore, it allows establishing two novel robust optimisation ap-
proaches called confidence-based robust optimisation and confidence-based ro-
bust multi-objective optimisation to search for robust solutions of real problems
with single and multiple objectives.
Chapter 4

Benchmark problems

The first phase of a systematic design process is to find or design suitable


benchmark problems. Benchmark problems are essential for challenging algo-
rithms to observe and verify their performance. Although solving a real problem
can better challenge an algorithm, we do not know how good an algorithm per-
forms due to the unknown position of the optimum, and computational time is
one of the main issues of real optimisation problems. In addition, a real search
space has diverse difficulties combined that prevent us from observing and testing
the different abilities of an algorithm independently.
If suitable test functions exist in the literature, we have to choose a combi-
nation of different test functions to effectively benchmark the performance of
algorithms. If there is no suitable challenging benchmark problem available in
the literature, we have to propose some to reliably test and verify the perfor-
mance of a given algorithm. Proposal of test functions is important, but having
a framework to generate them with different degrees of difficulty can facilitate
the benchmark design process significantly.
Frameworks mostly have alterable parameters that allow designers to gener-
ate test functions with a desired level of difficulty. For instance, a framework
for creating multi-modal test functions has to offer a parameter to define the
number of local optima. Using standard frameworks makes us confident about
the quality of the test functions generated and allows the difficulties that challenge an
algorithm to be presented in isolation. This chapter proposes several frameworks
that allow generating test functions with diverse characteristics and difficulties
at desired levels.


The design process of a test problem includes two goals. On one hand,
a test problem should be simple and modular in order to allow researchers to
observe the behaviour of meta-heuristics and benchmark their performances from
different perspectives. On the other hand, a test function should be difficult to
solve in order to provide challenging environments, similar to those of real
search spaces, for meta-heuristics. These two characteristics are in conflict:
greater simplicity makes a test function readily solvable for meta-heuristics and
the resulting comparisons uninformative. In contrast, although a very difficult test
function is able to effectively mimic a real search space, it may be so difficult
to solve that the performance of algorithms cannot be clearly observed and
compared. These two conflicting issues make the development of test problems
challenging.
In this chapter several frameworks are proposed and utilised to design test
functions for benchmarking robust single- and multi-objective meta-heuristics.
For designing the frameworks and test functions, the guidelines suggested by
Whitley et al. [166] are followed for creating standard test suites:
• Standard test sets should contain test problems that are resistant to simple
optimisation methods.

• Standard test sets should include test problems with non-linear, non-separable,
and non-symmetric search spaces.

• Standard test sets should have scalable test problems.

• Standard test sets should have test problems with scalable evaluation cost.

• Standard test sets should contain test problems that are of canonical form,
meaning that they should be independent of problem representation.
These essentials were extended by Bäck and Michalewicz [8, 10] (e.g. having a
few unimodal and some highly multi-modal test functions). However, these guidelines
are very generic and mostly applicable to single-objective test problems. In
the literature, Deb et al. suggested specific recommendations for making multi-
objective problems as follows [10, 55]:

• Simplicity is one of the main factors of a multi-objective test problem.

• There should be scalability in variables to allow making a desirable number


of parameters.

• There should be scalability in objectives for making a desirable number of


objectives.

• The exact shape and location of the Pareto optimal front should be known
and easy to understand.

• The Pareto optimal set in the parameter space should also be known and
understandable.

• There should be controllable mechanisms to provide different levels of chal-


lenges for an algorithm to approach the Pareto optimal front (for instance
the number of local Pareto optimal fronts).

• There should be different shapes for the Pareto optimal front and dis-
continuity in order to benchmark the ability of an algorithm in finding
well-distributed Pareto optimal solutions.

A suitable test suite is one that provides different test functions with a variety
of the above-mentioned features. However, capturing all the possible combina-
tions of these features is impractical, as discussed by Huband et al. [92]. In the
following subsections, therefore, these features are captured as much as possible
within various frameworks. The focus is on single-objective and multi-objective
test problems due to the difficulty of many-objective test problems and scope of
the thesis. In addition, the proposed frameworks only generate unconstrained
test problems. Due to the standard modularity of the proposed frameworks,
however, any kind of constraints in the multi-objective test problems proposed
in [52] and other works in the literature can easily be integrated into the proposed
test functions.

4.1 Benchmarks for robust single-objective optimisation
This section proposes three novel frameworks for generating different test func-
tions with specific characteristics for effectively benchmarking the performance
of robust single-objective algorithms.

4.1.1 Framework I
This framework is for creating a bi-modal parameter space with two optima.
One of the optima is robust and the other is not robust. The mathematical
formulation of this test function is as follows:
f(x) = -\frac{1}{\sqrt{2\pi}}\,e^{-0.5\left(\frac{x-1.5}{0.5}\right)^2} - \frac{2}{\sqrt{2\pi}}\,e^{-0.5\left(\frac{x-0.5}{\alpha}\right)^2}   (4.1)
where α defines the width (robustness) of the global optimum.
This function is illustrated in Fig. 4.1. This figure shows how parameter α
defines the shape of the global valley without changing the fitness values of both
local and global optima.
[Plot of f(x) for 0 ≤ x ≤ 2 with α = 0.01, 0.05, 0.1, 0.15, 0.2, 0.25, and 0.3]

Figure 4.1: Proposed function with adjustable local optima robustness parame-
ter. The parameter α changes the landscape significantly

The two-dimensional version of this function is defined as follows:

f(x, y) = -\frac{1}{\sqrt{2\pi}}\,e^{-0.5\left(\frac{(x-1.5)^2+(y-1.5)^2}{0.5}\right)^2} - \frac{2}{\sqrt{2\pi}}\,e^{-0.5\left(\frac{(x-0.5)^2+(y-0.5)^2}{\alpha}\right)^2}   (4.2)
Fig. 4.2 shows that the parameter α has a similar effect on the robustness of
the global optimum using Equation 4.2.
Finally, framework I is defined as follows:

Minimise: f(\vec{x}) = -\frac{1}{\sqrt{2\pi}}\,e^{-0.5\left(\frac{\sum_{i=1}^{n}(x_i-1.5)^2}{0.5}\right)^2} - \frac{1}{\sqrt{2\pi}}\,e^{-0.5\left(\frac{\sum_{i=1}^{n}(x_i-0.5)^2}{\alpha}\right)^2}   (4.3)

where: 0 \le x_i \le 2   (4.4)

where n is the maximum number of variables.

Figure 4.2: Effect of α on the robustness of the global optimum
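A minimal sketch of this framework is shown below, following the reconstruction of Equation 4.3 above; in particular, the negative sign in front of the Gaussian terms, which makes the two basins minima, is part of that reconstruction, and the default value of α is an arbitrary illustrative choice.

```python
import numpy as np

def framework1(x, alpha=0.2):
    """Framework I test function for a decision vector x in [0, 2]^n.
    The basin centred at (1.5, ..., 1.5) has a fixed width, while the basin
    centred at (0.5, ..., 0.5) has a width controlled by alpha."""
    x = np.asarray(x, dtype=float)
    s1 = np.sum((x - 1.5) ** 2)
    s2 = np.sum((x - 0.5) ** 2)
    c = 1.0 / np.sqrt(2.0 * np.pi)
    return -c * np.exp(-0.5 * (s1 / 0.5) ** 2) - c * np.exp(-0.5 * (s2 / alpha) ** 2)

# Both centres reach roughly the same objective value; only their widths differ.
print(framework1([1.5, 1.5]), framework1([0.5, 0.5]))
```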


The local optimum is always located at (0.5, 0.5, ..., 0.5) and the global op-
timum is at (1.5, 1.5, ..., 1.5). This framework creates a global optimum with
alterable degree of robustness that allows benchmarking the performance of a
robust meta-heuristic in terms of favouring a robust solution. By changing the
robustness of the global optimum, the resistance of a robust meta-heuristic when
dealing with a non-robust global optimum can be observed. In addition, it may be seen
in Equation 4.3 that this framework is able to generate scalable test functions
with a desirable number of variables. The characteristics of the test functions
generated by this framework are summarised as follows:

• Test functions are not readily solvable by simple optimisation methods.

• The search space is non-linear, non-separable, and non-symmetric.

• Test functions are scalable.

• The robustness of the global optimum is alterable.

• The robustness of the global optimum does not affect the optimal values
of both local and global optima.

• Both local and global optima have the potential to be the robust optimum
based on the value of the parameter α.

4.1.2 Framework II
The second framework generates a desirable number of local non-robust optima.
In other words, a multi-modal search space with one global optimum, one robust
optimum, and several local non-robust optima can be created by this framework.
The mathematical formulation of this framework is as follows:

Minimise: f(\vec{x}) = -G(\vec{x})\,H(x_1)\,H(x_2) + \omega   (4.5)

where: H(x) = \frac{e^{-2x^2}\sin\left(\lambda \times 2\pi(x + \pi)\right) - x^{\beta}}{3} + 0.5   (4.6)

G(\vec{x}) = 1 + 10\,\frac{\sum_{i=3}^{N} x_i}{N}   (4.7)

0 \le x_i \le 1   (4.8)

\lambda > 0   (4.9)

\beta > 0   (4.10)

Figure 4.3: Shape of the search landscape with controlling parameters constructed by framework II

As may be seen in Fig. 4.3, this framework allows generating (λ + 1)^2 local
optima in the search space. The effect of this parameter on the shape of the
search space can be observed in Fig. 4.4. This figure shows that the search space
becomes more challenging as λ increases.
Another characteristic of this test framework is its parameter scalability. The
function G(~x) is responsible for supporting three or more variables. Since G(~x)
is a kind of penalty function, an algorithm should find zero values for variables
x3 to xN in order to find the best robust optimum.
The characteristics of the test functions generated by this framework are
summarised as follows:

• Test functions are not readily solvable by simple optimisation methods.

• The search space is non-linear, separable, and non-symmetric.

• Test functions are scalable.

• The number of local optima can be adjusted.

• The last, worst local optimum is the most robust optimum and has the
highest distance from the global optimum.


Figure 4.4: Effect of parameter λ on the shape of search landscape

It should be noted here that test problems (including those proposed in this
thesis) that are framed using one subset of decision variables to move around a
surface (or front) and another subset that vary distance to that surface (front)
are effectively separable as these sub-components can be solved separately, and
are therefore biased toward algorithms which propagate these sub-vectors (and, if
these are all at a boundary, those which truncate at boundaries). This is why
rotation matrices are incorporated into the DTLZ problems used in the CEC
multi-objective test suite. A modification with a rotational matrix is required
to convert the resulting problems into non-separable ones.

4.1.3 Framework III

This framework was inspired by some of the current test functions in the field
of global optimisation. It divides the search space into four sections and allows
defining different functions in each section. The mathematical formulation is as follows:

Minimise: f(x, y) = \begin{cases} f_1(x, y) & (x \le 0) \wedge (y \ge 0) \\ f_2(x, y) & (x \ge 0) \wedge (y \le 0) \\ f_3(x, y) & (x > 0) \wedge (y > 0) \\ f_4(x, y) & (x < 0) \wedge (y < 0) \end{cases}   (4.11)

Any type of function with robust and non-robust optima can be utilised as
f1 to f4 . For instance, Fig. 4.5 shows a search space constructed using spherical,
Ackley, Rastrigin, and pyramid-shaped functions. It is evident from the figure
that the spherical function has the most robust optimum.

Figure 4.5: An example of the search space that can be constructed by the
framework III
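A minimal sketch of the quadrant dispatch in Equation 4.11 is given below, using the four landscapes mentioned for Fig. 4.5 (sphere, Ackley, Rastrigin, and a pyramid shape); which landscape occupies which quadrant and their exact parameterisations are assumptions for illustration.

```python
import numpy as np

def sphere(x, y):     return x**2 + y**2
def ackley(x, y):     return (-20*np.exp(-0.2*np.sqrt(0.5*(x**2 + y**2)))
                              - np.exp(0.5*(np.cos(2*np.pi*x) + np.cos(2*np.pi*y))) + 20 + np.e)
def rastrigin(x, y):  return 20 + x**2 - 10*np.cos(2*np.pi*x) + y**2 - 10*np.cos(2*np.pi*y)
def pyramid(x, y):    return np.abs(x) + np.abs(y)

def framework3(x, y):
    """Framework III (Eq. 4.11): dispatch to a different landscape per quadrant."""
    if x <= 0 and y >= 0:
        return sphere(x, y)       # smooth, wide basin: the most robust optimum
    if x >= 0 and y <= 0:
        return ackley(x, y)
    if x > 0 and y > 0:
        return rastrigin(x, y)
    return pyramid(x, y)          # x < 0 and y < 0
```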

In order to provide scalability for this framework, there can be two possibili-
ties. Each of the sub-functions can be chosen with a different number of variables
or the function G(~x), which was integrated with the second proposed framework,
can be multiplied by the results of each function as follows:

Minimise: f(x, y) = \begin{cases} f_1(x, y) \times G(\vec{x}) & (x \le 0) \wedge (y \ge 0) \\ f_2(x, y) \times G(\vec{x}) & (x \ge 0) \wedge (y \le 0) \\ f_3(x, y) \times G(\vec{x}) & (x > 0) \wedge (y > 0) \\ f_4(x, y) \times G(\vec{x}) & (x < 0) \wedge (y < 0) \end{cases}   (4.12)

where: G(\vec{x}) = 1 + 10\,\frac{\sum_{i=3}^{N} x_i}{N}   (4.13)

The characteristics of the test functions generated by this framework are


summarised as follows:

• Test functions are not readily solvable by simple optimisation methods.

• The search space is non-linear, non-separable, and non-symmetric.

• Test functions are scalable.

• There can be a desired number of local, global, and robust optima.

4.1.4 Obstacles and difficulties for single-objective robust benchmark problems
Although the proposed frameworks are able to generate very challenging test
beds, there are other difficulties when solving real problems that should be con-
sidered and simulated. In this subsection, five obstacles and difficulties, namely a
desired number of variables, bias, deceptiveness, multi-modality, and flatness,
are introduced/employed to increase the difficulties of current test problems and
propose several new test beds.

4.1.4.1 Desired number of variables

The majority of robust test problems are of low dimension. In addition, the
robust optimum is moved when the dimensions change (for instance TP1, TP2,
TP3 and TP4 in Fig. 2.14). This sub-section is inspired by the method of adding
multiple variables when designing multi-objective test problems proposed by
Deb et al. [45] and Zitzler et al. [181]. In this method a function called G(~x) is
employed to handle all the variables except x1 and x2 as follows:
G(\vec{x}) = \left(\sum_{i=3}^{N} 50x_i^2\right) + 1   (4.14)

where N is the desired number of variables.


The shape of the function G(~x) is illustrated in Fig. 4.6. This figure shows
that this function is a spherical function with an optimum located at the origin
(xoptimum = [0, 0, ..., 0]). In order to handle multiple variables for a problem, it
is multiplied by the original function as follows:

F(\vec{x}) = f(\vec{x})\,G(\vec{x})   (4.15)



Figure 4.6: Search space of the function G(~x)

This equation allows defining the shape of the search space by f (~x) and han-
dling multiple variables by G(~x). To find the optimum of F (~x), an optimisation
algorithm should find the optimal values for x1 and x2 because these two parame-
ters define the search space of the f (~x) function in Equation 4.15. Then, it has to
find the optimal values for x3 and xN which are all equal to 0. It should be noted
that this method works for minimisation problems, yet the negation/inverse of
G(~x) function can make it applicable for maximisation problems.
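A short sketch of this scaling mechanism is given below; the wrapper name and the example two-variable landscape are hypothetical, and only the construction F(x) = f(x)G(x) with G from Equation 4.14 is taken from the text.

```python
import numpy as np

def G(x):
    """Term handling all variables beyond the first two (Eq. 4.14)."""
    x = np.asarray(x, dtype=float)
    return 50.0 * np.sum(x[2:] ** 2) + 1.0

def scale_to_n_variables(f2d):
    """Wrap a 2-variable landscape f2d(x1, x2) into an N-variable test function
    F(x) = f2d(x1, x2) * G(x), as in Eq. 4.15."""
    def F(x):
        return f2d(x[0], x[1]) * G(x)
    return F

# Example with a hypothetical 2-D landscape: extra variables at 0 leave it unchanged.
f2d = lambda x1, x2: (x1 - 0.5) ** 2 + (x2 - 0.5) ** 2
F = scale_to_n_variables(f2d)
print(F(np.array([0.5, 0.5] + [0.0] * 8)))   # 10-variable version
```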
In fact, the function G(~x) defines a similar search space to the f (~x) above the
main landscape. A set of 10,000 random solutions is generated for a 10-variable
version of TP1 using Equation 4.15 and illustrated in Fig. 4.7. It may be seen
that the size of the search space (range of variables) increases in proportion to the
number of variables without changing the shape of the search landscape. In
other words, an unlimited number of parallel layers (surfaces) with shape similar
to f (~x) are constructed above it.

Figure 4.7: The search space becomes larger in proportion to N (panels for N = 5, 10, 15, 30, and 50) without any change in the main search landscape

The advantage of this method is the ease of applicability to any test functions
without changing their shape.

4.1.4.2 Biased search space

Density of solutions in the search space is another important characteristic of


a challenging test problem. If the search space has an intrinsic bias towards
its robust optimum, it would not be clear whether an algorithm approximates
the robust optimum successfully or merely obtains it because of a failure to
find the global optimum. The following function is proposed
to be multiplied by the test functions (if applicable) to bias the search space away
from the robust optimum.
B(\vec{x}) = \sum_{i=1}^{n} |x_i + p|^{\theta}   (4.16)

where p indicates the point toward or away from which the bias is defined, and θ
defines the density.
Equation 4.16 shows that the density of solutions in the search space can be
adjusted by a parameter called θ. The density is uniform when θ = 1, towards
the point p when θ > 1, and away from the point p when θ < 1. The effect of θ
can be observed in Fig. 4.8.

[Plots of B(x) versus x for θ = 0.14286, θ = 1, and θ = 5, showing the resulting solution density relative to a uniform density]

Figure 4.8: Density of solutions when θ < 1, θ = 1, θ > 1 (p = 0)

These functions can be multiplied or added to the current test problems to


bias their search spaces. The function B(x) can bias the search space away from
the robust optimum when θ < 1 and bias the search space toward non-robust
optima when θ > 1.
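The sketch below shows how the bias term can be attached to an existing test function; the exponent θ on |x_i + p| and the multiplicative composition follow the biased TP1 example in Equation 4.17, and the wrapper itself is an illustrative assumption.

```python
import numpy as np

def B(x, theta=0.5, p=0.0):
    """Bias term of Eq. 4.16: theta < 1 biases the search space away from the
    bias point, theta > 1 biases it toward that point (cf. Fig. 4.8)."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.abs(x + p) ** theta)

def biased(f, theta=0.5, p=0.0):
    """Return a biased version of the test function f by multiplying it with B."""
    return lambda x: f(x) * B(x, theta, p)
```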
As an example of the usage of the function B(x), the search space of TP1 is
biased with the following modifications:

f(\vec{x}) = \begin{cases} \left(1 - \prod_{i=1}^{N} H(x_i) + \frac{1}{1000}\sum_{i=1}^{N} x_i^2\right) G(\vec{x}) & x_1^2 + x_2^2 < 25 \\ B(\vec{x})\,G(\vec{x}) & x_1^2 + x_2^2 \ge 25 \end{cases}   (4.17)

where B(\vec{x}) = \sum_{i=1}^{2} |x_i|^{\theta}, G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2 + 1, and H(x_i) = \begin{cases} 0 & x_i < 0 \\ 1 & \text{otherwise} \end{cases}
figure shows that the boundaries of the search space are first extended and then
multiplied by the function B(~x). This maintains the original position and shape
of the robust optimum while biasing the search space away from it. It should be noted
here that the range of the search space changes due to the multiplication by the bias
function, as shown in the objective function in Equation 4.17.

Figure 4.9: Conversion of an un-biased search space to a biased search space

To experimentally observe the change in the bias of solutions in the search


space, Fig. 4.10 is provided. This figure shows 50,000 randomly generated solu-
tions of the un-biased and biased versions of TP1 function. It may be observed
that the solutions are almost uniformly distributed throughout the un-biased
search space. However, the density of solutions is decreased towards the robust
optimum of the biased search space.

4.1.4.3 Deceptive search space

In a deceptive search space, a large region of the search space favours the unde-
sirable optimum. In global optimisation, a deceptive search space tends towards
local solutions. In this case, the search agents of meta-heuristics are deceived
and converge toward the local optima [48, 167]. There is no deceptive robust
test function in the literature, so the first one is proposed in this subsection.
For constructing a deceptive function, generally speaking, there should be
at least two optima: a deceptive optimum versus a true optimum [46]. The
shape of the search space should be designed to favour the deceptive optimum.


Figure 4.10: 50,000 randomly generated solutions reveal there is low density
toward the robust optimum in the biased test function, while the density is
uniform in the un-biased test function

The proposed mathematical formulation of a deceptive robust test function is as


follows:

Minimise: f(\vec{x}) = (H(x_1) + H(x_2)) \times G(\vec{x}) - 1   (4.18)

where: H(x) = 0.5 - 0.3\,e^{-\left(\frac{x-0.2}{0.004}\right)^2} - 0.5\,e^{-\left(\frac{x-0.5}{0.05}\right)^2} - 0.3\,e^{-\left(\frac{x-0.8}{0.004}\right)^2} + \sin(\pi x)   (4.19)

G(\vec{x}) = \left(\sum_{i=3}^{N} 50x_i^2\right) + 1   (4.20)

where N is the number of variables.


The shape of this test function is illustrated in Fig. 4.11.
This figure shows that there are four deceptive global optima on the corners
of the search space. In addition, the search space includes four deceptive local
optima. It may be observed that the entire search space favours global and local
optima, while the robust optimum is located at [0.5, 0.5, 0, 0, ..., 0].

4.1.4.4 Multi-modal search space

Multiple local solutions are another type of difficulty for test problems. Real
search spaces often have a massive number of local solutions that make them
very hard to optimise. In the literature of global evolutionary optimisation,
Figure 4.11: Proposed deceptive robust test problem

multi-modal test functions are very popular. There are different test functions
in this field with exponentially increasing numbers of local optima [116, 122, 155,
174, 175, 57]. Since there is no robust test function with many local optima, one
is proposed in this subsection as follows:

Minimise: f(\vec{x}) = (H(x_1) + H(x_2)) \times G(\vec{x}) - 1.399   (4.21)

where: H(x) = 1.5 - 0.5\,e^{-\left(\frac{x-0.5}{0.04}\right)^2} - \sum_{i=0}^{M}\left(0.8\,e^{-\left(\frac{x-0.02i}{0.004}\right)^2} - 0.8\,e^{-\left(\frac{x-(0.6+0.02i)}{0.004}\right)^2}\right)   (4.22)

G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2   (4.23)

where N is the number of variables and M indicates the number of non-robust


local optima.
Fig. 4.12 shows the shape of the search space that is created by Equa-
tions 4.21, 4.22, and 4.23. This figure shows that the robust optimum is the
optimum located at [0.5, 0.5, 0, 0, ..., 0]. The function f has 4(M + 1)^2 global
optima, which are all non-robust. The number of non-robust global optima can
easily be defined by the parameter M. It should be noted that the search space
includes 4(M + 1) local robust optima in addition to the robust and global op-
tima. The function G also extends these optima over other dimensions;
therefore, a very difficult search space is constructed to challenge robust algo-
rithms. An algorithm should avoid all non-robust local and global optima to

approximate the single robust optimum. Note that for very small values of δ,
the middle panel is no longer robust in Fig. 4.12. So, the δ > 0.05 should be
considered for this test function.

Figure 4.12: Proposed multi-modal robust test function (M = 10)

4.1.4.5 Flat search space

In non-improving or flat test beds, very little information about the possible
location of the optimum solution can be extracted from the search space. A flat
search space might wrongly be assumed very simple because of the very small
number of local solutions. However, it is not very simple since the majority
of meta-heuristics fail in solving such problems, especially if the first random
individuals are all located on the flat regions. In this case, all the individuals
are assigned equal fitness values, so evolutionary operators become ineffective.
For instance, the PSO algorithm fails to update gBest and pBest effectively for
guiding the particles. This deteriorates when searching for the robust optimum
because of the high and consistent robustness level of the flat regions. Therefore,
a robust algorithm may mistakenly assume the flat regions to be the robust
optimum, while the best robust optimum can be somewhere else in the search
space.
There is no robust test problem with a flat search space, so the first one is
proposed as follows:

Minimise:  $f(\vec{x}) = (H(x_1) + H(x_2)) \times G(\vec{x}) - 2$   (4.24)

where:  $H(x) = 1.2 - 0.2e^{-\left(\frac{x-0.95}{0.03}\right)^2} - 0.2e^{-\left(\frac{x-0.05}{0.01}\right)^2}$   (4.25)

$G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2$   (4.26)

The shape of the search space constructed by Equations 4.24, 4.25, and 4.26 is illustrated in Fig. 4.13. This figure shows that there are only four optima near the corners of the search space. One of them (located at [0.05, 0.05]) has the least robustness, while the optimum located at [0.95, 0.95] has the highest robustness. The optima positioned at [0.05, 0.95] and [0.95, 0.05] have high robustness along x2 and x1 respectively. It should be noted that the fitness values of all these optima are equal to 0. This function is deliberately designed to have such optima to challenge robust algorithms in finding optima with equal fitness but different degrees of robustness. In addition, the flat search space proposed provides very little information about the location of optima.

Figure 4.13: Proposed flat robust test function

With the proposed framework and difficulties, desirable test functions can
be constructed. Several test functions are created for the purpose of this thesis,
which can be found in Appendix A. Note that TP1 to TP9 are taken from the
literature and TP10 to TP20 are proposed by the above discussed frameworks
and difficulties.
This set of test functions provides very challenging environments for robust
algorithms. The proposed test functions may be theoretically effective for bench-
marking the performance of robust meta-heuristics due to the following reasons:

• Scalable test functions allow defining a desired number of variables, increasing the dimension of search agents of meta-heuristics, and boosting the difficulty of robust test problems.

• The proposed scaling method does not change the shape of test functions,
so it is readily applicable to any test function.

• Biased test functions change the density of solutions toward non-robust


regions of the search space, so the performance of robust algorithms can
effectively be benchmarked. In other words, a robust algorithm should
resist the intrinsic tendency of solutions toward the non-robust optimum.

• Deceptive test functions entirely mislead search agents of meta-heuristics


toward non-robust optima, so robust algorithms should be equipped with
powerful operators to handle such issues.

• Multi-modal test functions are highly suitable for benchmarking the per-
formance of robust algorithms in terms of avoiding local and less robust
solutions. Since the worst local optimum is the most robust, the bench-
mark functions of this class are very challenging.

• Flat test functions provide the search agents of robust algorithms with very
little information about the robust optimum, so meta-heuristics should
search for the robust optimum without relying on the information provided
by the flat regions.

• Several optima with equal fitness but different degree of robustness can
challenge robust algorithms in finding partial or complete robust optimal
solutions.

• Since the proposed test functions are difficult, scalable, independent of


problem representations, and non-linear, they can be considered as stan-
dard test functions as per the guidelines of Whitley et al. [167].

4.2 Benchmarks for robust multi-objective optimisation
This section proposes three frameworks and several difficulties for creating chal-
lenging robust multi-objective test problems.

4.2.1 Framework 1
This framework is for creating a bi-modal parameter space and a bi-frontal ob-
jective space. The core part of this framework is the following mathematical
function:
$f(x) = \frac{1}{\sqrt{2\pi}}e^{-0.5\left(\frac{x-1.5}{0.5}\right)^2} + \frac{2}{\sqrt{2\pi}}e^{-0.5\left(\frac{x-0.5}{\alpha}\right)^2}$   (4.27)
where α defines the width (robustness) of the global optimum.
This function is identical to the function utilised in the previous section and
may be seen in Fig. 4.1. This function is employed to propose a multi-objective
framework. A similar framework to that of Deb in 1999 [46] is utilised, which
consists of different controllable components. Without loss of generality, the
framework can be formulated as follows:

Minimise:  $f_1(\vec{x}) = x_1$   (4.28)

Minimise:  $f_2(\vec{x}) = H(x_2) \times \{G(\vec{x}) + S(x_1)\} + \omega$   (4.29)

where:  $H(x) = \frac{1}{\sqrt{2\pi}}e^{-0.5\left(\frac{x-1.5}{0.5}\right)^2} + \frac{2}{\sqrt{2\pi}}e^{-0.5\left(\frac{x-0.5}{\alpha}\right)^2}$   (4.30)

$G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2$   (4.31)

$S(x) = -x^{\beta}$   (4.32)

$\alpha > 0$   (4.33)

where α and β are two parameters for adjusting the robustness of the global Pareto optimal front and the shape of the fronts, and ω indicates a threshold for moving f2(~x) (Pareto optimal fronts) up and down.
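A minimal Python sketch of this framework (Equations 4.28-4.33) is given below; the function name, the default parameter values, and the example point are illustrative only.

```python
import numpy as np

def framework1(x, alpha=0.2, beta=1.0, omega=0.0):
    """Framework 1 (Eqs. 4.28-4.33): returns (f1, f2) for a point x with N >= 3 variables."""
    x = np.asarray(x, dtype=float)
    H = (1.0 / np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x[1] - 1.5) / 0.5) ** 2) \
        + (2.0 / np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x[1] - 0.5) / alpha) ** 2)  # Eq. (4.30)
    G = 50.0 * np.sum(x[2:] ** 2)        # Eq. (4.31): penalises x3..xN away from zero
    S = -x[0] ** beta                    # Eq. (4.32): defines the shape of the fronts
    return x[0], H * (G + S) + omega     # Eqs. (4.28) and (4.29)

# x2 = 0.5 lies in the global valley; x2 = 1.5 lies in the robust valley.
print(framework1([0.5, 0.5, 0.0, 0.0]))
```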

The shapes of parameter and objective spaces constructed by this framework


are illustrated in Fig. 4.14. Note that in the following figures, the robustness
curve (red line) indicates the robustness of points lying on the same vertical line
(i.e. with the same value of f1), plotted against the same axis as f2(x) itself. The robustness is calculated by averaging over a given neighbourhood with two neighbouring solutions.

Figure 4.14: Search space and objective space constructed by proposed framework 1

It may be observed in this figure that the proposed method is able to provide
two Pareto optimal fronts: global and robust. The robustness of the global
front can be defined by α in H(x). The interesting characteristic of the function
H(x) is that the Pareto optimal front does not move when the robustness of
the left valley in the search space is varied. Therefore, the specific behaviour
of robust meta-heuristics while changing the robustness of the global Pareto
optimal front can be observed. The other component of this framework, G(~x),
is responsible for providing an unlimited number of variables for this framework.
As the formulation of this function shows, a weighted addition of all variables
except x1 and x2 is calculated by this function and multiplied by H(x) in the
framework. By increasing the value of G(~x), both Pareto optimal fronts are converted to local fronts. In other words, the G(~x) function drives both fronts away from their optimal positions. A meta-heuristic has to find zeroes for x3 to xN in order to be able to reach the best trade-offs between f1(~x) and f2(~x).
Fig. 4.15 shows the changing shape of the search space with altered values for
α. This figure shows the global valley has the potential to be even more robust
than the local valley with some values of the proposed parameter.

Figure 4.15: Effect of α on the robustness of the global Pareto optimal front's valley (panels: α = 0.01, 0.1, 0.2, 0.3, 0.4)

Another important feature of the proposed framework is the ability to gen-


erate linear, convex, and concave global and robust Pareto optimal fronts. The
function S(x) is responsible for defining the shape of the bases of both valleys
in the search space and consequently the fronts in the objective space. An ad-
justable function, S(x), is proposed in order to generate different shapes with
different complexity for both global and robust optimal fronts. As may be seen in
Equation 4.32, the parameter β defines the shape of both Pareto optimal fronts.
The effects of different values of this parameter on the shape of parameter and
objective spaces are illustrated in Fig. 4.16.
Figure 4.16: Shape of parameter space and Pareto optimal fronts when β = 0.5, β = 1, and β = 1.5. Note that the red curve indicates the robustness of the robust front and black curves are the fronts.

The curvature of both concave and convex fronts is changed proportionally to the value of β. Fig. 4.17 shows how the parameter β allows having different shapes with different robustness for both global and robust Pareto optimal fronts.

Figure 4.17: Changing the shape of global and robust Pareto optimal fronts with β
With the first proposed framework, a designer is able to create a bi-modal
search space with an unlimited number of parameters that maps to a two-
dimensional objective space with various linear, concave, and convex global and
robust Pareto optimal fronts. However, the proposed framework generates global
and robust Pareto fronts with similar shapes. In order to provide a more flexible
framework, it is necessary to create different shapes for each of the fronts. The
generalised version of the framework is formulated as follows:

Minimise:  $f_1(\vec{x}) = x_1$   (4.34)

Minimise:  $f_2(\vec{x}) = \begin{cases} H(x_2) \times \{G(\vec{x}) + S_1(x_1)\} + \omega & \text{if } x_2 < 0.8 \\ H(x_2) \times \{G(\vec{x}) + S_2(x_1)\} + \omega & \text{if } x_2 \geq 0.8 \end{cases}$   (4.35)

where:  $H(x) = \frac{1}{\sqrt{2\pi}}e^{-0.5\left(\frac{x-1.5}{0.5}\right)^2} + \frac{2}{\sqrt{2\pi}}e^{-0.5\left(\frac{x-0.5}{\alpha}\right)^2}$   (4.36)

$G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2$   (4.37)

$S_1(x) = -x^{\beta_1}$   (4.38)

$S_2(x) = -x^{\beta_2}$   (4.39)

where α defines the robustness of the global valley, β1 defines the shape of the
global Pareto optimal front, β2 defines the shape of the robust Pareto optimal
front, and ω indicates a threshold for moving f2 (~x) (Pareto optimal front) up
and down.
As can be inferred from these equations, the parameter space is divided into
two parts x2 < 0.8 and x2 ≥ 0.8. Since the global valley is located in x2 < 0.8, the
global Pareto optimal front’s shape follows S1 (x). In addition, the local/robust
valley is in x2 ≥ 0.8 and obeys S2 (x). The parameters β1 and β2 allow adjustment
of the shape of the global and robust Pareto optimal fronts independently. With
this mechanism, nine different combinations of linear, convex and concave global
and robust Pareto optimal fronts can be constructed.
This combination allows designers to investigate the behaviour of different
robust meta-heuristics dealing with different shapes of global and robust fronts.
Benchmark functions with different shapes for robust and global optima would
generally be more difficult to solve because a robust algorithm needs to adapt
to a very different Pareto optimal front with different shape when transferring
from the global/local Pareto optimal front to the robust Pareto optimal front(s).
This was recommended by Deb for multi-objective benchmark problems [46].
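The switching between S1 and S2 can be sketched as follows, reusing the notation of Equations 4.34-4.39; the function name and the default values of β1 and β2 are illustrative only.

```python
import numpy as np

def f2_generalised(x, alpha=0.2, beta1=0.5, beta2=2.0, omega=0.0):
    """f2 of the generalised framework (Eqs. 4.34-4.39): the front shape follows
    S1 in the global valley (x2 < 0.8) and S2 in the robust valley (x2 >= 0.8)."""
    x = np.asarray(x, dtype=float)
    H = (1.0 / np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x[1] - 1.5) / 0.5) ** 2) \
        + (2.0 / np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x[1] - 0.5) / alpha) ** 2)
    G = 50.0 * np.sum(x[2:] ** 2)
    S = -x[0] ** beta1 if x[1] < 0.8 else -x[0] ** beta2    # Eq. (4.35)
    return H * (G + S) + omega
```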

4.2.2 Framework 2
In order to provide a more challenging multi-modal test set, another framework is
also proposed in this work. Generally speaking, multi-modal test problems have
a large number of local optima (local Pareto optimal solutions), which make them
suitable for benchmarking the exploration and local optima avoidance ability of
an algorithm. The framework 1 is modified to propose a new framework that
allows designers to construct a search space with some desired number of local
Pareto optimal fronts. The multi-modal framework is formulated as follows:

Minimise:  $f_1(\vec{x}) = x_1$   (4.40)

Minimise:  $f_2(\vec{x}) = H(x_2) \times \{G(\vec{x}) + S(x_1)\} + \omega$   (4.41)

where:  $H(x) = \frac{e^{-x^2}\cos(\lambda \times 2\pi x - x)}{\gamma} + 0.5$   (4.42)

$G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2$   (4.43)

$S(x) = -x^{\beta}$   (4.44)

$\gamma \geq 1.3$   (4.45)

$\lambda \geq 1$   (4.46)

As may be seen in Equation 4.42, there is a new formulation for the H


function with two new control parameters: λ and γ. Fig. 4.18 shows the shapes
of the search space and Pareto optimal fronts.

Figure 4.18: Shape of the parameter space and its relation with the objective space constructed by the framework 2

Fig. 4.18 shows that the search space has an incremental wave-shaped cur-
vature along x2 . The proposed exponential-based equation of H(x) allows each
valley to have different robustness. The robustness is proportional to the value

of x2: the first valley is the least robust and the last valley is the most robust. The objective space illustrated in Fig. 4.18 shows that each of the valleys corresponds to a front, so the robustness of the fronts also increases
from bottom to top along f2 . There are four control parameters for this frame-
work: β, ω, λ, and γ. The roles of β and ω are identical to those of the first
framework, in which β defines the shape of the Pareto optimal front and ω is
a threshold for moving f2 (~x) (Pareto optimal fronts) up and down and locating
them in a desirable range.
The λ parameter defines the number of valleys in the search space and con-
sequently the number of local Pareto optimal fronts in the objective space. This
control parameter allows designers to provide a multi-modal search space with
λ − 1 local fronts. It should be noted that the robustness of local fronts increases
as they become farther from the global Pareto optimal front. The effect of this
control parameter on parameter and objective spaces is shown in Fig. 4.19.

Figure 4.19: Effect of λ on both parameter and objective spaces (panels: λ = 2, 4, 8, 16, 32). Note that the red curve indicates the robustness of the robust front and black curves are the fronts.

The last control parameter, γ, defines the distance of the robust Pareto op-
timal front from the line f2 = 1. The greater the value of γ, the flatter the
shape of the robust Pareto optimal front, and the greater the distance of local
and robust fronts from the global Pareto optimal front. In addition, this control
parameter controls the distance between fronts, as fronts become closer as γ is
increased. The effects of this parameter are illustrated in Fig. 4.20.
The second proposed framework allows control of the multi-modality of bench-
mark problems and the number of local Pareto optimal fronts. The shapes of
the fronts are similar in this framework. However, as with the first proposed

Figure 4.20: Effect of γ on the fronts. Note that the red curve indicates the robustness of the robust front and black curves are the fronts.

framework, there is the possibility of changing the shape of each front using
Equation 4.35. This capability has not been integrated with this framework in
order to maintain its simplicity. It is worth mentioning that $\cos(\lambda \times 2\pi x)$ in H(x) can be replaced with $\cos(\lceil e^{\lambda} \rceil \times 2\pi x)$ in order to have an exponentially increasing number of local fronts.
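A short sketch of the H function of Equation 4.42 illustrates the roles of λ and γ; the function name and the chosen parameter values are illustrative only.

```python
import numpy as np

def H_framework2(x2, lam=8, gamma=1.3):
    """Wave-shaped H(x) of Eq. (4.42): lam sets the number of valleys (lam - 1
    local fronts) and gamma flattens the waves; using ceil(exp(lam)) in place of
    lam would give an exponentially increasing number of local fronts instead."""
    return np.exp(-x2 ** 2) * np.cos(lam * 2 * np.pi * x2 - x2) / gamma + 0.5

# The amplitude of the waves decays with x2, so later valleys are shallower
# (more robust) than the first one.
x2 = np.linspace(0.0, 1.0, 1001)
print(H_framework2(x2).min(), H_framework2(x2).max())
```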

4.2.3 Framework 3
In addition to the shape of Pareto optimal fronts and multi-modality, there is
another issue when solving real engineering problems, called discontinuity. This
refers to possible gaps (dominated regions) in the Pareto optimal fronts of a
problem that provide more complexity compared to continuous Pareto optimal
fronts. In this work the ZDT test functions introduced by Deb et al. are modified
to propose a framework that allows creation of a desired number of discontinuous
robust regions in the main Pareto optimal front. The mathematical formulation
of this framework is as follows:

Minimise:  $f_1(\vec{x}) = x_1$   (4.47)

Minimise:  $f_2(\vec{x}) = G(\vec{x}) \times \left[1 - \sqrt{\frac{x_1}{G(\vec{x})}} - \frac{x_1}{G(\vec{x})}\sin(\zeta \times 2\pi x_1)\right] H(x_2) + \omega$   (4.48)

where:  $H(x) = \frac{e^{-2x^2}\sin\!\left(\lambda \times 2\pi(x + \pi)\right) - x}{\gamma} + 0.5$   (4.49)

$G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2$   (4.50)

$\gamma \geq 1$   (4.51)

$\lambda \geq 1$   (4.52)

Similarly to the previous framework H(x) is responsible for making a wave-


shaped plane along x2 and making the search space multi-modal with λ − 1 local
valleys. In this case, objective space also has λ − 1 local fronts. The parameter
ω works similarly to the previous frameworks and is for moving the fronts in a
desirable range. Parameter γ is for making fronts closer or farther apart, but a
low value of γ now makes fronts closer, in contrast to the previous framework.
The new parameter here is ζ, which is responsible for defining the number of
discontinuous regions of the Pareto optimal fronts. The effects of this parameter
on both parameter space and objective space are illustrated in Fig. 4.21.
f(x1,x2)
f(x1,x2)

f(x1,x2)
f(x1,x2)
f(x1,x2)

4 4 4 4 4

3 3 3 3
3

2 2 2 2
2
f2
f2

f2
f2
f2

1 1 1 1

1
0 0 0 0

0 -1 -1 -1 -1
0 0.5 1 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1
f1 f1 f1 f1 f1

ζ=1 ζ=2 ζ=4 ζ=6 ζ=8

Figure 4.21: Effect of ζ on the parameter and objective spaces. Note that the
red curve indicates the robustness of the robust front and black curves are the
fronts.

This figure shows that there are ζ discontinuous efficient regions in the main
Pareto optimal front. By adjusting λ and ζ, different test functions can be con-
structed. The key point of this framework is that the robustness of discontinuous
regions decreases from left-top to right-bottom. Therefore, the most robust front

is the last local front and its leftmost region is the most robust area. As Fig. 4.22
shows, the search space is very challenging; a robust meta-heuristic should avoid
optimal regions of the search space and move toward the most non-optimal areas,
which are robust.

valleys Robust front discontinuous regions


local
fronts 4

3
Least robust regions
2

f2
1

0
valleys Most Robust
regions -1 Global front
0 0.2 0.4 0.6 0.8 1
f1
Robustness

Figure 4.22: Parameter and objective spaces constructed by the third framework.
The red curve indicates the robustness of the robust front and black curves are
the fronts.

Note that all the functions proposed in this subsection are separable. As
mentioned above, a rotational matrix is required to make them non-separable
similar to those in the DTLZ problems used in the CEC multi-objective test
suite.

4.2.4 Hindrances for robust multi-objective test problems
According to Deb [42], a multi-objective optimisation process utilising meta-
heuristics deals with overcoming many difficulties such as infeasible areas, local
fronts, diversity of solutions, and isolation of optima. Difficulties dramatically
increase when searching for robust Pareto optimal solutions. Therefore, a robust
multi-objective algorithm has to be equipped with proper operators to handle
several difficulties in addition to the above-mentioned challenges: robust local
fronts, multiple non-robust fronts, non-improving search space, isolation of ro-
bust fronts, deceptive non-robust fronts, different shapes of robust fronts, and
robust fronts with separate robust regions. Designing an algorithm to handle
all these difficulties in a real search space requires challenging test beds during
development.

In order to simulate these challenges, the following difficulties are introduced for proposing new challenging test functions and for integration with the current test functions.

4.2.4.1 Biased search space

The first and simplest method of increasing the difficulty of a multi-objective


test function is to bias its search space [46]. Bias refers to the density of the
solutions in the search space. Almost all of the current test beds in the literature
of robust multi-objective optimisation suffer from a biased search space towards
robust regions [81]. Fig. 4.23 (a) illustrates 50,000 randomly generated points
in the first two dimensions of the RMTP10 in Fig. 2.16 to give a picture of the bias of the search space. This figure shows that the bias is toward the right-hand corner of the front. Therefore, it is hard to determine whether the
robust measure assists an algorithm to find the robust front or the robust front
is obtained because of the failure of the algorithm to find the global front.


Figure 4.23: A non-biased objective space versus a biased objective space (50,000
random solutions). The proposed bias function requires the random points to
cluster away from the Pareto optimal front.

In order to prevent such issues, the following equation is introduced for the
function g(~x) in the test functions to bias the search space away from the robust
front:
$g(\vec{x}) = 1 + 10 + \left(\frac{\sum_{i=2}^{n} x_i}{n-1}\right)^{\psi}$   (4.53)

where ψ defines the degree of bias (ψ < 1 causes bias away from the PF) and n
is the maximum number of variables.
In Equation 4.53, ψ is responsible for defining the bias level of the search
space. Fig. 4.23 (b) shows 50,000 random solutions in the same search space of
Fig. 4.23 (a) but with ψ = 0.3. This figure shows that density of solutions is very
low close to the robust Pareto front, and increased further from the front. This
behaviour in a test function effectively assists benchmarking the performance of
robust algorithms in approximating the robust Pareto optimal solutions.
To further observe the effect of ψ on the density of solutions in the search
space, 50,000 random solutions with different values for ψ are illustrated in
Fig. 4.24. This figure shows that the search space is biased inversely proportional
to ψ. In other words, the density of solutions is increased as ψ decreases.

Figure 4.24: Bias of the search space is increased inversely proportional to ψ (panels: ψ = 1/2, 1/4, 1/8, 1/16)
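A short sketch of the bias function is given below. Note that the exact placement of the exponent in Equation 4.53 is reconstructed from the extracted text and should be treated as an assumption, as should the sample size and the threshold used in the demonstration.

```python
import numpy as np

def g_biased(x, psi=0.3):
    """Bias function of Eq. (4.53) as reconstructed above: with psi < 1, the bulk
    of random solutions is pushed away from the Pareto-optimal value of g."""
    x = np.asarray(x, dtype=float)
    return 1.0 + 10.0 + (np.sum(x[1:]) / (x.size - 1)) ** psi

# 50,000 random 5-variable solutions: fewer of them land close to the
# Pareto-optimal value g = 11 as psi decreases.
rng = np.random.default_rng(0)
X = rng.random((50_000, 5))
for psi in (1.0, 0.5, 0.25):
    g = np.array([g_biased(row, psi) for row in X])
    print(psi, int(np.sum(g < 11.2)))
```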

4.2.4.2 Deceptive search space

According to Deb [46], there are at least two optima in a deceptive search space:
the deceptive optimum and the true optimum. The search space should be
designed in such a way to entirely favour the deceptive optimum. Such prob-
lems are very challenging for evolutionary algorithms since the search agents
are directed automatically towards the deceptive optimum by the search space
while the global optimum is somewhere else [48, 167]. To date there has been
no deceptive robust multi-objective test problem: the first is proposed in this
subsection.
The proposed mathematical formulation for generating deceptive test func-
tion is as follows:

Minimise:  $f_1(\vec{x}) = x_1$   (4.54)

Minimise:  $f_2(\vec{x}) = H(x_2) \times \{G(\vec{x}) + S(x_1)\}$   (4.55)

where:  $H(x) = 0.5 - 0.3e^{-\left(\frac{x-0.2}{0.004}\right)^2} - 0.5e^{-\left(\frac{x-0.5}{0.05}\right)^2} - 0.3e^{-\left(\frac{x-0.8}{0.004}\right)^2} + \sin(\pi x)$   (4.56)

$G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2$   (4.57)

$S(x) = 1 - x^{\beta}$   (4.58)

It may be observed that the framework is similar to those of ZDT [181] and
DTLZ [55, 56]. However, the function H is modified as shown in Fig. 4.25.
This figure shows that the proposed H function has two non-robust deceptive
local optima, two non-robust global optima, and one true robust (which is local)
optimum. The element sin(πx) at the end of this function causes deceptiveness
of the search space, in which the entire search space deceptively favours the
non-robust optima.

Figure 4.25: There are four deceptive non-robust optima and one robust optimum in the function H(x)

The combination of H, G, and S functions constructs a deceptive multi-


objective test problem as illustrated in Fig. 4.26. It should be noted that the

function S is also modified to define the shape of non-robust and robust fronts.
The newly added parameter β defines the shape of the fronts. It may be ob-
served that the fronts are concave when β < 1 and convex when β > 1. Fig. 4.26
also shows that the proposed test function has two overlapped non-robust global
fronts, two non-robust local fronts, and one robust local front. The entire search
space favours the non-robust regions, so the non-robust fronts are highly decep-
tive.
Figure 4.26: Different shapes of Pareto fronts that can be obtained by manipulating β (panels: β = 0.5, 1, 2)

The deceptive non-robust fronts are very attractive for the search agents of
meta-heuristics. Therefore, these test problems have the potential to challenge
robust algorithms significantly. The ability of an algorithm to avoid deceptive
non-robust regions can be benchmarked. Also, the performance of an algorithm
in approximating robust fronts with convex, linear, and non-convex shapes is
benchmarked.

4.2.4.3 Multi-modal search space

Although the first two hindrances introduced can mimic the difficulties of real
search spaces and challenge robust algorithms, there is another important charac-
teristic called multi-modality. Real search spaces may have many local solutions
that make them very challenging to solve. In the field of evolutionary single-
objective and multi-objective optimisation, there is a considerable number of

test problems with local optima. However, there is no multi-modal robust multi-
objective test problem in the literature. The following test function is proposed
in order to fill this gap:

Minimise:  $f_1(\vec{x}) = x_1$   (4.59)

Minimise:  $f_2(\vec{x}) = H(x_2) \times \{G(\vec{x}) + S(x_1)\}$   (4.60)

where:  $H(x) = 1.5 - 0.5e^{-\left(\frac{x-0.5}{0.04}\right)^2} - \sum_{i=0}^{M}\left(0.8e^{-\left(\frac{x-0.02i}{0.004}\right)^2} + 0.8e^{-\left(\frac{x-(0.6+0.02i)}{0.004}\right)^2}\right)$   (4.61)

$G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2$   (4.62)

$S(x) = 1 - x^{\beta}$   (4.63)

where N is the number of variables and M indicates the number of non-robust


valleys in the parameter space and non-robust fronts in the objective space.
It may be seen in the formulation that the framework is again identical to
those of ZDT and DTLZ, yet the function H is different. To see the search
space, the shape of function H is shown in Fig. 4.27. This figure shows that
the robust optimum is a local optimum, while there are many non-robust global
fronts. Note that the number of global fronts can be defined by adjusting the
parameter M in the H function.
To see how the parameter space and objective space of the proposed multi-
modal test problem look, Fig. 4.28 is provided. This figure shows that the same shape and number of optima are created along x2 (f2). So, the parameter space has many non-robust globally optimal valleys and one robust locally optimal valley. The objective space shows that the global and local valleys create only two fronts: a local front and a global front. It should be noted that there are actually more than two fronts; all the global fronts overlap. Obviously,

Figure 4.27: H(x) creates one robust and 2M global Pareto optimal fronts
Figure 4.28: Parameter space and objective space of the proposed multi-modal robust multi-objective test problem

the local front is the robust front, which should be approximated by robust
algorithms.

Similarly to the deceptive test functions, the function S is required to define


the shape of the fronts as well. There is again a parameter β that is able to
change the shape of search space and Pareto optimal fronts as shown in Fig. 4.29.
This mechanism challenges robust algorithms to approximate different shapes for
non-robust and robust Pareto optimal fronts.

This set of test functions provides non-robust fronts as hindrances for robust
multi-objective test functions. The search agents of robust algorithms should
avoid all the local fronts to entirely approximate the robust front.
Figure 4.29: Different shapes of Pareto fronts that can be obtained by manipulating β (panels: β = 0.5, 1, 2)

4.2.4.4 Flat (non-improving) search space

As mentioned above, a large portion of such landscapes is featureless, so there


is no useful or deceptive information about the location of optima. For creating
a robust multi-objective test function with a flat search space, the following
equations are proposed:

Minimise:  $f_1(\vec{x}) = x_1$   (4.64)

Minimise:  $f_2(\vec{x}) = H(x_2) \times \{G(\vec{x}) + S(x_1)\}$   (4.65)

where:  $H(x) = 1.2 - 0.2e^{-\left(\frac{x-0.95}{0.03}\right)^2} - 0.2e^{-\left(\frac{x-0.05}{0.01}\right)^2}$   (4.66)

$G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2$   (4.67)

$S(x) = 1 - x^{\beta}$   (4.68)

The function H is again modified to construct this test function. As illustrated


in Fig. 4.30, this function has two global optima close to the boundaries. This
function deliberately provides two optima to challenge robust algorithms in terms
of favouring fronts with different degrees of robustness. In addition, a large
portion of the function H is flat, so no information about the location of fronts
can be extracted from the search space. Another challenge here would be the
robustness of the flat region. Since the variation is consistent in the second
objective, a robust algorithm may be trapped in the flat region, assuming it as
the robust optimum, and failing to approximate the actual robust front.

Figure 4.30: H(x) makes two global optima close to the boundaries

Again the test function is equipped with a parameter called β which is respon-
sible for defining the shape of the fronts. Three variations of this test function are
constructed as shown in Fig. 4.31. Therefore, different shapes for both fronts pose another challenge for robust algorithms when solving these test functions.
All the proposed robust multi-objective test functions are provided in Ap-
pendix B.

4.3 Summary
This section tackled the lack of suitable and challenging test functions in the
literature of robust optimisation as the first phase of a systematic robust op-
timisation process. Three frameworks were first proposed to generate different
single-objective test functions.

Figure 4.31: Different shapes of Pareto fronts that can be obtained by manipulating β

The frameworks allowed creation of test functions with desired levels of difficulty: optima with alterable robustness level
(Framework I), multiple local non-robust solutions (Framework II), and alter-
able number of variables (Framework III). These frameworks allow us to create
test functions with diverse difficulties to challenge robust algorithms. A frame-
work is more beneficial than a single test function because it allows creating new
test functions. It is easy to use, reliable, and creates test functions with single
feature varying in degree of difficulty. In other words, frameworks assist testing
a specific ability of an algorithm at different levels of difficulty.
In addition, diverse difficulties were integrated with the test functions: de-
sired number of variables, biased search space, deceptive non-robust local solu-
tions, multiple non-robust local solutions, and flat search space. The charac-
teristics of each test function and difficulty were investigated theoretically, by
generating random solutions, and observing the shape of search space. A set
of 11 test functions were proposed including very challenging test functions as
the first test suite in the literature of robust single-objective optimisation. The
details of these test functions can be found in Appendix A (TP9 to TP20).
The second part of this chapter covered the proposal of three multi-objective
frameworks for creating robust multi-objective benchmark functions and integra-
tion of several hindrances with the current test functions. There is no framework
in the literature, so these three frameworks are the first. Framework 1 allowed us
to create test functions with a robust front with different degree of robustness.

Framework 2 was designed to create a search space with a desired number of


non-robust local fronts. Framework 3 was for designing a robust front with dis-
connected regions. These difficulties are the main challenges that an algorithm
may face when solving real problems and can be simulated efficiently by the
frameworks proposed. Similarly to the single-objective benchmarks, test func-
tions with the following difficulties were proposed for the first time: multi-modal
search space, multiple local non-robust fronts, deceptive non-robust local front,
robust front with different shapes (concave, convex, and linear), disconnected
robust front, biased search space, and non-improving (flat) search space. The
characteristics of each test function were inspected and confirmed theoretically
in detail.
This chapter also considered increasing the level of difficulty of current test
problems. The main contribution was the proposal of frameworks as the core for
the first phase of a systematic robust algorithm design process, but the current
test functions were also improved. This is due to the drawbacks of the current
robust test functions: low number of local solutions, low number of local fronts,
low number of variables, and simplicity. The frameworks allow us to create
challenging test functions to efficiently challenge robust algorithms. They can
also be used by other researchers to generate test problems with different levels
of difficulty. The improved test functions are also more challenging and better
mimic the difficulties of the real search spaces compared to the current ones.
Although the frameworks are beneficial in terms of generating test functions,
there should be a common test suite as well to allow comparing research from
different contributors. Therefore, a total of 34 challenging robust multi-objective
test problems were proposed as the most difficult test functions in the literature.
The proposed test suite will provide suitable test beds to compare the main
method of this thesis with other methods in the literature. The details of these
test functions can be found in Appendix B.
Due to the difficulties and diversity of the proposed frameworks and bench-
mark problems, they can confidently be used in the first phase of a systematic
robust algorithm design process to assure testing and challenging new algorithms,
including the method proposed in this thesis.
Chapter 5

Performance measures

The second phase of a systematic robust algorithm design process is perfor-


mance metric selection or proposal. Benchmark functions provide test beds,
but performance metrics allow comparing algorithms quantitatively. Perfor-
mance metrics are essential when evaluating an algorithm during the design
process [34, 149] because they measure how much better an algorithm is. In ad-
dition they determine the extent to which minor or major changes in algorithms
are beneficial. If suitable performance metrics are available in the literature, we
can easily employ some of them to be able to evaluate and compare algorithms.
If such metrics are not available, however, we have to propose new ones or ex-
tend the current ones. Due to the lack of performance metrics for quantifying the
performance of robust algorithms, this chapter proposes two for the first time.
For robust single-objective algorithms, the current performance indicators
can be employed easily due to similar performance measures: accuracy and
convergence speed. However, the current performance metrics in the field of
multi-objective optimisation cannot be employed to measure the performance of
a robust multi-objective algorithm. In this chapter, therefore, specific perfor-
mance metrics are proposed only for robust multi-objective algorithms.
As discussed in the related work, the literature shows that there are a con-
siderable number of performance metrics in the field of EMOO [184]. Due to
the complexity of the optimisation process and multi-objectivity in the EMOO
field, there should be several performance metrics when comparing algorithms
in order to provide a fair and objective comparison [112, 145].
The literature shows that different branches of EMOO also need specific or


adapted metrics for effectively quantifying the performance of algorithms. For


instance, Dynamic Multi-Objective Optimisation (DMOO) [49] and Interactive
Multi-Objective Optimisation (IMOO) [50] need their own modified metrics as
discussed in [86, 87] and [53] respectively. To date, however, there is no per-
formance metric in the field of RMOO despite its significant importance. This
is the motivation of this chapter, in which two novel performance metrics are
proposed for robust multi-objective algorithms.
Similar to the performance metrics in EMOO, there should be more than one
unary metric in order to efficiently evaluate and compare the performance of an
algorithm. The three main performance characteristics for an algorithm when
finding an approximation of the robust front are: convergence, distribution, and
the number of robust Pareto optimal solutions obtained. The first characteristic
refers to the convergence of an algorithm towards the true robust Pareto optimal
solutions. In this case, the ultimate goal is to find very accurate approximations
of the robust Pareto optimal solutions. The second feature of performance is the
ability of an algorithm in finding uniformly distributed robust Pareto optimal
solutions. Finally, the number of robust and non-robust Pareto optimal solutions
obtained is also important. A robust multi-objective algorithm should be able to find as many robust Pareto optimal solutions as possible and avoid returning
non-robust Pareto optimal solutions.
Obviously, all these performance characteristics cannot be measured effec-
tively by one unary metric. For the convergence, this thesis employs the IGD
to measure how close the solutions are to the expectation of the Pareto optimal
front considering the possible perturbations. However, the current convergence
and success ratio measures are not efficient because they do not consider the
robustness of different regions of the expectation of the front.
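For completeness, the IGD as used in this thesis can be computed as in the following sketch, in which the reference set is the expectation-based robust Pareto optimal front mentioned above; the function name is illustrative.

```python
import numpy as np

def igd(obtained, reference):
    """Inverted Generational Distance: mean, over the reference front, of the
    Euclidean distance to the closest obtained solution (lower is better)."""
    obtained = np.asarray(obtained, dtype=float)
    reference = np.asarray(reference, dtype=float)
    d = np.linalg.norm(reference[:, None, :] - obtained[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```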
In the following subsections two novel metrics are proposed in order to mea-
sure coverage and the success ratio of robust multi-objective algorithms. It is
assumed that the robustness of the true Pareto optimal front has been previously
calculated (and is represented by the grey curves in the figures). Technically
speaking, if the robustness has not been previously defined for each true Pareto
optimal solution, it can be calculated easily by generating and averaging a set
of random solutions (perturbing parameters with probable δ error) in the neigh-
bourhood of the true Pareto solutions. Another assumption is that a smaller
value of a robustness metric for a solution means less sensitivity to perturbation.

5.1 Robust coverage measure (Φ)


The main idea of the proposed coverage measure of this thesis has been inspired
by that of Lewis et al. in [112]. The general concepts of the proposed coverage
measure are illustrated in Fig. 5.1.

Figure 5.1: Schematic of the proposed coverage measure (Φ)

As may be seen in this figure, the objective space is divided into several equal sectors from the origin in the proposed measure. The sectors are divided into two groups: robust and non-robust sectors.
Needless to say, the robustness of the entire regions of the Pareto front should be
known in order to identify the robust and non-robust sectors. After defining the
number of robust segments, the number of segments that contain at least one
Pareto optimal solutions obtained should be counted and divided by the total
number of robust segments. The mathematical formulation is as follows (note
that this performance measure only works for bi-objective problems):
$\Phi = \frac{1}{N}\sum_{n=1}^{N} \phi_n$   (5.1)

$\phi_n = \begin{cases} 1 & \exists\,\vec{x} \in PS,\ \alpha_{n-1} \leq \tan^{-1}\!\left(\frac{f_1(\vec{x})}{f_2(\vec{x})}\right) \leq \alpha_n,\ R(P_n) \leq R_{min} \\ 0 & \text{otherwise} \end{cases}$   (5.2)

where N is the number of robust sectors, ~x is an approximation of the Pareto


optimal solutions obtained, αn is the angle of the right line of a segment, Pn

indicates the closest true Pareto optimal solution to ~x, and Rmin is a minimum
robustness value defined by the user. Note that there should be an exception
when there is no robust segment in order to prevent division by zero.
The accuracy of this measure increases with the number of segments. It should be noted that some of the segments can be partially robust and partially non-robust (S8 in Fig. 5.1). A segment is counted as robust only if it is entirely robust, even if the solution obtained lies on the robust part of such a segment.
In the example of Fig. 5.1, there are 10 segments and 9 Pareto optimal
solutions obtained. The segments S1, S8, S9, and S10 are not robust, whereas
S2 to S7 segments are robust. Among the six robust sectors, three of them are
occupied by at least one solution. Therefore, the coverage measure is Φ = 3/6 =
0.5, meaning that 50% of the robust segments (approximately the robust Pareto
optimal front) are covered by the given Pareto optimal solutions obtained.
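A simplified implementation of the coverage measure is sketched below. The restriction to the positive quadrant, the skipping of sectors that contain no true Pareto optimal point, and the function name are assumptions made only for this sketch.

```python
import numpy as np

def robust_coverage(obtained, true_front, true_R, r_min, n_sectors=10):
    """Sketch of the coverage measure Phi (Eqs. 5.1-5.2) for a bi-objective
    minimisation problem with non-negative objectives.

    obtained, true_front : (k, 2) arrays of objective vectors
    true_R               : robustness of each true-front point (smaller = more robust)
    """
    obtained = np.asarray(obtained, dtype=float)
    true_front = np.asarray(true_front, dtype=float)
    true_R = np.asarray(true_R, dtype=float)

    ang_obt = np.arctan2(obtained[:, 0], obtained[:, 1])      # angle whose tan is f1/f2
    ang_true = np.arctan2(true_front[:, 0], true_front[:, 1])
    edges = np.linspace(0.0, np.pi / 2, n_sectors + 1)        # equal sectors from the origin

    n_robust = n_occupied = 0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_true = (ang_true >= lo) & (ang_true <= hi)
        if not in_true.any():                                 # no true-front point in this sector
            continue
        if true_R[in_true].max() > r_min:                     # only entirely robust sectors count
            continue
        n_robust += 1
        if ((ang_obt >= lo) & (ang_obt <= hi)).any():         # occupied by >= 1 obtained solution
            n_occupied += 1
    return n_occupied / n_robust if n_robust > 0 else 0.0     # exception: no robust sector
```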
Some comments on the theoretical effectiveness of the proposed measure are
written in the following paragraphs. Note that the grey line determines the ro-
bustness of points on the front; it gives no details regarding points away from the
front. It also assumes one-to-one mappings, whereas in many real-world prob-
lems there are many-to-one mappings, which may have different robustnesses.
In the following figures, a segment bounded on at least one side with a red line
is not robust. By contrast, a segment with two blue lines is robust.

• The greater the number of occupied robust segments, the higher the cov-
erage (see Fig. 5.2):

$\sum_{n=1}^{N} \phi_n^1 > \sum_{n=1}^{N} \phi_n^2 \;\longrightarrow\; \Phi_1 > \Phi_2$   (5.3)

• The higher the value of Φ, the greater the coverage of an algorithm on


robust regions of a Pareto optimal front.

• The number of the robust segments are considered by Φ, so the proposed


measure indicates the coverage of robust regions (without considering non-
robust segments) (see Fig. 5.3).

 = 0.33333  = 0.58333
1 1

0.8 0.8

0.6 0.6
f2

f2
0.4 0.4

0.2 0.2

0 0
0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1
f1 f1

Figure 5.2: Effect of the number of occupied robust segments on the proposed
coverage measure

=1 =1 Non-robust


1 1 solutions

0.8 0.8

0.6 0.6
f2

f2

0.4 0.4

0.2 0.2

0 0
0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1
f1 f1

Figure 5.3: Zero effect of occupied non-robust segments on the proposed coverage
measure

• Since the segments with partially robust regions are omitted, they have no
effect on the final value of Φ (see Fig. 5.4).

• Since Φ counts the number of segments (not the solutions in the segments),
a large number of solutions obtained does not necessarily increase Φ. For
example, an algorithm that finds a large set of solutions in a single segment
shows low coverage by the proposed measure.

• A greater number of segments results in greater accuracy of the coverage


measure. In addition, increasing the number of segments does not increase
the value of Φ since the occupied robust segments are divided by the num-
ber of robust segments (see Fig. 5.5).

=0
1

0.8

0.6

f2 0.4

0.2

0
0 0.2 0.4 0.6 0.8 1
f1

Figure 5.4: Segments that are partially robust do not count when calculating Φ

 = 0.33333  = 0.090909
1 1

0.8 0.8

0.6 0.6
f2

f2

0.4 0.4

0.2 0.2

0 0
0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1
f1 f1

Figure 5.5: The accuracy of the proposed coverage measure is increased propor-
tional to the number of segments

• Since the number of occupied segments is divided by the total number of


robust segments, the values of Φ always lie in [0, 1].

• Use of the minimum robustness (Rmin ) assists in defining the degree of


robustness when calculating Φ.

• The coverage measure cannot be calculated when Rmin < min (R(robustness curve)),
and this is considered an exceptional case (see Fig. 5.6).

• The Φ measure converts to a normal coverage measure (all the segments


are assumed as robust) when Rmin > max (R(robustness curve)) (see
Fig. 5.7).

=0
1

0.8

0.6

f2 0.4
Rmin
0.2

0
0 0.2 0.4 0.6 0.8 1
f1

Figure 5.6: Effect of the minimum robustness on the number of robust segments
and Φ
 = 0.73684
1

0.8

0.6
Rmin
f2

0.4

0.2

0
0 0.2 0.4 0.6 0.8 1
f1

Figure 5.7: All segments are converted to robust and counted when Rmin >
max (R(robustness curve))

5.2 Robust success ratio (Γ)


With IGD and the measure proposed so far, the convergence and coverage of a
given set of robust Pareto optimal solutions can be measured. However, none
of these measures can quantify the success of an algorithm in terms of finding
a number of robust solutions and avoiding non-robust solutions. Although dif-
ferent numbers of robust Pareto optimal solutions obtained affects the proposed
measures, it seems there should be another measure that specifically assesses the
number of robust and non-robust solutions. This is the motivation of a novel
measure called robustness success ratio (Γ) in this subsection. A conceptual
model of the Gamma measure is depicted in Fig. 5.8.
Figure 5.8: Conceptual model of the proposed success ratio measure

As can be seen in this figure, the first step of calculating this measure is very similar to that of the coverage measure. The objective space is divided into
vertical segments. The reason of choosing vertical sectors instead of diagonal
segments is the importance of the robustness of each robust optimal solution
obtained. In other words, a vertical line is drawn from each point to intersect
the robustness curve. If the intersection point lies below the minimum desired
robustness, the solution is robust. When calculating the Φ measure, the coverage
of a given set of solutions is important no matter how far they are from the true
robust Pareto optimal solutions. So a solution can be non-robust itself but in a
robust segment due to its corresponding robust reference point. An example of
this phenomenon is illustrated in Fig. 5.9. This figure shows that the solution
obtained is located in a robust segment when calculating Φ and in a non-robust
segment when defining the Γ measure.
After vertically dividing the objective space into N segments, the division
of the number of solutions obtained in the robust sectors by the number of
solutions obtained in the non-robust sectors is calculated as the success ratio of
an algorithm. The success ratio is mathematically expressed as:

$\Gamma = \frac{\gamma^R}{1 + \gamma^{NR}}$   (5.4)

where:  $\gamma^R = \sum_{m=1}^{M} p_m^R$   (5.5)

Figure 5.9: An example of a probable problem in case of using diagonal segments when calculating Γ


$p_m^R = \begin{cases} 1 & \exists\,\vec{x} \in PS,\ \beta_{m-1} \leq f_1(\vec{x}) \leq \beta_m,\ R(P_n) \leq R_{min} \\ 0 & \text{otherwise} \end{cases}$   (5.6)

$\gamma^{NR} = \sum_{m=1}^{M} p_m^{NR}$   (5.7)

$p_m^{NR} = \begin{cases} 1 & \exists\,\vec{x} \in PS,\ \beta_{m-1} \leq f_1(\vec{x}) \leq \beta_m,\ R(P_n) > R_{min} \\ 0 & \text{otherwise} \end{cases}$   (5.8)

where M is the number of Pareto optimal solutions obtained, ~x is an approximation of the Pareto optimal solutions obtained, βn is the f1 value of the right line of a segment, Pn is the closest true Pareto optimal solution to ~x, and Rmin is the minimum robustness value.
In addition to the different segmentation mechanism, another difference of
this measure compared to the proposed coverage measure is that Γ counts the
number of solutions obtained in the robust segments, whereas the Φ measure counts the number of segments that have at least one solution.
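A simplified sketch of the success ratio is given below. Following the textual description above, the segment counters of Equations 5.5-5.8 are collapsed into a per-solution check against the robustness of the closest (in f1) true Pareto optimal point; this simplification, like the function name, is an assumption of the sketch.

```python
import numpy as np

def robust_success_ratio(obtained, true_front, true_R, r_min):
    """Sketch of the success ratio Gamma (Eq. 5.4): each obtained solution is
    projected vertically onto the robustness curve of the true front and counted
    as robust or non-robust against r_min."""
    obtained = np.asarray(obtained, dtype=float)
    true_front = np.asarray(true_front, dtype=float)
    true_R = np.asarray(true_R, dtype=float)

    gamma_r = gamma_nr = 0
    for f1, _ in obtained:
        nearest = np.argmin(np.abs(true_front[:, 0] - f1))   # same vertical segment
        if true_R[nearest] <= r_min:
            gamma_r += 1
        else:
            gamma_nr += 1
    return gamma_r / (1 + gamma_nr)                          # Eq. (5.4)
```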
Note that γ^NR is incremented by one in Equation 5.4 in order to prevent division by zero in case an algorithm does not find any non-robust solutions. In order to see how the proposed measure can be effective, some highlights are:

• The success ratio of an algorithm equals zero if there are no robust solutions
found (see Fig. 5.10):

$\gamma^R = 0 \;\longrightarrow\; \Gamma = 0$   (5.9)

=0
1

0.8

0.6
f2

0.4 Rmin

0.2

0 0.2 0.4 0.6 0.8 1


f1

Figure 5.10: Success ratio is zero if there is no robust solution in the set of
solutions obtained

• The success ratio of an algorithm with no non-robust solutions obtained is


equal to the number of robust solutions obtained (see Fig. 5.11):

$\gamma^{NR} = 0 \;\longrightarrow\; \Gamma = \gamma^R$   (5.10)

=4
1

0.8

0.6
f2

0.4 Rmin

0.2

0 0.2 0.4 0.6 0.8 1


f1

Figure 5.11: Example of the success ratio for a set that contains only robust
solutions

• The greater the number of robust solutions obtained, the higher the success
ratio (see Fig. 5.12):

$(\gamma_1^{NR} = \gamma_2^{NR}) \wedge (\gamma_1^R > \gamma_2^R) \;\longrightarrow\; \Gamma_1 > \Gamma_2$   (5.11)



=1  = 1.5
1 1

0.8 0.8

0.6 0.6
f2

f2
0.4 0.4 Rmin

0.2 0.2

0 0

0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1


f1 f1

Figure 5.12: Success ratio increases proportional to the number of robust solu-
tions obtained

• The fewer non-robust solutions obtained, the higher the success ratio (see
Fig. 5.13):
$(\gamma_1^R = \gamma_2^R) \wedge (\gamma_1^{NR} > \gamma_2^{NR}) \;\longrightarrow\; \Gamma_1 < \Gamma_2$   (5.12)

 = 0.6  = 0.75
1 1

0.8 0.8

0.6 0.6
f2

f2

0.4 0.4 Rmin

0.2 0.2

0 0

0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1


f1 f1

Figure 5.13: Success ratio is inversely proportional to the number of non-robust


solutions obtained

• The relative success ratios of two algorithms without non-robust solutions


are defined by the respective number of solutions obtained in robust seg-
ments.
$(\gamma_1^{NR} = \gamma_2^{NR} = 0) \wedge (\gamma_1^R > \gamma_2^R) \;\longrightarrow\; \Gamma_1 > \Gamma_2$   (5.13)

$(\gamma_1^{NR} = \gamma_2^{NR} = 0) \wedge (\gamma_1^R < \gamma_2^R) \;\longrightarrow\; \Gamma_1 < \Gamma_2$   (5.14)

• The use of minimum robustness (Rmin ) assists in defining the degree of


robustness when calculating Γ.

• The success ratio equals zero when Rmin < min (R(robustness curve)) (see
Fig. 5.14).

=0
1

0.8

0.6
f2

0.4
Rmin
0.2

0 0.2 0.4 0.6 0.8 1


f1

Figure 5.14: Effect of minimum robustness on success ratio when Rmin <
min (R(robustness curve))

• The Γ measure counts all Pareto optimal solutions obtained and converts
to a normal success measure (all the segments are assumed as robust) when
Rmin > max (R(robustness curve)) (see Fig. 5.15).

• A greater number of segments results in greater accuracy of the Γ measure.

It should be noted that the proposed performance measures are designed for theoretical studies, in which the robust Pareto fronts of the test functions are known.
However, these measures can also be employed to quantify the performance of
algorithms in solving real problems subject to availability of a known true Pareto
optimal front. If the true Pareto optimal front is unknown (which is generally
the case in real problems), the following steps should be completed to be able to
use the proposed performance measures:

• Solve the problem with a robust algorithm using the maximum possible
number of search agents, iterations, and sampling points to find an accurate
approximation of the true robust Pareto optimal front.

 = 16
1

0.8

0.6 Rmin

f2 0.4

0.2

0 0.2 0.4 0.6 0.8 1


f1

Figure 5.15: Effect of minimum robustness on success ratio when Rmin >
max (R(robustness curve))

• Calculate the robustness of each obtained solution in the first step by re-
sampling n perturbed solutions around it.

After these two steps, robust multi-objective algorithms can be employed to


approximate the robust front and then compared with each other quantitatively
using the reference set obtained and the proposed performance measures.
Another point that may be noted is that the proposed performance measures
are helpful for quantifying the performance of robust multi-objective algorithms.
They can be employed to analyse the results of any general multi-objective opti-
misation algorithms. An algorithm need not be Pareto-based to be measured by
the proposed performance indicators. For instance, objective aggregation-based
algorithms can also be employed to approximate the robust Pareto optimal front
and then the performance measures applied to the set of solutions obtained.
These measures cannot be utilised directly to improve the performance of robust
multi-objective algorithms, but they are helpful for comparing the results of dif-
ferent algorithms after the optimisation process. The main point of the proposed
performance metrics is that they can work with any set of reference points. No
matter if the reference point is in the Pareto optimal front defined by the ex-
pectations or nominal objective functions, the performance metrics quantify the
performance of an algorithm.

5.3 Summary
Due to the lack of performance measures in the field of robust optimisation, we
cannot systematically design a robust algorithm. As the second phase of a sys-
tematic robust algorithm design process, this chapter proposed two performance
measures for quantifying the performance of robust multi-objective algorithms
for the first time. Both proposed measures are able to evaluate the performance
of a robust multi-objective algorithm from different perspectives quantitatively.
For one, the coverage measure proposed quantifies the distribution of Pareto op-
timal solutions obtained by algorithms along the robust front. For another, the
proposed success ratio allows calculating the number of robust and non-robust
solutions obtained.
For each of the proposed performance indicators, several tests were conducted
on manually created robust Pareto optimal fronts. The tests showed that: the
coverage and success measures are able to quantify the spread of the robust
Pareto optimal solutions across the robust regions and the number of robust and
non-robust solutions obtained.
There is no specific performance measure in the field of robust multi-objective
optimisation, so the proposed measures in this chapter fill this substantial gap.
Without these measures, we can only observe which algorithm is better in a
qualitative sense. However, the proposed measures allow us to reliably investi-
gate and confirm how much better an algorithm is. In addition, they are helpful
in determining the extent to which changes in algorithms are beneficial. With
such measures, therefore, a systematic robust algorithm design process is possible
not only in this thesis but also in other works in future.
The remarks for each of the measures suggested that the proposed measures
allow designers to benchmark their algorithms effectively and quantitatively.
They will be the main comparison measures in this thesis as well. In Chapter 8,
experimental results will demonstrate the effectiveness of the proposed measures
in practice.
Chapter 6

Improving robust optimisation techniques

The last phase of a systematic robust design process is algorithm design. With-
out benchmark problems and performance metrics, we cannot compare and find
out which ideas are better than others quantitatively. The proposed phases in
Chapter 4 and 5, benchmark problems and performance metrics, allow us to
reliably and confidently compare and evaluate new ideas.
Although algorithm evaluation/verification requires test functions and per-
formance metrics, the algorithm design itself includes several steps. An algorithm
design or improvement process starts with new ideas. An idea might be to hy-
bridise algorithms, to integrate new operators in an algorithm, or to propose a
novel approach.
Currently, the two approaches of robust optimisation in this field, explicit
versus implicit, suffer from two main drawbacks: high computational cost and
low reliability. Therefore, they are less applicable to real problems with computa-
tionally expensive cost function(s). In addition, unreliability of implicit methods
prevents us from finding robust solution(s) confidently, which is very critical for
real problems. In order to alleviate such shortcomings, this chapter proposes
several new ideas and establishes novel approaches for finding robust solution(s)
in both single-objective and multi-objective search spaces reliably without the
need for additional true function evaluations.
Firstly, a confidence measure is proposed to define the degree of robustness of
solutions during optimisation when using meta-heuristics. Secondly, confidence-
based relational operators are proposed to establish Confidence-based Robust
Optimisation (CRO) using single-objective optimisation algorithms. The pro-
posed operators have been integrated with PSO and GA as the first confidence-
based robust algorithms in the literature. Finally, confidence-based Pareto opti-
mality is proposed to establish Confidence-based Robust Multi-objective Optimi-
sation (CRMO) using multi-objective algorithms. The mechanism of performing
CRMO using the MOPSO algorithm is introduced and discussed at the end.
6.1 Confidence measure


In this section a novel metric called the Confidence (C ) measure is proposed in
order to calculate the confidence we have in the robustness of particular solu-
tions based on the location of solutions already known in the neighbourhood.
Later, different confidence-based relational operators are proposed utilising the
C measure for robust meta-heuristics. Finally, the confidence-based operators
are employed to establish a novel approach for finding robust solutions, called
CRO. As case studies, some different methods of performing CRO using PSO
and GA are introduced to demonstrate the general applicability of the proposed
approach.
In order to be useful, the confidence metric should be able to define the
confidence level of a robust solution. Needless to say, the highest confidence
is achieved when there are a large number of solutions available with greatest
diversity within a suitable neighbourhood around the solution (search agent)
in the parameter space. These three descriptive factors can be mathematically
expressed as follows:
• C ∝ the number of sampled points in the neighbourhood (n)

• C ∝ the inverse of the radius of the neighbourhood (r)

• C ∝ the inverse of the distribution of the available points in the neighbourhood (σ)
The proposed confidence equations are as follows:

\[ C = \frac{n}{r \cdot \sigma + 1} \qquad (6.1) \]

\[ \sigma = \sqrt{\frac{\sum_{i=1}^{n} (\bar{d} - d_i)^2}{n - 1}} \qquad (6.2) \]
Figure 6.1: Confidence measure considers the number, distribution, and distance
of sampled points from the current solution

where n ≥ 2, d̄ is the average distance between the current solution and all the
sampled points within the neighbourhood, and di is the Euclidean distance of the
i-th sampled point to the current solution.
Note that due to the stochastic nature of meta-heuristics, we assume that the
distribution of sampled points within r radius around a solution is approximately
uniform. Therefore, if the sampled points are closer to the solution, they give
better confidence about the robustness. The concepts of the proposed metric
and components involved are illustrated in Fig. 6.1.
This figure shows that the proposed confidence measure defines and assumes
a neighbourhood with radius r around every solution during optimisation. Defining
this radius allows investigating different levels of perturbation in the parameter
space. By assuming a neighbourhood, an algorithm is able to differentiate between
the confidence levels of neighbouring solutions closer to or farther from the main
solution. Obviously, previously sampled solutions closer to the main solution are
able to better assist us in confirming its robustness. The proposed confidence
measure formulation considers this fact by
dividing the factors of number of solutions and distribution by r.
Fig. 6.1 also shows that the proposed confidence measure considers the num-
ber of previously sampled points in the neighbourhood as well as their distri-
bution. This figure shows that considering both of these factors is essential
because the number of solutions is not able to show the status of neighbouring
solutions in terms of distribution. One solution may have more sampled points
but without broad distribution. As may be seen in Fig. 6.1, the distance of each
previously sampled solution is first calculated with respect to the main solu-
tion. The standard deviation is then employed to indicate the dispersion of the
previously sampled solutions.
Technically, calculating the confidence measure requires computing the Euclidean
distance between a given position and each previously sampled point in order to
find those within the desired neighbourhood. Therefore, the computational
complexity of the proposed metric is O(ns d), where ns is the number of previously
sampled points and d indicates the number of dimensions. This is the cost of
calculating the Euclidean distance between every previously sampled solution and
the main solution.
The following remarks illustrate how the proposed C measure can be theoretically
efficient:

• The confidence level of a search agent without previously evaluated neighbouring solutions is equal to zero:

n = 0 =⇒ C = 0 (6.3)

• The confidence levels of solutions with the same number of neighbouring
samples evaluated within equal radii are differentiated based on the dispersion:

n1 = n2 ∧ r1 = r2 ∧ σ1 > σ2 =⇒ C1 < C2 (6.4)

• The confidence levels of two solutions with an equal number of neighbouring
solutions and similar distributions are defined based on the radii of their
neighbourhoods. The closer the neighbourhood, the higher the level of confidence:

n1 = n2 ∧ σ1 = σ2 ∧ r1 > r2 =⇒ C1 < C2 (6.5)

• The greater the value of C, the greater the level of confidence.

• Due to using Euclidean distance, the C measure can easily be extended to problems with diverse numbers of dimensions.

• The proposed measure is independent of the objective function(s), so it is compatible with any kind of robustness measure.

• The overall computational complexity of calculating C is O(ns d), so it is considered a cheap metric.

• The confidence measure is easy to implement (see the sketch below).
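
To illustrate the last remark, a minimal Python sketch of Equations 6.1 and 6.2 is
given below. It is an illustrative implementation rather than code used in this
thesis; the function name, the reliance on NumPy, and the decision to return zero
confidence for neighbourhoods containing fewer than two samples are assumptions.

import numpy as np

def confidence(x, sampled_points, r):
    """Confidence measure C (Eqs. 6.1 and 6.2) of a solution x, given the
    previously sampled points and a neighbourhood radius r."""
    x = np.asarray(x, dtype=float)
    pts = np.atleast_2d(np.asarray(sampled_points, dtype=float))
    if pts.size == 0:
        return 0.0                          # no samples -> C = 0 (Eq. 6.3)
    d = np.linalg.norm(pts - x, axis=1)     # Euclidean distances, O(ns * d)
    d = d[d <= r]                           # keep points inside the neighbourhood
    n = d.size
    if n < 2:                               # Eq. 6.2 is defined for n >= 2;
        return 0.0                          # assumption: treat C as zero otherwise
    sigma = np.sqrt(np.sum((d.mean() - d) ** 2) / (n - 1))
    return n / (r * sigma + 1.0)            # Eq. 6.1

# Example: four archived samples around a 2-D solution with r = 0.5
print(confidence([0.0, 0.0],
                 [[0.1, 0.0], [-0.2, 0.1], [0.0, 0.3], [0.6, 0.6]], r=0.5))

The sketch scans the entire set of previously sampled points, which is consistent
with the O(ns d) complexity discussed above.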

6.2 Confidence-based robust optimisation


As discussed, the confidence measure is able to calculate the confidence level of
solutions with respect to the location of neighbouring evaluated solutions. This
measure has the potential to be integrated with other mathematical operators
for single- and multi-objective optimisation. In this thesis five new confidence-
based relational operators are proposed as follows. Please note that the term
robustness indicator (R) is utilised to refer to any kind of robustness measure.
So, R(x) and C(x) calculates the robustness and confidence of the solution x
respectively.

6.2.1 Confidence-based relational operators


~x is said to be confidently less than ~y (denoted by ~x <c ~y ) iff :

(R(~x) < R(~y )) ∧ (C(~x) > C(~y )) (6.6)

~x is said to be confidently less than or equal to ~y (denoted by ~x ≤c ~y ) iff :

(R(~x) ≤ R(~y )) ∧ (C(~x) ≥ C(~y )) (6.7)

~x is said to be confidently greater than ~y (denoted by ~x >c ~y ) iff :

(R(~x) > R(~y )) ∧ (C(~x) > C(~y )) (6.8)

~x is said to be confidently greater than or equal to ~y (denoted by ~x ≥c ~y ) iff :

(R(~x) ≥ R(~y )) ∧ (C(~x) ≥ C(~y )) (6.9)

~x is said to be confidently equal to ~y (denoted by ~x =c ~y ) iff :

(R(~x) = R(~y )) ∧ (C(~x) = C(~y )) (6.10)
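
The five operators translate directly into code. The following minimal sketch is
illustrative only; R and C stand for caller-supplied functions returning the
robustness indicator and the confidence measure of a solution, and the function
names are assumptions.

def confidently_less(x, y, R, C):        # x <_c y   (Eq. 6.6)
    return R(x) < R(y) and C(x) > C(y)

def confidently_leq(x, y, R, C):         # x <=_c y  (Eq. 6.7)
    return R(x) <= R(y) and C(x) >= C(y)

def confidently_greater(x, y, R, C):     # x >_c y   (Eq. 6.8)
    return R(x) > R(y) and C(x) > C(y)

def confidently_geq(x, y, R, C):         # x >=_c y  (Eq. 6.9)
    return R(x) >= R(y) and C(x) >= C(y)

def confidently_equal(x, y, R, C):       # x =_c y   (Eq. 6.10)
    return R(x) == R(y) and C(x) == C(y)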


As can be seen in the proposed confidence-based relational operators, the
comparison is not solely based on the robustness indicator. This gives us the
opportunity to design different mechanisms to confidently evaluate the search
agents of meta-heuristics and consequently direct them toward the robust regions
of the search space.

6.2.2 Confidence-based Particle Swarm Optimisation


The proposed operators have the possibility to be integrated with any kind
of meta-heuristics. Various combinations of these operators can be incorpo-
rated into different components of meta-heuristic algorithms. The main idea of
performing CRO is to employ confidence-based relational operators to design
confidence-based components for meta-heuristics and guide the search agents
confidently towards robust solution(s). Depending on the structure of a meta-
heuristic, different CRO scenarios can be defined. In EA, for instance, confidence-
based reproduction processes (the crossover component) can be designed to gen-
erate offspring of an individual with robustness of highest confidence. Moreover,
elitism can be re-designed totally based on the confidence level of robustness of
individuals.
The general framework of the proposed confidence-based robust optimisation
is illustrated in Fig. 6.2. The most important steps of robust optimisation that
are influenced by the proposed methodology are highlighted in grey. It may be
seen that the initial random population is first evaluated. Before modifying so-
lutions, which is based on the mechanism of the algorithm, the confidence level
of each solution is calculated based on the current status of previously sam-
pled points. By calculating the confidence level of all solutions, they can be
compared by the confidence-based relational operators proposed in the preced-
ing subsection. These operators allow us to decide whether a solution is confi-
dently better than another. The two actions that can be taken are illustrated
in Fig. 6.2. On one hand, the confidently better solutions are normally modi-
fied/evolved/combined by the operators of the algorithm. On the other hand, the
non-confident solutions can either be discarded or randomly initialised/modified.
These steps are repeated until the satisfaction of an end criterion. It is evident
from this flow chart diagram that the proposed confidence-based robust opti-
misation prevents non-confident solutions from participating in improving the
population. Therefore, the reliability of an algorithm can be improved signifi-
cantly when utilising the previously sampled points.

Figure 6.2: Flow chart of the general framework of the proposed confidence-based
robust optimisation


The proposed method is readily applicable for handling uncertainties in op-
erating conditions as well. However, the process of handling such uncertainties
is subject to one condition. Since operating conditions are secondary inputs for
a system and not usually considered as a parameter to be optimised (they are
always considered as a fixed value), they have to be parametrised first and then
may be optimised by a confidence-based robust optimiser. This means that the
operating conditions will be changed and optimised by the optimiser in addition
to other parameters.
In this thesis, the PSO algorithm is chosen as the first case study. Later,
the CRO method will be applied using GA. With the proposed metric, generally
speaking, there would be two metrics to find robust solutions: robustness and
confidence metrics. The former metric defines the robustness of the search agent
of meta-heuristics, whereas the latter metric defines how confident we are in the
robustness of the solution.
In the simple robust PSO algorithm, the particles are compared as follows:

~x < ~y ⇐⇒ R(~x) < R(~y ) (6.11)

where R(.) is the robustness indicator. The gBest and pBests are updated when a
particle finds a better solution in the search space. With the proposed confidence-
based operators, however, two new Confidence-based Robust PSOs (CRPSO) are
proposed as follows:
In CRPSO1, a particle is replaced with the best particle obtained so far if
and only if it is confidently better than the best solution as follows:
For minimisation:

gBest := ~x ⇐⇒ ~x ≤c gBest (6.12)

where ~x is a particle.
For maximisation:

gBest := ~x ⇐⇒ ~x ≥c gBest (6.13)

where ~x is a particle.
In order to implement this, the confidence of the best solution obtained so
far (gBest) is stored in a new variable called cgBest. It is clear that the gBest
is updated if and only if the confidence level and robustness indicator are both
better in CRPSO1.
In CRPSO2, the personal best solutions (pBest) found so far are also updated
(in addition to the gBest) based on the confidence level of solutions as follows:
For minimisation:

pBesti := ~x ⇐⇒ ~x ≤c pBesti (6.14)

For maximisation:

pBesti := ~x ⇐⇒ ~x ≥c pBesti (6.15)

A new vector called cpBest is defined to store the best confidence level of
particles over the course of iterations.
As may be inferred from these equations, two confidence-based update proce-
dures were designed to perform CRO using CRPSO. The two CRPSOs proposed
show how the proposed confidence-based operators establish a new way of robust
optimisation using meta-heuristics.
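
As an illustration, the two update rules can be sketched as follows for a
minimisation problem (an illustrative sketch rather than the implementation used
in this thesis; R and C denote the robustness indicator and confidence measure of
a candidate solution, and the function names are assumptions).

def update_gbest(particle, gbest, R, C):
    """CRPSO1 rule (Eq. 6.12): gBest is replaced only when the particle is
    confidently less than or equal to it, i.e. its robustness indicator is
    no worse and its confidence is no lower."""
    if R(particle) <= R(gbest) and C(particle) >= C(gbest):
        return particle
    return gbest

def update_pbest(particle, pbest, R, C):
    """CRPSO2 additionally applies the same confidence-based rule to the
    personal best of each particle (Eq. 6.14)."""
    if R(particle) <= R(pbest) and C(particle) >= C(pbest):
        return particle
    return pbest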

6.2.3 Confidence-based Robust Genetic Algorithms


In a Robust GA (RGA) every individual is evaluated based on its corresponding
robustness measure and allowed to participate in the production of following gen-
erations. The greater the robustness of an individual, the higher the probability
of mating. When RGA uses previously sampled points in the neighbourhood to
define the robustness of each individual, it is prone to favour non-robust indi-
viduals, especially in the initial iterations because there are not enough sampled
points to decide on the robustness of individuals. In this thesis, the confidence
measure is employed to prevent such circumstances and drive the individuals to-
ward robust optima. In order to integrate the confidence measure and confidence
operators in the RGA, the following confidence-based components are proposed:

1. Confidence-based elitism: This component gives higher priority to an individual
with high confidence. Without loss of generality, the mathematical
model for a minimisation problem is as follows:

E := Ii ⇐⇒ ∀k | Ii ≤c Ik (6.16)

where E is the elite and Ii is the i-th individual.


The purpose of this component is to save and replicate the most confidently
robust solution throughout the generations and guide the rest of the individuals
towards it.
2. Confidence-based crossover: this component prevents children with low
confidence from replacing their parents in the next generation. In other words,
the parents are replaced by their children if and only if they are confidently
more robust than them. The mathematical model for a minimisation prob-
lem is as follows:

Pi := CHi ⇐⇒ CHi ≤c Pi (6.17)

where Pi is i-th parent and CHi indicates the i-th child.


The purpose of this component is to preserve highly confidently robust
individual(s) in each generation and allow them to guide other individuals
toward promising highly confidently robust regions of search spaces.

These two components are incorporated into the simple RGA to form the
Confidence-based Robust Genetic Algorithms (CRGA). In the first version,
CRGA1, the best individual in each generation is selected by using Equation 6.16
and moved directly to the next generation based on the confidence-based elitism
component. Note that if two solutions are better than the elite, the first one re-
places the elite. If the second solution has a higher confidence and better fitness,
the elite is updated again.
In the second version, CRGA2, the individuals of each generation are allowed
to move to the next generation subject to Equation 6.17.
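
The two confidence-based components can be sketched as follows for a
minimisation problem. This is an illustrative sketch only; in particular, the
fallback used when no individual is confidently better than all others is an
assumption, since the elitism rule above does not specify a tie-breaking strategy.

def confidence_based_elite(population, R, C):
    """CRGA1 elitism (Eq. 6.16): return an individual that is confidently
    less than or equal to every other individual; if none exists, fall back
    to the most robust individual (assumed tie-breaking rule)."""
    for candidate in population:
        if all(R(candidate) <= R(other) and C(candidate) >= C(other)
               for other in population):
            return candidate
    return min(population, key=R)

def confidence_based_replacement(parent, child, R, C):
    """CRGA2 crossover rule (Eq. 6.17): the child replaces its parent only
    if it is confidently less than or equal to the parent."""
    if R(child) <= R(parent) and C(child) >= C(parent):
        return child
    return parent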

6.3 Confidence-based robust multi-objective optimisation
6.3.1 Confidence-based Pareto optimality
With the Confidence measure, the concepts of Pareto dominance, Pareto opti-
mality, Pareto solution set, and Pareto front for robust optimisation can now be
modified as follows (recall that the robustness indicator R refers to any kind of
robustness measure, so R(x) and C(x) calculate the robustness and confidence of
a solution x, respectively):

Definition 6.3.1 (Confidence-based Pareto Dominance for minimisation): Suppose
that there are two vectors: ~x = (x1, x2, ..., xk) and ~y = (y1, y2, ..., yk).
Vector ~x confidently dominates vector ~y (denoted as ~x ≺c ~y ) if and only if:

∀i ∈ (1, 2, ..., o) : [Ri(~x) ≤ Ri(~y)] ∧ [∃i ∈ (1, 2, ..., o) : Ri(~x) < Ri(~y)] ∧ [C(~x) ≥ C(~y)]

Note that confidence-based Pareto dominance in a maximisation problem can be
achieved by converting ≤ and < to ≥ and >. Since the concept of the confidence
measure does not change between minimisation and maximisation problems, the
confidence-based Pareto dominance for maximisation is defined as follows:

Definition 6.3.2 (Confidence-based Pareto Dominance for maximisation): Suppose
that there are two vectors: ~x = (x1, x2, ..., xk) and ~y = (y1, y2, ..., yk).
Vector ~x confidently dominates vector ~y (denoted as ~x ≻c ~y) if and only if:

∀i ∈ (1, 2, ..., o) : [Ri(~x) ≥ Ri(~y)] ∧ [∃i ∈ (1, 2, ..., o) : Ri(~x) > Ri(~y)] ∧ [C(~x) ≥ C(~y)]

This definition results in the expression [C(~x) ≥ C(~y)] being identical in
maximisation and minimisation problems.

Definition 6.3.3 (Confidence-based Pareto Optimality): A solution ~x ∈ X is called
confidence-based Pareto optimal if and only if:

∄ ~y ∈ X | ~y ≺c ~x

Definition 6.3.4 (Confidence-based Pareto set): The set of all confidence-based
Pareto optimal solutions:

CPs = {~x ∈ X | ∄ ~y ∈ X : ~y ≺c ~x}

Definition 6.3.5 (Confidence-based Pareto front): The set containing the value
of objective functions for confidence-based Pareto solutions:

∀i ∈ (1, 2, ..., o)

CPf = {Ri (~x)|~x ∈ CPs }


Note that Definition 6.3.5 means that it is possible to have a ‘thick’ front
(as in e.g. probabilistic domination).
Some comments on the proposed Pareto optimality concepts are:

• A solution is not able to confidently dominate another if it has less confidence:

~x ≺ ~y ∧ C(~x) < C(~y) =⇒ ~x ⊀c ~y (6.18)

• If the confidence of a solution is greater than that of another, the confidence-based
Pareto dominance becomes equivalent to the normal Pareto dominance for
that particular solution:

C(~x) > C(~y) =⇒ ≺c ≡ ≺ (6.19)

• If two solutions have equal confidence, the confidence-based Pareto dominance
becomes equivalent to the normal Pareto dominance for that particular solution:

C(~x) = C(~y) =⇒ ≺c ≡ ≺ (6.20)

• If two solutions are non-dominated with respect to each other, they are
also confidently non-dominated with respect to each other:

~x ⊀ ~y ∧ ~y ⊀ ~x =⇒ (~x ⊀c ~y) ∧ (~y ⊀c ~x) (6.21)

• The confidence-based Pareto solution set contains all the confident solu-
tions and none of them can confidently dominate another.

The proposed confidence-based Pareto optimality, dominance, set, and front
can be integrated with different meta-heuristics. In other words, various com-
binations of these confidence-based concepts can be incorporated into different
modules of meta-heuristic algorithms in order to perform a reliable robust opti-
misation. As mentioned above, this type of robust optimisation is named CRMO.
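
For completeness, Definition 6.3.1 can be sketched in code as follows (illustrative
only; R is assumed to return the vector of robustness indicators of a solution and
C its scalar confidence).

import numpy as np

def confidently_dominates(x, y, R, C):
    """x confidently dominates y for minimisation (Definition 6.3.1): x is no
    worse than y in every robustness objective, strictly better in at least
    one, and has a confidence level that is not lower than that of y."""
    rx, ry = np.asarray(R(x), dtype=float), np.asarray(R(y), dtype=float)
    return bool(np.all(rx <= ry) and np.any(rx < ry) and C(x) >= C(y))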
6.3.2 Confidence-based Robust Multi-Objective Particle Swarm Optimisation
In this thesis the MOPSO algorithm is chosen as the case study. Confidence-
based Pareto dominance is integrated with the MOPSO algorithm in order to
allow it to perform confidence-based robust multi-objective optimisation. The
archive controller module of MOPSO is targeted as the main tool for integra-
tion. In the MOPSO algorithm, the archive controller module is responsible for
deciding if a solution should be added to the archive or not. If a new solution is
dominated by one of the archive members, it is omitted immediately. If the new
solution is not dominated by the archive members, it is added to the archive. If a
member of the archive is dominated by a new solution, that member is removed.
Finally, if the archive is full, an adaptive grid mechanism is triggered.
In the proposed Confidence-based Robust MOPSO (CRMOPSO), however, the
following rules are proposed to provide a confidence-based archive controller:

• If a new solution is confidently dominated by one of the archive members, it is omitted immediately.

• If the new solution is not confidently dominated by the archive members, it is added to the archive.

• If a member of the archive is confidently dominated by a new solution, that member is removed.

• If the archive is full, the adaptive grid mechanism is triggered (a schematic sketch of these rules is given below).
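
These four rules can be combined into a single archive-update routine, sketched
below. The sketch is illustrative only: confidently_dominates stands for the
dominance relation of Definition 6.3.1, while max_size and apply_adaptive_grid
stand for MOPSO's archive capacity and adaptive grid mechanism; all names are
assumptions.

def update_archive(archive, new_solution, confidently_dominates,
                   max_size, apply_adaptive_grid):
    """Confidence-based archive controller of CRMOPSO (schematic sketch)."""
    # 1. Omit the newcomer if any archive member confidently dominates it.
    if any(confidently_dominates(m, new_solution) for m in archive):
        return archive
    # 2/3. Remove members confidently dominated by the newcomer, then add it.
    archive = [m for m in archive if not confidently_dominates(new_solution, m)]
    archive.append(new_solution)
    # 4. If the archive is full, trigger the adaptive grid mechanism.
    if len(archive) > max_size:
        archive = apply_adaptive_grid(archive)
    return archive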

There is also a modification to the archive itself in order to maintain the
reliability of the archive. The CRMOPSO algorithm is required to update the
confidence level of archive members at each iteration. This mechanism allows
archive members to improve their confidence level based on the current status
of the previously sampled solutions. In this case, an archive member may be
omitted if its confidence level does not improve over the iterations. Note that this
method should be employed when saving and using all the previously sampled
points. If CRMOPSO only utilises a set of recent previously sampled points,
this method would not be effective since the highly confident robust solutions
will also be prone to being omitted from the archive.
It may also be noticed that the dominance of a solution in CRMOPSO is
identical to that of RMOPSO, and the confidence-based dominance is only ap-
plied to the archive controller. This prevents CRMOPSO from showing degraded
exploration.

6.4 Summary
This chapter began with the proposal of a novel confidence measure for calcu-
lating the confidence level of robust solutions. Five new confidence-based rela-
tional operators were defined using the confidence measure. In addition, a new
approach of robust optimisation called CRO was established in order to employ
the confidence measure and relational operators for performing confident robust
heuristic optimisation. The novel approach was employed to design two new
variants of robust PSO and GA. Several theoretical comments and discussions
were made about the potential success of the proposed concepts in finding robust
optimal solutions at the end of the first part.
The second part of this chapter was dedicated to the proposal of confidence-
based Pareto optimality concepts using the confidence metric. The confidence-
based Pareto dominance was integrated with the archive update mechanism of
RMOPSO as a case study. The main contribution of the second part was the
proposal and establishment of a novel perspective called CRMO. Similarly to
the first part, several remarks were discussed to investigate and theoretically
prove the effectiveness of the proposed CRMO approach in finding robust Pareto
optimal solutions without extra function evaluations.
In summary, the proposed concepts in this chapter have the potential to as-
sist different algorithms in finding robust solutions in single- and multi-objective
search spaces reliably and without extra function evaluations. In the follow-
ing chapters the CRPSO, CRGA, and CRMOPSO algorithms are employed to
solve the test functions proposed in Chapter 4 as well as a real problem. These
algorithms are compared with a diverse range of algorithms in the literature
quantitatively and qualitatively using the proposed benchmark problems and
performance metrics in Chapter 4 and 5.
Chapter 7

Confidence-based robust optimisation

Algorithm design is the last phase in a systematic design process. It can be
considered the main phase, while the other two phases are prerequisites for
starting it. Although an idea can be expressed in this phase without the other
two, test functions and performance metrics are needed to investigate and prove
the usefulness of the idea. To prove the merits of the ideas
proposed in Chapter 6, a number of experiments are systematically undertaken
in this chapter and the next chapter utilising tools proposed in chapters 4 and
5.

7.1 Behaviour of CRPSO on benchmark problems
The PSO algorithm was inspired by the social behaviour of bird flocking. It
uses a number of particles (candidate solutions) which fly around in the search
space to find the best solution. Meanwhile, they all trace the best location
(best solution) in their paths. In other words, particles consider their own best
solutions as well as the best solution the whole swarm has obtained so far. The
confidence-based robust version of this algorithm is proposed in this thesis and
examined in the following paragraphs.
To see the behaviour of the proposed CRPSO algorithms Fig. 7.1 and Fig. 7.2
are provided. These figures show the behaviour of the proposed CRPSO1 dealing
with some of the benchmark functions. The experiment was conducted using five
particles over 100 iterations. The other initial parameters were as follows:

• C1 = 2

• C2 = 2

• w was decreased linearly from 0.9 to 0.4

• Topology: fully connected

• Initial velocity: 0

• Maximum velocity: 6

The radii of neighbourhoods were considered fixed and equal to the maximum
perturbation in parameters. The maximum perturbation of each benchmark
problem is available in Appendix A. Note that a simple expectation measure,
which is based on the mean of the neighbouring solutions, is calculated as the
robustness indicator.
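
A minimal sketch of this expectation measure is given below. It is illustrative
only and assumes that the archive stores both the positions and the objective
values of all previously sampled points; the function name and argument layout
are assumptions.

import numpy as np

def expectation_measure(x, fx, archive_x, archive_f, r):
    """Expectation-based robustness indicator R: the mean objective value of
    the current solution and of the previously sampled points lying within
    radius r of x (a sketch under the stated assumptions)."""
    x = np.asarray(x, dtype=float)
    pts = np.atleast_2d(np.asarray(archive_x, dtype=float))
    vals = np.asarray(archive_f, dtype=float)
    if pts.size:
        mask = np.linalg.norm(pts - x, axis=1) <= r
        neighbour_vals = vals[mask]
    else:
        neighbour_vals = np.empty(0)
    return float(np.mean(np.append(neighbour_vals, fx)))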
The first column of Fig. 7.1 and Fig. 7.2 shows the benchmark function,
the search history, and the optimum finally obtained. The number of sampled
points detected around each particle in each iteration is provided in the second
column. The Cbests, expectation measure (mean of particle and its neighbour-
ing solutions), and convergence curves are provided in the last three columns.
Further results, obtained by CRPSO1 when the maximum number of iterations
was increased to 500, are available in Table 7.1. This table shows the number of
times that the confidence operators and confidence measure were triggered over
the course of iterations. It reveals
how many times the proposed measure and operators assisted CRPSO1 to make
confident decisions (updated gBest) during optimisation.
In Fig. 7.1 and Fig. 7.2, the second columns of Test Problems (TP) show
that the number of sampled points increased over the course of iterations. As
mentioned in the previous chapters, the confidence measure is highly propor-
tional to the number of neighbouring solutions. Moreover, the PSO algorithm
tends to search mainly around the global best solution obtained so far. These
are the reasons for the incremental behaviour in the Cbest curves of TP1 to TP10.
However, this incremental trend occasionally stopped for a number of iterations,
which shows that there was not enough confidence to update gBest in those iterations.
Figure 7.1: Behaviour of CRPSO1 finding the robust optima of TP1, TP2, TP3,
TP4, and TP5

For instance, the Cbest curve of TP1 shows that the confidence of gBest
remains unchanged in almost half of the iterations. This shows that the PSO
algorithm did not confidently find a better robust solution to replace gBest from
roughly the 25th to the 75th iteration. The same pattern can be observed in the
figures for TP2, TP3, TP6, TP7, TP8, TP9, and TP10.
The results of Table 7.1 show that gBest was confidently updated 29, 23, and 34
times on TP1, TP2, and TP3, respectively. The third column of Table 7.1 shows
that 130, 69, and 180 times the confidence of a particle became better than that
of gBest on the same benchmark functions, but there was no superiority in terms
of the robustness.

Figure 7.2: Behaviour of CRPSO1 finding the robust optima of TP6, TP7, TP8,
TP9, and TP10

It may be noted that the third column of Table 7.1 shows that the confi-
dence measure assists the CRPSO1 algorithm to confidently guide its particles in
solving all the benchmark functions approximately 190 times (on average). This
is almost 40% of the iterations on average across all the benchmark functions,
showing that a normal PSO algorithm was prone to making non-confident decisions
in these iterations when not using a confidence metric. This suggests the merit of
the proposed confidence measure, confidence-based operators, and CRO in guiding
the search agents of CRPSO (or any other meta-heuristic) toward robust optima
confidently.

Table 7.1: Number of times that the confidence operators and confidence measure
were triggered

Test function    Pi ≤c gBest    C(Pi) > C(gBest)
TP1              29/500         130/500
TP2              23/500         69/500
TP3              34/500         180/500
TP4              63/500         272/500
TP5              9/500          434/500
TP6              12/500         166/500
TP7              13/500         243/500
TP8              21/500         85/500
TP9              20/500         217/500
TP10             9/500          95/500

Another interesting point that can be inferred from Table 7.1 is the correla-
tion between making a confident decision and the number of local robust/non-
robust optima. The third column of Table 7.1 shows that TP4, TP6, TP8, TP9,
and TP10 have the least number of confident decisions compared to TP1, TP2,
TP3, and TP5. Other points worth noting can be observed in TP6, TP7, and
TP9 of Fig. 7.2. The convergence curves on these benchmark functions show
that the CRPSO1 algorithm faces degrading fitness values over the course of
iterations. In TP6, TP7, and TP9 this phenomenon happens near iterations 40,
30, and 10, respectively. This shows that the proposed method assists CRPSO1
to converge to robust optima located at non-global valleys/peaks. In addition,
the search agents in CRPSO1 are able to diverge from non-robust solutions by
using the proposed confidence measure.
7.2 Comparative Results for Confidence-based Robust PSO
In this section the proposed CRPSO1 and CRPSO2 are compared with an Im-
plicit averaging RPSO (IRPSO) and Explicit averaging RPSO (ERPSO). The
CRPSO1, CRPSO2, and IRPSO algorithms utilise the previously sampled points
when calculating the robustness measure. The robustness is calculated by an ex-
pectation measure with simple averaging of the neighbouring solutions. Since
the robustness of a candidate solution is evaluated based on the available so-
lutions in the neighbourhood during optimisation, IRPSO shows less reliability
and its use will allow comparison of CRPSO1 and CRPSO2 in terms of improved
reliability.
In ERPSO, which uses the same expectation measure as the other algorithms,
H sampled solutions are created by the Latin Hypercube Sampling (LHS) method
around every candidate solution to investigate and confirm robustness during
optimisation. This provides the highest reliability and is helpful for verifying
the performance of the proposed CRPSO1 and CRPSO2 algorithms as the most
reliable reference. It does, however, markedly increase the computational load
per iteration.
In order to provide a fair comparison and see if the proposed method is
reliable and effective, the same number of function evaluations (1000) is used for
each of the algorithms. The number of trial solutions in the population was 5 for
all algorithms, so the maximum number of iterations for CRPSO1, CRPSO2, and
IRPSO was 200. Since ERPSO uses explicit averaging, however, 4 re-sampling
points and 40 iterations are used in order to achieve the same total of 1000
function evaluations (5 particles × 40 iterations × (1 + 4) evaluations per particle = 1000).
Each algorithm is run 30 times on the benchmark functions in Appendix A
and the statistical results (average, standard deviation, and median) are pro-
vided in Table 7.2. Note that the results are expected values and presented
in the form of (ave ± std(median)). In addition, the Wilcoxon ranksum test
was conducted at 5% significance level to determine the significance of discrep-
ancy in the results. The p-values that are less than 0.05 could be considered as
strong evidence against the null hypothesis. The results of this statistical test
are provided in Table 7.3. For the statistical test, the best algorithm in each
test function is chosen and compared with other algorithms independently. For
example, if the best algorithm is CRPSO1, the pairwise comparison is done be-
tween CRPSO1/CRPSO2, CRPSO1/IRPSO, and CRPSO1/ERPSO. The same
approach is followed throughout the thesis.
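
For reference, such a pairwise comparison can be reproduced with SciPy's
rank-sum test. The sketch below uses made-up numbers purely for illustration;
they are not results from this thesis.

from scipy.stats import ranksums

# Hypothetical expected-value results of the best algorithm and a competitor
# over several independent runs (illustrative numbers only).
best_runs = [0.516, 0.520, 0.511, 0.523, 0.518]
other_runs = [0.545, 0.552, 0.539, 0.548, 0.556]

stat, p_value = ranksums(best_runs, other_runs)
print(p_value < 0.05)   # True indicates a significant difference at the 5% level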
As the results of Table 7.2 and Table 7.3 suggest, generally speaking the
proposed CRPSO algorithms are able to successfully outperform IRPSO on the
majority of benchmark functions. The statistical results show that this superior-
ity is statistically significant in some of the cases. This shows that the reliability
of the IRPSO can be improved with the concepts of confidence measure pro-
posed. The results of the CRPSO algorithms are very competitive compared to
ERPSO, the algorithm with the highest reliability, which again shows the merits
of the proposed algorithms.
It is worth discussing the higher performance of CRPSO1 compared to CRPSO2.
Table 7.2 shows that CRPSO1 outperforms CRPSO2 on the majority of test
functions. The results of these two algorithms are very close on some of the
test problems. This shows that the idea of just confidently updating the global
best obtained so far is better than confidently updating global best and personal
bests found so far. A possible reason for this is that systematic exploration is
weakened when CRPSO2 is required to confidently update the pBests obtained
so far as well as the gBest. However, CRPSO1 allows the particles to update
their pBests normally and only restricts update of the gBest. This assists the
particles to search freely and find promising regions of a search space while they
have to find a highly confident solution in order to be able to update gBest.
However, it is very hard for particles of CRPSO2 (especially in initial iterations)
to find confident solutions and update their pBests, so the particles tend to ran-
domly fly around the search space rather than being somewhat anchored around
their pBests.
In order to provide further analysis in terms of exploration and exploitation,
the benchmark functions can be divided into two groups: uni-modal versus multi-
modal. The uni-modal functions have one global robust optimum and are highly
suitable for benchmarking the exploitation of the robust algorithm. However,
multi-modal functions have many local optima (robust or non-robust) with the
number increasing with dimension, making them suitable for benchmarking the
exploration of the robust algorithm. Note that TP1, TP2, TP5, and TP7 are
uni-modal, whereas the rest of benchmark functions are multi-modal.
Table 7.2: Statistical results of the RPSO algorithms over 30 independent runs:
(ave ± std(median))
Algorithm TP1 TP2 TP3
CRPSO1 0.516 ± 0.011(0.519) 1.01 ± 0.0224(0.999) 0.525 ± 0.126(0.603)
CRPSO2 0.521 ± 0.0138(0.527) 0.992 ± 0.0525(0.967) 0.526 ± 0.127(0.604)
IRPSO 0.52 ± 0.0138(0.523) 1.01 ± 0.0238(1) 0.526 ± 0.126(0.603)
ERPSO 0.251 ± 0.0888(0.24) 1.02 ± 0.0374(1.01) 0.565 ± 0.185(0.63)
Algorithm TP4 TP5 TP6
CRPSO1 0.47 ± 0.0673(0.477) −11.7 ± 1.32(−11.9) −9.29 ± 1.39(−8.87)
CRPSO2 0.504 ± 0.0914(0.569) −6.44 ± 1.41(−6.23) −3.99 ± 0.937(−4.04)
IRPSO 0.525 ± 0.0534(0.522) −6.63 ± 1.28(−6.75) −3.07 ± 1.42(−2.75)
ERPSO 0.485 ± 0.0708(0.476) −6.22 ± 1.24(−6.74) −3.42 ± 1.32(−3.28)
Algorithm TP7 TP8 TP9
CRPSO1 −3.88 ± 1.23(−4.1) −0.652 ± 0.0584(−0.626) −30.4 ± 1.6(−30.3)
CRPSO2 −3.97 ± 0.483(−3.94) −0.838 ± 0.0523(−0.84) −29.7 ± 1.24(−30)
IRPSO −3.65 ± 0.658(−3.55) −0.623 ± 0.00589(−0.623) −30 ± 2.3(−29.8)
ERPSO −9.41 ± 1.13(−9.38) −0.599 ± 0.0806(−0.623) −27 ± 1.6(−27.1)
Algorithm TP10 TP11 TP12
CRPSO1 2.23 ± 0.52(2.48) 6358.8 ± 3553.3(7267.6) 9.5E3 ± 1.9E3(8.8E3)
CRPSO2 2.47 ± 0.63(2.56) 5027.1 ± 2433.2(4720.1) 1.3E4 ± 3.5E3(1.4E4)
IRPSO 1.85 ± 0.26(1.86) 7092.8 ± 2982.7(7447.5) 1.1E4 ± 3.8E3(9.0E3)
ERPSO 2.81 ± 0.65(2.67) 3326.7 ± 3599.5(1082.7) 2.2E5 ± 1.4E5(2.3E5)
Algorithm TP13 TP14 TP15
CRPSO1 35.1 ± 1.27(35) 50.2 ± 19.1(45.1) 434.89 ± 5.006(433.33)
CRPSO2 36.1 ± 2.45(36.1) 43.4 ± 2.02(43.3) 458.70 ± 55.61(433.81)
IRPSO 50.5 ± 49.2(35) 60.3 ± 49.8(44.3) 433.08 ± 0.99(433.22)
ERPSO 36 ± 0.05(36) 44.3 ± 1.11(44.3) 449.44 ± 30.30(435.08)
Algorithm TP16 TP17 TP18
CRPSO1 1.4 ± 0.342(1.31) 0.558 ± 0.333(0.458) 0.107 ± 0.068(0.0858)
CRPSO2 1.55 ± 0.493(1.42) 4.4 ± 11.8(0.768) 1.06 ± 2.9(0.133)
IRPSO 1.56 ± 0.551(1.42) 0.645 ± 0.518(0.72) 0.127 ± 0.0802(0.0863)
ERPSO 1.14 ± 0.139(1.07) 0.86 ± 0.62(0.857) 0.35 ± 0.123(0.403)
Algorithm TP19 TP20
CRPSO1 −0.27 ± 0.088(−0.29) −1.81 ± 0.898(−1.43)
CRPSO2 −0.28 ± 0.060(−0.29) −0.589 ± 2.04(−0.711)
IRPSO −0.26 ± 0.049(−0.23) −0.498 ± 1.76(−0.906)
ERPSO −0.37 ± 0.0021(−0.39) −1.38 ± 2.01(−1.76)
Table 7.3: Results of Wilcoxon ranksum test for RPSO algorithms


Test function CRPSO1 CRPSO2 IRPSO ERPSO
TP1 0.000182672 0.000182672 0.000182672 N/A
TP2 0.10410989 N/A 0.16197241 0.241321593
TP3 N/A 0.570750388 0.623176224 0.10410989
TP4 N/A 0.520522883 0.16197241 0.969849977
TP5 N/A 0.000182672 0.000182672 0.000182672
TP6 N/A 0.000182672 0.000182672 0.000182672
TP7 0.000182672 0.000182672 0.000182672 N/A
TP8 0.000246128 N/A 0.000182672 0.000182672
TP9 N/A 0.570750388 0.733729996 0.000768539
TP10 0.140465048 0.025748081 N/A 0.002827272
TP11 0.10410989 0.140465048 0.064022101 N/A
TP12 N/A 0.037635314 0.212293836 0.000182672
TP13 N/A 0.185876732 0.000182672 0.344704222
TP14 0.212293836 N/A 0.185876732 0.384673063
TP15 0.79133678 0.570750388 N/A 0.000182672
TP16 0.031209013 0.014019277 0.011329697 N/A
TP17 N/A 0.007566157 0.27303634 0.04515457
TP18 N/A 0.001706249 0.733729996 0.212293836
TP19 0.000182672 0.000182672 0.000182672 N/A
TP20 N/A 0.088973012 0.088973012 0.909721889
The results on the uni-modal benchmark problems show that the proposed
algorithms are not significantly better than IRPSO. This originates from the fact
that the unreliability of IRPSO is a bonus in a uni-modal search space, since this
algorithm quickly converges towards the global optimum, which in most cases is
the robust optimum as well. Obviously, this leads the IRPSO algorithm towards
a local solution in a multi-modal search space. However, the proposed CRPSO
algorithms are limited in terms of updating the global best and personal bests, which
slows down the convergence speed of these algorithms. This originates from the
proposed confidence measure that prevents CRPSO1 and CRPSO2 from prema-
ture exploitation and consequently stagnation in local robust/global optima.

In contrast to the results on the uni-modal functions, those on multi-modal
robust benchmark functions are different. The results on TP3, TP4, TP6, TP8,
TP9, and TP10 to TP20 show that CRPSO1 and CRPSO2 tend to provide much
better results compared to the uni-modal test functions. The significance of these
results can be observed in the p-values in Table 7.3. The results suggest that
the proposed algorithms are very capable of avoiding local robust/global optima.
This originates from the proposed confidence measure that assists CRPSO1 and
CRPSO2 to find promising robust area(s) of the search space confidently. The
better results of CRPSO1 compared to CRPSO2 are again due to the greater
exploration of CRPSO1.

To sum up, firstly, the results demonstrated that the proposed confidence
measure is able to assist optimisation algorithms and improve their reliabil-
ity. Secondly, the results of CRPSO1 and CRPSO2 revealed the merits of the
proposed confidence-based relational operators and new CRO approach in find-
ing robust optima. Thirdly, the results on the uni-modal benchmark functions
showed that both proposed algorithms show slow convergence behaviour, pre-
venting them from easily stagnating in local solutions (robust or non-robust)
especially in the initial iterations. Finally, the results on the multi-modal func-
tions indicate that the proposed methods are able to avoid local robust and
non-robust optima as well.

In the next section the proposed confidence measure and relational operators
are applied to a GA in order to further investigate the applicability of these novel
concepts to different meta-heuristics.
7.3 Comparative Results for Confidence-based Robust GA
The CRGA1 and CRGA2 algorithms are applied to the benchmark functions
and the results are reported in Table 7.4. The CRGA1 and CRGA2 algorithms
are compared with an Implicit averaging RGA (IRGA) and Explicit averaging
RGA (ERGA).
Similarly to the algorithms in the preceding section, the CRGA1, CRGA2,
and IRGA algorithms utilise the previously sampled points when calculating the
robustness measure. The robustness is calculated by an expectation measure
with simple averaging of the neighbouring solutions. Since the robustness of a
candidate solution is evaluated based on the available solutions in the neigh-
bourhood during optimisation, IRGA shows less reliability and its use will allow
comparison of CRGA1 and CRGA2 in terms of improved reliability.
In ERGA, which uses the same expectation measure as the other algorithms,
H sampled solutions are created by the LHS method around every candidate
solution to investigate and confirm robustness during optimisation. This provides the
highest reliability and is helpful for verifying the performance of the proposed
CRGA1 and CRGA2 algorithms as the most reliable reference. It does, however,
markedly increase the computational load per iteration.
In order to provide a fair comparison and see if the proposed method is
reliable and effective, the same number of function evaluations (1000) is used for
each of the algorithms. The number of trial solutions in the population was 5
for all algorithms, so the maximum number of iterations for CRGA1, CRGA2,
and IRGA was 200. Since ERGA uses explicit averaging, however, 4 re-sampling
points and 40 iterations are used in order to achieve the total number of 1000
function evaluations.
Each algorithm is run 30 times and the statistical results (average, standard
deviation, and median) are provided in Table 7.4. Also, the Wilcoxon ranksum
test was conducted at the 5% significance level to decide on the significance of
the discrepancy in the results. The p-values that are less than 0.05 could be considered
as strong evidence against the null hypothesis. Table 7.5 shows the results of
the Wilcoxon ranksum test.
Table 7.4 and Table 7.5 suggest that IRGA provides the worst results on the
majority of benchmark problems.
Table 7.4: Statistical results of the RGA algorithms over 30 independent runs:
(ave ± std(median))
Algorithm TP1 TP2 TP3
CRGA1 2.36 ± 0.799(2.15) 2.02 ± 0.256(2.07) 1.43 ± 0.38(1.42)
CRGA2 0.443 ± 0.359(0.278) 1.17 ± 0.162(1.13) 0.682 ± 0.128(0.722)
IRGA 2.35 ± 0.96(2.17) 2.09 ± 0.217(2.04) 1.56 ± 0.317(1.8)
ERGA 1.92 ± 0.332(1.92) 2.05 ± 0.175(2.04) 1.6 ± 0.23(1.67)
Algorithm TP4 TP5 TP6
CRGA1 0.817 ± 0.158(0.788) −9.06 ± 1.39(−8.67) −5.32 ± 1.14(−5.24)
CRGA2 0.468 ± 0.0682(0.443) −12.5 ± 1.01(−12.3) −10.9 ± 1.65(−10.6)
IRGA 0.74 ± 0.167(0.712) −9.44 ± 1.49(−9.3) −6.64 ± 0.67(−6.49)
ERGA 0.712 ± 0.124(0.692) −8.98 ± 1.65(−8.49) −5.29 ± 1.66(−5.23)
Algorithm TP7 TP8 TP9
CRGA1 −5.13 ± 1.15(−5.02) −0.876 ± 0.0323(−0.879) −27.6 ± 1.61(−27.8)
CRGA2 −9.81 ± 0.461(−9.64) −0.615 ± 0.113(−0.617) −23.2 ± 1.72(−23.2)
IRGA −6.62 ± 0.844(−6.23) −0.564 ± 0.137(−0.531) −23.8 ± 2.51(−24.1)
ERGA −5.87 ± 1.33(−5.46) −0.532 ± 0.146(−0.494) −23.3 ± 2.06(−22.3)
Algorithm TP10 TP11 TP12
CRGA1 4.27 ± 1.44(3.98) 1.3E4 ± 7.9E3(1.3E4) 2.2E6 ± 2.8E6(1.3E6)
CRGA2 1.76 ± 0.421(1.61) 1.2E4 ± 7.8E3(1.2E4) 2.2E8 ± 1.8E8(1.6E8)
IRGA 4.17 ± 1.03(4.35) 1.8E4 ± 9.6E3(1.9E4) 4.1E8 ± 2.8E8(3.6E8)
ERGA 3.87 ± 1.77(3.67) 5.1E2 ± 3.9E2(3.3E2) 2.4E8 ± 8.8E7(2.2E8)
Algorithm TP13 TP14 TP15
CRGA1 81.5 ± 10.9(81.7) 254.97 ± 40.81(264.21) 673.37 ± 66.86(671.51)
CRGA2 218 ± 43.2(211) 226.76 ± 51.62(219.80) 665.84 ± 35.17(674.43)
IRGA 259 ± 46.3(270) 239.51 ± 59.23(229.67) 704.03 ± 67.23(708.44)
ERGA 224 ± 49.5(236) 88.15 ± 10.87(91.28) 510.41 ± 25.51(518.36)
Algorithm TP16 TP17 TP18
CRGA1 340.99 ± 89.11(343.56) 341.21 ± 150.36(313.37) 238.01 ± 43.49(260.77)
CRGA2 245.17 ± 65.42(250.17) 257.94 ± 96.35(259.78) 39.53 ± 14.49(38.61)
IRGA 229.27 ± 117.56(208.46) 287.02 ± 97.76(271.76) 284.93 ± 107.78(300.44)
ERGA 36.88 ± 13.41(38.67) 31.01 ± 15.60(28.53) 223.70 ± 42.16(228.25)
Algorithm TP19 TP20
CRGA1 −0.368 ± 0.0608(−0.395) +2.53 ± 1.16(2.49)
CRGA2 −0.21 ± 0.181(−0.228) +2.3 ± 0.958(2.2)
IRGA −0.211 ± 0.151(−0.233) +2.38 ± 0.97(2.25)
ERGA −0.216 ± 0.151(−0.232) −1.32 ± 0.64(−1.45)
Table 7.5: Results of Wilcoxon ranksum test for RGA algorithms


Test function CRGA1 CRGA2 IRGA ERGA
TP1 0.000182672 N/A 0.000182672 0.000182672
TP2 0.000182672 N/A 0.000182672 0.000182672
TP3 0.000182672 N/A 0.000182672 0.000182672
TP4 0.000246128 N/A 0.000246128 0.000329839
TP5 0.00058284 N/A 0.00058284 0.000246128
TP6 0.000182672 N/A 0.000182672 0.000182672
TP7 0.000182672 N/A 0.000182672 0.000182672
TP8 N/A 0.000329839 0.000182672 0.000182672
TP9 N/A 0.000439639 0.001314945 0.004586392
TP10 0.000329839 N/A 0.000329839 0.000768539
TP11 0.000182672 0.00058284 0.000182672 N/A
TP12 N/A 0.000182672 0.000182672 0.000182672
TP13 N/A 0.000182672 0.000182672 0.000182672
TP14 0.000182672 0.000182672 0.000182672 N/A
TP15 0.000182672 0.000182672 0.000182672 N/A
TP16 0.000182672 0.000182672 0.000182672 N/A
TP17 0.000182672 0.000182672 0.000182672 N/A
TP18 0.000182672 N/A 0.000182672 0.000182672
TP19 N/A 0.002827272 0.007284557 0.009108496
TP20 0.000182672 0.000182672 0.000182672 N/A
This is evidence that utilising previously evaluated points may give satisfactory
knowledge about the robustness of the search space, but does not guarantee having
reliable neighbouring solutions with uniform distribution during optimisation. This
is the reason for the poor results of IRGA, in which individuals might crossover
with non-robust individuals and evolve toward non-robust regions of the search
space. An individual might appear very robust when it has only a small number
of neighbouring solutions, since the average of the neighbourhood is considered as
the fitness of the individual in IRGA. The comparison of individuals based on the
effective mean fitness becomes even more unfair when the neighbouring solutions
are very close to the individual. These phenomena happened quite often during
optimisation, especially in the initial generations when there were few evaluated
points.
Table 7.4 and Table 7.5 show that the results of CRGA1 and CRGA2 are
better than those of IRGA on the majority of benchmark functions. Generally
speaking, the superiority of the results is due to the confidence measure em-
ployed. The confidence measure assists CRGA1 and CRGA2 to consider not
only the robustness measure but also the number of neighbouring solutions, the
radius of the neighbourhood, and the distribution of neighbouring solutions in
distinguishing robust solutions. Therefore, the possibility of favouring a non-
robust solution due to few and close neighbouring solutions is much lower than
in the previous methods. The role of the confidence measure in driving the indi-
viduals toward robust areas of search space is significant, especially in the initial
generations when the individuals have very few neighbouring solutions to prove
their robustness.
As may be observed in Table 7.4 and Table 7.5, the CRGA2 algorithm out-
performs CRGA1 on the majority of the benchmark problems. The CRGA1
algorithm has a confidence-based elitism component, in which the most robust
and confident individual obtained so far is saved and allowed to move without
any modification to the next generation(s). The reason for better results of this
algorithm compared to IRGA is that the confidence-based component assists
CRGA1 to favour robust solutions. The confidence-based elitism also prevents
the best individual (elite) from corruption by the mutation operator. The ad-
vantage of this method is that there is no significant loss of exploration since
all the individuals except one are selected without considering the confidence
measure. However, the confident and robust elite is only one individual in the
population, so it is able to crossover with one of the other individuals.
Table 7.6: Number of times that CRGA2 makes confident and risky decisions
over 100 generations
Function    Child < Parent    Confident decision    Risky decision
TP1 125 78 47
TP2 120 54 66
TP3 130 59 71
TP4 133 58 75
TP5 145 82 63
TP6 64 58 6
TP7 133 57 76
TP8 96 45 51
TP9 156 59 97
TP10 77 73 4

There are n − 1 other individuals in the population that might not be confidently
robust, but are allowed to participate in the generation of the next population.
In contrast to CRGA1, the CRGA2 algorithm compares each child produced
with its corresponding parents based on the confidence measure. The advan-
tage of this method is that the overall confidence level of individuals increases
over the course of generations. This guarantees that there is no possibility of
selecting a non-confident robust solution in the production process. In other
words, two parents keep mating until they produce a confidently better robust
child. Although this method maintains the high confidence level of the popu-
lation and produces confidently robust children, the exploration of search space
is decreased. However, the confidence measure is essential in preventing risky
crossover of individuals at each generation. The number of confident and risky
crossovers during optimisation for the first 10 test functions is shown in Table 7.6.
This table shows that the confidence measure assists CRGA2 to make con-
fident decisions in more than half of the total decisions. For instance, CRGA2
makes 78 confident decisions out of 125 when solving the first test function. This
shows that CRGA2 would be prone to choose non-confidently robust solutions 47
times if the confidence measure were not used. The same trend can be observed
in the rest of the benchmark problems. The higher performance of CRGA2 is
due to these confident decisions, in which non-confidently robust solutions are
discarded in order to prevent risky crossovers between individuals.

Figure 7.3: Search history of GA and IRGA. GA converges towards the global
non-robust optimum, while IRGA failed to determine the robust optimum

Figure 7.4: Search history of CRGA1 and CRGA2. The exploration of CRGA2
is much less than that of CRGA1

To further observe the effects of the confidence measure and operators in
driving individuals toward robust regions of search space, Fig. 7.3 and Fig. 7.4
are provided. Note that the fourth benchmark function is chosen as the test bed, so
there are one global, two local, and one robust optima. The history of evaluated
points in the GA shows that this algorithm is able to find the global optimum
with satisfactory exploration of the search space. Fig. 7.3 shows that individuals of
IRGA tend to move toward the robust region of the fourth benchmark function.
However, the best robust solutions found are near the local optimum on the right.
This figure shows that there is no guarantee of finding robust optima despite
the tendency of individuals to move toward the robust regions of search space
in IRGA. The search history of CRGA1 and CRGA2 are shown in Fig. 7.4. It
may be seen that the exploration of CRGA2 is much less than that of CRGA1.
Although the exploration of CRGA2 is less, there is a good exploitation of the
robust part of the test function. This is the reason for more accurate results
from CRGA2 compared to CRGA1. The search history of CRGA1 shows that
this algorithm has high exploration with a tendency toward robust optima.
However, there are a lot of risky crossovers during the search which result in
finding a solution far from the real robust solution.
These results show that the performance of IRGA can also be increased by
using the proposed confidence measure. In addition, the confidence-based oper-
ators and the crossover and elitism components are readily incorporable into
evolutionary algorithms. In summary, the results of this section demonstrate that the pro-
posed confidence measure and operators are applicable to different algorithms.

7.4 Comparison of CRPSO and CRGA


Although PSO and GA have different structures that prevent us from distin-
guishing whether the superior results of one algorithm are due to the confidence
measure employed or its mechanism, the results and behaviour of both algo-
rithms are compared in this section. At first glance, from the table of results
in the preceding sections it is apparent that the results of RGAs are generally
worse than those of RPSOs. This is due to the simple version of GA used
and initial parameters. The GAs and CRGAs employed are real coded with
single-point crossover operators, selection rate of 0.5, mutation rate of 0.3, and
population size of 20. Fine-tuning these parameters is beyond the scope of this
thesis. Another reason for the lower performance originates from the nature of
GAs (and evolutionary algorithms in general). In contrast to PSO, the crossover
mechanisms in GA, RGA, and CRGAs cause abrupt changes in the candidate
solutions that result in enhancing the exploration ability. So the probability of
having previously evaluated points around the best individual obtained so far
is much less than with PSO. Moreover, the GA, RGA, and CRGA algorithms
are not equipped with adaptive parameters, so there is no particular emphasis
on exploitation as the number of iterations increases, in contrast to PSO with
the adaptive inertial weight. This section does not compare PSO and GA in
detail since both of these algorithms have their own advantages/disadvantages,
and the focus of this chapter is to investigate the applicability of the proposed
confidence measure/operators to a different algorithm.

7.5 Summary
This chapter experimentally investigated the performance of the proposed confidence-
based robust optimisation. The CRPSO and CRGA were employed to solve the
challenging test beds proposed in Chapter 4. The results demonstrated the value
of the proposed concepts:

• The proposed C measure can be employed to design different confidence-based relational operators for meta-heuristics.

• The proposed confidence-based relational operators give us the opportunity to design different CRO mechanisms for meta-heuristics.

• A CRO meta-heuristic can be highly efficient in terms of guiding its search agents towards robust optima.

• The proposed CRO approach is able to give the meta-heuristic accelerated convergence behaviours.

• The accelerated convergence prevents meta-heuristics from easily stagnating in local robust or non-robust solutions.

• Since CRO utilises both robustness and confidence measures, it is more reliable than normal robust optimisation techniques.

• CRO is computationally very cheap, so it is highly suitable for computationally expensive real engineering problems.

The results of this chapter also showed the merits of the proposed challenging
test problems when comparing different algorithms. Due to the similar charac-
teristics of the proposed test functions to real search spaces, the performance of the CRO algorithms was verified and confirmed with confidence. It can therefore be stated that the CRO techniques are able to find robust optima of real problems.
Chapter 8

Confidence-based robust
multi-objective optimisation

The previous chapter proved the merits of the confidence measure and confidence-
based robust optimisation techniques systematically. This chapter presents and
discusses the results of CRMOPSO as the first CRMO technique when solving
the current and proposed test functions in Chapter 4. The performance metrics
proposed in Chapter 5 are employed to compare the algorithms. Therefore, this
chapter systematically investigates and proves the merits of the confidence mea-
sure and confidence-based robust optimisation in multi-objective search spaces.

8.1 Behaviour of CRMOPSO on benchmark problems
Three versions of RMOPSO are implemented and employed: an Explicit aver-
aging Robust MOPSO (ERMOPSO), Implicit averaging Robust MOPSO (IR-
MOPSO), and Confidence-based Robust MOPSO (CRMOPSO). The method of
explicit averaging in ERMOPSO is identical to that of RNSGA-II proposed by
Deb and Gupta [44]. In this method, H sampled solutions are created by the Latin Hypercube Sampling (LHS) method around every candidate solution
to investigate and confirm robustness during optimisation. This provides the
highest reliability and is helpful for verifying the performance of the proposed
algorithm as the most reliable reference. It does, however, markedly increase the
computational load per iteration. In contrast, IRMOPSO utilises the previously sampled solutions to confirm robustness. In this case, robustness of a candidate
solution is evaluated based on the available solutions in the neighbourhood dur-
ing optimisation. All the sampled solutions are saved during optimisation, in a
manner similar to Branke [18] and Dippel [58]. As was discussed in the gaps
and motivation for this thesis, IRMOPSO shows less reliability and its use will
allow comparison of CRMOPSO in terms of improved reliability. Robustness is
calculated by an expectation measure with simple averaging of the neighbouring
solutions for all algorithms. All three optimisers aim to minimise the expecta-
tion of the objective functions in a neighbourhood of the nominal solution (with
maximum noise of 10% of the range).
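The explicit averaging step used by ERMOPSO can be sketched as follows, assuming SciPy's Latin Hypercube sampler and a placeholder `objectives(x)` function; the 10% neighbourhood size mirrors the setting described above, but the helper names and structure are illustrative rather than the actual ERMOPSO code.

```python
import numpy as np
from scipy.stats import qmc

def explicit_robustness(x, objectives, lb, ub, delta=0.10, H=4, seed=0):
    """Estimate the robust (expected) objective values of solution x by
    averaging the objectives over H LHS samples drawn in a
    +/- delta*(range) neighbourhood around x."""
    x, lb, ub = map(np.asarray, (x, lb, ub))
    radius = delta * (ub - lb)
    sampler = qmc.LatinHypercube(d=len(x), seed=seed)
    unit = sampler.random(H)                        # H points in [0, 1]^d
    neighbours = np.clip(x - radius + 2 * radius * unit, lb, ub)
    samples = np.array([objectives(n) for n in neighbours])
    return samples.mean(axis=0)                     # expectation estimate
```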
In order to provide a fair comparison and see if the proposed method is
reliable and effective, the same number of function evaluations (100,000) is used
for each of the algorithms. The number of trial solutions in the population was
100 for all algorithms, so the maximum number of iterations for IRMOPSO and
CRMOPSO was 1000. Since ERMOPSO uses explicit averaging, however, 4 re-
sampling points and 250 iterations are used in order to achieve the total number
of 100,000 function evaluations. Another assumption was a 10% fluctuation in the parameters to simulate parameter uncertainties.
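The shared evaluation budget follows from simple arithmetic, restated below for clarity.

```python
pop_size = 100
total_budget = 100_000
# IRMOPSO and CRMOPSO: one true evaluation per particle per iteration
assert pop_size * 1000 == total_budget
# ERMOPSO: four re-sampled evaluations per particle per iteration
assert pop_size * 250 * 4 == total_budget
```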
Each algorithm was run 30 times and the statistical results of performance
metrics are reported in Table 8.1, 8.3, and 8.5. For quantifying the convergence
of algorithms, the IGD performance measure is utilised in Table 8.1. For the
coverage and success ratio, the proposed performance measures in this thesis
are used to collect and present the results in Table 8.3 and 8.5. Also, the
Wilcoxon ranksum test was conducted at the 5% significance level to decide on the significance of discrepancies in the results. The p-values that are less than 0.05
could be considered as strong evidence against the null hypothesis. Table 8.2,
8.4, and 8.6 show the results of the Wilcoxon ranksum test for IGD, Φ, and Γ
respectively.
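For reference, both quantitative tools can be reproduced in a few lines of Python; the arrays are placeholders for per-run metric values, and the IGD shown is the standard mean nearest-neighbour distance form, which may differ in minor details from the exact implementation used here.

```python
import numpy as np
from scipy.stats import ranksums

def igd(reference_front, obtained_front):
    """Inverted Generational Distance: average Euclidean distance from each
    reference (true robust front) point to its nearest obtained point."""
    ref = np.asarray(reference_front)
    obt = np.asarray(obtained_front)
    dists = np.linalg.norm(ref[:, None, :] - obt[None, :, :], axis=2)
    return dists.min(axis=1).mean()

# Wilcoxon rank-sum test on two sets of 30 independent-run IGD values
# (placeholder arrays; p < 0.05 is treated as a significant difference)
igd_crmopso = np.random.rand(30)   # hypothetical results
igd_irmopso = np.random.rand(30)
stat, p_value = ranksums(igd_crmopso, igd_irmopso)
```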
Note that all test functions have 10 variables and the quantitative results are presented in the form of (average ± standard deviation). For the performance measures, 10% noise was considered and Rmin was calculated using the equation Rmin = min_robustness + (max_robustness − min_robustness) × 0.1, where min_robustness is the minimum of the robustness curve and max_robustness indicates the maximum of robustness in the test functions.
In addition, the best robust fronts obtained by each algorithm are illustrated
in Fig. 8.1, 8.2 and 8.3. Note that the results on some of the test functions
are illustrated in this chapter, but all the results are presented in Appendix
C. The results are not compared with other meta-heuristics since the different
mechanisms of the algorithms would prevent us from distinguishing whether
the superior results of one algorithm were due to CRMO or the algorithm’s
underlying mechanism.

Figure 8.1: Robust fronts obtained for RMTP1, RMTP7, RMTP9, and RMTP27, one test case per row (columns: CRMOPSO, IRMOPSO, ERMOPSO; axes: f1 vs f2). Annotations in the original plots mark the obtained less robust solutions, the density of solutions in the robust region, and a gap in the obtained front.

8.2 Discussion of results


It may be observed in Fig. 8.1 that the most well-distributed robust front for
RMTP1 is that of IRMOPSO. This is because the robust front is identical to
the main Pareto front (the first case shown in Fig. 2.13).

Table 8.1: Statistical results of RMOPSO algorithms using IGD


Test function CRMOPSO IRMOPSO ERMOPSO
RMTP1 0.338 ± 0.169 0.061 ± 0.05 7.261 ± 6.528
RMTP2 0.112 ± 0.106 0.21 ± 0.13 8.057 ± 5.329
RMTP3 0.035 ± 0.03 0.002 ± 0.001 0.106 ± 0.206
RMTP6 0.017 ± 0.007 0.004 ± 0.006 0.013 ± 0.012
RMTP7 0.038 ± 0.028 0.01 ± 0.004 0.007 ± 0.007
RMTP8 0.033 ± 0.009 0.011 ± 0.007 0.007 ± 0.006
RMTP9 0.037 ± 0.027 0.004 ± 0.004 0.022 ± 0.026
RMTP10 0.034 ± 0.018 0.004 ± 0.003 0.014 ± 0.017
RMTP11 0.037 ± 0.015 0.032 ± 0.019 0.939 ± 1.343
RMTP12 0.034 ± 0.014 0.049 ± 0.016 0.079 ± 0.098
RMTP13 0.024 ± 0.007 0.034 ± 0.009 0.249 ± 0.347
RMTP14 0.03 ± 0.012 0.026 ± 0.015 1.197 ± 1.743
RMTP15 0.039 ± 0.013 0.056 ± 0.008 0.533 ± 0.825
RMTP16 0.027 ± 0.008 0.042 ± 0.009 0.695 ± 1.408
RMTP17 0.041 ± 0.021 0.06 ± 0.032 0.814 ± 1.112
RMTP18 0.06 ± 0.026 0.062 ± 0.043 0.59 ± 1.699
RMTP19 0.033 ± 0.002 0.034 ± 0.004 0.447 ± 0.843
RMTP20 0.039 ± 0.042 0.007 ± 0.005 0.094 ± 0.215
RMTP21 0.037 ± 0.059 0.004 ± 0.003 0.051 ± 0.087
RMTP22 0.052 ± 0.122 0.007 ± 0.006 0.024 ± 0.036
RMTP23 0.072 ± 0.016 0.124 ± 0.025 0.084 ± 0.016
RMTP24 0.064 ± 0.014 0.1 ± 0.013 0.08 ± 0.014
RMTP25 0.069 ± 0.013 0.104 ± 0.009 0.087 ± 0.01
RMTP26 0.038 ± 0.013 0.028 ± 0.041 0.008 ± 0.004
RMTP27 0.006 ± 0.028 0.007 ± 0.003 0.044 ± 0.061
RMTP33 0.021 ± 0.007 0.001 ± 0 0.081 ± 0.06
RMTP34 0.04 ± 0.015 0.013 ± 0.008 0.047 ± 0.027
RMTP35 0.113 ± 0.03 0.007 ± 0.01 0.105 ± 0.042
RMTP36 0.011 ± 0.008 0.026 ± 0.001 0.05 ± 0.078
RMTP37 0.015 ± 0.012 0.036 ± 0.008 0.235 ± 0.274
RMTP38 0.015 ± 0.015 0.019 ± 0.023 0.555 ± 0.739
RMTP39 0.01 ± 0.001 0.037 ± 0.033 0.202 ± 0.251
RMTP40 0.014 ± 0.002 0.087 ± 0.064 0.127 ± 0.314
RMTP41 0.056 ± 0.087 0.082 ± 0.05 0.68 ± 1.206
RMTP42 0.004 ± 0.003 0.002 ± 0.002 0.032 ± 0.043
RMTP43 0.005 ± 0.005 0.001 ± 0.001 0.265 ± 0.788
RMTP44 0.005 ± 0.005 0.001 ± 0.002 0.51 ± 0.91

Table 8.2: P-values of Wilcoxon ranksum test for the RMOPSO algorithms in
Table 8.1
Test function CRMOPSO IRMOPSO ERMOPSO
RMTP1 0.032983954 N/A 0.000246128
RMTP2 N/A 0.212293836 0.000182672
RMTP3 0.021133928 N/A 0.000329839
RMTP6 0.001706249 N/A 0.005795359
RMTP7 0.000439639 0.000439639 N/A
RMTP8 0.000182672 0.27303634 N/A
RMTP9 0.000246128 N/A 0.005390255
RMTP10 0.000246128 N/A 0.185876732
RMTP11 0.520522883 N/A 0.185876732
RMTP12 N/A 0.003447042 0.002827272
RMTP13 N/A 0.025748081 0.000246128
RMTP14 0.850106739 N/A 0.007284557
RMTP15 N/A 0.04515457 0.001706249
RMTP16 N/A 0.010410989 0.001402210
RMTP17 N/A 0.031209013 0.021133928
RMTP18 N/A 0.427355314 0.307489457
RMTP19 N/A 0.79133678 0.002827272
RMTP20 0.001706249 N/A 0.000850106
RMTP21 0.005707503 N/A 0.002202220
RMTP22 0.121224503 N/A 0.677584958
RMTP23 N/A 0.000768539 0.04515457
RMTP24 N/A 0.00058284 0.021224503
RMTP25 N/A 0.000439639 0.012293836
RMTP26 0.000182672 1.0000 N/A
RMTP27 N/A 0.049721889 0.000246128
RMTP33 0.000182672 N/A 0.000182672
RMTP34 0.000182672 N/A 0.000182672
RMTP35 0.000182672 N/A 0.000182672
RMTP36 N/A 0.049849977 0.017257456
RMTP37 N/A 0.020522883 0.003610514
RMTP38 N/A 0.79133678 0.009108496
RMTP39 N/A 0.017257456 0.000472675
RMTP40 N/A 0.001858767 0.000246128
RMTP41 N/A 0.677584958 0.04515457
RMTP42 0.384673063 N/A 0.000182672
RMTP43 0.427355314 N/A 0.001706249
RMTP44 0.472675594 N/A 0.000182672

Table 8.3: Statistical results of RMOPSO algorithms using Φ


Test function CRMOPSO IRMOPSO ERMOPSO
RMTP1 0.3 ± 0.09 0.6 ± 0.05 0.2 ± 0.2
RMTP2 0.3 ± 0.1 0.4 ± 0.2 0.1 ± 0.1
RMTP3 0.4 ± 0.1 0.5 ± 0.1 0.5 ± 0.2
RMTP6 0.4 ± 0.1 0.5 ± 0.1 0.5 ± 0.1
RMTP7 0.3 ± 0.1 0.2 ± 0.05 0.4 ± 0.1
RMTP8 0.19 ± 0.1 0.2 ± 0.1 0.34 ± 0.08
RMTP9 0.46 ± 0.15 0.48 ± 0.12 0.53 ± 0.15
RMTP10 0.34 ± 0.12 0.44 ± 0.08 0.56 ± 0.14
RMTP11 0.388 ± 0.159 0.417 ± 0.113 0.215 ± 0.151
RMTP12 0.442 ± 0.18 0.488 ± 0.162 0.642 ± 0.245
RMTP13 0.469 ± 0.117 0.465 ± 0.206 0.458 ± 0.224
RMTP14 0.248 ± 0.057 0.285 ± 0.102 0.183 ± 0.159
RMTP15 0.591 ± 0.089 0.551 ± 0.103 0.391 ± 0.331
RMTP16 0.533 ± 0.212 0.463 ± 0.138 0.575 ± 0.204
RMTP17 0.381 ± 0.092 0.356 ± 0.138 0.11 ± 0.112
RMTP18 0.515 ± 0.123 0.46 ± 0.075 0.235 ± 0.238
RMTP19 0.838 ± 0.077 0.625 ± 0.155 0.704 ± 0.22
RMTP20 0.385 ± 0.136 0.366 ± 0.114 0.44 ± 0.265
RMTP21 0.202 ± 0.045 0.225 ± 0.083 0.264 ± 0.109
RMTP22 0.639 ± 0.208 0.689 ± 0.131 0.7 ± 0.109
RMTP23 0.596 ± 0.147 0.378 ± 0.072 0.548 ± 0.124
RMTP24 0.543 ± 0.184 0.333 ± 0.103 0.471 ± 0.141
RMTP25 0.586 ± 0.127 0.448 ± 0.075 0.633 ± 0.149
RMTP26 0.42 ± 0.1 0.33 ± 0.12 0.41 ± 0.24
RMTP27 0.53 ± 0.22 0.37 ± 0.15 0.54 ± 0.18
RMTP33 0.257 ± 0.075 0.333 ± 0.08 0.18 ± 0.104
RMTP34 0.223 ± 0.052 0.483 ± 0.102 0.366 ± 0.105
RMTP35 0.17 ± 0.035 0.565 ± 0.106 0.12 ± 0.035
RMTP36 0.557 ± 0.063 0.611 ± 0.028 0.524 ± 0.127
RMTP37 0.2 ± 0.258 0.2 ± 0.35 0.1 ± 0.211
RMTP38 0.311 ± 0.187 0.605 ± 0.165 0.427 ± 0.309
RMTP39 0.174 ± 0.208 0.636 ± 0.049 0.321 ± 0.171
RMTP40 0±0 0±0 0.1 ± 0.211
RMTP41 0.103 ± 0.125 0.573 ± 0.219 0.519 ± 0.316
RMTP42 0.447 ± 0.13 0.642 ± 0.051 0.405 ± 0.136
RMTP43 0.25 ± 0.354 0.55 ± 0.497 0.05 ± 0.158
RMTP44 0.432 ± 0.262 0.711 ± 0.134 0.668 ± 0.289

Table 8.4: P-values of Wilcoxon ranksum test for the RMOPSO algorithms in
Table 8.3
Test function CRMOPSO IRMOPSO ERMOPSO
RMTP1 0.000329839 N/A 0.000246128
RMTP2 0.212293836 N/A 0.000182672
RMTP3 0.313298369 N/A 0.4621133928
RMTP6 0.317062499 N/A 0.4635795359
RMTP7 0.104109894 0.000439639 N/A
RMTP8 0.231826742 0.27303634 N/A
RMTP9 0.053902557 0.121224503 N/A
RMTP10 0.011329697 0.185876732 N/A
RMTP11 0.520522883 N/A 0.185876732
RMTP12 0.344704222 0.909721889 N/A
RMTP13 N/A 0.025748081 0.022886128
RMTP14 0.850106739 N/A 0.007284557
RMTP15 N/A 0.04515457 0.001706249
RMTP16 0.969849977 0.10410989 N/A
RMTP17 N/A 0.031209013 0.003763531
RMTP18 N/A 0.047584958 0.027355314
RMTP19 N/A 0.03133678 0.05108496
RMTP20 0.890465048 0.850106739 N/A
RMTP21 0.522022235 0.570750388 N/A
RMTP22 0.520522883 0.677584958 N/A
RMTP23 N/A 0.000768539 0.04515457
RMTP24 N/A 0.00058284 0.121224503
RMTP25 0.212293836 0.000439639 N/A
RMTP26 N/A 0.10410989 1.00000
RMTP27 0.909721889 0.16197241 N/A
RMTP33 0.018267235 N/A 0.0063344
RMTP34 0.001526336 N/A 0.003843352
RMTP35 0.000182672 N/A 0.000182672
RMTP36 0.969849977 N/A 0.911329697
RMTP37 N/A 0.520522883 0.003610514
RMTP38 0.009108496 N/A 0.79133678
RMTP39 0.017257456 N/A 0.472675594
RMTP40 0.053902557 0.185876732 N/A
RMTP41 0.05515457 N/A 0.677584958
RMTP42 0.384673063 N/A 0.000182672
RMTP43 0.427355314 N/A 0.001706249
RMTP44 0.046721824 N/A 0.472675594

Table 8.5: Statistical results of RMOPSO algorithms using Γ


Test function CRMOPSO IRMOPSO ERMOPSO
RMTP1 42.2 ± 16.6 6.2 ± 7.8 9.2 ± 6.1
RMTP2 41.7 ± 18.4 61.8 ± 42.4 9.2 ± 9.7
RMTP3 20.8 ± 11.3 39.6 ± 22.8 28.1 ± 25.4
RMTP6 1.2 ± 0.6 2.1 ± 0.9 1.8 ± 0.5
RMTP7 1.7 ± 1 0.9 ± 0.6 0.7 ± 0.4
RMTP8 0.61 ± 0.3 0.49 ± 0.2 0.62 ± 0.2
RMTP9 0.75 ± 0.29 7.72 ± 5.05 5.85 ± 1.68
RMTP10 0.64 ± 0.25 1.98 ± 0.97 2.51 ± 0.71
RMTP11 1.35 ± 0.503 1.193 ± 0.463 1.986 ± 1.49
RMTP12 3.718 ± 10.994 1.413 ± 4.013 8.368 ± 15.047
RMTP13 1.173 ± 0.47 2.378 ± 1.941 1.924 ± 1.399
RMTP14 0.984 ± 0.268 2.57 ± 3.674 2.474 ± 2.482
RMTP15 6.652 ± 3.888 8.118 ± 2.414 3.187 ± 3.585
RMTP16 9.316 ± 19.185 3.749 ± 10.981 1.209 ± 1.525
RMTP17 0.852 ± 0.216 1.128 ± 0.503 2.893 ± 2.777
RMTP18 9.574 ± 5.173 7.786 ± 4.993 2.955 ± 5.589
RMTP19 3.636 ± 2.595 6.318 ± 17.476 2.143 ± 1.993
RMTP20 20.753 ± 19.771 21.183 ± 16.183 24.79 ± 26.788
RMTP21 7.964 ± 6.86 5.918 ± 4.399 4.23 ± 1.62
RMTP22 2.624 ± 3.218 0.984 ± 1.2 3.789 ± 7.222
RMTP23 8.034 ± 3.164 20.64 ± 14.587 14.808 ± 13.073
RMTP24 15.722 ± 10.593 14.833 ± 8.38 17.97 ± 12.006
RMTP25 39.5 ± 21.671 28.25 ± 9.39 44.9 ± 20.102
RMTP26 21.14 ± 13.65 7.75 ± 6.99 6.53 ± 6.61
RMTP27 7.52 ± 8.09 0.45 ± 0.44 1.54 ± 1.98
RMTP33 2.466 ± 1.56 0.245 ± 0.123 2.415 ± 3.936
RMTP34 0.98 ± 0.447 2.408 ± 1.225 1.167 ± 0.458
RMTP35 0.612 ± 0.67 0.275 ± 0.07 0.268 ± 0.089
RMTP36 57.2 ± 33.923 41.359 ± 29.695 6.729 ± 2.836
RMTP37 0.04 ± 0.028 0.019 ± 0.009 0.033 ± 0.021
RMTP38 1.213 ± 2.088 10.933 ± 30.247 20.729 ± 24.349
RMTP39 14.469 ± 25.914 21.712 ± 7.661 4.404 ± 2.369
RMTP40 0.019 ± 0.031 0.031 ± 0.01 0.021 ± 0.019
RMTP41 6.408 ± 8.04 21.562 ± 24.385 9.573 ± 8.91
RMTP42 29.412 ± 14.821 34.107 ± 22.813 5.926 ± 3.245
RMTP43 0.021 ± 0.017 0.028 ± 0.015 0.031 ± 0.022
RMTP44 6.475 ± 19.515 0.239 ± 0.133 3.659 ± 4.194

Table 8.6: P-values of Wilcoxon ranksum test for the RMOPSO algorithms in
Table 8.5
Test function CRMOPSO IRMOPSO ERMOPSO
RMTP1 N/A 0.000329839 0.005795359
RMTP2 0.212293836 N/A 0.000182672
RMTP3 0.000329839 N/A 0.021133928
RMTP6 0.001706249 N/A 0.005795359
RMTP7 N/A 0.000439639 0.000182672
RMTP8 0.27303634 0.000182672 N/A
RMTP9 0.000246128 N/A 0.121224503
RMTP10 0.000246128 0.011329697 N/A
RMTP11 0.241321593 0.185876732 N/A
RMTP12 0.909721889 0.344704222 N/A
RMTP13 0.025748081 N/A 0.10410989
RMTP14 0.007284557 N/A 0.850106739
RMTP15 0.427355314 N/A 0.001706249
RMTP16 N/A 0.044022101 0.001041098
RMTP17 0.021133928 0.031209013 N/A
RMTP18 N/A 0.04273553 0.003074894
RMTP19 0.79133678 N/A 0.002827272
RMTP20 0.140465048 0.850106739 N/A
RMTP21 N/A 0.00220222 0.001750388
RMTP22 0.677584958 0.520522883 N/A
RMTP23 0.000768539 N/A 0.00220222
RMTP24 0.121224503 0.00058284 N/A
RMTP25 0.212293836 0.000439639 N/A
RMTP26 N/A 0.000182672 0.000104109
RMTP27 N/A 0.000246128 0.16197241
RMTP33 N/A 0.000182672 0.000768539
RMTP34 0.000182672 N/A 0.000182672
RMTP35 N/A 0.000273036 0.000182672
RMTP36 N/A 0.069849977 0.000017257
RMTP37 N/A 0.003610514 0.520522883
RMTP38 0.009108496 0.03108496 N/A
RMTP39 0.472675594 N/A 0.017257456
RMTP40 0.000246128 N/A 0.185876732
RMTP41 0.04515457 N/A 0.067758495
RMTP42 0.384673063 N/A 0.000182672
RMTP43 0.021133928 0.041706249 N/A
RMTP44 N/A 0.000182672 0.042675594

Since IRMOPSO uses an implicit averaging method, its convergence is greater than that of the other algorithms.
However, the greatest density of solutions is in the middle of the robust front,
which has less robustness compared to the left end of the main front. Fig. 8.1
shows that the robust front obtained by CRMOPSO for RMTP1 is slightly more
widely distributed than that of ERMOPSO. In addition, a high density of solu-
tions can be observed on the left end of the robust front. Despite the apparent
similarity of CRMOPSO and ERMOPSO in Fig. 8.1, the above tables show that
the CRMOPSO algorithm provided much better statistical results compared to
ERMOPSO. This shows that the confidence-based Pareto dominance operators
allow CRMOPSO to provide more reliable performance within an equal number
of function evaluations. The statistical results of IRMOPSO for this test problem were better than those of CRMOPSO in terms of convergence, due to the nature of RMTP1. However, the results show that the confidence-based operators were
able to provide high reliability without significant negative impact on conver-
gence.
The results for RMTP7 are slightly different to those of RMTP1, for which
ERMOPSO showed the highest convergence. Fig. 8.1 shows that ERMOPSO ap-
proximated the entire Pareto optimal front and there was no tendency to favour
robust regions of the front. This resulted in this algorithm showing the worst
coverage and success ratio on RMTP7. In contrast, CRMOPSO and IRMOPSO
found better approximations of the robust front. The solutions obtained by CR-
MOPSO were clustered on the left side of the robust front and it seems there
is resistance to generating any solutions on the less robust regions of the front.
Although the robust Pareto optimal solutions obtained by IRMOPSO followed
a similar pattern, some of them are located on the least robust regions of the
front.
The RMTP9 test function has three separate robust regions. The robustness
curve shows that the robust regions are around f1 = 0, 0.4, and 1. The results in
Table 8.1 demonstrate that the convergence of CRMOPSO was again slightly
worse than IRMOPSO and ERMOPSO. The shape of the approximated robust
Pareto front in Fig. 8.1 shows that ERMOPSO behaved similarly on this test
function, in contrast to RMTP7 where there was no particular tendency to favour
robust regions of the Pareto optimal front. A slight bias to robust regions can
be observed in the robust Pareto optimal solutions obtained by IRMOPSO. The
approximated robust Pareto optimal front of CRMOPSO, however, shows that
this algorithm had a greater ability to guide its solutions toward robust regions
of the Pareto optimal front. There is a gap between the solutions obtained on
the least robust region of the Pareto optimal front.
The behaviour of algorithms in terms of finding robust regions of the Pareto
optimal front can be observed more clearly with RMTP27. In fact, this test
function is arranged deliberately to have a stair-shaped Pareto optimal front
in order to benchmark the ability of algorithms in terms of converging toward
robust regions of the front and refraining from finding non-robust solutions.
The quantitative results of the algorithms on RMTP27 show that the results of
CRMOPSO are significantly better than IRMOPSO and ERMOPSO. The CR-
MOPSO algorithm only approximates the first two stairs, which are considered
the most robust regions of the Pareto optimal front, as shown in Fig. 8.1. How-
ever, the robust solutions obtained by IRMOPSO and ERMOPSO tend to be
distributed on other, less robust regions as well.
Test functions RMTP11 to RMTP19 are bi-modal. The local front is the ro-
bust front, so the behaviour of algorithms in favouring a robust front instead of a
global front can be investigated. These test functions are designed with different
shapes of robust and global fronts in order to extensively test the performance
of the algorithms. In RMTP13 the shape of both local and global fronts is
convex. In contrast to the quantitative results in Fig. 8.1, the CRMOPSO al-
gorithm showed the highest convergence on this test function. This shows that
although the convergence of the proposed method is slightly lower, this may be
favourable when solving multi-modal test functions. The coverage and success
ratio of CRMOPSO were also very good on this test function. The best robust
solutions obtained for all algorithms are again illustrated in Fig. 8.2. It may be
seen that the best front obtained was from CRMOPSO. This algorithm did not
approximate even one non-robust solution for most of the test functions, proving
the merits of the proposed method. IRMOPSO was not able to approximate the
robust front over 30 runs for this problem showing the potential unreliability
of using previous samples without a confidence measure. The ERMOPSO algo-
rithm also mostly tended to approximate the global front, but there were some
solutions found on the robust front.
As Fig. 8.2 shows, RMTP15 and RMTP16 have identical linear global fronts
that make the approximation of the global front easy. Deliberately, these two
test functions have been arranged to have linear global fronts to observe what
resistance the algorithms provided in avoiding non-robust but easily obtained fronts.

Figure 8.2: Robust fronts obtained for RMTP13 to RMTP16 and RMTP19, one test case per row (columns: CRMOPSO, IRMOPSO, ERMOPSO; axes: f1 vs f2). Note that the dominated (local) front is robust and considered as reference for the performance measures.
RMTP15 has a convex robust front, whereas RMTP16 has a concave ro-
bust front. The approximated robust solutions in Fig. 8.2 show that IRMOPSO
again failed to approximate the robust front on both test cases. This is due to the
simplicity of the global front and unreliability of the IRMOPSO algorithm. As
discussed in the problem statement of this thesis, there is a significant number of
times during the optimisation that fewer neighbouring solutions mislead an algo-
rithm toward non-robust regions. The archive-based mechanism of IRMOPSO
deteriorated on these problems, with cases in which a non-robust solution entered
the archive and was never dominated by robust solutions outside the archive. In
contrast ERMOPSO which has an explicit averaging mechanism never favoured
a non-robust solution. This caused a well-distributed robust front in Fig. 8.2
for ERMOPSO on RMTP15. The robust solutions obtained by CRMOPSO are
very competitive with those found by ERMOPSO. Tables 8.1 and 8.3 show that
CRMOPSO had superior convergence and coverage on RMTP15.
The approximate solutions of the algorithms on RMTP16 were somewhat
different; the coverage of solutions obtained by ERMOPSO was very low and
the CRMOPSO algorithm also found non-robust solutions. Generally speaking,
approximation of a concave front is more challenging, and this phenomenon can
be seen for the RMTP16 test function. The IRMOPSO algorithm again failed
to approximate the robust front, whereas CRMOPSO and ERMOPSO tended
to find robust solutions.
RMTP19 has two fronts with opposite shapes: a convex global front and
a concave robust/local front. The results of algorithms on this test function
show the difficulty of RMTP16 as none of the algorithms approximate the entire
robust front. In Fig. 8.2, some resolution of the robust front can be observed in
the solutions obtained by both CRMOPSO and ERMOPSO. The CRMOPSO
algorithm shows a slightly greater tendency to correct selection in this case.
Generally speaking, benchmark functions with different shapes for robust and
global optima would be more difficult to solve because a robust algorithm needs
to adapt to a very different Pareto optimal front with a different level of robustness
when transferring from the global/local Pareto optimal front to the robust Pareto
optimal front(s). Deb asserted this for multi-objective benchmark problems [46],
and it is the reason for the poor performance of all algorithms on RMTP19.
The last two test functions, RMTP24 and RMTP25, have multiple discon-
tinuous local fronts. They provide the most challenging test cases for the al-
gorithms. It should be noted that the robustness increases from right to left
and bottom to top in the figure. The results of Tables 8.1, 8.3, and 8.5 and
Fig. 8.3 show that the proposed CRMOPSO algorithm was the best in terms
of favouring robust regions of the fronts. The Pareto optimal solutions obtained
by CRMOPSO for RMTP24 indicate that there is more coverage on the most
robust region of the local fronts.

Figure 8.3: Robust fronts obtained for RMTP21 to RMTP25, one test case per row (columns: CRMOPSO, IRMOPSO, ERMOPSO; axes: f1 vs f2). Note that the worst front is the most robust and considered as reference for the performance measures.
However, IRMOPSO and ERMOPSO do not
display such behaviours. The results of algorithms on RMTP25, which has more
discontinuous robust regions, were also consistent with those of RMTP24: the
CRMOPSO algorithm was able to find more robust solutions.
It should be noted that the quantitative results of RMTP26 to RMTP44 (bi-
ased test functions) in Tables 8.1, 8.3, and 8.5 as well as obtained robust Pareto
optimal fronts in Appendix C also show that the CRMOPSO algorithm outper-
forms IRMOPSO and ERMOPSO on the majority of the biased test functions.
These results are consistent with those of other test functions and confirm the
merits of the proposed CRMO approach in solving difficult problems.
Overall, CRMOPSO and IRMOPSO show better results on 18 and 16 test
functions respectively considering the IGD values. As the p-values show, how-
ever, the superiority was statistically significant in 13 cases for CRMOPSO but
10 cases for IRMOPSO. Another fact is that IRMOPSO shows better results
mostly on unimodal test functions, which is due to the unreliably high exploita-
tion and convergence speed of this algorithm. The proposed confidence measure
prevents CRMOPSO from converging towards the global front, which is advan-
tageous in multi-modal test functions.
Considering the results of the coverage measure, it may be seen that the CR-
MOPSO shows statistically better results in 7 out of 9 test functions. However,
the results of IRMOPSO are statistically better in 7 out of 16 test functions.
This is again due to the impacts of the confidence measure on movement of par-
ticles in CRMOPSO. It seems that the lesser movement of particles decreases the
coverage of solutions, but the results show that CRMOPSO is still competitive
compared to the IRMOPSO.
The results of algorithms on the success ratio measure show CRMOPSO
yields statistically better results in 12 out of 37 functions. The IRMOPSO is
better in 14 out of 37 test functions. However, the results were only statistically
significant in 9 cases. This shows that CRMOPSO is slightly more reliable in
terms of finding robust solutions and avoiding non-robust ones.
To further investigate the merits of the proposed method, Table 8.7 reports the number of times that a solution dominated an archive member but was
not allowed to enter the archive with the proposed confidence measure on some
of the test functions. This table shows that a significant number of times during
optimisation (approximately 19% of attempts) the proposed confidence-based
Pareto optimality concepts prevented solutions with low confidence from being
added to the archive. Firstly, these results demonstrate that often IRMOPSO
was likely to make unreliable decisions and consequently favour non-robust so-
lutions. Secondly, the proposed method is able to prevent unreliable decisions
throughout optimisation, providing more reliability than the current archive-
based robustness-handling methods.
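A rough sketch of a confidence-gated archive update of the kind counted in Table 8.7 is given below. The `dominates()` and `confidence()` callables are placeholders: the former is Pareto dominance on the estimated robust objectives, and the latter stands in for the proposed C measure, so this is an illustration of the idea rather than the exact CRMOPSO mechanism.

```python
def update_archive(candidate, archive, dominates, confidence, c_min=0.5):
    """Confidence-gated archive update (a sketch, not the thesis code).
    `dominates(a, b)` is Pareto dominance on estimated robust objectives;
    `confidence(x)` is a placeholder for the proposed C measure."""
    if any(dominates(m, candidate) for m in archive):
        return archive                      # candidate is dominated: reject
    beaten = [m for m in archive if dominates(candidate, m)]
    if beaten and confidence(candidate) < c_min:
        # candidate dominates an archive member but not confidently
        # (the situation counted in Table 8.7): keep the archive unchanged
        return archive
    archive = [m for m in archive if m not in beaten]
    archive.append(candidate)
    return archive
```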
In summary, the results show that the convergence of CRMOPSO is slower
than IRMOPSO because less confident solutions were ignored over the course of
iterations. Although this prevents CRMOPSO from providing superior results on
uni-modal test functions, it can be very helpful when optimising real problems, since CRMOPSO has a lower probability of premature convergence toward non-robust regions compared to IRMOPSO. The results of CRMOPSO were also superior to ERMOPSO on the majority of test functions. The proposed performance indicators quantitatively showed the greater coverage and success ratio of the CRMOPSO algorithm. In addition, the proposed algorithm showed very good convergence on the robust regions of challenging, multi-modal test cases.

Table 8.7: Number of times that the proposed confidence-based Pareto domi-
nance prevented a solution entering the archive
Test function (Pi ≺ Archivem) ∧ (Pi ⊀c Archivem)
RMTP1 18,918 / 100,000
RMTP7 28,210 / 100,000
RMTP9 19,240 / 100,000
RMTP27 17,157 / 100,000
RMTP13 13,938 / 100,000
RMTP14 14,703 / 100,000
RMTP15 15,768 / 100,000
RMTP16 17,054 / 100,000
RMTP24 26,566 / 100,000
RMTP25 17,570 / 100,000


8.3 Summary
This chapter was dedicated to the results and discussion of the proposed CRMO
approach. The CRMOPSO algorithm was employed to solve the proposed chal-
lenging test functions in Chapter 4 and compared to other algorithms in the
literature qualitatively and quantitatively (using the performance measures pro-
posed in Chapter 5). The results showed that the novel approach proposed is
able to deliver very promising results in terms of convergence, coverage, and pro-
portion of robust Pareto optimal solutions obtained. The findings demonstrated
the value of the proposed concepts:

• The proposed confidence-based Pareto optimality provides the opportunity to design different CRMO mechanisms for meta-heuristics.

• The proposed confidence-based Pareto optimality improves the reliability of archive-based robust optimisation techniques.

• CRMO is able to confidently find robust solutions.

• CRMO is computationally cheap since there is no need for additional true function evaluations.

• CRMO is able to approximate different types of true robust Pareto optimal fronts.

• CRMO is suitable for computationally expensive real engineering problems.

The results of this chapter also showed the merits of the proposed challenging
robust multi-objective test problems when comparing different algorithms. Due
to the different characteristics of the proposed test functions, the performance
of the CRMOPSO algorithm was observed and investigated in detail. It is also
worth mentioning here that the proposed performance measures allowed different algorithms to be compared quantitatively in this chapter for the first time in the literature.
Chapter 9

Real world applications

In the previous two chapters, the merits of the ideas proposed in Chapter 6 have
been investigated systematically. This chapter demonstrates the application of
the proposed CRMOPSO to the design of propellers using a reduced-order model.
The case study is a propeller design problem. In the following sections, this prob-
lem is first solved by a MOPSO algorithm. The results are then analysed mostly
in terms of the effects of uncertainties on structural parameters and operating
conditions. Finally, the CRMOPSO algorithm is employed to determine the
robust Pareto optimal front for this problem.

9.1 Marine Propeller design and related works


9.1.1 Propeller design
Due to the relatively high density of water, the efficiency of propellers for marine
vehicles is very important. The efficiency of a propeller refers to the proportion of the motor power that is converted into thrust. In addition to the efficiency,
this conversion should be done with a minimum level of vibration and noise. The
third characteristic of a good propeller is low surface erosion which is caused by
cavitation. Finding a balance between these three features is a challenging task
which should be considered during the design process of a propeller. The main
part of a propeller is its blades. The geometric shape of these blades should
satisfy all the above-mentioned requirements.
The propeller adds velocity (∆v) to an incoming velocity (v) of the surround-
ing fluid. This acceleration is created in two places: the first half in front of the propeller and the second half behind the propeller. As the propeller rotates, it swirls the outflow. The amount of this swirl depends on the rotation speed of the motor and the associated energy loss. Efficient propellers lose 1% to 5% of their power because
of swirl. The thrust of propellers is calculated as follows [26]:

$$T = \frac{\pi}{4} D^2 \left(v + \frac{\Delta v}{2}\right) \rho\, \Delta v \tag{9.1}$$
where T is thrust, D is the propeller diameter, v is the velocity of the incoming
flow, ∆v is the additional velocity which is created by the propeller, and ρ is the
density of the fluid.
It may be seen in Equation 9.1 that the final thrust depends on the volume
of the incoming stream which has been accelerated per unit of time, the amount
of this acceleration, and the density of the medium.
Power is defined as force times distance per time. The required power to
drive a vehicle with a velocity of v using the available thrust is calculated as
follows:

$$P_a = T\,v \tag{9.2}$$

One of the objectives of optimisation in propellers is to create as much thrust as possible with the smallest amount of power. This is the efficiency of propellers
which can be expressed as follows:

$$\eta = \frac{P_a}{P_{engine}} = \frac{T\,v}{P_{engine}} \tag{9.3}$$

where $P_{engine}$ is the engine power.


The efficiency of a propeller can be calculated as follows:

$$\eta(x) = \frac{J\,K_T(x)}{2\pi K_Q(x)} \tag{9.4}$$

where $J$ is the advance number, $K_T$ is the propeller thrust coefficient, and $K_Q$ is the propeller torque coefficient. $J$ is defined as follows:

$$J = \frac{V_a}{nD} \tag{9.5}$$
where $V_a$ is the axial velocity, $n$ is the rotational velocity, and $D$ is the diameter of the propeller.

By substitution of terms the efficiency can also be presented as follows:


$$\eta(x) = \frac{V_a}{2\pi n D}\,\frac{K_T(x)}{K_Q(x)} \tag{9.6}$$
The thrust coefficient ($K_T$) and torque coefficient ($K_Q$) are calculated as follows:

$$K_T = \sum_{n=1}^{39} C_{T_n}\,(J)^{s_n} \left(\frac{P}{D}\right)^{t_n} \left(\frac{A_e}{A_o}\right)^{u_n} (Z)^{v_n} \tag{9.7}$$

$$K_Q = \sum_{n=1}^{47} C_{Q_n}\,(J)^{s_n} \left(\frac{P}{D}\right)^{t_n} \left(\frac{A_e}{A_o}\right)^{u_n} (Z)^{v_n} \tag{9.8}$$

where $P/D$ is the pitch ratio, $A_e/A_o$ is the disk ratio of the propeller, $Z$ is the number of blades, and $C_{T_n}$, $C_{Q_n}$, $s_n$, $t_n$, $u_n$, $v_n$ are the corresponding regression coefficients.
There is another issue in propellers called cavitation. When the blades of a
propeller move through water at high speed, low pressure regions form as the
water accelerates and moves past the blades. This can cause bubbles to form,
which collapse and can cause strong local shockwaves which result in erosion of
propellers. The sensitivity of the propeller to cavitation is calculated as follows:

$$\sigma_{n,0.8} = \frac{p_a + \rho g h_{0.8} - p_v}{0.5\,\rho\,(\pi n D)^2} \tag{9.9}$$
where $p_a$ is the atmospheric pressure, $p_v$ indicates the vapour pressure of water, $g$ is the acceleration due to gravity, and $h_{0.8}$ denotes the immersion of the 0.8 blade-radius section when the blade is at the 12 o'clock position.
The ultimate goal here is to design a propeller with the highest efficiency and
the lowest cavitation sensitivity.
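For concreteness, the relations in equations (9.1), (9.5), (9.6), and (9.9) can be coded directly; the numerical defaults below (water density, atmospheric and vapour pressure) are illustrative assumptions, and the thrust/torque coefficients are assumed to come from a regression such as equations (9.7)-(9.8) or a solver like OpenProp.

```python
import math

RHO_WATER = 1025.0        # sea-water density [kg/m^3] (illustrative value)

def thrust(D, v, dv, rho=RHO_WATER):
    """Eq. (9.1): actuator-disc thrust from the added velocity dv."""
    return math.pi / 4 * D**2 * (v + dv / 2) * rho * dv

def advance_number(Va, n, D):
    """Eq. (9.5): J = Va / (n D), with n in revolutions per second."""
    return Va / (n * D)

def efficiency(Va, n, D, KT, KQ):
    """Eq. (9.6): open-water efficiency from thrust/torque coefficients."""
    return Va / (2 * math.pi * n * D) * KT / KQ

def cavitation_number(n, D, h08, pa=101325.0, pv=2339.0,
                      g=9.81, rho=RHO_WATER):
    """Eq. (9.9): cavitation number at 0.8 blade radius, 12 o'clock position."""
    return (pa + rho * g * h08 - pv) / (0.5 * rho * (math.pi * n * D)**2)

# illustrative use: a 2 m propeller at 180 RPM with placeholder coefficients
n = 180 / 60.0                                       # rev/s
print(efficiency(Va=5.0, n=n, D=2.0, KT=0.2, KQ=0.03))
print(cavitation_number(n=n, D=2.0, h08=2.5))
```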
In order to find the final geometrical shape of the blade, standard National
Advisory Committee for Aeronautics (NACA) airfoils are selected as shown in
Fig. 9.1. It may be seen in this figure that two parameters define the shape of
the airfoil: maximum thickness and chord length. In this chapter ten airfoils are
considered along the blade, so the total number of parameters is 20.
The final parameter vector is as follows:

$$\vec{X} = (T_1, C_1, T_2, C_2, \ldots, T_{10}, C_{10}) \tag{9.10}$$

Figure 9.1: Airfoils along the blade define the shape of the propeller (NACA a = 0.8 meanline and NACA 65A010 thickness). Each airfoil is defined by its maximum thickness and chord length.

where $T_i$ and $C_i$ indicate the thickness and chord length of the i-th airfoil along the blade.
Finally, the problem can be formulated as follows:

$$\text{Suppose:}\quad \vec{X} = (T_i, C_i),\ i = 1, 2, \ldots, 10 \tag{9.11}$$

$$\text{Maximise:}\quad \eta(\vec{X}) \tag{9.12}$$

$$\text{Minimise:}\quad V_c(\vec{X}) \tag{9.13}$$

$$\text{Subject to:}\quad Thrust \geq 40000 \tag{9.14}$$

where $V_c$ is a function to calculate the cavitation.


In order to calculate the objective functions, a freeware called OpenProp is
utilised as the simulator. Details of the model are provided in the following
sources [65, 64].
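A hedged sketch of how the 20-variable design vector might be wrapped into the two objectives and the thrust constraint for a population-based optimiser is shown below; `evaluate_propeller` is an assumed placeholder for the OpenProp-based simulation, and the death-penalty constraint handling is one common option rather than the method used in this thesis.

```python
from typing import Callable, Sequence, Tuple

def objectives(x: Sequence[float],
               evaluate_propeller: Callable[[Sequence[float], Sequence[float]],
                                            Tuple[float, float, float]],
               thrust_min: float = 40000.0) -> Tuple[float, float]:
    """Map the flat design vector X = (T1, C1, ..., T10, C10) to the
    bi-objective form (minimise -efficiency, minimise cavitation),
    rejecting designs that violate the thrust constraint (eq. 9.14)."""
    thickness = x[0::2]                  # T1 ... T10
    chord = x[1::2]                      # C1 ... C10
    eta, cavitation, thrust = evaluate_propeller(thickness, chord)
    if thrust < thrust_min:              # death penalty for infeasible designs
        return float("inf"), float("inf")
    return -eta, cavitation              # maximising eta == minimising -eta
```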

9.1.2 Related work


Work using a heuristic algorithm to optimise the shape of a B-series propeller has
been reported in the literature [171]. The NSGA-II algorithm was employed to
optimise the shape of a propeller with specific performance in given conditions.
Designing a propeller for ships was considered as a multi-objective problem with
two objectives: minimising propeller efficiency and maximising the thrust co-
efficient. The author considered two main constraints for this problem. These
constraints were wake friction and thrust deduction. The author did not specify
the exact number of variables. NSGA-II provided 15 Pareto optimal solutions. Finally, a decision-making technique was used to select one of the Pareto solutions as the best solution. One other study has investigated the application of the NSGA-II multi-objective optimisation algorithm to the design of marine propellers [132]. Difficulties were reported with the nature of the design space:
the constraints applied isolated the feasible solutions into “small islands” and the optimisation algorithm failed to converge.

9.2 Results and discussion


A MOPSO algorithm is employed to estimate the Pareto optimal front. A population of 100 search agents and a maximum of 200 iterations are chosen for MOPSO. The main case study is a ship propeller with a 2-metre diameter, as shown in Fig. 9.2.

Figure 9.2: Propeller used as the case study (diameter: 2 metres)

The experiments undertaken using MOPSO are as follows:

1. Observing the behaviour of MOPSO in finding an accurate approximation and well-spread Pareto optimal solutions

2. Observing the effect of the number of blades on the efficiency and cavitation of the propeller

3. Finding the optimal number of blades

4. Observing the effects of Revolutions Per Minute (RPM) on the efficiency and cavitation of the propeller

5. Finding the optimal values (range) for RPM

6. Post analysis of the results to extract the possible physical behaviour and impacts of the parameters on the efficiency and cavitation of the propeller

7. Observing the effects of uncertainties in operating conditions (RPM) on the Pareto optimal fronts obtained by MOPSO

8. Observing the effects of uncertainties in structural parameters on the Pareto optimal fronts obtained by MOPSO

The following subsections present and discuss the results for each of these
experiments.

9.2.1 Approximating the Pareto Front using the algorithm
The MOPSO algorithm was run 4 times on the problem and the best estimate
of the Pareto front obtained is illustrated in Fig. 9.3. Note that RPM = 200 in this experiment.

Figure 9.3: (left) Pareto optimal front obtained by the MOPSO algorithm (6 blades); (right) Pareto optimal fronts for different numbers of blades, from 3 to 7 (each obtained with 20,000 function evaluations).

The blue points in the figure, at left, show that the MOPSO al-
gorithm was able to find a set of highly distributed estimations for the Pareto
optimal solutions across both objectives. The search history of points sampled
by particles during optimisation is illustrated by black points. The search history
also shows that the MOPSO algorithm explored and exploited the search space
efficiently, which results in obtaining this highly distributed and accurately con-
verged estimation for the Pareto optimal front. The accurate convergence of the
solutions obtained is due to the intrinsic high exploitation of the MOPSO algo-
rithm around the selected leaders, gBest and pBests, in each iteration. The uni-
form distribution originates from the selection mechanism of leaders in MOPSO.
Since particle guides were selected from the less populated parts of the archive,
there was always a high tendency toward finding Pareto optimal solutions along
the regions of the Pareto optimal front with lower distribution.
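A minimal sketch of this kind of density-driven leader selection is given below, assuming a hypergrid over objective space as in common archive-based MOPSO variants; the grid construction and the inverse-density roulette wheel are illustrative assumptions rather than the exact mechanism implemented here.

```python
import random
from collections import defaultdict

def select_leader(archive, n_divisions=10):
    """Pick a leader from the external archive, favouring solutions that
    lie in sparsely populated hypergrid cells of objective space.
    Each archive member is assumed to be a dict with an "objectives" key."""
    objs = [a["objectives"] for a in archive]
    dims = range(len(objs[0]))
    lo = [min(o[d] for o in objs) for d in dims]
    hi = [max(o[d] for o in objs) for d in dims]
    cells = defaultdict(list)
    for member, o in zip(archive, objs):
        idx = tuple(
            0 if hi[d] == lo[d]
            else min(int((o[d] - lo[d]) / (hi[d] - lo[d]) * n_divisions),
                     n_divisions - 1)
            for d in dims)
        cells[idx].append(member)
    # roulette wheel weighted by the inverse of each cell's occupancy
    keys = list(cells)
    weights = [1.0 / len(cells[k]) for k in keys]
    chosen_cell = random.choices(keys, weights=weights, k=1)[0]
    return random.choice(cells[chosen_cell])
```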

9.2.2 Number of blades


The effects of the number of blades on efficiency and cavitation were investigated.
Five problems were first formulated by altering the number of blades from 3 to 7.
The MOPSO algorithm was then employed to approximate the Pareto optimal
fronts. The MOPSO algorithm was run 4 times on each of the problems and the
best Pareto optimal fronts obtained are illustrated in Fig. 9.3, at right.
This figure shows that the efficiency increases with the number of blades, up to a limit of 5 blades. Beyond this number, efficiency decreases. The figure
also shows that cavitation decreases in proportion to the number of blades. The
reason why the majority of ship propellers have 5 or 6 blades is the shape of the fronts in Fig. 9.3. The highest efficiency, which is the main
objective in ship propellers, is achieved by 5 or 6 blades. Therefore, 5 blades are
chosen by default unless cavitation is a major issue.

9.2.3 Revolutions Per Minute (RPM)


RPM is one of the most important operating conditions for propellers. In order
to observe the effects of this parameter on the efficiency and cavitation of the
propeller, a 5-blade version of the propeller investigated in the previous sub-
section was selected. The RPM considered was limited to the range of 150 to
250. Since changing the RPM changes the operating conditions of the propeller
significantly, two types of experiments were done, as follows:

1. Finding the Pareto optimal front for the propeller at RPM increments of
10.

2. Parametrising the RPM and finding the optimal front for it using MOPSO.

The MOPSO algorithm was employed to find the Pareto optimal front for
the propeller at each of the 11 RPM values from 150 to 250. The algorithm
was run 4 times on each case and the best Pareto optimal fronts obtained are
illustrated in Fig. 9.4, at left. This figure first shows that there is no feasible
Pareto optimal solution when RPM = 150, 160, or 250. For the remaining RPMs,
it may be observed that increasing RPM generally results in decreasing efficiency
and increasing cavitation. Although increasing the RPM seems to increase the
thrust, these results show that high RPM is not very effective and risks increased
damage to the propeller in long-term use due to the high cavitation. The peak of high efficiency and low cavitation occurred between RPM = 170 and RPM = 180. Therefore, such RPM rates can be recommended when using a 5-blade version of the ship propeller investigated.

Figure 9.4: (left) Best Pareto optimal fronts obtained for different RPM values (RPM = 150, 160, and 250 produced no feasible solutions); (right) optimal RPM.

To find the optimal values for the RPM, this operating condition was parametrised
and optimised by MOPSO as well. The number of parameters increases to 21
when considering RPM as a parameter, but the same number of particles and iterations was chosen to approximate the Pareto optimal front. The best Pareto
optimal front is illustrated in Fig. 9.4, at right.

The estimated Pareto optimal front shows that the approximated Pareto optimal solutions mostly tend towards the best Pareto optimal front found for RPM = 170. Almost 20% of the solutions are distributed between the Pareto optimal fronts for RPM = 170 and RPM = 180. The search history of the MOPSO algorithm is also illustrated in Fig. 9.4 to confirm that all of the Pareto fronts obtained in the previous experiment have been explored. The search history clearly illustrates that the fronts have been found by MOPSO, but all of them are dominated by the Pareto optimal front for RPM = 170 and the solutions between RPM = 170 and RPM = 180. This can be seen in Fig. 9.5.

Figure 9.5: PF obtained when varying RPM compared to PFs obtained with
different RPM values

A parallel coordinates visualisation of the solutions from the Pareto optimal front in Fig. 9.4 (right) is shown in Fig. 9.6, for RPM between 170 and 180. Parallel coordinates are a very helpful tool for analysing high-dimensional data. The parallel coordinates in Fig. 9.6 show all the parameters and objectives. Each line indicates one of the Pareto optimal solutions obtained in Fig. 9.4.
It may be observed in this figure that the range of the RPM is between 170 and 180. However, the density of solutions is higher close to RPM = 170.

Figure 9.6: Optimal RPM coordinates: normalised values of the parameters P1-P20, RPM, and the two objectives (efficiency and cavitation) for each Pareto optimal solution (Min RPM = 170, Max RPM = 180).

Other features can be seen in this representation as clustering of solutions near particular parameter values.
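Readers wishing to reproduce this kind of plot can do so with pandas; the column names and random data below are placeholders for the actual Pareto optimal solutions.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# placeholder data: 100 Pareto optimal solutions, 20 parameters + RPM + 2 objectives
rng = np.random.default_rng(0)
cols = [f"P{i}" for i in range(1, 21)] + ["RPM", "Eff", "Cav"]
df = pd.DataFrame(rng.random((100, len(cols))), columns=cols)
df["set"] = "Pareto"                      # class column required by pandas

parallel_coordinates(df, "set", color=["#1f77b4"], alpha=0.3)
plt.ylabel("Normalised values")
plt.show()
```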

9.2.4 Post analysis of the results

Since the Pareto optimal front obtained by MOPSO contains the best trade-
offs between cavitation and efficiency, some of the characteristics and physical
rules applied to the propeller can possibly be inferred. One of the best tools for
identifying and observing such behaviours is parallel coordinates (Fig. 9.6). The
first pattern that can be seen in the parallel coordinates is the relatively uniform
distribution of solutions over P1–P6. These first pairs of parameters define the
shape of the first three airfoils starting from the shaft of the propeller. This shows
that the first three airfoils do not play very important roles in defining the final
efficiency and cavitation. In contrast, parameters P7–P10 are not distributed
uniformly across the vertical lines. This shows that the shape of the fourth and
fifth airfoils is very critical for designing the propeller. This is also consistent with
the fact that the middle part of a blade has the greatest width and is consequently
significantly involved in generating thrust, and cavitation. Although similar
behaviour can be seen in the rest of the parameters, the distribution of P7–P10 is much narrower than that of the others, showing again the importance of these structural
parameters. Other features evident in Fig. 9.6 also bear further investigation, a
topic for future work.

9.2.5 Effects of uncertainties in operating conditions on the objectives
This experiment investigated the effects of uncertainties in the RPM on the efficiency and cavitation of the propeller. To do this, the best Pareto optimal front obtained for the 5-blade propeller in Fig. 9.3 was selected as the main front. The efficiency and cavitation of the Pareto optimal solutions in this front were then re-calculated by changing the RPM as the most important environmental condition. The projections of the solutions are illustrated in Fig. 9.7. Note that the perturbation considered is δRPM = ±1, which was recommended by an expert in the field of mechanical engineering.

Figure 9.7: Pareto optimal solutions in the case of (left) δRPM = +1 and (right) δRPM = −1 fluctuations in RPM. Original values are shown in blue, perturbed results in red.
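The re-evaluation itself is straightforward, as the sketch below indicates; `evaluate(design, rpm)` is a placeholder for the simulator call, and the same pattern applies to the ±1.5% structural perturbations considered in the next subsection.

```python
def perturbed_objectives(pareto_set, evaluate, nominal_rpm, delta_rpm=1.0):
    """Re-evaluate every Pareto optimal design at RPM shifted by +/- delta_rpm.
    `evaluate(design, rpm)` is a placeholder returning (efficiency, cavitation)."""
    results = {}
    for sign in (+1.0, -1.0):
        rpm = nominal_rpm + sign * delta_rpm
        results[sign] = [evaluate(design, rpm) for design in pareto_set]
    return results   # objective values under the +1 and -1 RPM perturbations
```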

As Fig. 9.7 (left) shows, the efficiencies of all the Pareto optimal solutions obtained decrease when δRPM = +1 perturbations occur. The cavitation of the Pareto optimal solutions also increases. A similar behaviour for the efficiency can be observed in Fig. 9.7 (right). This figure shows that the efficiencies of the Pareto optimal solutions decrease when δRPM = −1. However, the cavitation decreases, which is obviously due to the lower RPM. These results show that perturbations in RPM can have significant negative impacts on the expected and desired efficiencies. The cavitation can also vary substantially with
uncertainties in RPM.

9.2.6 Uncertainties in the structural parameters


Uncertainties may occur in the structural parameters as well. This type of un-
certainty mostly originates from manufacturing errors. This subsection considers
the maximum permitted errors, according to ISO 484/2-1981, that can alter the
optimal values obtained by MOPSO. Note that the perturbation considered is
δ = 1.5% of the nominal values.
The Pareto optimal solutions obtained in Fig. 9.3 are first selected. Maximum
positive and negative perturbations are then applied to parameters (thickness
and chord). Finally, the objectives of the Pareto optimal solutions obtained are
re-calculated. The results are illustrated in Fig. 9.8.

Figure 9.8: Pareto optimal solutions in the case of (left) δ = +1.5% and (right) δ = −1.5% perturbations in parameters. Original values are shown in blue, perturbed results in red.

The trend is similar to the results of the preceding subsection, in that the
uncertainties in parameters also degrade the expected efficiency significantly. In
addition, the results show that the cavitation can vary dramatically in case of
uncertainties in parameters.
In summary, these results strongly show the remarkably negative impacts
of perturbations on the performance of marine propellers and emphasise the

importance of considering such undesirable inputs when designing propellers.


As a further illustration of the effect on efficiency, it may be noted that the
perturbations in structural parameters gave rise to reductions in efficiency of
about 0.25%. This translates directly to increased fuel consumption, the biggest
cost in marine shipping. For the vessels for which the propeller tested is suited,
generally those up to 100 tonnes displacement, the difference may be an increase
of 40 litres per day. Scaling the effect to typical container ships operating under
normal conditions, the increased fuel usage could be over half a tonne of bunker
oil a day, increasing not only costs but also environmental emissions.

9.3 Confidence-based robust optimisation of marine propellers
The preceding section showed the significant negative impacts of perturbations
in parameters on both objectives of the propeller design problem. The proposed confidence-based robust multi-objective perspective in this thesis has been designed to handle such perturbations without the need for extra function evaluations. Therefore, this section employs the proposed CRMOPSO algorithm for
finding robust Pareto optimal solutions for the propeller design problem.
The experimental set-up is identical to that of the preceding section, in which 100 particles are utilised and allowed to search for the robust Pareto optimal solutions over 200 iterations. The maximum level of perturbation is also considered as 1.5%, as per ISO 484/2-1981. The CRMOPSO algorithm was run 5 times and the best approximated robust front is illustrated in Fig. 9.9. Note that the nominal objective values for the robust Pareto optimal solutions obtained by the CRMOPSO algorithm are illustrated in this figure, as well as the best Pareto optimal solutions obtained by MOPSO for comparison.
It may be seen in this figure that the robust Pareto optimal front is completely
dominated by the global Pareto optimal front. The coverage of the robust front
is also lower than that of the global front. In order to see the significance of the results,
the excess fuel consumption that can arise in the case of a 1.5% perturbation during manufacturing is calculated for the solutions obtained by both algorithms and presented in Table 9.1. This shows how much the designs obtained increase the fuel consumption of the motor if a 1.5% perturbation occurs.

Figure 9.9: Robust front obtained by CRMOPSO versus the global front obtained by MOPSO (20,000 function evaluations; axes: efficiency vs. −cavitation).

Table 9.1: Fuel consumption discrepancy in the case of perturbation in all of the structural parameters, for both the PS obtained by MOPSO and the RPS obtained by CRMOPSO
Algorithm average min max
MOPSO 0.1735 0.1676 0.1851
CRMOPSO 0.0825 0.0805 0.0863

This table shows that the discrepancy in fuel consumption is much lower for the designs obtained by the CRMOPSO algorithm. The average, minimum,
and maximum of the excess fuel consumption are almost halved for the solutions obtained by the CRMOPSO algorithm. Due to the difficulty of the propeller
design problem and importance of uncertainties, these results strongly evidence
the merits of the proposed confidence-based robust optimisation. It is worth
highlighting here that the robust Pareto optimal solutions are obtained without
even a single extra function evaluation.
To further investigate the effectiveness of the proposed CRMOPSO, this al-
gorithm is employed to find the robust optimal values for RPM as well. In the
previous subsections, the global Pareto optimal front for this problem was found
by parameterising the RPM. The same numbers of particles and iterations are employed to determine the robust front and the robust optimal values for RPM. The results are illustrated in Fig. 9.10. This figure also shows the Pareto optimal front obtained by the MOPSO algorithm for comparison.

Figure 9.10: Global and robust Pareto optimal fronts obtained by MOPSO and CRMOPSO when RPM is also a variable (20,000 function evaluations; axes: efficiency vs. −cavitation).

As may be seen in Fig. 9.10, the major part of the robust Pareto front is dominated by the Pareto optimal front. The distributions of both fronts are almost identical. A small portion of the robust front overlaps with the global front. Therefore, the robust front is of type C, which means that a part of the Pareto front is robust, but there are other robust solutions. To observe the range of RPMs
in both fronts, Fig. 9.11 is provided.
This figure shows that the ranges of RPM obtained by the two algorithms are evidently different. It may be seen that the RPM values tend to be higher in the Pareto optimal solutions obtained by CRMOPSO. Another interesting pattern is that half of the Pareto optimal solutions obtained by MOPSO have an RPM of 170. However, the results of CRMOPSO in Fig. 9.11 show that 170 is not a robust RPM, since only a few of the robust solutions have this value for their RPM. The range of RPMs in CRMOPSO is wider than that of MOPSO. This is due to the intrinsically higher exploration of CRMOPSO. Since fewer non-dominated solutions
[Figure 9.11 plots the RPM values (approximately 170 to 190) of the sorted Pareto optimal solutions obtained by CRMOPSO and MOPSO.]

Figure 9.11: Optimal and robust optimal values for RPM (note that there are
98 robust Pareto optimal solutions and 100 global optimal solutions)

Table 9.2: Fuel consumption discrepancy in case of perturbation in RPM, for both the PS obtained by MOPSO and the RPS obtained by CRMOPSO

Algorithm    average    min       max
MOPSO        0.0718     0.0136    0.0927
CRMOPSO      0.0521     0.0018    0.0904

are allowed to enter the archive, the exploitation is lower and the exploration higher than in the normal MOPSO, which results in finding a wider range of RPMs.
To observe the impacts of the perturbation on the Pareto optimal solutions obtained by both algorithms, Table 9.2 is provided. Note that only the RPM is perturbed in this experiment and the other parameters are fixed.
This table shows that the average, maximum, and minimum fuel consumption discrepancies of the robust Pareto optimal solutions obtained by the CRMOPSO algorithm are better than those of MOPSO. These results again strongly prove the
merits of the proposed confidence-based robust optimisation approach in finding
robust solutions that are less sensitive to perturbations in parameters.

9.4 Summary
In this chapter, the shapes of several ship propellers were optimised considering
two objectives: efficiency versus cavitation. MOPSO was first employed to find

the best approximation of the true Pareto optimal front for the propeller. It
was observed that the MOPSO algorithm showed very good convergence and
was able to find a uniformly distributed Pareto optimal front. The MOPSO
algorithm was then employed to undertake several experiments, investigating
the effect of the number of blades, RPM, and uncertainties in manufacturing
and operating parameters. The results of MOPSO were also analysed to identify
the possible physical behaviour of the propeller.
The results showed that the best efficiency and cavitation can be achieved by
having five or six blades, since any other number of blades significantly degrades
one of the objectives. The best RPM for the propeller was also found by the
MOPSO algorithm. It was observed that the best Pareto optimal front can be
obtained when the propeller is operating at RPM = 170 to 180. However, the
results of the impact of uncertainties on RPM show that the optimal RPM is very
sensitive to perturbation: efficiency and cavitation can be degraded significantly
by a small amount of uncertainty. Simulation of manufacturing perturbations
also revealed that both of the objectives for the Pareto optimal solutions obtained
also vary dramatically. In addition, post analysis of the results showed that the
most important parameters of the propeller are the maximum thickness and
chord length of the fourth and fifth NACA airfoils along the blade.
The results of CRMOPSO on the propeller design problem showed that this
algorithm is able to find robust optimal values for parameters and RPM that
are not sensitive to perturbations. It was observed that the fuel consumption
discrepancy is much less for robust designs obtained by CRMOPSO in case of
perturbations in parameters and RPM. Since the propeller design problem is a
challenging real problem with a large number of constraints, this chapter strongly
demonstrates and supports the practicality of the proposed CRMO approach in
finding robust solutions for real problems.
Chapter 10

Conclusion

10.1 Summary and conclusions


This thesis concentrates on robust optimisation in single- and multi-objective
search spaces using population-based meta-heuristics. It was observed that the
literature mainly lacks a systematic design process, which is essential for design-
ing new algorithms or improving the current ones.
The current main gaps in the literature in each phase of a systematic design
process were identified as lack of standard single/multi-objective robust test
problems, lack of specific performance metrics for quantifying the performance
of robust multi-objective meta-heuristics, high computational cost of explicit
methods, and low reliability of implicit methods. After identifying the gaps, the following steps were taken to fill them, as shown in Fig. 10.1.
A set of novel frameworks was proposed with alterable parameters to design test functions with different levels of difficulty and characteristics. The frameworks generated test problems with multiple local/global non-robust solutions.
It was also possible to design a bi-modal search space with desired robustness
level for the robust and global optima. In addition, several difficulties such
as bias, deceptiveness, multi-modality, and flatness were integrated into different functions to design challenging single-objective robust test problems.
Three frameworks were proposed to design robust multi-objective test prob-
lems. The first framework allowed designing bi-frontal search spaces with a
parameter to control the level of robustness. This framework was equipped with
a parameter that defines the shape of the fronts as well. Therefore, several test
functions with different shapes for the robust and global Pareto optimal fronts were constructed.
[Figure 10.1 groups the work into three phases. Robust test function design: three frameworks for single-objective robust optimisation, three frameworks for multi-objective robust optimisation, integration of several hindrances, obstacles and difficulties into the current test functions in both fields, and the first standard and challenging test suites for both fields. Robust performance metric design: two performance measures to quantify the performance of robust multi-objective algorithms. Robust algorithm design: the confidence measure, the establishment of confidence-based robust optimisation and confidence-based robust multi-objective optimisation, the proposal of CRGA, CRPSO and CRMOPSO, and finding robust designs for marine propellers reliably without extra function evaluations.]

Figure 10.1: Gaps filled by the thesis

The second framework generated multi-frontal search spaces with different degrees of robustness, to benchmark the performance of robust multi-objective algorithms in terms of avoiding non-robust local fronts. The third
framework created test functions with separated fronts. Each region of the front
had its own level of robustness, so the ability of an algorithm in finding such
types of fronts could be benchmarked. Several test functions with different lev-
els of difficulty were proposed. In addition, the current robust multi-objective
benchmarks proposed in the literature were improved and extended to mimic
the characteristics of real search spaces better. The thesis also considered the
introduction of several hindrances for robust multi-objective test functions: bias,
a large number of local fronts, deceptive search spaces, and flat search spaces.

A set of novel performance measures was proposed to quantify the performance of robust multi-objective algorithms. The first performance measure was
a coverage measure, in which the distribution of the solutions along the robust
regions of the true Pareto optimal front was measured and quantified. The pro-
posed coverage measure allowed comparing algorithms based on their ability to find uniformly distributed robust Pareto optimal solutions. The second
proposed performance measure, success ratio, quantified the performance of a ro-
bust algorithm in finding robust and non-robust Pareto optimal solutions. This
measure counted the number of robust and non-robust solutions and measured
how successful an algorithm was in finding robust Pareto optimal solutions and
avoiding non-robust Pareto optimal solutions.
A novel measure called confidence measure was proposed to improve the re-
liability of the implicit robust optimisation methods. In this measure, the status of previously sampled points in the parameter space was considered and utilised to measure the confidence level of solutions during optimisation. The proposed
confidence measure was integrated with inequality operators. As a result, sev-
eral confidence-based relational operators were proposed to compare the search
agents of meta-heuristics confidently. In addition, a novel robust optimisation
approach called confidence-based robust optimisation was established. The pro-
posed confidence-based robust optimisation was integrated with two well-known
algorithms: PSO and GA. In the confidence-based PSO, particles updated their positions in the search space normally, but the process of updating gBest and the pBests was done by the proposed confidence-based relational operator. The GA
was also modified by the proposed confidence-based relational operators. Two
confidence-based operators were proposed for GA: confidence-based cross-over
and confidence-based elitism. In the former method, the individuals were com-
pared and allowed to mate if and only if they were confidently better. In the
latter operator, the elite was only updated based on the proposed confidence-
based relational operators.
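To make the mechanism concrete, the sketch below shows how such an operator can be slotted into the pBest update of PSO. The confidence() helper is only a crude stand-in for the confidence measure developed in the thesis (here, simply the fraction of previously sampled points lying near the candidate), and the 0.5 threshold and the function names are illustrative assumptions rather than the thesis' exact definitions.

```python
import numpy as np

def confidence(x, history, radius=0.1):
    """Stand-in confidence measure: the fraction of previously sampled points
    (the search history) lying within `radius` of x in the parameter space."""
    if len(history) == 0:
        return 0.0
    diffs = np.asarray(history, dtype=float) - np.asarray(x, dtype=float)
    dists = np.linalg.norm(diffs, axis=1)
    return float(np.mean(dists <= radius))

def confidently_better(xa, fa, xb, fb, history, threshold=0.5):
    """Confidence-based '<' for minimisation: xa is preferred over xb only
    if its estimated fitness is better and its confidence is high enough."""
    return fa < fb and confidence(xa, history) >= threshold

def update_pbest(pos, fit, pbest_pos, pbest_fit, history):
    """pBest update of a confidence-based PSO: particles move as usual, but
    the personal memory is overwritten only by confidently better solutions."""
    if confidently_better(pos, fit, pbest_pos, pbest_fit, history):
        return np.array(pos, dtype=float), fit
    return pbest_pos, pbest_fit
```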
The proposed confidence measure was employed to design a confidence-based
Pareto dominance operator. The proposed confidence-based dominance allowed
designing a novel approach for finding robust Pareto optimal solutions called
confidence-based robust multi-objective optimisation. This novel approach could
be integrated with any meta-heuristics. The MOPSO algorithm was chosen as
one of the best algorithms in the literature and converted to a confidence-based robust MOPSO. In this algorithm the particles updated their positions normally,
but they were added to the archive if and only if they confidently dominated one
of the archive members or were confidently non-dominated compared to the
archive members.
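A compact sketch of this archive rule is given below. It assumes a user-supplied conf() callable returning the confidence level of a solution, and it reads "confidently dominates" as ordinary Pareto dominance gated by a confidence threshold; this is one plausible reading for illustration, not the exact operator developed in the thesis.

```python
import numpy as np

def dominates(fa, fb):
    """Standard Pareto dominance for minimisation of all objectives."""
    fa, fb = np.asarray(fa, dtype=float), np.asarray(fb, dtype=float)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def confidently_dominates(fa, fb, conf_a, threshold=0.5):
    """Illustrative confidence-based dominance: the usual dominance plus a
    sufficiently high confidence level for the dominating solution."""
    return dominates(fa, fb) and conf_a >= threshold

def try_add_to_archive(x, fx, archive, conf, threshold=0.5):
    """Archive update in the spirit of CRMOPSO: the particle enters only if
    it confidently dominates a member, or is confidently non-dominated with
    respect to all members.  `archive` is a list of (position, objectives)."""
    conf_x = conf(x)
    beaten = [i for i, (_, fm) in enumerate(archive)
              if confidently_dominates(fx, fm, conf_x, threshold)]
    if beaten:
        archive[:] = [m for i, m in enumerate(archive) if i not in beaten]
        archive.append((np.asarray(x, dtype=float), np.asarray(fx, dtype=float)))
        return True
    non_dominated = all(not dominates(fm, fx) for _, fm in archive)
    if non_dominated and conf_x >= threshold:
        archive.append((np.asarray(x, dtype=float), np.asarray(fx, dtype=float)))
        return True
    return False
```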
The shape of a ship propeller was optimised by the proposed approach to find
its robust designs. The thesis first considered the investigation of this problem
in terms of the shape of Pareto optimal front, effect of number of blades, effect
of RPM, and negative impacts of uncertainties on efficiency and cavitation. The
proposed CRMOPSO was then employed to find the robust designs for the case
study.
The results of the proposed CRPSO and CRGA on the proposed test func-
tions first revealed the merits of the proposed confidence measure. It was ob-
served that the confidence measure prevents the algorithms from favouring non-
confident solutions and making risky decisions during optimisation. The results
showed that the confidence-based algorithms were able to outperform other ro-
bustness handling techniques in the literature. The results also proved the merits
of the proposed test functions, in that they provide very challenging test beds
and allow benchmarking the performance of different algorithms effectively.
The results of the proposed CRMOPSO proved that the confidence measure
and confidence-based robust multi-objective optimisation approach can provide
very promising results. It was observed that confidence-based Pareto domi-
nance prevents non-confident particles from entering the archive. This assisted
CRMOPSO to always have confident solutions in the archive as the leaders to
guide other particles toward robust regions of the search space. The qualita-
tive and quantitative results indicated that CRMOPSO was able to outperform
other robust algorithms in the literature. In addition, the comparative results
of algorithms on the proposed benchmark functions showed that the proposed
robust multi-objective test functions provide very challenging test beds with di-
verse characteristics and allow designers to benchmark and compare different
algorithms efficiently.
The results of the MOPSO algorithm on the real case study first showed
the optimal values for structural parameters and operating conditions. The
importance of uncertainties in the parameters was also revealed, in which both
efficiency and cavitation fluctuated noticeably. The results of CRMOPSO on the case study proved that this algorithm is able to find robust solutions for

expensive, challenging problems efficiently. These results strongly demonstrated the merits of the confidence-based robust optimisation perspective proposed in
this thesis.
The following main conclusions can therefore be drawn:

• The proposed test functions provide very challenging test beds for benchmarking the performance of robust algorithms.

• The proposed robust performance measures allow quantifying the performance of robust multi-objective algorithms and facilitate relative comparisons.

• The proposed confidence measure is able to effectively quantify the confidence level that we have when relying on the previously sampled points.

• The proposed confidence-based relational operators are able to efficiently compare the robust solutions during optimisation and favour the more confident ones.

• The proposed confidence-based Pareto dominance operator also assists designers to find non-dominated robust Pareto optimal solutions by considering their confidence levels.

• The proposed confidence-based robust optimisation prevents algorithms from favouring non-robust solutions during optimisation and consequently allows search agents with high levels of confidence to guide other search agents.

• The two proposed confidence-based robust optimisation approaches improve the reliability of current robust optimisation algorithms that rely only on previously sampled points.

• The proposed confidence-based robust optimisation approaches are readily applicable for solving real problems.

• The proposed systematic design process allows contributors to reliably and conveniently design and compare robust algorithms.

10.2 Achievements and significance


A number of contributions have been made to robust single-objective and multi-objective optimisation, as summarised in Fig. 10.2. The highlighted boxes in green in that figure indicate where the following contributions fit:
[Figure 10.2 maps the contributions onto the field. Within population-based stochastic robust optimisation methods, single-objective robust optimisation (implicit methods, explicit methods, benchmark problems) leads to the confidence measure, confidence-based relational operators, confidence-based robust optimisation, CRPSO and CRGA; multi-objective robust optimisation (implicit methods, explicit methods, benchmark problems, performance metrics) leads to confidence-based Pareto optimality, confidence-based robust multi-objective optimisation, CRMOPSO and a real application. The marine propeller design application spans single-objective optimisation (efficiency, cavitation, thrust), multi-objective optimisation (efficiency vs. thrust, efficiency vs. cavitation) and robust multi-objective optimisation (impacts of uncertainties).]

Figure 10.2: Contributions of the thesis


• A systematic design process has been proposed to design robust algorithms in the fields of robust single- and multi-objective optimisation.

• Due to the lack of standard test functions in the literature of robust single-
objective optimisation, the thesis also considers the collection/investigation
of the current robust test functions and the proposal of more challenging
ones.

• Several test functions were proposed, which can be considered as the first
standard set of test functions in the literature of single-objective robust
optimisation.

• Three frameworks were proposed to generate robust single-objective test functions with desired characteristics and level of difficulty, another novel contribution to the field.

• Several multi-objective test functions were proposed, which can be considered as the first standard set of test functions in the literature of multi-objective robust optimisation.

• Three frameworks were proposed to generate robust multi-objective test functions with desired characteristics and level of difficulty.

• Two standard test suites were proposed for the first time, to be used by
other researchers.

• Two performance measures for quantifying the performance of robust multi-objective algorithms were proposed. There were no performance measures in the literature that considered the robustness of Pareto optimal solutions. Therefore, the proposed set of performance measures is another significant achievement of this research.

• A systematic attempt has been undertaken to improve the reliability of robust algorithms that rely on previously sampled points during optimisation.

• A novel metric called the confidence measure was proposed to quantify the confidence level of solutions based on the status of previously sampled points in the parameter space. There is no such metric in the literature, so it can be considered as a substantial contribution to the field.

• With the proposal of the confidence measure, the relational operators and Pareto dominance have been re-defined to confidently compare search agents in search spaces with single and multiple objectives respectively. The proposed operators were also a fresh contribution to the literature.

• The most significant contribution and achievement of this research was the
proposal of two novel robust optimisation approaches: confidence-based ro-
bust optimisation and confidence-based robust multi-objective optimisation.
These two approaches established two new research branches in the fields
of robust single-objective and multi-objective optimisation. The experi-
mental results presented in the thesis proved that these two concepts are
able to find robust solutions with a very high level of reliability without additional function evaluations.

• Two novel robust algorithms based on PSO and GA were proposed, which
can be considered as the first two confidence-based robust algorithms in the
field of single-objective optimisation.

• One robust multi-objective algorithm based on MOPSO was proposed, which utilises the proposed confidence-based robust multi-objective optimisation approach.

• A problem of propeller design was solved by MOPSO considering two conflicting objectives (efficiency versus cavitation), and optimal values for RPM, number of blades, and parameters were found for the first time.

• Investigating and proving the significant negative impacts of uncertainty on the obtained optimal values for parameters and RPM were other substantial outcomes of this research.

• Robust optimal values for propeller design parameters and RPM were
found by the proposed CRMOPSO for the first time.

In this thesis I have proposed and implemented the necessary components of a systematic robust optimisation design process and demonstrated its ability to
reliably find robust solutions for challenging design problems without additional
computational cost.

10.3 Future work


This thesis opens up many research directions. The main and most interesting
future work would be the investigation of all the proposed concepts in the field
of constrained robust optimisation. The proposed confidence measure, relational operators, Pareto dominance, and confidence-based robust optimisation can all be applied to constrained problems. However, constrained
search spaces have their own difficulties that should be considered and require
special mechanisms.
Due to the similarity of robust optimisation and dynamic optimisation, in-
vestigating the concept of the confidence measure and confidence-based robust
optimisation in dynamic optimisation may also prove advantageous. The high
computational cost of explicit methods and low reliability of implicit methods
in the field of dynamic optimisation have the potential to be improved by the
concepts proposed in this thesis.
The proposed confidence measure can co-operate alongside any kind of ro-
bustness indicators to consider the confidence level of solutions during optimi-
sation. Therefore, another research direction would be the investigation of the
confidence measure to improve different type I and type II robustness handling
techniques in the literature. The proposed confidence-based relational operators
and Pareto dominance can be employed to compare search agents of different
algorithms in the literature, which is another research direction.
All of the benchmark problems in this thesis are unconstrained. For fu-
ture work, therefore, integration of different types of constraints for designing
constrained robust single-objective and multi-objective test functions is worth
consideration. Investigating the suitability of the current binary measures or
proposing new ones for RMOO is recommended for future work as well.
Another important research avenue is to propose confidence-based operators
and mechanisms to find robust Pareto optimal solutions of many-objective prob-
lems. In the field of many-objective optimisation, designing unconstrained and
constrained robust many-objective test problems would also be a valu-
able contribution. Investigating special mechanisms for CRMOPSO algorithms
to handle ‘thick’ fronts is important as well.
Bibliography

[1] NASA Ames National Full-Scale Aerodynamics Complex (nfac). http:


//www.nasa.gov/centers/ames/multimedia/images/2005/nfac.html.
Accessed: 2016-08-16.

[2] Hussein A Abbass, Ruhul Sarker, and Charles Newton. Pde: a pareto-
frontier differential evolution approach for multi-objective optimization
problems. In Evolutionary Computation, 2001. Proceedings of the 2001
Congress on, volume 2, pages 971–978. IEEE, 2001.

[3] Mohammad Ali Abido. Two-level of nondominated solutions approach


to multiobjective particle swarm optimization. In Proceedings of the 9th
annual conference on Genetic and evolutionary computation, pages 726–
733. ACM, 2007.

[4] Enrique Alba and Bernabé Dorronsoro. The exploration/exploitation


tradeoff in dynamic cellular genetic algorithms. Evolutionary Computa-
tion, IEEE Transactions on, 9(2):126–142, 2005.

[5] M. Asafuddoula, H.K. Singh, and T. Ray. Six-sigma robust design opti-
mization using a many-objective decomposition-based evolutionary algo-
rithm. Evolutionary Computation, IEEE Transactions on, 19(4):490–507,
Aug 2015.

[6] Mordecai Avriel. Nonlinear programming: analysis and methods. Courier


Corporation, 2003.

[7] Thomas Back. Evolutionary algorithms in theory and practice. Oxford


Univ. Press, 1996.

[8] Thomas Back, David B Fogel, and Zbigniew Michalewicz. Handbook of


evolutionary computation. IOP Publishing Ltd., 1997.


[9] Thomas Bäck, Ulrich Hammel, and Dirk Wiesmann. Robust design of
multilayer optical coatings by means of evolution strategies. HT014601767,
1998.

[10] Thomas Bäck and Zbigniew Michalewicz. Test landscapes. Handbook of


Evolutionary Computation, pages B, 2, 1997.

[11] Johannes Bader and Eckart Zitzler. Robustness in hypervolume-based


multiobjective search. Technical report, Tech. Rep. TIK 317, Computer
Engineering and Networks Laboratory, ETH Zurich, 2010.

[12] C. Barrico and C.H. Antunes. Robustness analysis in multi-objective optimization using a degree of robustness concept. pages 1887–1892. IEEE, 2006.

[13] C. Barrico and C.H. Antunes. A new approach to robustness analysis in


multi-objective optimization. In 7th International Conference on Multi-
Objective Programming and Goal Programming, 2006.

[14] H.G. Beyer and B. Sendhoff. Robust optimization-a comprehensive survey.


Computer methods in applied mechanics and engineering, 196(33-34):3190–
3218, 2007.

[15] Leonora Bianchi, Marco Dorigo, Luca Maria Gambardella, and Walter J
Gutjahr. A survey on metaheuristics for stochastic combinatorial optimiza-
tion. Natural Computing: an international journal, 8(2):239–287, 2009.

[16] Christian Blum, Jakob Puchinger, Günther R Raidl, and Andrea Roli.
Hybrid metaheuristics in combinatorial optimization: A survey. Applied
Soft Computing, 11(6):4135–4151, 2011.

[17] Ilhem Boussaı̈d, Julien Lepagnot, and Patrick Siarry. A survey on opti-
mization metaheuristics. Information Sciences, 237:82–117, 2013.

[18] J. Branke. Creating robust solutions by means of evolutionary algorithms.


pages 119–128. Springer, 1998.

[19] Jurgen Branke. Evolutionary optimization in dynamic environments.


kluwer academic publishers, 2001.

[20] Jürgen Branke. Reducing the sampling variance when searching for robust
solutions. In Genetic and Evolutionary Computation Conference (GECCO
2001), pages 235–242, 2001.

[21] Jürgen Branke and Kalyanmoy Deb. Integrating user preferences into
evolutionary multi-objective optimization. In Knowledge incorporation in
evolutionary computation, pages 461–477. Springer, 2005.

[22] Jürgen Branke, Thomas Kaußler, and Harmut Schmeck. Guidance in evo-
lutionary multi-objective optimization. Advances in Engineering Software,
32(6):499–507, 2001.

[23] Dimo Brockhoff, Tobias Friedrich, Nils Hebbinghaus, Christian Klein,


Frank Neumann, and Eckart Zitzler. Do additional objectives make a
problem harder? In Proceedings of the 9th annual conference on Genetic
and evolutionary computation, pages 765–772. ACM, 2007.

[24] Dimo Brockhoff, Tobias Friedrich, Nils Hebbinghaus, Christian Klein,


Frank Neumann, and Eckart Zitzler. On the effects of adding objectives
to plateau functions. Evolutionary Computation, IEEE Transactions on,
13(3):591–603, 2009.

[25] SP Brooks and BJT Morgan. Optimization using simulated annealing.


The Statistician, pages 241–257, 1995.

[26] John Carlton. Marine propellers and propulsion. Butterworth-Heinemann,


2012.

[27] Andrew J Chipperfield, Nikolay V Dakev, Peter J Fleming, and James F


Whidborne. Multiobjective robust control using evolutionary algorithms.
In Industrial Technology, 1996.(ICIT’96), Proceedings of The IEEE Inter-
national Conference on, pages 269–273. IEEE, 1996.

[28] C.A.C. Coello, G.T. Pulido, and M.S. Lechuga. Handling multiple objec-
tives with particle swarm optimization. Evolutionary Computation, IEEE
Transactions on, 8(3):256–279, 2004.

[29] Carlos A Coello Coello. Evolutionary multi-objective optimization: some


current research trends and topics that remain to be explored. Frontiers
of Computer Science in China, 3(1):18–30, 2009.

[30] Carlos A Coello Coello and Gary B Lamont. Applications of multi-objective


evolutionary algorithms, volume 1. World Scientific, 2004.

[31] Carlos A Coello Coello, David A Van Veldhuizen, and Gary B Lamont.
Evolutionary algorithms for solving multi-objective problems, volume 242.
Springer, 2002.

[32] Carlos A Coello Coello. Evolutionary multi-objective optimization: a


historical view of the field. Computational Intelligence Magazine, IEEE,
1(1):28–36, 2006.

[33] Carlos A Coello Coello and Maximino Salazar Lechuga. Mopso: A proposal
for multiple objective particle swarm optimization. In Evolutionary Com-
putation, 2002. CEC’02. Proceedings of the 2002 Congress on, volume 2,
pages 1051–1056. IEEE, 2002.

[34] Yann Collette and Patrick Siarry. Three new metrics to measure the con-
vergence of metaheuristics towards the pareto frontier and the aesthetic
of a set of solutions in biobjective optimization. Computers & operations
research, 32(4):773–792, 2005.

[35] A. Colorni, M. Dorigo, V. Maniezzo, et al. Distributed optimization by


ant colonies. In Proceedings of the first European conference on artificial
life, volume 142, pages 134–142. Paris, France, 1991.

[36] Gérard Cornuéjols. Valid inequalities for mixed integer linear programs.
Mathematical Programming, 112(1):3–44, 2008.

[37] Carlos Cruz, Juan R González, and David A Pelta. Optimization in dy-
namic environments: a survey on problems, methods and measures. Soft
Computing, 15(7):1427–1448, 2011.

[38] I. Das and J.E. Dennis. Normal-boundary intersection: A new method


for generating the pareto surface in nonlinear multicriteria optimization
problems. SIAM Journal on Optimization, 8(3):631–657, 1998.

[39] Dipankar Dasgupta and Zbigniew Michalewicz. Evolutionary algorithms


in engineering applications. Springer Science & Business Media, 2013.

[40] David A Daum, Kalyanmoy Deb, and Jürgen Branke. Reliability-based


optimization for multiple constraints with evolutionary algorithms. In
Evolutionary Computation, 2007. CEC 2007. IEEE Congress on, pages
911–918. IEEE, 2007.

[41] Lawrence Davis. Bit-climbing, representational bias, and test suite design.
In ICGA, pages 18–23, 1991.

[42] K. Deb. Advances in evolutionary multi-objective optimization. Search


Based Software Engineering, pages 1–26, 2012.

[43] K. Deb and H. Gupta. Searching for robust pareto-optimal solutions


in multi-objective optimization. Lecture Notes in Computer Science,
3410:150–164, 2005.

[44] K. Deb and H. Gupta. Introducing robustness in multi-objective optimiza-


tion. Evolutionary Computation, 14(4):463–494, 2006.

[45] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist mul-
tiobjective genetic algorithm: Nsga-ii. Evolutionary Computation, IEEE
Transactions on, 6(2):182–197, 2002.

[46] Kalyanmoy Deb. Multi-objective genetic algorithms: Problem difficulties


and construction of test problems. Evolutionary computation, 7(3):205–
230, 1999.

[47] Kalyanmoy Deb, Shubham Gupta, David Daum, Jürgen Branke, Ab-
hishek Kumar Mall, and Dhanesh Padmanabhan. Reliability-based opti-
mization using evolutionary algorithms. Evolutionary Computation, IEEE
Transactions on, 13(5):1054–1074, 2009.

[48] Kalyanmoy Deb, Jeffrey Horn, and David E Goldberg. Multimodal decep-
tive functions. Complex Systems, 7(2):131–154, 1993.

[49] Kalyanmoy Deb, S Karthik, et al. Dynamic multi-objective optimization


and decision-making using modified nsga-ii: a case study on hydro-thermal
power scheduling. In Evolutionary Multi-Criterion Optimization, pages
803–817. Springer, 2007.

[50] Kalyanmoy Deb and Abhishek Kumar. Interactive evolutionary multi-


objective optimization and decision-making using reference direction
method. In Proceedings of the 9th annual conference on Genetic and evo-
lutionary computation, pages 781–788. ACM, 2007.

[51] Kalyanmoy Deb, Dhanesh Padmanabhan, Sulabh Gupta, and Ab-


hishek Kumar Mall. Reliability-based multi-objective optimization us-
ing evolutionary algorithms. In Evolutionary multi-criterion optimization,
pages 66–80. Springer, 2007.

[52] Kalyanmoy Deb, Amrit Pratap, and T Meyarivan. Constrained test prob-
lems for multi-objective evolutionary optimization. In Evolutionary Multi-
Criterion Optimization, pages 284–298. Springer, 2001.

[53] Kalyanmoy Deb, Ankur Sinha, Pekka J Korhonen, and Jyrki Wallenius.
An interactive evolutionary multiobjective optimization method based on
progressively approximated value functions. Evolutionary Computation,
IEEE Transactions on, 14(5):723–739, 2010.

[54] Kalyanmoy Deb, Ankur Sinha, and Saku Kukkonen. Multi-objective test
problems, linkages, and evolutionary methodologies. In Proceedings of the
8th annual conference on Genetic and evolutionary computation, pages
1141–1148. ACM, 2006.

[55] Kalyanmoy Deb, Lothar Thiele, Marco Laumanns, and Eckart Zitzler.
Scalable multi-objective optimization test problems. In Proceedings of
the Congress on Evolutionary Computation (CEC-2002),(Honolulu, USA),
pages 825–830. Proceedings of the Congress on Evolutionary Computation
(CEC-2002),(Honolulu, USA), 2002.

[56] Kalyanmoy Deb, Lothar Thiele, Marco Laumanns, and Eckart Zit-
zler. Scalable test problems for evolutionary multiobjective optimization.
Springer, 2005.

[57] JG Digalakis and KG Margaritis. On benchmarking functions for genetic


algorithms. International journal of computer mathematics, 77(4):481–506,
2001.

[58] C.E.J. Dippel. Using particle swarm optimization for finding robust op-
tima.

[59] Michael Dodson and Geoffrey T Parks. Robust aerodynamic design opti-
mization using polynomial chaos. Journal of Aircraft, 46(2):635–646, 2009.

[60] Russ C Eberhart and James Kennedy. A new optimizer using particle
swarm theory. In Proceedings of the sixth international symposium on
micro machine and human science, volume 1, pages 39–43. New York,
NY, 1995.

[61] Francis Ysidro Edgeworth. Mathematical psychics: An essay on the appli-


cation of mathematics to the moral sciences. C. Keagann Paul, 1881.

[62] Agoston E Eiben and CA Schippers. On evolutionary exploration and


exploitation. Fundamenta Informaticae, 35(1):35–50, 1998.

[63] MS Eldred and John Burkardt. Comparison of non-intrusive polynomial


chaos and stochastic collocation methods for uncertainty quantification.
AIAA paper, 976(2009):1–20, 2009.

[64] Brenden Epps. Openprop v2.3 theory document, 2010.

[65] Brenden Epps, Julie Chalfant, Richard Kimball, Alexandra Techet, Kevin
Flood, and Chrysssostomos Chryssostomidis. Openprop: an open-source
parametric design and analysis tool for propellers. In Proceedings of the
2009 Grand Challenges in Modeling & Simulation Conference, pages 104–
111. Society for Modeling & Simulation International, 2009.

[66] Ali Farhang-Mehr and Shapour Azarm. Diversity assessment of pareto


optimal solution sets: an entropy approach. In Computational Intelligence,
Proceedings of the World on Congress on, volume 1, pages 723–728. IEEE,
2002.

[67] Marco Farina, Kalyanmoy Deb, and Paolo Amato. Dynamic multiobjective
optimization problems: Test cases, approximation, and applications. In
Evolutionary Multi-Criterion Optimization, pages 311–326. Springer, 2003.

[68] Marco Farina, Kalyanmoy Deb, and Paolo Amato. Dynamic multiobjec-
tive optimization problems: test cases, approximations, and applications.
Evolutionary Computation, IEEE Transactions on, 8(5):425–442, 2004.

[69] J. Ferreira, CM Fonseca, JA Covas, and A. Gaspar-Cunha. Evolutionary


multi-objective robust optimization. Advances in evolutionary algorithms,
pages 261–278, 2008.

[70] Lawrence J Fogel, Alvin J Owens, and Michael J Walsh. Artificial intelli-
gence through simulated evolution. 1966.

[71] Carlos M Fonseca and Peter J Fleming. An overview of evolutionary algo-


rithms in multiobjective optimization. Evolutionary computation, 3(1):1–
16, 1995.

[72] Carlos M Fonseca, Peter J Fleming, et al. Genetic algorithms for multiob-
jective optimization: Formulationdiscussion and generalization. In ICGA,
volume 93, pages 416–423, 1993.

[73] Carlos M Fonseca, Joshua D Knowles, Lothar Thiele, and Eckart Zitzler.
A tutorial on the performance assessment of stochastic multiobjective opti-
mizers. In Third International Conference on Evolutionary Multi-Criterion
Optimization (EMO 2005), volume 216, page 240, 2005.

[74] A. Gaspar-Cunha and J. Covas. Robustness in multi-objective optimiza-


tion using evolutionary algorithms. Computational Optimization and Ap-
plications, 39(1):75–96, 2008.

[75] António Gaspar-Cunha, Jose Ferreira, and Gustavo Recio. Evolutionary


robustness analysis for multi-objective optimization: benchmark problems.
Structural and Multidisciplinary Optimization, 49(5):771–793, 2014.

[76] Tiziano Ghisu, Geoffrey T Parks, Jerome P Jarrett, and P John Clarkson.
Robust design optimization of gas turbine compression systems. Journal
of Propulsion and Power, 27(2):282–295, 2011.

[77] Fred Glover. Tabu search-part i. ORSA Journal on computing, 1(3):190–


206, 1989.

[78] Fred Glover. Tabu search-part ii. ORSA Journal on computing, 2(1):4–32,
1990.

[79] Anupriya Gogna and Akash Tayal. Metaheuristics: review and application.
Journal of Experimental & Theoretical Artificial Intelligence, 25(4):503–
526, 2013.

[80] Chi-Keong Goh and Kay Chen Tan. Robust evolutionary multi-objective
optimization. In Evolutionary Multi-objective Optimization in Uncertain
Environments, pages 189–211. Springer, 2009.

[81] Chi Keong Goh, Kay Chen Tan, Chun Yew Cheong, and Yew-Soon Ong.
An investigation on noise-induced features in robust evolutionary multi-
objective optimization. Expert Systems with Applications, 37(8):5960–
5980, 2010.

[82] D.E. Goldberg and J.H. Holland. Genetic algorithms and machine learning.
Machine Learning, 3(2):95–99, 1988.

[83] S Gunawan and S Azarm. Multi-objective robust optimization using a


sensitivity region concept. Structural and Multidisciplinary Optimization,
29(1):50–60, 2005.

[84] Michael Pilegaard Hansen and Andrzej Jaszkiewicz. Evaluating the quality
of approximations to the non-dominated set. IMM, Department of Math-
ematical Modelling, Technical Universityof Denmark, 1998.

[85] Marde Helbig and Andries P Engelbrecht. Benchmarks for dynamic multi-
objective optimisation. In Computational Intelligence in Dynamic and
Uncertain Environments (CIDUE), 2013 IEEE Symposium on, pages 84–
91. IEEE, 2013.

[86] Marde Helbig and Andries P Engelbrecht. Issues with performance mea-
sures for dynamic multi-objective optimisation. In Computational Intel-
ligence in Dynamic and Uncertain Environments (CIDUE), 2013 IEEE
Symposium on, pages 17–24. IEEE, 2013.

[87] Mardé Helbig and Andries P Engelbrecht. Performance measures for


dynamic multi-objective optimisation algorithms. Information Sciences,
250:61–81, 2013.

[88] Alberto Herreros and Enrique Baeyens. Design of multiobjective robust


controllers using genetic algorithms. 1999.

[89] John H Holland. Genetic algorithms. Scientific american, 267(1):66–72,


1992.

[90] John H Holland and Judith S Reitman. Cognitive systems based on adap-
tive algorithms. ACM SIGART Bulletin, (63):49–49, 1977.

[91] Holger H Hoos and Thomas Stützle. Stochastic local search: Foundations
& applications. Elsevier, 2004.

[92] Simon Huband, Philip Hingston, Luigi Barone, and Lyndon While. A
review of multiobjective test problems and a scalable test problem toolkit.
Evolutionary Computation, IEEE Transactions on, 10(5):477–506, 2006.

[93] Jonas Ide and Anita Schöbel. Robustness for uncertain multi-objective
optimization: a survey and analysis of different concepts. OR Spectrum,
38(1):235–271, 2016.

[94] Y. Jin and B. Sendhoff. Trade-off between performance and robustness:


An evolutionary multiobjective approach. pages 68–68. Springer, 2003.

[95] Yaochu Jin. Surrogate-assisted evolutionary computation: Recent ad-


vances and future challenges. Swarm and Evolutionary Computation,
1(2):61–70, 2011.

[96] Yaochu Jin, Markus Olhofer, and Bernhard Sendhoff. A framework for
evolutionary optimization with approximate fitness functions. Evolution-
ary Computation, IEEE Transactions on, 6(5):481–494, 2002.

[97] Yaochu Jin and Bernhard Sendhoff. Constructing dynamic optimization


test problems using the multi-objective optimization concept. In Applica-
tions of Evolutionary Computing, pages 525–536. Springer, 2004.

[98] Andy J Keane. Comparison of several optimization strategies for robust


turbine blade design. Journal of Propulsion and Power, 25(5):1092–1099,
2009.

[99] I.Y. Kim and OL De Weck. Adaptive weighted-sum method for bi-objective
optimization: Pareto front generation. Structural and Multidisciplinary
Optimization, 29(2):149–158, 2005.

[100] Timoleon Kipouros, Daniel M Jaeggi, William N Dawes, Geoffrey T Parks,


A Mark Savill, and P John Clarkson. Biobjective design optimization for
axial compressors using tabu search. AIAA journal, 46(3):701–711, 2008.

[101] Scott Kirkpatrick. Optimization by simulated annealing: Quantitative


studies. Journal of statistical physics, 34(5-6):975–986, 1984.

[102] Scott Kirkpatrick, C Daniel Gelatt, Mario P Vecchi, et al. Optimization


by simmulated annealing. science, 220(4598):671–680, 1983.

[103] Jurgen Klockgether and Hans-Paul Schwefel. Two-phase nozzle and hollow
core jet experiments. In Proc. 11th Symp. Engineering Aspects of Magne-
tohydrodynamics, pages 141–148. Pasadena, CA: California Institute of
Technology, 1970.

[104] J.D. Knowles and D.W. Corne. Approximating the nondominated front
using the pareto archived evolution strategy. Evolutionary computation,
8(2):149–172, 2000.

[105] Johannes Kruisselbrink, Michael Emmerich, and Thomas Bäck. An archive


maintenance scheme for finding robust solutions. In Parallel Problem Solv-
ing from Nature, PPSN XI, pages 214–223. Springer, 2010.

[106] J.W. Kruisselbrink. Evolution strategies for robust optimization. PhD


thesis, 2012.

[107] Apurva Kumar, Andy J Keane, Prasanth B Nair, and Shahrokh Shah-
par. Robust design of compressor fan blades against erosion. Journal of
Mechanical Design, 128(4):864–873, 2006.

[108] Albert YS Lam and Victor OK Li. Chemical-reaction-inspired metaheuris-


tic for optimization. Evolutionary Computation, IEEE Transactions on,
14(3):381–399, 2010.

[109] Ailsa H Land and Alison G Doig. An automatic method for solving discrete
programming problems. In 50 Years of Integer Programming 1958-2008,
pages 105–132. Springer, 2010.

[110] Marco Laumanns, Günter Rudolph, and Hans-Paul Schwefel. A spatial


predator-prey approach to multi-objective optimization: A preliminary
study. In Parallel Problem Solving from Nature-PPSN V, pages 241–249.
Springer, 1998.

[111] Andrew Lewis. personal communication, 2015.



[112] Andrew Lewis, Sanaz Mostaghim, and Ian Scriven. Asynchronous


multi-objective optimisation in unreliable distributed environments. In
Biologically-Inspired Optimisation Methods, pages 51–78. Springer, 2009.

[113] C Li, S Yang, TT Nguyen, EL Yu, X Yao, Y Jin, HG Beyer, and PN Sug-
anthan. Benchmark generator for cec 2009 competition on dynamic op-
timization. University of Leicester, University of Birmingham, Nanyang
Technological University, Tech. Rep, 2008.

[114] Hui Li and Qingfu Zhang. Multiobjective optimization problems with


complicated pareto sets, moea/d and nsga-ii. Evolutionary Computation,
IEEE Transactions on, 13(2):284–302, 2009.

[115] Lingbo Li, Mark Harman, Emmanuel Letier, and Yuanyuan Zhang. Robust
next release problem: handling uncertainty during optimization. In Pro-
ceedings of the 2014 conference on Genetic and evolutionary computation,
pages 1247–1254. ACM, 2014.

[116] JJ Liang, PN Suganthan, and K Deb. Novel composition test functions for
numerical global optimization. In Swarm Intelligence Symposium, 2005.
SIS 2005. Proceedings 2005 IEEE, pages 68–75. IEEE, 2005.

[117] Helena R Lourenço, Olivier C Martin, and Thomas Stützle. Iterated local
search. Springer, 2003.

[118] Guan-Chun Luh and Chung-Huei Chueh. Multi-objective optimal de-


sign of truss structure with immune algorithm. Computers & structures,
82(11):829–844, 2004.

[119] R Timothy Marler and Jasbir S Arora. Survey of multi-objective optimiza-


tion methods for engineering. Structural and multidisciplinary optimiza-
tion, 26(6):369–395, 2004.

[120] Jörn Mehnen, Günter Rudolph, and Tobias Wagner. Evolutionary opti-
mization of dynamic multiobjective functions. 2006.

[121] A. Messac and C.A. Mattson. Generating well-distributed sets of pareto


points for engineering design using physical programming. Optimization
and Engineering, 3(4):431–450, 2002.

[122] Marcin Molga and Czeslaw Smutnicki. Test functions for optimization
needs. Test functions for optimization needs, 2005.

[123] Masaharu Munetomo and D Goldberg. Linkage identification by non-


monotonicity detection for overlapping functions. Evolutionary compu-
tation, 7(4):377–398, 1999.

[124] Tadahiko Murata and Hisao Ishibuchi. Moga: Multi-objective genetic al-
gorithms. In Evolutionary Computation, 1995., IEEE International Con-
ference on, volume 1, page 289. IEEE, 1995.

[125] Patrick Ngatchou, Anahita Zarei, and MA El-Sharkawi. Pareto multi ob-
jective optimization. In Intelligent Systems Application to Power Systems,
2005. Proceedings of the 13th International Conference on, pages 84–91.
IEEE, 2005.

[126] Tatsuya Okabe, Yaochu Jin, Markus Olhofer, and Bernhard Sendhoff. On
test functions for evolutionary multi-objective optimization. In Parallel
Problem Solving from Nature-PPSN VIII, pages 792–802. Springer, 2004.

[127] Yew-Soon Ong, Prasanth B Nair, and Kai Yew Lum. Max-min surrogate-
assisted evolutionary algorithm for robust design. Evolutionary Computa-
tion, IEEE Transactions on, 10(4):392–404, 2006.

[128] Andrzej Osyczka and Sourav Kundu. A new method to solve generalized
multicriteria optimization problems using the simple genetic algorithm.
Structural optimization, 10(2):94–99, 1995.

[129] Ingo Paenke, Jürgen Branke, and Yaochu Jin. Efficient search for robust
solutions by means of evolutionary algorithms and fitness approximation.
Evolutionary Computation, IEEE Transactions on, 10(4):405–420, 2006.

[130] Vilfredo Pareto. Cours d’economie politique. Librairie Droz, 1964.

[131] Konstantinos E Parsopoulos and Michael N Vrahatis. Particle swarm op-


timization method in multiobjective problems. In Proceedings of the 2002
ACM symposium on Applied computing, pages 603–607. ACM, 2002.

[132] CP Pouw. Development of a multiobjective design optimization procedure


for marine propellers. PhD thesis, TU Delft, Delft University of Technol-
ogy, 2008.

[133] Gade Pandu Rangaiah. Multi-objective optimization: techniques and ap-


plications in chemical engineering, volume 1. World Scientific, 2008.

[134] E. Rashedi, H. Nezamabadi-pour, and S. Saryazdi. Gsa: a gravitational


search algorithm. Information Sciences, 179(13):2232–2248, 2009.

[135] T. Ray. Constrained robust optimal design using a multiobjective evolu-


tionary algorithm. In Evolutionary Computation, 2002. CEC ’02. Proceed-
ings of the 2002 Congress on, volume 1, pages 419–424, 2002.

[136] Tapabrata Ray. Constrained robust optimal design using a multiobjec-


tive evolutionary algorithm. In Evolutionary Computation, 2002. CEC’02.
Proceedings of the 2002 Congress on, volume 1, pages 419–424. IEEE, 2002.

[137] Tapabrata Ray and Warren Smith. A surrogate assisted parallel multiob-
jective evolutionary algorithm for robust engineering design. Engineering
Optimization, 38(8):997–1011, 2006.

[138] Ingo Rechenberg. Evolution strategy: Optimization of technical systems


by means of biological evolution. Fromman-Holzboog, Stuttgart, 104, 1973.

[139] Ingo Rechenberg. Evolution strategy. Computational Intelligence: Imitat-


ing Life, 1, 1994.

[140] Margarita Reyes-Sierra and CA Coello Coello. Multi-objective particle


swarm optimizers: A survey of the state-of-the-art. International journal
of computational intelligence research, 2(3):287–308, 2006.

[141] Günter Rudolph. On a multi-objective evolutionary algorithm and its


convergence to the pareto set. In Evolutionary Computation Proceedings,
1998. IEEE World Congress on Computational Intelligence., The 1998
IEEE International Conference on, pages 511–516. IEEE, 1998.

[142] Günter Rudolph and Alexandru Agapie. Convergence properties of some


multi-objective evolutionary algorithms. Citeseer, 2000.

[143] A. Saha, T. Ray, and W. Smith. Towards practical evolutionary robust


multi-objective optimization. pages 2123–2130. IEEE.

[144] Jason R Schott. Fault tolerant design using single and multicriteria genetic
algorithm optimization. Technical report, DTIC Document, 1995.

[145] Ian Scriven, David Ireland, Andrew Lewis, Sanaz Mostaghim, and Jürgen
Branke. Asynchronous multiple objective particle swarm optimisation in
unreliable distributed environments. In Evolutionary Computation, 2008.
CEC 2008.(IEEE World Congress on Computational Intelligence). IEEE
Congress on, pages 2481–2486. IEEE, 2008.

[146] Pranay Seshadri, Shahrokh Shahpar, and Geoffrey T Parks. Robust com-
pressor blades for desensitizing operational tip clearance variations. In
ASME Turbo Expo 2014: Turbine Technical Conference and Exposition,
pages V02AT37A043–V02AT37A043. American Society of Mechanical En-
gineers, 2014.

[147] Y. Shi and R. Eberhart. A modified particle swarm optimizer. In Evolu-


tionary Computation Proceedings, 1998. IEEE World Congress on Compu-
tational Intelligence., The 1998 IEEE International Conference on, pages
69–73. IEEE, 1998.

[148] Margarita Reyes Sierra and Carlos A Coello Coello. Improving pso-based
multi-objective optimization using crowding, mutation and-dominance. In
Evolutionary multi-criterion optimization, pages 505–519. Springer, 2005.

[149] Vinicius LS Silva, Elizabeth F Wanner, Sérgio AAG Cerqueira, and Ri-
cardo HC Takahashi. A new performance metric for multiobjective opti-
mization: The integrated sphere counting. In Evolutionary Computation,
2007. CEC 2007. IEEE Congress on, pages 3625–3630. IEEE, 2007.

[150] Angus R Simpson, Graeme C Dandy, and Laurence J Murphy. Genetic


algorithms compared to other techniques for pipe optimization. Journal of
water resources planning and management, 120(4):423–443, 1994.

[151] Timothy W Simpson, Vasilli Toropov, Vladimir Balabanov, and Felipe AC


Viana. Design and analysis of computer experiments in multidisciplinary
design optimization: a review of how far we have come or not. In 12th
AIAA/ISSMO multidisciplinary analysis and optimization conference, vol-
ume 5, pages 10–12, 2008.

[152] James C Spall. Introduction to stochastic search and optimization: esti-


mation, simulation, and control, volume 65. John Wiley & Sons, 2005.

[153] Nidamarthi Srinivas and Kalyanmoy Deb. Muiltiobjective optimization


using nondominated sorting in genetic algorithms. Evolutionary computa-
tion, 2(3):221–248, 1994.

[154] R. Storn and K. Price. Differential evolution–a simple and efficient heuristic
for global optimization over continuous spaces. Journal of global optimiza-
tion, 11(4):341–359, 1997.

[155] Ponnuthurai N Suganthan, Nikolaus Hansen, Jing J Liang, Kalyanmoy


Deb, Y-Po Chen, Anne Auger, and S Tiwari. Problem definitions and
evaluation criteria for the cec 2005 special session on real-parameter opti-
mization. KanGAL Report, 2005005, 2005.

[156] El-Ghazali Talbi. Metaheuristics: from design to implementation, vol-


ume 74. John Wiley & Sons, 2009.

[157] Kay Chen Tan, Tong Heng Lee, and Eik Fun Khor. Evolutionary al-
gorithms for multi-objective optimization: performance assessments and
comparisons. Artificial intelligence review, 17(4):251–290, 2002.

[158] Masahiro Tanaka, Hikaru Watanabe, Yasuyuki Furukawa, and Tetsuzo


Tanino. Ga-based decision support system for multicriteria optimization.
In Systems, Man and Cybernetics, 1995. Intelligent Systems for the 21st
Century., IEEE International Conference on, volume 2, pages 1556–1561.
IEEE, 1995.

[159] Shigeyoshi Tsutsui and Ashish Ghosh. Genetic algorithms with a robust
solution searching scheme. Evolutionary Computation, IEEE Transactions
on, 1(3):201–208, 1997.

[160] David A Van Veldhuizen. Multiobjective evolutionary algorithms: classifi-


cations, analyses, and new innovations. Technical report, DTIC Document,
1999.

[161] David A Van Veldhuizen and Gary B Lamont. Multiobjective evolutionary


algorithm research: A history and analysis. Technical report, Citeseer,
1998.

[162] Felipe AC Viana, Timothy W Simpson, Vladimir Balabanov, and Vasilli


Toropov. Special section on multidisciplinary design optimization: Meta-
modeling in multidisciplinary design optimization: How far have we really
come? AIAA Journal, 52(4):670–690, 2014.

[163] JK Vis. Particle swarm optimizer for finding robust optima. 2009.

[164] R Vlennet, C Fonteix, and Ivan Marc. Multicriteria optimization using a


genetic algorithm for determining a pareto set. International Journal of
Systems Science, 27(2):255–260, 1996.

[165] Robert W Walters and Luc Huyse. Uncertainty analysis for fluid mechanics
with applications. Technical report, DTIC Document, 2002.

[166] Darrell Whitley, Soraya Rana, John Dzubera, and Keith E Mathias. Evalu-
ating evolutionary algorithms. Artificial intelligence, 85(1):245–276, 1996.

[167] L Darrell Whitley. Fundamental principles of deception in genetic search.


Citeseer, 1991.

[168] Dirk Wiesmann, Ulrich Hammel, and Thomas Back. Robust design of mul-
tilayer optical coatings by means of evolutionary algorithms. Evolutionary
Computation, IEEE Transactions on, 2(4):162–167, 1998.

[169] D.H. Wolpert and W.G. Macready. No free lunch theorems for optimiza-
tion. Evolutionary Computation, IEEE Transactions on, 1(1):67–82, 1997.

[170] Jin Wu and Shapour Azarm. Metrics for quality assessment of a multi-
objective design optimization solution set. Journal of Mechanical Design,
123(1):18–25, 2001.

[171] Guanmo Xie. Optimal preliminary propeller design based on multi-


objective optimization approach. Procedia Engineering, 16:278–283, 2011.

[172] Dongbin Xiu and Jan S Hesthaven. High-order collocation methods for
differential equations with random inputs. SIAM Journal on Scientific
Computing, 27(3):1118–1139, 2005.

[173] Dongbin Xiu and George Em Karniadakis. Modeling uncertainty in flow


simulations via generalized polynomial chaos. Journal of computational
physics, 187(1):137–167, 2003.

[174] Xin-She Yang. Appendix a: Test problems in optimization. Engineering


Optimization, pages 261–266, 2010.

[175] Xin Yao, Yong Liu, and Guangming Lin. Evolutionary programming made
faster. Evolutionary Computation, IEEE Transactions on, 3(2):82–102,
1999.

[176] Yaochu Jin and J. Branke. Evolutionary optimization in uncertain


environments-a survey. Evolutionary Computation, IEEE Transactions on,
9(3):303–317, 2005.

[177] Qingfu Zhang and Hui Li. Moea/d: A multiobjective evolutionary algo-
rithm based on decomposition. Evolutionary Computation, IEEE Trans-
actions on, 11(6):712–731, 2007.

[178] Qingfu Zhang, Aimin Zhou, Shizheng Zhao, Ponnuthurai Nagaratnam Sug-
anthan, Wudong Liu, and Santosh Tiwari. Multiobjective optimization test
instances for the cec 2009 special session and competition. University of
Essex, Colchester, UK and Nanyang Technological University, Singapore,
Special Session on Performance Assessment of Multi-Objective Optimiza-
tion Algorithms, Technical Report, 2008.

[179] Aimin Zhou, Bo-Yang Qu, Hui Li, Shi-Zheng Zhao, Ponnuthurai Nagarat-
nam Suganthan, and Qingfu Zhang. Multiobjective evolutionary algo-
rithms: A survey of the state of the art. Swarm and Evolutionary Com-
putation, 1(1):32–49, 2011.

[180] Eckart Zitzler. Evolutionary algorithms for multiobjective optimization:


Methods and applications, volume 63. Shaker Ithaca, 1999.

[181] Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele. Comparison of mul-
tiobjective evolutionary algorithms: Empirical results. Evolutionary com-
putation, 8(2):173–195, 2000.

[182] Eckart Zitzler and Lothar Thiele. Multiobjective optimization using evo-
lutionary algorithms-a comparative case study. In Parallel problem solving
from nature-PPSN V, pages 292–301. Springer, 1998.

[183] Eckart Zitzler and Lothar Thiele. Multiobjective evolutionary algorithms:


a comparative case study and the strength pareto approach. Evolutionary
Computation, IEEE Transactions on, 3(4):257–271, 1999.

[184] Eckart Zitzler, Lothar Thiele, Marco Laumanns, Carlos M Fonseca, and
Viviane Grunert Da Fonseca. Performance assessment of multiobjective
optimizers: An analysis and review. Evolutionary Computation, IEEE
Transactions on, 7(2):117–132, 2003.
Appendix A

Single-objective robust test functions

Some of the test problems of this work have been adopted from [18, 58, 163, 105].
The full descriptions of the benchmark functions utilised and proposed are as follows:

A.1 Test functions in the literature

A.1.1 TP1
f(\vec{x}) = 1 - \prod_{i=1}^{N} H(x_i) + \frac{1}{100}\sum_{i=1}^{N} x_i^2 \qquad (A.1)

H(x_i) = \begin{cases} 0 & x_i < 0 \\ 1 & \text{otherwise} \end{cases} \qquad (A.2)

Search space: \vec{x} \in [-10, 10]^N
Input noise: \vec{\delta} \sim \vec{U}(-1, 1)
Robust optimum fitness (nominal value) ≈ 0.02 (2D), 0.05 (5D)
Robust optimum fitness (expected value) ≈ 0.026 (2D), 0.066 (5D)
Robust optimum location: 2D (1, 1), 5D (1, 1, 1, 1, 1)
The 5D version of this function has been used in the experimental results.
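As a quick numerical check of the reconstruction above (a sketch, not code from the thesis), the nominal fitness at the quoted 5D robust optimum and a Monte-Carlo estimate of its expected fitness under the stated uniform noise reproduce the values listed for TP1.

```python
import numpy as np

def tp1(x):
    """TP1 as reconstructed in Eqs. (A.1)-(A.2)."""
    x = np.asarray(x, dtype=float)
    h = np.where(x < 0.0, 0.0, 1.0)
    return 1.0 - np.prod(h) + np.sum(x**2) / 100.0

def expected_fitness(f, x, low, high, n_samples=50_000, seed=0):
    """Monte-Carlo estimate of the expected fitness under the additive
    uniform input noise used throughout this appendix."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    noise = rng.uniform(low, high, size=(n_samples, x.size))
    return float(np.mean([f(x + d) for d in noise]))

x_star = np.ones(5)
print(round(tp1(x_star), 3))                           # 0.05, the nominal 5D value
print(round(expected_fitness(tp1, x_star, -1, 1), 3))  # ~0.067, matching the quoted ~0.066
```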


A.1.2 TP2
f(\vec{x}) = \sqrt{\|\vec{x}\|} + e^{-5\|\vec{x}\|^2} \qquad (A.3)

Search space: ~x ∈ [−4, 4]N


Input noise: ~δ ∼ U
~ (−0.5, 0.5)
Robust optimum fitness (nominal value) ≈ 0.924(2D), 1.135(5D)
Robust optimum fitness (expected value) ≈ 1.005(2D), 0.967(5D)
Robust optimum location 2D(0.735, 0.0), 5D(−0.334, −0.056, −0.024, −0.023, 0.074)
The 5D version of this function has been used in the experimental results.
Note that despite the fact that the inventor of this test function suggested the
robust optima mentioned above, a more robust optimum is found in (0.735, 0.0).
This point has an expectation of 1.005 and a nominal value of 0.924. For the 5-
variable problem, a more robust optimum is located in (−0.334, −0.056, −0.024, −0.023, 0.074).
This point has an expectation of 0.967 and a nominal value of 1.135.

A.1.3 TP3
f(\vec{x}) = \frac{5}{5-\sqrt{5}} - \max\{f_0(\vec{x}), f_1(\vec{x}), f_2(\vec{x}), f_3(\vec{x})\}    (A.4)

f_0(\vec{x}) = \frac{1}{10} e^{0.5\|\vec{x}\|}    (A.5)

f_1(\vec{x}) = \frac{5}{5-\sqrt{5}} \left(1 - \sqrt{\frac{\|\vec{x}+5\|}{5\sqrt{N}}}\right)    (A.6)

f_2(\vec{x}) = c_1 \left(1 - \left(\frac{\|\vec{x}+5\|}{5\sqrt{N}}\right)^{4}\right)    (A.7)

f_3(\vec{x}) = c_2 \left(1 - \left(\frac{\|\vec{x}-5\|}{5\sqrt{N}}\right)^{d_2}\right)    (A.8)

c_1 = \frac{625}{624}, \quad c_2 = 1.5975, \quad d_2 = 1.1513    (A.9)

Search space: \vec{x} \in [-10, 10]^N
Input noise: \vec{\delta} \sim \vec{U}(-1, 1)
Robust optimum fitness (nominal value) ≈ 0.2115 (2D), 0.2115 (5D)
Robust optimum fitness (expected value) ≈ 0.33 (2D), 0.34 (5D)
Robust optimum location: (5, 5) in 2D, (5, 5, 5, 5, 5) in 5D
The 5D version of this function has been used in the experimental results.

A.1.4 TP4
f(\vec{x}) = c - \frac{1}{N}\sum_{i=1}^{N} f_1(x_i)    (A.10)

f_1(x_i) = \begin{cases} -(x_i + 1)^2 + 1 & -2 \le x_i < 0 \\ c \cdot 2^{-8|x_i - 1|} & 0 \le x_i < 2 \end{cases}    (A.11)

c = 1.3    (A.12)

Search space: \vec{x} \in [-2, 2]^N
Input noise: \vec{\delta} \sim \vec{U}(-0.5, 0.5)
Robust optimum fitness (nominal value) ≈ 0.3 (2D), 0.3 (5D)
Robust optimum fitness (expected value) ≈ 0.37 (2D), 0.366 (5D)
Robust optimum location: (-1, -1) in 2D, (-1, -1, -1, -1, -1) in 5D
The 5D version of this function has been used in the experimental results.

A.1.5 TP5
f(\vec{x}) = \sum_{i=1}^{N} f_1(x_i)    (A.13)

f_1(x_i) = \begin{cases} 1 & -0.5 \le x_i < 0.5 \\ 0 & \text{otherwise} \end{cases}    (A.14)

Search space: \vec{x} \in [-0.5, 0.5]^N
Input noise: \vec{\delta} \sim \vec{U}(-0.2, 0.2)
Robust optimum fitness (nominal value) ≈ -2.0 (2D), -20.0 (20D)
Robust optimum fitness (expected value) ≈ -2.0 (2D), -20.0 (20D)
Robust optimum location: (0, 0) in 2D, (0, 0, ..., 0) in 20D
The 20D version of this function has been used in the experimental results.

A.1.6 TP6
f(\vec{x}) = \sum_{i=1}^{N} f_1(x_i)    (A.15)

f_1(x_i) = \begin{cases} 1 & -0.6 \le x_i < -0.2 \\ 1.25 & (0.2 \le x_i < 0.36) \vee (0.44 \le x_i < 0.6) \\ 0 & \text{otherwise} \end{cases}    (A.16)

Search space: \vec{x} \in [-1.5, 1.5]^N
Input noise: \vec{\delta} \sim \vec{U}(-0.2, 0.2)
Robust optimum fitness (nominal value) ≈ -2 (2D), -20 (20D)
Robust optimum fitness (expected value) ≈ -2.0 (2D), -20 (20D)
Robust optimum location: (-0.4, -0.4) in 2D, (-0.4, -0.4, ..., -0.4) in 20D
The 20D version of this function has been used in the experimental results.

A.1.7 TP7
f(\vec{x}) = 1 - \frac{1}{N}\sum_{i=1}^{N} f_1(x_i)    (A.17)

f_1(x_i) = \begin{cases} x_i + 0.8 & -0.8 \le x_i < 0.2 \\ 0 & \text{otherwise} \end{cases}    (A.18)

Search space: \vec{x} \in [-1, 1]^N
Input noise: \vec{\delta} \sim \vec{U}(-0.2, 0.2)
Robust optimum fitness (nominal value) ≈ 0 (2D), 0 (20D)
Robust optimum fitness (expected value) ≈ -1.6 (2D), -16.0 (20D)
Robust optimum location: (0.2, 0.2) in 2D, (0.2, 0.2, ..., 0.2) in 20D
The 20D version of this function has been used in the experimental results.

A.1.8 TP8

f(\vec{x}) = -\frac{1}{N}\sum_{i=1}^{N} g(x_i)    (A.19)

g(x_i) = \begin{cases} e^{-2\ln 2\left(\frac{x_i-0.1}{0.8}\right)^2} \sqrt{|\sin(5\pi x_i)|} & 0.4 < x_i \le 0.6 \\ e^{-2\ln 2\left(\frac{x_i-0.1}{0.8}\right)^2} \sin^6(5\pi x_i) & \text{otherwise} \end{cases}    (A.20)

Search space: \vec{x} \in [0, 1]^N
Input noise: \vec{\delta} \sim \vec{U}(-0.0625, 0.0625)
Robust optimum fitness (nominal value) ≈ -1 (2D), -1 (20D)
Robust optimum fitness (expected value) ≈ -0.91 (2D), -0.91 (20D)
Robust optimum location: (0.1, 0.1) in 2D
The 2D version of this function has been used in the experimental results.

A.1.9 TP9

f(\vec{x}) = \begin{cases} \cos(\frac{1}{2}x_i) + 1 & -2\pi \le x_i < 2\pi \\ 1.1\cos(x_i + \pi) + 1.1 & 2\pi \le x_i < 4\pi \\ 0 & \text{otherwise} \end{cases}    (A.21)

Search space: \vec{x} \in [-2\pi, 4\pi]^N
Input noise: \vec{\delta} \sim \vec{U}(-2, 2)
Robust optimum fitness (nominal value) ≈ -4 (2D), -40 (20D)
Robust optimum fitness (expected value) ≈ -3.68 (2D), -36.8 (20D)
Robust optimum location: (0, 0, ..., 0) in 20D
The 20D version of this function has been used in the experimental results.

A.2 Test functions generated by the proposed framework I
A.2.1 TP10

f(\vec{x}) = \begin{cases} f_1(\vec{x}) & (x_1 \le 0) \wedge (x_2 \ge 0) \\ f_2(\vec{x}) & (x_1 \ge 0) \wedge (x_2 \le 0) \\ f_3(\vec{x}) & (x_1 > 0) \wedge (x_2 > 0) \\ f_4(\vec{x}) & (x_1 < 0) \wedge (x_2 < 0) \end{cases}    (A.22)

f_1(\vec{x}) = \sum_{i=1}^{N} x_i^2    (A.23)

f_2(\vec{x}) = \max_i\{|x_i|, 1 \le i \le N\}    (A.24)

f_3(\vec{x}) = \sum_{i=1}^{N}\left[x_i^2 - 10\cos(2\pi x_i) + 10\right]    (A.25)

f_4(\vec{x}) = -20e^{-0.2\sqrt{\frac{1}{N}\sum_{i=1}^{N}x_i^2}} - e^{\frac{1}{N}\sum_{i=1}^{N}\cos(2\pi x_i)} + 20 + e    (A.26)

Search space: \vec{x} \in [-4, 4]^N
Input noise: \vec{\delta} \sim \vec{U}(-1, 1)
Robust optimum fitness (nominal value) ≈ 0.8 (2D)
Robust optimum fitness (expected value) ≈ 1.46 (2D)
Robust optimum location: (-2, 2) in 2D
The 2D version of this function has been used in the experimental results.
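The following Python sketch (an illustrative re-implementation of Eqs. (A.22)–(A.26) as reconstructed above, not the thesis code) shows how TP10 dispatches to its sphere, max, Rastrigin, and Ackley sub-functions according to the signs of x1 and x2; ties on the axes are resolved by the first matching case, which is an implementation choice since the original conditions overlap at the boundaries:

import numpy as np

def tp10(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    f1 = np.sum(x ** 2)                                    # sphere
    f2 = np.max(np.abs(x))                                 # max of |x_i|
    f3 = np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)  # Rastrigin
    f4 = (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
          - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)  # Ackley
    if x[0] <= 0 and x[1] >= 0:
        return f1
    elif x[0] >= 0 and x[1] <= 0:
        return f2
    elif x[0] > 0 and x[1] > 0:
        return f3
    else:                                                  # x1 < 0 and x2 < 0
        return f4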

A.3 Proposed biased test functions


A.3.1 TP11 - biased 1

f(\vec{x}) = \begin{cases} Y(\vec{x})G(\vec{x}) & x_1^2 + x_2^2 > 25 \\ G(\vec{x})B(\vec{x}) & x_1^2 + x_2^2 \le 25 \end{cases}    (A.27)

Y(\vec{x}) = 1 - \prod_{i=1}^{N} H(x_i) + \frac{1}{100}\sum_{i=1}^{N} x_i^2    (A.28)

H(x_i) = \begin{cases} 0 & x_i < 0 \\ 1 & \text{otherwise} \end{cases}    (A.29)

G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2    (A.30)

B(\vec{x}) = \left(\sum_{i=1}^{N} |x_i|\right)^{\theta}    (A.31)

\theta = 0.1
Search space: \vec{x} \in [-10, 10]^N
Input noise: \vec{\delta} \sim \vec{U}(-1, 1)
Robust optimum fitness (nominal value): 0.02
Robust optimum fitness (expected value): 3.58
Robust optimum location: (1, 1, 0, 0, ..., 0)
The 10D version of this function has been used in the experimental results.
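A minimal Python sketch of TP11, following Eqs. (A.27)–(A.31) as reconstructed above (illustrative only), is given below; it shows how the bias term B(x) replaces Y(x) inside the region x1^2 + x2^2 <= 25:

import numpy as np

def tp11(x, theta=0.1):
    x = np.asarray(x, dtype=float)
    h = np.where(x < 0, 0.0, 1.0)
    y = 1.0 - np.prod(h) + np.sum(x ** 2) / 100.0   # Y(x), the same construction as TP1
    g = 50.0 * np.sum(x[2:] ** 2)                   # G(x) = sum over i >= 3 of 50 * x_i^2
    b = np.sum(np.abs(x)) ** theta                  # B(x), the bias term
    if x[0] ** 2 + x[1] ** 2 > 25.0:
        return y * g
    return g * b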

A.3.2 TP12 - biased 2

f(\vec{x}) = \frac{1}{N}\sum_{i=1}^{2}\left(c - y(x_i)\right) G(\vec{x}) B(\vec{x})    (A.32)

y(x_i) = \begin{cases} -(x_i + 1)^2 + 1 & -2 \le x_i < 0 \\ c \times 2^{x_i - 1} & 0 \le x_i < 2 \end{cases}    (A.33)

G(\vec{x}) = \sum_{i=3}^{N} 50x_i^2    (A.34)

B(\vec{x}) = \left(\sum_{i=1}^{N} |x_i|\right)^{\theta}    (A.35)

\theta = 0.1
c = 1.3
Search space: \vec{x} \in [-10, 10]^N
Input noise: \vec{\delta} \sim \vec{U}(-1, 1)
Robust optimum fitness (nominal value): 0.2663
Robust optimum fitness (expected value): 8720
Robust optimum location: (5, 5, 0, 0, ..., 0)
The 10D version of this function has been used in the experimental results.

A.4 Proposed deceptive test functions


A.4.1 TP13
f(\vec{x}) = (H(x_1) + H(x_2)) \times G(\vec{x}) - 1    (A.36)

H(x) = \frac{1}{2} - 0.3e^{-\left(\frac{x-0.4}{0.004}\right)^2} - 0.5e^{-\left(\frac{x-0.5}{0.05}\right)^2} - 0.3e^{-\left(\frac{x-0.6}{0.004}\right)^2} + \sin(\pi x)    (A.37)

G(\vec{x}) = \left(\sum_{i=3}^{N} 50x_i^2\right) + 1    (A.38)

Search space: \vec{x} \in [0.2, 0.8]^N
Input noise: \vec{\delta} \sim \vec{U}(-0.01, 0.01)
Robust optimum fitness (nominal value): 0.2663
Robust optimum fitness (expected value): 1.0395
Robust optimum location: (0.5, 0.5, 0, 0, ..., 0)
The 10D version of this function has been used in the experimental results.

A.4.2 TP14
f(\vec{x}) = (H(x_1) + H(x_2)) \times G(\vec{x}) - 1    (A.39)

H(x) = \frac{1}{2} - 0.5e^{-\left(\frac{x-0.5}{0.05}\right)^2} - \sum_{i=1}^{11}\left(0.3e^{-\left(\frac{x-0.04i}{0.004}\right)^2} + 0.3e^{-\left(\frac{x-(1-0.04i)}{0.004}\right)^2}\right) + \sin(\pi x)    (A.40)

G(\vec{x}) = \left(\sum_{i=3}^{N} 50x_i^2\right) + 1    (A.41)

Search space: \vec{x} \in [0.22, 0.78]^N
Input noise: \vec{\delta} \sim \vec{U}(-0.01, 0.01)
Robust optimum fitness (nominal value): 1
Robust optimum fitness (expected value): 1.0396
Robust optimum location: (0.5, 0.5, 0, 0, ..., 0)
The 10D version of this function has been used in the experimental results.

A.4.3 TP15

f(\vec{x}) = \begin{cases} (H(x_1) + H(x_2)) \times G(\vec{x}) - 1 & (x_1 \le 1) \wedge (x_2 \le 1) \\ (H(2-x_1) + H(2-x_2)) \times G(\vec{x}) - 1 & (x_1 > 1) \wedge (x_2 > 1) \\ (H(2-x_1) + H(x_2)) \times G(\vec{x}) - 1 & (x_1 > 1) \wedge (x_2 \le 1) \\ (H(x_1) + H(2-x_2)) \times G(\vec{x}) - 1 & (x_1 \le 1) \wedge (x_2 > 1) \end{cases}    (A.42)

H(x) = e^{-3x}\sin(8\pi x) + x    (A.43)

G(\vec{x}) = \left(\sum_{i=3}^{N} 50x_i^2\right) + 1    (A.44)

Search space: \vec{x} \in [0.79, 1.21]^N
Input noise: \vec{\delta} \sim \vec{U}(-0.01, 0.01)
Robust optimum fitness (nominal value): 1.721
Robust optimum fitness (expected value): 1.7451
Robust optimum locations: (1.1, 1.1, 0, 0, ..., 0), (1.1, 0.9, 0, 0, ..., 0), (0.9, 0.9, 0, 0, ..., 0), (0.9, 1.1, 0, 0, ..., 0)
The 10D version of this function has been used in the experimental results.

A.5 Proposed multi-modal robust test functions


A.5.1 TP16
f(\vec{x}) = (H(x_1) + H(x_2)) \times G(\vec{x}) - 1.399    (A.45)

H(x) = \frac{3}{2} - 0.5e^{-\left(\frac{x-0.5}{0.04}\right)^2} - \sum_{i=0}^{16}\left(0.8e^{-\left(\frac{x-0.0063i}{0.004}\right)^2} + 0.8e^{-\left(\frac{x-(1-0.0063i)}{0.004}\right)^2}\right)    (A.46)

G(\vec{x}) = \left(\sum_{i=3}^{N} 50x_i^2\right) + 1    (A.47)

Search space: \vec{x} \in [0, 1]^N
Input noise: \vec{\delta} \sim \vec{U}(-0.01, 0.01)
Robust optimum fitness (nominal value): 0.601
Robust optimum fitness (expected value) ≈ 0.6484
Robust optimum location: (0.5, 0.5, 0, 0, ..., 0)
The 10D version of this function has been used in the experimental results.
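The following Python sketch implements TP16 as reconstructed above; note that the +1 offset in G is an inference here, based on the reported nominal value and on the analogous definitions in Eqs. (A.38), (A.41), and (A.53), rather than something stated explicitly in the extracted equation:

import numpy as np

def tp16_h(x):
    # H(x) of Eq. (A.46): one wide well at 0.5 plus two trains of narrow wells near the ends of [0, 1].
    h = 1.5 - 0.5 * np.exp(-((x - 0.5) / 0.04) ** 2)
    for i in range(17):  # i = 0, ..., 16
        h -= 0.8 * np.exp(-((x - 0.0063 * i) / 0.004) ** 2)
        h -= 0.8 * np.exp(-((x - (1.0 - 0.0063 * i)) / 0.004) ** 2)
    return h

def tp16(x):
    x = np.asarray(x, dtype=float)
    g = 50.0 * np.sum(x[2:] ** 2) + 1.0
    return (tp16_h(x[0]) + tp16_h(x[1])) * g - 1.399

x_star = np.array([0.5, 0.5] + [0.0] * 8)  # reported 10D robust optimum
print(tp16(x_star))                        # approximately 0.601, the nominal value listed above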

A.5.2 TP17
f(\vec{x}) = (H(x_1) + H(x_2)) \times G(\vec{x}) - 1.399    (A.48)

H(x) = \frac{3}{2} - 0.8e^{-\left(\frac{x-0.5}{0.04}\right)^2} - \sum_{i=0}^{16}\left(0.5e^{-\left(\frac{x-0.0063i}{0.004}\right)^2} + 0.5e^{-\left(\frac{x-(1-0.0063i)}{0.004}\right)^2}\right)    (A.49)

G(\vec{x}) = \left(\sum_{i=3}^{N} 50x_i^2\right) + 1    (A.50)

Search space: \vec{x} \in [0, 1]^N
Input noise: \vec{\delta} \sim \vec{U}(-0.01, 0.01)
Robust optimum fitness (nominal value): 0.001
Robust optimum fitness (expected value): 0.529
Robust optimum location: (0.5, 0.5, 0, 0, ..., 0)
The 10D version of this function has been used in the experimental results.

A.6 Proposed flat robust test function


A.6.1 TP18
f(\vec{x}) = (H(x_1) + H(x_2)) \times G(\vec{x}) - 2    (A.51)

H(x) = \frac{1}{2} - \left(0.2e^{-\left(\frac{x-0.95}{0.03}\right)^2} + 0.2e^{-\left(\frac{x-0.05}{0.01}\right)^2}\right)    (A.52)

G(\vec{x}) = \left(\sum_{i=3}^{N} 50x_i^2\right) + 1    (A.53)

Search space: \vec{x} \in [0, 1]^N
Input noise: \vec{\delta} \sim \vec{U}(-0.01, 0.01)
Robust optimum fitness (nominal value): 0
Robust optimum fitness (expected value): 0.0412
Robust optimum location: (0.95, 0.95, 0, 0, ..., 0)
The 10D version of this function has been used in the experimental results.

A.7 Test functions generated by the proposed frameworks II and III

A.7.1 TP19

f(\vec{x}) = \frac{1}{\sqrt{2\pi}} e^{-0.5\left(\frac{\sum_{i=1}^{n}(x_i-1.5)^2}{0.5}\right)^2} + \frac{1}{\sqrt{2\pi}} e^{-0.5\left(\frac{\sum_{i=1}^{n}(x_i-0.5)^2}{0.01}\right)^2}    (A.54)

Search space: \vec{x} \in [0, 2]^N
Input noise: \vec{\delta} \sim \vec{U}(-0.2, 0.2)
Robust optimum fitness (nominal value): -0.3989
Robust optimum fitness (expected value): -0.3981
Robust optimum location: (1.5, 1.5)
The 2D version of this function has been used in the experimental results.

A.7.2 TP20
f(\vec{x}) = -H(x_1) \times H(x_2) + G(\vec{x})    (A.55)

H(x) = \frac{e^{2x^2}\sin\left(8\pi\left(x + \frac{\pi}{16}\right)\right) - x}{3} + 0.5    (A.56)

G(\vec{x}) = 10\,\frac{\sum_{i=3}^{N} x_i}{N}    (A.57)

Search space: \vec{x} \in [0, 1]^N
Input noise: \vec{\delta} \sim \vec{U}(-0.02, 0.02)
Robust optimum fitness (nominal value): -3.83
Robust optimum fitness (expected value): -3.65
Robust optimum location: (0.8, 0.8, 0, 0, ..., 0)
The 10D version of this function has been used in the experimental results.
Appendix B

Multi-objective robust test functions

B.1 Deb’s test functions


B.1.1 RMTP1
f_1(\vec{x}) = x_1    (B.1)

f_2(\vec{x}) = h(x_1) + g(\vec{x})S(x_1)    (B.2)

subject to: 0 \le x_1 \le 1, \quad -1 \le x_i \le 1, \quad i = 2, 3, ..., n    (B.3)

where: h(x_1) = 1 - x_1^2    (B.4)

g(\vec{x}) = \sum_{i=2}^{n}\left(10 + x_i^2 - 10\cos(4\pi x_i)\right)    (B.5)

S(x_1) = \frac{1}{0.2 + x_1} + x_1^2    (B.6)


B.1.2 RMTP2
f_1(\vec{x}) = x_1    (B.7)

f_2(\vec{x}) = h(x_1) + g(\vec{x})S(x_1)    (B.8)

subject to: 0 \le x_1 \le 1, \quad -1 \le x_i \le 1, \quad i = 2, 3, ..., n    (B.9)

where: h(x_1) = 1 - x_1^2    (B.10)

g(\vec{x}) = \sum_{i=2}^{n}\left(10 + x_i^2 - 10\cos(4\pi x_i)\right)    (B.11)

S(x_1) = \frac{1}{0.2 + x_1} + 10x_1^2    (B.12)

B.1.3 RMTP3
f_1(\vec{x}) = x_1    (B.13)

f_2(\vec{x}) = h(x_2) + (g(\vec{x}) + S(x_1))    (B.14)

subject to: 0 \le x_1, x_2 \le 1, \quad -1 \le x_i \le 1, \quad i = 3, 4, ..., n    (B.15)

where: h(x_2) = 2 - 0.8e^{-\left(\frac{x_2-0.35}{0.25}\right)^2} - e^{-\left(\frac{x_2-0.85}{0.03}\right)^2}    (B.16)

g(\vec{x}) = \sum_{i=3}^{n} 50x_i^2    (B.17)

S(x_1) = 1 - x_1    (B.18)

B.1.4 RMTP4
f_1(\vec{x}) = x_1    (B.19)

f_2(\vec{x}) = x_2    (B.20)

f_3(\vec{x}) = h(x_1, x_2) + g(\vec{x})S(x_1, x_2)    (B.21)

subject to: 0 \le x_1, x_2 \le 1, \quad -1 \le x_i \le 1, \quad i = 3, 4, ..., n    (B.22)

where: h(x_1, x_2) = 2 - x_1^2 - x_2^2    (B.23)

g(\vec{x}) = \sum_{i=3}^{n}\left(10 + x_i^2 - 10\cos(4\pi x_i)\right)    (B.24)

S(x_1, x_2) = \frac{0.75}{0.2 + x_1} + 10x_1^8 + \frac{0.75}{0.2 + x_2} + 10x_2^8    (B.25)

B.1.5 RMTP5
f_1(\vec{x}) = x_1    (B.26)

f_2(\vec{x}) = x_2    (B.27)

f_3(\vec{x}) = h(x_3)\left(g(\vec{x}) + S(x_1, x_2)\right)    (B.28)

subject to: 0 \le x_1, x_2, x_3 \le 1, \quad -1 \le x_i \le 1, \quad i = 4, 5, ..., n    (B.29)

where: h(x_3) = 2 - 0.8e^{-\left(\frac{x_3-0.35}{0.25}\right)^2} - e^{-\left(\frac{x_3-0.85}{0.03}\right)^2}    (B.30)

g(\vec{x}) = \sum_{i=3}^{n}\left(10 + x_i^2 - 10\cos(4\pi x_i)\right)    (B.31)

S(x_1, x_2) = 1 - \sqrt{x_1} - \sqrt{x_2}    (B.32)

B.2 Gaspar Cunha’s functions

B.2.1 RMTP6

f_1(\vec{x}) = x_1    (B.33)

f_2(\vec{x}) = h(x_1) + g(\vec{x})S(x_1)    (B.34)

subject to: 0 \le x_i \le 1, \quad i = 1, 2, ..., n    (B.35)

where: h(x_1) = \frac{(x_1 - 0.6)^3 - 0.4^3}{-0.6^3 - 0.4^3}    (B.36)

g(\vec{x}) = \frac{\sum_{i=2}^{n} x_i}{n-1}    (B.37)

S(x_1) = \frac{1}{0.2 + x_1}    (B.38)
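As a usage illustration, the following Python sketch evaluates RMTP6 according to Eqs. (B.33)–(B.38) as reconstructed above (it is not the thesis implementation, and the example point is arbitrary):

import numpy as np

def rmtp6(x):
    # Returns the objective vector (f1, f2) for a point with n >= 2 decision variables in [0, 1].
    x = np.asarray(x, dtype=float)
    n = x.size
    f1 = x[0]
    h = ((x[0] - 0.6) ** 3 - 0.4 ** 3) / (-0.6 ** 3 - 0.4 ** 3)
    g = np.sum(x[1:]) / (n - 1)
    s = 1.0 / (0.2 + x[0])
    f2 = h + g * s
    return np.array([f1, f2])

print(rmtp6(np.array([0.2, 0.0, 0.0])))  # example evaluation with n = 3 variables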

B.2.2 RMTP7
f_1(\vec{x}) = \cos\left(\frac{\pi x_1}{2}\right)    (B.39)

f_2(\vec{x}) = g(\vec{x})\sin\left(\frac{\pi x_1}{2}\right)    (B.40)

subject to: 0 \le x_i \le 1, \quad i = 1, 2, ..., n    (B.41)

g(\vec{x}) = 1 + 10\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.42)

B.2.3 RMTP8
f_1(\vec{x}) = 1 - x_1^2    (B.43)

f_2(\vec{x}) = g(\vec{x})\sin\left(\frac{\pi x_1}{2}\right)    (B.44)

subject to: 0 \le x_i \le 1, \quad i = 1, 2, ..., n    (B.45)

g(\vec{x}) = 1 + 10\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.46)

B.2.4 RMTP9
f_1(\vec{x}) = \frac{e^{x_1} - 1}{e - 1}    (B.47)

f_2(\vec{x}) = g(\vec{x})\left[\frac{\sin(4\pi x_1) - 15x_1}{15} + 1\right]    (B.48)

subject to: 0 \le x_i \le 1, \quad i = 1, 2, ..., n    (B.49)

g(\vec{x}) = 1 + 10\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.50)

B.2.5 RMTP10
f_1(\vec{x}) = x_1    (B.51)

f_2(\vec{x}) = g(\vec{x})\left[\frac{\sin(4\pi x_1) - 15x_1}{15} + 1\right]    (B.52)

subject to: 0 \le x_i \le 1, \quad i = 1, 2, ..., n    (B.53)

g(\vec{x}) = 1 + 10\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.54)

B.3 Test functions generated by the proposed frameworks 1, 2, and 3
B.3.1 RMTP11
Framework 1
α = 0.1
ω = 1.5
β1 = 1
β2 = 1

B.3.2 RMTP12
Framework 1
α = 0.1
ω = 1.5
β1 = 1.5
β2 = 1.5

B.3.3 RMTP13
Framework 1
α = 0.1

ω = 1.5
β1 = 0.5
β2 = 0.5

B.3.4 RMTP14
Framework 1
α = 0.1
ω = 1.5
β1 = 0.5
β2 = 1

B.3.5 RMTP15
Framework 1
α = 0.1
ω = 1.5
β1 = 1
β2 = 0.5

B.3.6 RMTP16
Framework 1
α = 0.1
ω = 1.5
β1 = 1
β2 = 1.5

B.3.7 RMTP17
Framework 1
α = 0.1
ω = 1.5
β1 = 1.5
β2 = 1

B.3.8 RMTP18
Framework 1
α = 0.1
ω = 1.5
β1 = 1.5
β2 = 0.5

B.3.9 RMTP19
Framework 1
α = 0.1
ω = 1.5
β1 = 0.5
β2 = 1.5

B.3.10 RMTP20
Framework 2
γ=3
λ=4
ω=1
β=1

B.3.11 RMTP21
Framework 2
γ=3
λ=4
ω=1
β = 0.5

B.3.12 RMTP22
Framework 2
γ=3
λ=4

ω=1
β = 1.5

B.3.13 RMTP23
Framework 2
ζ=2
λ=6
γ=3
ω = 0.5

B.3.14 RMTP24
Framework 2
ζ=4
λ=6
γ=3
ω = 0.5

B.3.15 RMTP25
Framework 2
ζ=8
λ=6
γ=3
ω = 0.5

B.4 Extended version of current test functions

B.4.1 RMTP26

f_1(\vec{x}) = x_1    (B.55)

f_2(\vec{x}) = g(\vec{x}) - C(x_1)    (B.56)

subject to: 0 \le x_i \le 1, \quad i = 1, 2, ..., n    (B.57)

g(\vec{x}) = 1 + 10\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.58)

C(x_1) = \begin{cases} 0 & 0 \le x_1 \le 0.2 \\ 0.25 & 0.2 < x_1 \le 0.4 \\ 0.5 & 0.4 < x_1 \le 0.6 \\ 0.75 & 0.6 < x_1 \le 0.8 \\ 1 & 0.8 < x_1 \le 1 \end{cases}    (B.59)
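A minimal Python sketch of RMTP26, following Eqs. (B.55)–(B.59) as reconstructed above (illustrative only), is given below; the term C(x1) takes the values 0, 0.25, 0.5, 0.75, and 1 on successive fifths of [0, 1]:

import numpy as np

def c_step(x1):
    # Staircase term C(x1) of Eq. (B.59): counts how many thresholds are strictly exceeded.
    thresholds = (0.2, 0.4, 0.6, 0.8)
    return 0.25 * sum(x1 > t for t in thresholds)

def rmtp26(x):
    # Returns the objective vector (f1, f2) for a point with n >= 2 decision variables in [0, 1].
    x = np.asarray(x, dtype=float)
    n = x.size
    f1 = x[0]
    g = 1.0 + 10.0 * np.sum(x[1:]) / (n - 1)
    f2 = g - c_step(x[0])
    return np.array([f1, f2])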

B.4.2 RMTP27
f_1(\vec{x}) = \frac{e^{x_1} - 1}{e - 1}    (B.60)

f_2(\vec{x}) = g(\vec{x}) - C(x_1)    (B.61)

subject to: 0 \le x_i \le 1, \quad i = 1, 2, ..., n    (B.62)

g(\vec{x}) = 1 + 10\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.63)

C(x_1) = \begin{cases} 0 & 0 \le x_1 \le 0.2 \\ 0.25 & 0.2 < x_1 \le 0.4 \\ 0.5 & 0.4 < x_1 \le 0.6 \\ 0.75 & 0.6 < x_1 \le 0.8 \\ 1 & 0.8 < x_1 \le 1 \end{cases}    (B.64)


B.4.3 RMTP28
f_1(\vec{x}) = x_1    (B.65)

f_2(\vec{x}) = x_2    (B.66)

f_3(\vec{x}) = g(\vec{x}) \times \left[\left(1 - \sqrt{\frac{x_1}{g(\vec{x})}} - \frac{x_1}{g(\vec{x})}\sin(\zeta \times 2\pi x_1)\right) + H(x_1)\left(1 - \sqrt{\frac{x_2}{g(\vec{x})}} - \frac{x_2}{g(\vec{x})}\sin(\zeta \times 2\pi x_2)\right) + H(x_3)\right] + \omega    (B.67)

subject to: 0 \le x_i \le 1, \quad i = 1, 2, ..., n    (B.68)

where: H(x) = \frac{e^{-2x^2}\sin\left(\lambda \times 2\pi\left(x + \frac{\pi}{16}\right)\right) - x}{\gamma} + 0.5    (B.69)

g(\vec{x}) = 1 + 10\frac{\sum_{i=2}^{N} x_i}{N}    (B.70)

\gamma, \lambda \ge 1    (B.71)
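The sketch below evaluates RMTP28 following Eq. (B.67) as reconstructed above. Two points are assumptions rather than facts from this appendix: the pi/16 phase offset inside H (taken by analogy with TP20, where that term is legible), and the default values of zeta, lambda, gamma, and omega, which are placeholders only, since the instances actually used are defined by the generating frameworks in the thesis body:

import numpy as np

def h_term(x, lam, gamma):
    # H(x) of Eq. (B.69); the pi/16 offset mirrors TP20 and is assumed here.
    return (np.exp(-2.0 * x ** 2) * np.sin(lam * 2.0 * np.pi * (x + np.pi / 16.0)) - x) / gamma + 0.5

def rmtp28(x, zeta=2.0, lam=4.0, gamma=3.0, omega=1.0):
    # Three-objective evaluation; the defaults are illustrative placeholders only.
    x = np.asarray(x, dtype=float)
    n = x.size
    g = 1.0 + 10.0 * np.sum(x[1:]) / n
    term1 = 1.0 - np.sqrt(x[0] / g) - (x[0] / g) * np.sin(zeta * 2.0 * np.pi * x[0])
    term2 = 1.0 - np.sqrt(x[1] / g) - (x[1] / g) * np.sin(zeta * 2.0 * np.pi * x[1])
    f3 = g * (term1 + h_term(x[0], lam, gamma) * term2 + h_term(x[2], lam, gamma)) + omega
    return np.array([x[0], x[1], f3])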

B.4.4 RMTP29
f_1(\vec{x}) = x_1    (B.72)

f_2(\vec{x}) = x_2    (B.73)

f_3(\vec{x}) = g(\vec{x}) - C(x_1, x_2)    (B.74)

subject to: 0 \le x_i \le 1, \quad i = 1, 2, ..., n    (B.75)

g(\vec{x}) = 1 + 10\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.76)

C(x_1, x_2) = \begin{cases} 0 & (0 \le x_1 \le 0.2) \vee (0 \le x_2 \le 0.2) \\ 0.25 & (0.2 < x_1 \le 0.4) \vee (0.2 < x_2 \le 0.4) \\ 0.5 & (0.4 < x_1 \le 0.6) \vee (0.4 < x_2 \le 0.6) \\ 0.75 & (0.6 < x_1 \le 0.8) \vee (0.6 < x_2 \le 0.8) \\ 1 & (0.8 < x_1 \le 1) \vee (0.8 < x_2 \le 1) \end{cases}    (B.77)

B.4.5 RMTP30
f_1(\vec{x}) = x_1    (B.78)

f_2(\vec{x}) = x_2    (B.79)

f_3(\vec{x}) = g(\vec{x})\left[\frac{\sin(4\pi x_1) - 15x_1}{15} + 1\right]    (B.80)

subject to: 0 \le x_i \le 1, \quad i = 1, 2, ..., n    (B.81)

g(\vec{x}) = 1 + 10\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.82)

B.4.6 RMTP31
f_1(\vec{x}) = \frac{e^{x_1} - 1}{e - 1}    (B.83)

f_2(\vec{x}) = \frac{e^{x_2} - 1}{e - 1}    (B.84)

f_3(\vec{x}) = g(\vec{x})\left[\frac{\sin(4\pi x_1) - 15x_1}{15} + 1\right]    (B.85)

subject to: 0 \le x_i \le 1, \quad i = 1, 2, ..., n    (B.86)

g(\vec{x}) = 1 + 10\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.87)

B.4.7 RMTP32
f_1(\vec{x}) = x_1    (B.88)

f_2(\vec{x}) = x_2    (B.89)

f_3(\vec{x}) = g(\vec{x})\left[\frac{\sin(4\pi x_1 x_2) - 15x_1 x_2}{15} + 1\right]    (B.90)

subject to: 0 \le x_i \le 1, \quad i = 1, 2, ..., n    (B.91)

g(\vec{x}) = 1 + 10\frac{\sum_{i=2}^{n} x_i}{n-1}    (B.92)

B.4.8 RMTP33
f_1(\vec{x}) = \cos\left(\frac{\pi x_1}{2}\right)    (B.93)

f_2(\vec{x}) = g(\vec{x})\sin\left(\frac{\pi x_1}{2}\right)    (B.94)

subject to: 0 \le x_i \le 1, \quad i = 1, 2, ..., n    (B.95)

g(\vec{x}) = 1 + 10\left(\frac{\sum_{i=2}^{n} x_i}{n-1}\right)^{\psi}    (B.96)

\psi = 1/3

B.4.9 RMTP34
f_1(\vec{x}) = x_1    (B.97)

f_2(\vec{x}) = h(x_1) + g(\vec{x})S(x_1)    (B.98)

subject to: 0 \le x_i \le 1, \quad i = 1, 2, ..., n    (B.99)

where: h(x_1) = \frac{(x_1 - 0.6)^3 - 0.4^3}{-0.6^3 - 0.4^3}    (B.100)

g(\vec{x}) = 1 + \left(\frac{\sum_{i=2}^{n} x_i}{n-1}\right)^{\psi}    (B.101)

S(x_1) = \frac{1}{0.2 + x_1}    (B.102)

\psi = 1/3

B.4.10 RMTP35
f_1(\vec{x}) = \frac{e^{x_1} - 1}{e - 1}    (B.103)

f_2(\vec{x}) = g(\vec{x})\left[\frac{\sin(4\pi x_1) - 15x_1}{15} + 1\right]    (B.104)

subject to: 0 \le x_i \le 1, \quad i = 1, 2, ..., n    (B.105)

g(\vec{x}) = 1 + 10\left(\frac{\sum_{i=2}^{n} x_i}{n-1}\right)^{\psi}    (B.106)

\psi = 1/3

B.5 Proposed deceptive test functions


B.5.1 RMTP36
f_1(\vec{x}) = x_1    (B.107)

f_2(\vec{x}) = H(x_2) \times g(\vec{x}) + S(x_1)    (B.108)

H(x) = \frac{1}{2} - 0.3e^{-\left(\frac{x-0.2}{0.004}\right)^2} - 0.5e^{-\left(\frac{x-0.5}{0.05}\right)^2} - 0.3e^{-\left(\frac{x-0.8}{0.004}\right)^2} + \sin(\pi x)    (B.109)

g(\vec{x}) = \sum_{i=3}^{N} 50x_i^2    (B.110)

S(x) = 1 - x^{\beta}    (B.111)

\beta = 1/2
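A minimal Python sketch of RMTP36, following Eqs. (B.107)–(B.111) as reconstructed above (illustrative only), is shown below; the beta argument also covers RMTP37 (beta = 1) and RMTP38 (beta = 2):

import numpy as np

def rmtp36_h(x):
    # H(x) of Eq. (B.109): narrow Gaussian wells at 0.2 and 0.8, a wider well at 0.5, and a sin(pi*x) trend.
    return (0.5
            - 0.3 * np.exp(-((x - 0.2) / 0.004) ** 2)
            - 0.5 * np.exp(-((x - 0.5) / 0.05) ** 2)
            - 0.3 * np.exp(-((x - 0.8) / 0.004) ** 2)
            + np.sin(np.pi * x))

def rmtp36(x, beta=0.5):
    # Returns (f1, f2); beta = 1/2 corresponds to RMTP36.
    x = np.asarray(x, dtype=float)
    g = 50.0 * np.sum(x[2:] ** 2)
    f2 = rmtp36_h(x[1]) * g + (1.0 - x[0] ** beta)
    return np.array([x[0], f2])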

B.5.2 RMTP37
Same as RMTP36 but with β = 1.

B.5.3 RMTP38
Same as RMTP36 but with β = 2.

B.6 Proposed multi-modal robust test functions


B.6.1 RMTP39
f_1(\vec{x}) = x_1    (B.112)

f_2(\vec{x}) = H(x_2) \times \{g(\vec{x}) + S(x_1)\}    (B.113)

H(x) = \frac{3}{2} - 0.5e^{-\left(\frac{x-0.5}{0.04}\right)^2} - \sum_{i=0}^{M}\left(0.8e^{-\left(\frac{x-0.02i}{0.004}\right)^2} + 0.8e^{-\left(\frac{x-(0.6+0.02i)}{0.004}\right)^2}\right)    (B.114)

g(\vec{x}) = \sum_{i=3}^{N} 50x_i^2    (B.115)

S(x) = 1 - x^{\beta}    (B.116)

M = 20
\beta = 1/2

B.6.2 RMTP40
Same as RMTP39 but with β = 1.

B.6.3 RMTP41
Same as RMTP39 but with β = 2.

B.7 Proposed flat robust test functions


B.7.1 RMTP42
f_1(\vec{x}) = x_1    (B.117)

f_2(\vec{x}) = H(x_2) \times \{g(\vec{x}) + S(x_1)\}    (B.118)

H(x) = \frac{1}{2} - 0.2e^{-\left(\frac{x-0.95}{0.03}\right)^2} - 0.2e^{-\left(\frac{x-0.05}{0.01}\right)^2}    (B.119)

g(\vec{x}) = \left(\sum_{i=3}^{N} 50x_i^2\right) + 1    (B.120)

S(x) = 1 - x^{\beta}    (B.121)

\beta = 1/2

B.7.2 RMTP43
Same as RMTP42 but with β = 1.

B.7.3 RMTP44
Same as RMTP42 but with β = 2.
Appendix C

Complete results

C.1 Robust Pareto optimal fronts obtained by CRMOPSO, IRMOPSO, and ERMOPSO


[Figure: robust fronts for RMTP1 to RMTP5, one row per problem, in three columns (CRMOPSO, IRMOPSO, ERMOPSO); RMTP1 to RMTP3 are plotted as f1 versus f2, and the three-objective RMTP4 and RMTP5 as f1, f2, and f3.]

Figure C.1: Robust fronts obtained for RMTP1 to RMTP5.



[Figure: robust fronts for RMTP6 to RMTP10, one row per problem, in three columns (CRMOPSO, IRMOPSO, ERMOPSO), plotted as f1 versus f2; annotations in the original figure mark the density of solutions in the robust region, a set of obtained less robust solutions, and a gap in one of the fronts.]

Figure C.2: Robust fronts obtained for RMTP6 to RMTP10.



[Figure: robust fronts for RMTP11 to RMTP15, one row per problem, in three columns (CRMOPSO, IRMOPSO, ERMOPSO), plotted as f1 versus f2.]

Figure C.3: Robust fronts obtained for RMTP11 to RMTP15. Note that the dominated (local) front is robust and considered as reference for the performance measures.

[Figure: robust fronts for RMTP16 to RMTP19, one row per problem, in three columns (CRMOPSO, IRMOPSO, ERMOPSO), plotted as f1 versus f2.]

Figure C.4: Robust fronts obtained for RMTP16 to RMTP19. Note that the dominated (local) front is robust and considered as reference for the performance measures.

[Figure: robust fronts for RMTP20 to RMTP22, one row per problem, in three columns (CRMOPSO, IRMOPSO, ERMOPSO), plotted as f1 versus f2.]

Figure C.5: Robust fronts obtained for RMTP20 to RMTP22. Note that the worst front is the most robust and considered as reference for the performance measures.

[Figure: robust fronts for RMTP23 to RMTP25, one row per problem, in three columns (CRMOPSO, IRMOPSO, ERMOPSO), plotted as f1 versus f2.]

Figure C.6: Robust fronts obtained for RMTP23 to RMTP25. Note that the worst front is the most robust and considered as reference for the performance measures.

[Figure: robust fronts for RMTP26 and RMTP27, one row per problem, in three columns (CRMOPSO, IRMOPSO, ERMOPSO), plotted as f1 versus f2.]

Figure C.7: Robust fronts obtained for RMTP26 and RMTP27.



[Figure: robust fronts for the three-objective problems RMTP28 to RMTP32, one row per problem, in three columns (CRMOPSO, IRMOPSO, ERMOPSO), plotted as f1, f2, and f3.]

Figure C.8: Robust fronts obtained for RMTP28 to RMTP32.



[Figure: robust fronts for RMTP33 to RMTP38, one row per problem, in three columns (CRMOPSO, IRMOPSO, ERMOPSO), plotted as f1 versus f2.]

Figure C.9: Robust fronts obtained for RMTP33 to RMTP38. Note that in RMTP36, RMTP37, and RMTP38, the worst front is the most robust and considered as the reference for the performance measures.

[Figure: robust fronts for RMTP39 to RMTP44, one row per problem, in three columns (CRMOPSO, IRMOPSO, ERMOPSO), plotted as f1 versus f2.]

Figure C.10: Robust fronts obtained for RMTP39 to RMTP44.
