
Solid Mechanics and Its Applications

Tadeusz Burczyński · Wacław Kuś ·
Witold Beluch · Adam Długosz ·
Arkadiusz Poteralski ·
Mirosław Szczepanik

Intelligent
Computing
in Optimal
Design
Solid Mechanics and Its Applications

Volume 261

Founding Editor
G. M. L. Gladwell, University of Waterloo, Waterloo, ON, Canada

Series Editors
J. R. Barber, Department of Mechanical Engineering, University of Michigan,
Ann Arbor, MI, USA
Anders Klarbring, Mechanical Engineering, Linköping University, Linköping,
Sweden
The fundamental questions arising in mechanics are: Why?, How?, and How much?
The aim of this series is to provide lucid accounts written by authoritative
researchers giving vision and insight in answering these questions on the subject of
mechanics as it relates to solids. The scope of the series covers the entire spectrum
of solid mechanics. Thus it includes the foundation of mechanics; variational
formulations; computational mechanics; statics, kinematics and dynamics of rigid
and elastic bodies; vibrations of solids and structures; dynamical systems and
chaos; the theories of elasticity, plasticity and viscoelasticity; composite materials;
rods, beams, shells and membranes; structural control and stability; soils, rocks and
geomechanics; fracture; tribology; experimental mechanics; biomechanics and
machine design. The median level of presentation is the first year graduate student.
Some texts are monographs defining the current state of the field; others are
accessible to final year undergraduates; but essentially the emphasis is on
readability and clarity.
Springer and Professors Barber and Klarbring welcome book ideas from
authors. Potential authors who wish to submit a book proposal should
contact Dr. Mayra Castro, Senior Editor, Springer Heidelberg, Germany,
email: mayra.castro@springer.com
Indexed by SCOPUS, Ei Compendex, EBSCO Discovery Service, OCLC,
ProQuest Summon, Google Scholar and SpringerLink.

More information about this series at http://www.springer.com/series/6557


Tadeusz Burczyński · Wacław Kuś · Witold Beluch · Adam Długosz · Arkadiusz Poteralski · Mirosław Szczepanik
Intelligent Computing
in Optimal Design

Tadeusz Burczyński
Institute of Fundamental Technological Research of the Polish Academy of Sciences, Warsaw, Poland
Cracow University of Technology, Cracow, Poland

Wacław Kuś
Silesian University of Technology, Gliwice, Poland

Witold Beluch
Silesian University of Technology, Gliwice, Poland

Adam Długosz
Silesian University of Technology, Gliwice, Poland

Arkadiusz Poteralski
Silesian University of Technology, Gliwice, Poland

Mirosław Szczepanik
Silesian University of Technology, Gliwice, Poland

ISSN 0925-0042 ISSN 2214-7764 (electronic)


Solid Mechanics and Its Applications
ISBN 978-3-030-34159-6 ISBN 978-3-030-34161-9 (eBook)
https://doi.org/10.1007/978-3-030-34161-9
© Springer Nature Switzerland AG 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

In an attempt to create new material systems or to identify some of their geometrical or material parameters, one would like to have computational methodologies and approaches which enable solving optimization problems by finding the optimal shape, topology or size and material properties.
Such methods should ensure finding the global minimum/maximum of an
objective function with imposed constraint conditions.
Computational intelligence, as a set of biologically inspired computational methodologies and techniques having some learning attributes typical of natural intelligence, provides solutions to such problems.
The dramatic increase in the computational power available for the mathematical modelling of systems and their optimization raises the possibility that computational intelligence can play a significant role in the rational optimal design of new structures.
This fact has motivated the work that is presented in this monograph, which
contains computational models of structures, intelligent computing techniques,
structural intelligent optimization and intelligent computing in inverse problems.

Warsaw, Poland Tadeusz Burczyński


Gliwice, Poland Wacław Kuś
Gliwice, Poland Witold Beluch
Gliwice, Poland Adam Długosz
Gliwice, Poland Arkadiusz Poteralski
Gliwice, Poland Mirosław Szczepanik
July 2019

Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Computational Models of Structures . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.1 Finite-Element Models of Structures . . . . . . . . . . . . . . . . . . . . . 3
2.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.1.2 The FEM Formulation for Linear Structures . . . . . . . . . 3
2.2 Boundary Element Models of Structures . . . . . . . . . . . . . . . . . . 7
2.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.2 BEM for 2D Structures . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 FE and BE Models of Structures . . . . . . . . . . . . . . . . . . . . . . . . 13
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3 Intelligent Computing Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.1 Introduction to Computational Intelligence . . . . . . . . . . . . . . . . . 17
3.2 Sequential Evolutionary Algorithms . . . . . . . . . . . . . . . . . . . . . . 18
3.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.2.2 Evolutionary Operators . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.3 Parallel and Distributed Evolutionary Algorithms . . . . . . . . . . . . 21
3.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.3.2 The Parallel Evolutionary Algorithm . . . . . . . . . . . . . . . 22
3.3.3 The Distributed Evolutionary Algorithm . . . . . . . . . . . . 23
3.3.4 The Improved Distributed Evolutionary Algorithm . . . . . 23
3.3.5 Optimal Parameters of the Distributed Evolutionary
Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3.6 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.3.7 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.4 Information Granularity and Granular Computing . . . . . . . . . . . . 29
3.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.4.2 Interval Numbers and Interval Arithmetic . . . . . . . . . . . 30


3.4.3 Fuzzy Sets and Fuzzy Numbers . . . . . . . . . . . . . . . . . . 32


3.4.4 Rough Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.5 Fuzzy and Stochastic Evolutionary Algorithms . . . . . . . . . . . . . . 36
3.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.5.2 The Fuzzy Evolutionary Algorithm . . . . . . . . . . . . . . . . 37
3.5.3 The Stochastic Evolutionary Algorithm . . . . . . . . . . . . . 40
3.6 Artificial Immune Systems and Algorithms . . . . . . . . . . . . . . . . 43
3.7 Particle Swarm Optimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.8 Artificial Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.8.2 Artificial Neuron and Artificial Neural Network . . . . . . . 51
3.8.3 Activation Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.8.4 Learning Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.8.5 Radial Basis Function Neural Networks . . . . . . . . . . . . . 57
3.9 Hybrid Computational Intelligence Algorithms . . . . . . . . . . . . . . 58
3.9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.9.2 The Evolutionary Algorithm Coupled with Gradient
Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.9.3 Local Optimization Method Supported by ANN . . . . . . . 62
3.9.4 Two-Step Optimization Strategy . . . . . . . . . . . . . . . . . . 63
3.9.5 The Fuzzy-Neural Network . . . . . . . . . . . . . . . . . . . . . . 65
3.10 Comparison of Particle Swarm Optimizer to Evolutionary
Algorithms and Artificial Immune Systems . . . . . . . . . . . . . . . . 67
3.10.1 The Choice of the Optimization Parameters . . . . . . . . . . 68
3.10.2 The Results of the Effectiveness Comparison . . . . . . . . . 70
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4 Structural Intelligent Optimization . . . . . . . . . . . . . . . . . . . . . . .... 77
4.1 Formulation of Single- and Multiobjective Optimization
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.1.2 Formulation of the Optimization Problem . . . . . . . . . . . 78
4.1.3 Intelligent Optimization System . . . . . . . . . . . . . . . . . . 84
4.1.4 Geometry Modelling . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.2 Shape, Topology, Material and Size Optimization and Their
Parameterization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .... 88
4.2.1 Formulation of the Problem . . . . . . . . . . . . . . . . . .... 89
4.2.2 Concept of Generalized Evolutionary Optimization
of Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . .... 90
4.2.3 Parameterization . . . . . . . . . . . . . . . . . . . . . . . . . . .... 92
4.2.4 Additional Procedure Supporting the Bio-Inspired
Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . .... 95
4.2.5 Smoothing Procedure . . . . . . . . . . . . . . . . . . . . . . .... 96

4.3 Optimization of Elastic Structures Under Static Loads . . . . . ... 98


4.3.1 Evolutionary Optimization of Shape, Topology
and Thickness or Mass Density of Structures . . . . . . ... 98
4.3.2 Immune Optimization of the Shape, the Topology
and Mass Density of Structures . . . . . . . . . . . . . . . . . . . 104
4.3.3 Evolutionary Optimization of a Bending Plate . . . . . . . . 108
4.3.4 Swarm Optimization of a Shell Bracket . . . . . . . . . . . . . 109
4.4 Optimization of Elastic Structures Under Dynamical Loads . . . . . 110
4.4.1 Evolutionary Generalized Optimization of Structures
Modelled by the FEM . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.4.2 Bio-Inspired Optimization of Reinforced Structures
by the Coupled BEM/FEM . . . . . . . . . . . . . . . . . . . . . . 115
4.5 Optimization of Structures with Stiffeners . . . . . . . . . . . . . . . . . 124
4.5.1 Formulation of the Optimization Problem . . . . . . . . . . . 125
4.5.2 Examples of the Optimization of the Stiffeners
Location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4.6 Optimization of Structures Under Thermo-Mechanical
Loading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.6.2 Objective Functions for Thermo-Mechanical
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
4.6.3 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4.6.4 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . 146
4.7 Optimization of Structures with Cracks . . . . . . . . . . . . . . . . . . . 146
4.7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
4.7.2 Formulation of the Optimization Task . . . . . . . . . . . . . . 147
4.7.3 Fatigue Crack Growth . . . . . . . . . . . . . . . . . . . . . . . . . 148
4.7.4 The Dual-Boundary Element Method for Crack
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
4.7.5 NURBS Parametric Curves . . . . . . . . . . . . . . . . . . . . . . 152
4.7.6 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
4.7.7 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . 157
4.8 Optimization of Structures with Nonlinearities . . . . . . . . . . . . . . 158
4.8.1 Objective Functions for the Evolutionary
Optimization of Structures with Nonlinearities . . . . . . . . 159
4.8.2 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
4.8.3 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4.9 Optimization of Composites . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4.9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4.9.2 Laminates and Laminate Mechanics . . . . . . . . . . . . . . . 169
4.9.3 Formulation of the Optimization Task . . . . . . . . . . . . . . 171
4.9.4 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
4.9.5 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . 176

4.10 Multiobjective Optimization in Coupled Problems . . . . . . . . . . . 176


4.10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
4.10.2 Objective Functions for the Multiobjective
Evolutionary Optimization . . . . . . . . . . . . . . . . . . . . . . 178
4.10.3 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
4.10.4 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . 188
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
5 Intelligent Computing in Inverse Problems . . . . . . . . . . . . . . . . . . . . 197
5.1 Formulation of the Inverse Problems . . . . . . . . . . . . . . . . . . . . . 197
5.2 Identification of Boundary Conditions . . . . . . . . . . . . . . . . . . . . 198
5.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
5.2.2 Formulation of the Problem . . . . . . . . . . . . . . . . . . . . . 199
5.2.3 Numerical Examples of Identification of Boundary
Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.2.4 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . 209
5.3 Identification of Defects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
5.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
5.3.2 Formulation of the Defect Identification Task . . . . . . . . 211
5.3.3 Geometrical Parameterization of Defects . . . . . . . . . . . . 212
5.3.4 The Intelligent Identification System . . . . . . . . . . . . . . . 214
5.3.5 Numerical Examples of Defect Identification . . . . . . . . . 216
5.3.6 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . 227
5.4 Identification of Material Properties . . . . . . . . . . . . . . . . . . . . . . 227
5.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
5.4.2 Formulation of the Materials Identification Task . . . . . . 228
5.4.3 Measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
5.4.4 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
5.4.5 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . 235
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
6 Closing Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Chapter 1
Introduction

Various methods of structural optimization have been elaborated. Methods based on sensitivity analysis have found wide application. The main disadvantage of this class of methods is the fact that they lead to local solutions, because such algorithms usually get stuck in the nearest local extremum.
In recent years, several methods have emerged in connection with the development of computational intelligence, which can be considered as a group of bio-inspired methodologies. These methodologies simulate biological phenomena such as the theory of evolution (evolutionary algorithms), the behaviour of the nervous system (artificial neural networks), the immune system (artificial immune systems) or the behaviour of groups of biological individuals (swarm algorithms).
The common feature of these methodologies is the ability to learn, which is
usually attributed to natural intelligence.
Additionally, they are resistant to different kinds of uncertainties encountered in the modelled system, which can be represented by various models of granularity.
The book is devoted to intelligent design of structures as a novel kind of design based on computational intelligence. The design process is considered in the framework of evolutionary and immune paradigms, swarm intelligence and neural networks, which take their inspiration from the theory of evolution, the natural immune system, the behaviour of a group of biological individuals and the natural nervous system, respectively.
These approaches based on biologically inspired methods make it possible to build novel computational tools to solve optimization and inverse problems. The main feature of this design process is the population approach, which means that the designer deals with a set of structures. By introducing changes in structures, taking into account genetic operators such as the crossover, the mutation, the cloning or positions and velocities of particles, and performing selection of the best solutions, the designer creates new structural solutions or identifies some geometrical or material properties of the existing structures. The design process is equivalent to evolutionary, immune or swarm optimization. It facilitates the rational search for

the best structure by the minimization or maximization of the objective functions playing the role of the fitness.
The proposed methodology based on computational intelligence has some heuristic and learning attributes typical of natural intelligence.
Optimization based on such approaches requires creating a computer model of the structure.
Computer models of the structures are built on the basis of the finite-element method (FEM), the boundary element method (BEM) or a coupling of the FEM and the BEM. A short description of possible discrete models of structures using these methods is included in Chap. 2.
Various kinds of intelligent approaches using sequential, parallel, distributed,
fuzzy and hybrid evolutionary, immune and particle swarm algorithms and neural
computing are presented in Chap. 3.
Different kinds of optimization such as shape, topology, size and material
optimization for structures under static and dynamical mechanical and
thermo-mechanical loadings, structures with cracks and composite structures are
considered in Chap. 4. Multiobjective optimization for coupled problems is also
taken into account. Several numerical examples illustrating these kinds of opti-
mization are presented for 2D (plane–stress or plane–strain, plates, shells) as well as
3D structures.
Chapter 5 is devoted to special problems related to solving inverse problems in
which boundary conditions, defects such as voids or cracks, and material charac-
teristics are unknown.
Closing comments summarizing the book are presented in Chap. 6.
Chapter 2
Computational Models of Structures

Abstract This chapter contains a description of computational models of structures. The finite-element method (FEM) is presented for linear elastic structures. An alternative computational technique based on the boundary element method (BEM) is also described. The advantages and disadvantages of both methods are identified. The coupling of the finite-element method with the boundary element method is also considered, and the advantages of the coupled approach are highlighted.

2.1 Finite-Element Models of Structures

2.1.1 Introduction

The section describes the basics of the finite element method (FEM). The name of the method was used for the first time by Clough in the 1960s [9] in a work devoted to plane stress analysis. Since then, the method has been extended to different types of structures and beyond mechanics. The first book describing the FEM, by Zienkiewicz, was published in 1971 [14] and was later improved and extended [16]. The last edition consists of three volumes, describing in detail many aspects of the FEM. The FEM is the most popular method used in academia and industry for solving problems based on partial differential equations [8]. The method works well for both linear and nonlinear problems and for isotropic and anisotropic materials. The formulation of the FEM for linear isotropic structures and static problems is described in the following sections.

2.1.2 The FEM Formulation for Linear Structures

Let us consider a body $\Omega$ bounded by the boundary $\Gamma$, as shown in Fig. 2.1. The body is loaded with internal forces $\mathbf{b}$ in the domain $\Omega$, tractions $\mathbf{p}^0$ act on the boundary segment $\Gamma_p$ and the displacements $\mathbf{u}^0$ are prescribed on the boundary segment $\Gamma_u$, whereby $\Gamma_p \cup \Gamma_u = \Gamma$ and $\Gamma_p \cap \Gamma_u = \emptyset$.

Fig. 2.1 The considered body
The partial differential equation for the body can be expressed as:

$$\sigma_{ij,j} + b_i = 0 \qquad (2.1.1)$$

where $\sigma_{ij}$ is an element of the stress tensor and $(\cdot)_{,j}$ denotes the partial derivative with respect to $x_j$. The boundary conditions are given by the equations:

$$u_i(x) = u_i^0(x), \quad x \in \Gamma_u \qquad (2.1.2)$$

$$p_i(x) = p_i^0(x), \quad x \in \Gamma_p \qquad (2.1.3)$$

The stress–strain relation is expressed with the use of Hooke's law:

$$\sigma_{ij} = C_{ijkl}\,\varepsilon_{kl} \qquad (2.1.4)$$

where $C_{ijkl}$ is the material coefficient given by:

$$C_{ijkl} = \lambda\,\delta_{ij}\delta_{kl} + \mu\left(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}\right) \qquad (2.1.5)$$

where $\lambda$ and $\mu$ are the Lamé constants and $\delta$ is the Kronecker symbol. Strains are defined on the basis of displacements $u$ as:

$$\varepsilon_{ij} = \frac{1}{2}\left(u_{i,j} + u_{j,i}\right) \qquad (2.1.6)$$

The virtual work equation can be expressed as:

$$\int_\Omega \delta\varepsilon_{ij}\,\sigma_{ij}\,d\Omega - \int_\Omega \delta u_i\,b_i\,d\Omega - \int_\Gamma \delta u_i\,p_i\,d\Gamma = 0 \qquad (2.1.7)$$
where virtual strains are given by the following equation:

$$\delta\varepsilon_{ij} = \frac{1}{2}\left(\delta u_{i,j} + \delta u_{j,i}\right) \qquad (2.1.8)$$

The stress and strain tensors can be presented in a more compact form, due to symmetry, as Voigt vectors:

$$\boldsymbol{\sigma} = [\,\sigma_{11}\;\; \sigma_{22}\;\; \sigma_{33}\;\; \sigma_{12}\;\; \sigma_{23}\;\; \sigma_{31}\,]^T = [\,\sigma_{xx}\;\; \sigma_{yy}\;\; \sigma_{zz}\;\; \sigma_{xy}\;\; \sigma_{yz}\;\; \sigma_{zx}\,]^T \qquad (2.1.9)$$

and

$$\boldsymbol{\varepsilon} = [\,\varepsilon_{11}\;\; \varepsilon_{22}\;\; \varepsilon_{33}\;\; \gamma_{12}\;\; \gamma_{23}\;\; \gamma_{31}\,]^T = [\,\varepsilon_{xx}\;\; \varepsilon_{yy}\;\; \varepsilon_{zz}\;\; \gamma_{xy}\;\; \gamma_{yz}\;\; \gamma_{zx}\,]^T \qquad (2.1.10)$$

where the shear strains are given as:

$$\gamma_{ij} = 2\varepsilon_{ij} \qquad (2.1.11)$$

The Eq. (2.1.7) can be expressed as:

$$\int_\Omega \delta\boldsymbol{\varepsilon}^T \boldsymbol{\sigma}\,d\Omega - \int_\Omega \delta\mathbf{u}^T \mathbf{b}\,d\Omega - \int_\Gamma \delta\mathbf{u}^T \mathbf{p}\,d\Gamma = 0 \qquad (2.1.12)$$

The FEM operates on discretized bodies. Discretization is a process of generating simple-shaped elements—finite elements. In most cases either triangles or quadrilaterals are used in 2D problems. Hexahedral or tetrahedral elements are commonly used for 3D cases. An example of a discretized 2D structure is shown in Fig. 2.2. The discretization (mesh generation) may lead to some inaccuracies of the geometrical shape of the considered structure. The outer shape of the boundary may be different after discretization. The errors connected with discretization can be reduced with the use of denser meshes (with more, smaller elements) or by introducing curvilinear elements (where each edge is a curve instead of a straight line).
The nodal displacements $\mathbf{u}$ and shape functions $\mathbf{N}$ describe displacements and virtual displacements in the body:

$$\mathbf{u}(x) = \mathbf{N}\mathbf{u}, \qquad \delta\mathbf{u}(x) = \mathbf{N}\,\delta\mathbf{u} \qquad (2.1.13)$$

The matrix $\mathbf{N}$ contains the shape functions and $\mathbf{u}$ is a column matrix containing nodal displacements. Virtual strains can be expressed as:

Fig. 2.2 The body discretized using finite elements

$$\delta\boldsymbol{\varepsilon} = \mathbf{B}\,\delta\mathbf{u} \qquad (2.1.14)$$

where the geometrical matrix $\mathbf{B}$ determines relations between strains and displacements. The matrix $\mathbf{B}$ can be written as:

$$\mathbf{B} = \begin{bmatrix} N_{,1} & 0 & 0 \\ 0 & N_{,2} & 0 \\ 0 & 0 & N_{,3} \\ N_{,2} & N_{,1} & 0 \\ 0 & N_{,3} & N_{,2} \\ N_{,3} & 0 & N_{,1} \end{bmatrix} \qquad (2.1.15)$$

where $N_{,i}$ denotes the derivative of the shape function with respect to $x_i$.

As a result, the Eq. (2.1.1) can be formulated as:

$$\int_\Omega \mathbf{B}^T \boldsymbol{\sigma}\,d\Omega = \mathbf{f} \qquad (2.1.16)$$

where the vector $\mathbf{f}$ contains information about the internal and external loads:

$$\mathbf{f} = \int_\Omega \mathbf{N}^T \mathbf{b}\,d\Omega + \int_\Gamma \mathbf{N}^T \mathbf{p}\,d\Gamma \qquad (2.1.17)$$

The relation between strains and stresses for linear problems, assuming small strains and displacements, can be formulated in matrix form as:

$$\boldsymbol{\sigma} = \mathbf{D}\boldsymbol{\varepsilon} \qquad (2.1.18)$$

where $\mathbf{D}$ is an elasticity matrix. Strains can be expressed by using the equation:

$$\boldsymbol{\varepsilon} = \mathbf{B}\mathbf{u} \qquad (2.1.19)$$

As a result, the Eq. (2.1.16) can be given in the following form:

$$\int_\Omega \mathbf{B}^T \mathbf{D}\mathbf{B}\,d\Omega\;\mathbf{u} = \mathbf{f} \qquad (2.1.20)$$

The stiffness matrix $\mathbf{K}$ is defined as:

$$\mathbf{K} = \int_\Omega \mathbf{B}^T \mathbf{D}\mathbf{B}\,d\Omega \qquad (2.1.21)$$

The Eq. (2.1.20) can be finally expressed as:

$$\mathbf{K}\mathbf{u} = \mathbf{f} \qquad (2.1.22)$$

The equation can be solved by taking into account the boundary conditions. The vector of nodal displacements $\mathbf{u}$ is obtained after solving the system of equations. Internal stresses can be obtained using Eq. (2.1.23):

$$\boldsymbol{\sigma} = \mathbf{D}\mathbf{B}\mathbf{u} \qquad (2.1.23)$$

where $\mathbf{D}$ is the elasticity matrix and $\mathbf{B}$ is as described in (2.1.15).
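To make the assembly and solution steps of Eqs. (2.1.21)–(2.1.22) concrete, a minimal sketch is given below. It is not taken from the book: the element type (two-node bars), the material data and the load are illustrative assumptions, but the sequence — build the element stiffness, assemble $\mathbf{K}$, impose the boundary conditions, solve for $\mathbf{u}$ — follows the text.

```python
import numpy as np

# Minimal 1D FEM sketch: two-node bar elements, K u = f (Eqs. 2.1.21-2.1.22).
# E, A, L and the loading are illustrative assumptions.
E, A, L, n_el = 210e9, 1e-4, 1.0, 4
n_nodes = n_el + 1
le = L / n_el                                   # element length
k_e = (E * A / le) * np.array([[1.0, -1.0],
                               [-1.0, 1.0]])    # element stiffness matrix

K = np.zeros((n_nodes, n_nodes))
for e in range(n_el):                           # assembly of the global stiffness matrix
    dofs = [e, e + 1]
    K[np.ix_(dofs, dofs)] += k_e

f = np.zeros(n_nodes)
f[-1] = 1000.0                                  # nodal force at the free end

# Impose u = 0 at the first node (boundary segment Gamma_u) by eliminating that equation.
free = np.arange(1, n_nodes)
u = np.zeros(n_nodes)
u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
print(u)                                        # vector of nodal displacements
```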

2.2 Boundary Element Models of Structures

2.2.1 Introduction

The boundary element method (BEM) is not as popular as the FEM but gives some advantages, especially when dealing with infinite or semi-infinite structures or bodies with complex geometries. For most problems, only the boundary of the body is discretized, which leads to fewer elements compared to the FEM. For some specific problems, like elastoplasticity, internal boundary elements must be introduced. The main disadvantage of the BEM is its fully populated and non-symmetric matrices, which cause problems when solving the algebraic equation systems. The details about the BEM can be found in Banerjee [1], Beer [2], Brebbia and Dominiguez [3], Brebbia et al. [4], Dominguez [10], Burczyński [5, 6], Burczyński and Grabacki [7], Gaul et al. [11], Sladek and Sladek [13]. The BEM is still under development, and new approaches like the fast multipole BEM [12] have recently been introduced that allow overcoming some of the disadvantages of the BEM.
The following sections present the BEM for the elastic 2D static problem.

2.2.2 BEM for 2D Structures

Let us consider the body as in Sect. 2.1.2 with the same boundary conditions and equilibrium equations. The displacement equation of the static elastic problem can be formulated as:

$$L_s\,\mathbf{u} = \mathbf{b}, \quad x \in \Omega \qquad (2.2.1)$$

In the case of an isotropic material, the $L_s$ operator may be defined as:

$$L_s = \begin{bmatrix}
\mu\Delta + (\lambda+\mu)\dfrac{\partial^2}{\partial x_1^2} & (\lambda+\mu)\dfrac{\partial^2}{\partial x_1\,\partial x_2} & (\lambda+\mu)\dfrac{\partial^2}{\partial x_1\,\partial x_3}\\[2mm]
(\lambda+\mu)\dfrac{\partial^2}{\partial x_1\,\partial x_2} & \mu\Delta + (\lambda+\mu)\dfrac{\partial^2}{\partial x_2^2} & (\lambda+\mu)\dfrac{\partial^2}{\partial x_2\,\partial x_3}\\[2mm]
(\lambda+\mu)\dfrac{\partial^2}{\partial x_1\,\partial x_3} & (\lambda+\mu)\dfrac{\partial^2}{\partial x_2\,\partial x_3} & \mu\Delta + (\lambda+\mu)\dfrac{\partial^2}{\partial x_3^2}
\end{bmatrix} \qquad (2.2.2)$$

After taking into account the Maxwell–Betti reciprocal work theorem, we can obtain the Somigliana identity:

$$\mathbf{u}(x) = \int_\Gamma \mathbf{U}^*(x,y)\,\mathbf{p}(y)\,d\Gamma(y) - \int_\Gamma \mathbf{P}^*(x,y)\,\mathbf{u}(y)\,d\Gamma(y) + \int_\Omega \mathbf{U}^*(x,y)\,\mathbf{b}(y)\,d\Omega(y) \qquad (2.2.3)$$

where $\mathbf{U}^*(x,y)$ is a fundamental solution, also known as the Kelvin solution. The fundamental solution has the form of a symmetric tensor. The component $U^*_{ij}(x,y)$ of the tensor is the displacement of point $y$ along direction $j$ in an infinite elastic medium due to a unit point load at point $x$ along direction $i$. The elements of the tensor $P^*_{ij}(x,y)$ describe the elements of the stress vector at point $y$ along direction $j$ in an infinite elastic medium due to the point load at point $x$ along direction $i$. Equation (2.2.3) allows computing the displacements inside the body when the boundary displacements and tractions are known. Stress tensor coefficients can be obtained using the following equation:

$$\sigma_{ij}(x) = \int_\Gamma D_{ijk}(x,y)\,p_k(y)\,d\Gamma(y) - \int_\Gamma S_{ijk}(x,y)\,u_k(y)\,d\Gamma(y) + \int_\Omega D_{ijk}(x,y)\,b_k(y)\,d\Omega(y) \qquad (2.2.4)$$

where

$$D_{ijk}(x,y) = C_{ijlm}\,\frac{\partial}{\partial x_m} U^*_{lk}(x,y) \qquad (2.2.5)$$

$$S_{ijk}(x,y) = C_{ijlm}\,\frac{\partial}{\partial x_m} P^*_{lk}(x,y) \qquad (2.2.6)$$

The fundamental solution in the case of an isotropic 2D structure in the plane–stress state is given by:

$$U^*_{ij}(x,y) = -\frac{1}{8\pi(1-\nu)\mu}\left[(3-4\nu)\ln(r)\,\delta_{ij} - r_{,i}\,r_{,j}\right] \qquad (2.2.7)$$

$$P^*_{ij}(x,y) = -\frac{1}{4\pi(1-\nu)r}\left\{\frac{\partial r}{\partial n}\left[(1-2\nu)\,\delta_{ij} + 2\,r_{,i}\,r_{,j}\right] - (1-2\nu)\left(r_{,i}\,n_j - r_{,j}\,n_i\right)\right\} \qquad (2.2.8)$$

where $\nu$ is Poisson's ratio and $r$ is the distance defined as:

$$r(x,y) = (r_i\,r_i)^{1/2}, \qquad r_i = x_i(y) - x_i(x)$$

and

$$r_{,i} = \frac{\partial r}{\partial x_i(y)} = \frac{r_i}{r}$$
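The kernels (2.2.7)–(2.2.8) can be transcribed directly into code. The sketch below is for illustration only (the function name and argument conventions are ours, not the book's); it evaluates $U^*_{ij}$ and $P^*_{ij}$ for a source point $x$, a field point $y$ with outward unit normal $n$, and material constants $\mu$ and $\nu$.

```python
import numpy as np

def kelvin_kernels_2d(x, y, n, mu, nu):
    """Evaluate the 2D fundamental solutions U*_ij (2.2.7) and P*_ij (2.2.8)
    for a unit load at source point x, observed at field point y with normal n."""
    x, y, n = map(np.asarray, (x, y, n))
    rv = y - x                          # r_i = x_i(y) - x_i(x)
    r = np.linalg.norm(rv)
    dr = rv / r                         # r_,i = r_i / r
    drdn = dr @ n                       # dr/dn
    I = np.eye(2)
    U = -((3.0 - 4.0 * nu) * np.log(r) * I - np.outer(dr, dr)) \
        / (8.0 * np.pi * (1.0 - nu) * mu)
    P = -(drdn * ((1.0 - 2.0 * nu) * I + 2.0 * np.outer(dr, dr))
          - (1.0 - 2.0 * nu) * (np.outer(dr, n) - np.outer(n, dr))) \
        / (4.0 * np.pi * (1.0 - nu) * r)
    return U, P
```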

The Somigliana identity (2.2.3) can be used when all displacements and tractions are known on the boundary of the structure. In boundary-value problems only parts of the displacements and tractions are known. If the point $x$ tends to the boundary, Eq. (2.2.3) becomes an integral equation. The equation can be modified by considering the area near the boundary as a point $x$ surrounded by a boundary $\Gamma_\varepsilon$ with radius $\varepsilon$, as shown in Fig. 2.3. The boundary $\Gamma$ can be expressed as a sum:

$$\Gamma = (\Gamma - \Gamma_\varepsilon) + \Gamma_\varepsilon \qquad (2.2.9)$$

Fig. 2.3 An area near the boundary of the body

In this case, the Somigliana identity becomes:

$$\mathbf{u}(x) = \lim_{\varepsilon\to 0}\left\{\int_{\Gamma-\Gamma_\varepsilon+\Gamma_\varepsilon} \mathbf{U}^*(x,y)\,\mathbf{p}(y)\,d\Gamma(y) - \int_{\Gamma-\Gamma_\varepsilon+\Gamma_\varepsilon} \mathbf{P}^*(x,y)\,\mathbf{u}(y)\,d\Gamma(y) + \int_{\Omega'} \mathbf{U}^*(x,y)\,\mathbf{b}(y)\,d\Omega(y)\right\} \qquad (2.2.10)$$

where $\Omega'$ is a spherical area near the source point $x$ with radius $\varepsilon$. Due to the singularity of $\mathbf{U}^*$, the first and the third integrals are improper. The second integral can be expressed as a sum of two integrals:

$$\mathbf{u}(x) = \int_\Gamma \mathbf{U}^*(x,y)\,\mathbf{p}(y)\,d\Gamma(y) + \int_\Omega \mathbf{U}^*(x,y)\,\mathbf{b}(y)\,d\Omega(y) - \lim_{\varepsilon\to 0}\left\{\int_{\Gamma-\Gamma_\varepsilon} \mathbf{P}^*(x,y)\,\mathbf{u}(y)\,d\Gamma(y) + \int_{\Gamma_\varepsilon} \mathbf{P}^*(x,y)\,\mathbf{u}(y)\,d\Gamma(y)\right\} \qquad (2.2.11)$$

The last integral in Eq. (2.2.11) can be formulated as:

$$\lim_{\varepsilon\to 0}\int_{\Gamma_\varepsilon} \mathbf{P}^*(x,y)\,\mathbf{u}(y)\,d\Gamma(y) = \lim_{\varepsilon\to 0}\int_{\Gamma_\varepsilon} \mathbf{P}^*(x,y)\left[\mathbf{u}(y)-\mathbf{u}(x)\right]d\Gamma(y) + \lim_{\varepsilon\to 0}\,\mathbf{u}(x)\int_{\Gamma_\varepsilon} \mathbf{P}^*(x,y)\,d\Gamma(y) \qquad (2.2.12)$$

where the first integral is equal to zero due to the continuity of displacements. After the rearrangement, the Eq. (2.2.12) can be formulated as:

$$\mathbf{c}(x) = \mathbf{I} + \lim_{\varepsilon\to 0}\int_{\Gamma_\varepsilon} \mathbf{P}^*(x,y)\,d\Gamma(y) \qquad (2.2.13)$$

where $\mathbf{I}$ is a unit matrix. Taking into account Eq. (2.2.13), the Somigliana identity may be written in the form:

$$\mathbf{c}(x)\,\mathbf{u}(x) + \int_\Gamma \mathbf{P}^*(x,y)\,\mathbf{u}(y)\,d\Gamma(y) = \int_\Gamma \mathbf{U}^*(x,y)\,\mathbf{p}(y)\,d\Gamma(y) + \int_\Omega \mathbf{U}^*(x,y)\,\mathbf{b}(y)\,d\Omega(y) \qquad (2.2.14)$$

or, using index notation, as:

$$c_{ij}(x)\,u_j(x) + \int_\Gamma P^*_{ij}(x,y)\,u_j(y)\,d\Gamma(y) = \int_\Gamma U^*_{ij}(x,y)\,p_j(y)\,d\Gamma(y) + \int_\Omega U^*_{ij}(x,y)\,b_j(y)\,d\Omega(y) \qquad (2.2.15)$$

The coefficient $c_{ij}(x)$ is equal to $0.5\,\delta_{ij}$ in the case of a smooth boundary.


An external boundary of a 2D structure is discretized by means of linear or curvilinear elements $\Gamma_e$, as shown in Fig. 2.4. The inner area of the structure should also be discretized in the case of nonlinearities or nonzero body forces.
Each boundary element $\Gamma_e$ has $W_e$ nodes. In the case of 2D problems and linear shape functions $W_e = 2$, while $W_e = 3$ for quadratic shape functions. The points in the element are described using the node locations and shape functions $M_w$:

$$x_i(\xi) = M_w(\xi)\,(x_i)_w, \quad i = 1,2, \quad w = 1,\ldots,W_e \qquad (2.2.16)$$

The linear shape functions for a 2D problem are expressed as:

$$M_1(\xi) = \frac{1}{2}(1-\xi), \qquad M_2(\xi) = \frac{1}{2}(1+\xi) \qquad (2.2.17)$$

A boundary element with linear shape functions is shown in Fig. 2.5.
Boundary displacements and tractions can be expressed using nodal values and shape functions as:

$$\mathbf{u}(x(\xi)) \approx M_w\,(\mathbf{u})^w, \qquad \mathbf{p}(x(\xi)) \approx M_w\,(\mathbf{p})^w, \qquad x \in \Gamma_e \qquad (2.2.18)$$

Due to the local formulation in the $\xi$ direction, the Jacobian of the coordinate system transformation should be used:

$$d\Gamma(y) = J(\xi)\,d\xi \qquad (2.2.19)$$

Fig. 2.4 Body discretized with boundary elements
Fig. 2.5 Boundary 2D element with linear shape functions

where

$$J(\xi) = \left[\left(\frac{\partial x_1}{\partial \xi}\right)^2 + \left(\frac{\partial x_2}{\partial \xi}\right)^2\right]^{1/2} \qquad (2.2.20)$$

The Eq. (2.2.14) can be reformulated taking into account the discretization and shape functions:

$$\mathbf{c}(x)\,\mathbf{u}(x) = -\sum_{e=1}^{n_e}\sum_{w=1}^{W_e}(\mathbf{u})^w_e \int_{\Gamma_e} \mathbf{P}^*[x,y(\xi)]\,M_w(\xi)\,J(\xi)\,d\xi + \sum_{e=1}^{n_e}\sum_{w=1}^{W_e}(\mathbf{p})^w_e \int_{\Gamma_e} \mathbf{U}^*[x,y(\xi)]\,M_w(\xi)\,J(\xi)\,d\xi + \mathbf{B}(x) \qquad (2.2.21)$$

The term $\mathbf{B}(x)$ is present in the case of body loads and it is the only integral over the area of the body. In many problems $\mathbf{B}(x)$ vanishes or can also be expressed as a boundary integral:

$$\mathbf{B}(x) = \int_\Omega \mathbf{U}^*(x,y)\,\mathbf{b}(y)\,d\Omega(y) \qquad (2.2.22)$$

The integrals over an element $\Gamma_e$ can be understood as:

$$\int_{\Gamma_e}[\cdot]\,d\xi = \int_{node(1)}^{node(2)}[\cdot]\,d\xi \qquad (2.2.23)$$

The Eq. (2.2.21) can be expressed in matrix form as:

$$\mathbf{H}\mathbf{u} = \mathbf{G}\mathbf{p} + \mathbf{B} \qquad (2.2.24)$$
where the matrices $\mathbf{H}$ and $\mathbf{G}$ contain the values of the integrals. Taking into account the boundary conditions, Eq. (2.2.24) is converted into the following form:

$$\mathbf{A}\mathbf{X} = \mathbf{F} \qquad (2.2.25)$$

where the matrix $\mathbf{A}$ contains a part of the values from the matrices $\mathbf{H}$ and $\mathbf{G}$, the vector $\mathbf{X}$ contains the unknown displacements and tractions, and the vector $\mathbf{F}$ contains the known values of displacements and tractions multiplied by parts of the $\mathbf{H}$ and $\mathbf{G}$ matrices.
A concept of the dual boundary element method is presented in Sect. 4.7.4.
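The rearrangement of Eq. (2.2.24) into Eq. (2.2.25) amounts to swapping columns of $\mathbf{H}$ and $\mathbf{G}$ according to which quantity is prescribed at each degree of freedom. The sketch below illustrates this (an illustration under our own assumptions, not the authors' code: $\mathbf{H}$, $\mathbf{G}$, $\mathbf{B}$ and the boundary-condition data are taken as already computed).

```python
import numpy as np

def apply_bem_bcs(H, G, known_u, u_hat, p_hat, B=None):
    """Rearrange H u = G p + B into A X = F (Eq. 2.2.25).
    known_u[j] is True when the displacement is prescribed at DOF j
    (then the traction is unknown), False when the traction is prescribed.
    u_hat / p_hat hold the prescribed values (ignored where unknown)."""
    n = H.shape[0]
    A = np.empty_like(H)
    F = np.zeros(n) if B is None else np.array(B, dtype=float)
    for j in range(n):
        if known_u[j]:
            A[:, j] = -G[:, j]          # unknown traction column stays on the left
            F += -H[:, j] * u_hat[j]    # known displacement moves to the right-hand side
        else:
            A[:, j] = H[:, j]           # unknown displacement column stays on the left
            F += G[:, j] * p_hat[j]     # known traction moves to the right-hand side
    return A, F                          # solve X = np.linalg.solve(A, F)
```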

2.3 FE and BE Models of Structures

The coupling of finite elements with boundary elements allows using the advantages of both methods [15]. The FEM is a simple and efficient method for both linear and nonlinear problems. The disadvantage of this method is the need for mesh generation in the interior of the structure. It is very inconvenient for the analysis of structures with infinite volume. The use of the FEM in such cases is complicated; hence one can add an artificial boundary to the structure and treat it as a structure with a finite boundary, or specialized finite elements can be created. The BEM treats infinite structures by simply defining only the interior boundaries. Only the boundary of the structure has to be meshed with the use of boundary elements if the problem is linear. The coupled finite and boundary element method is a handy tool if we deal with infinite structures with local nonlinearities. In such cases the infinite part is modelled using boundary elements, while the structures near the areas with nonlinearities are modelled by using finite elements [2]. The coupling of boundary and finite elements can be performed in two ways. The boundary element region can be defined as a finite element and included in the FEM analysis, or the finite elements may be prescribed as a boundary element formulation and included in the BEM analysis.
The chapter describes expressing the boundary element region as a finite element. Using the coupled finite and boundary element method, one should divide the body into finite element and boundary element regions (Fig. 2.6). $\Omega_1$ denotes the region with finite elements, and $\Omega_2$ is the region of boundary elements. The regions discretized using finite elements can contain nonlinearities (e.g. plastic strains). Common nodes are present on the common boundary between the finite and boundary element regions.

Fig. 2.6 The 2D body discretized using finite and boundary elements
We can write the integral equation for the BEM region:

$$\mathbf{c}\mathbf{u} = \int_\Gamma \mathbf{U}^*\mathbf{p}\,d\Gamma - \int_\Gamma \mathbf{P}^*\mathbf{u}\,d\Gamma \qquad (2.3.1)$$

where $\mathbf{u}$ is a displacement vector, $\mathbf{p}$ is a traction vector, $\mathbf{c}$ depends on the boundary smoothness, and $\mathbf{U}^*$ and $\mathbf{P}^*$ are fundamental solutions. The integral Eq. (2.3.1) after boundary discretization can be expressed as:

$$\mathbf{H}\mathbf{u} = \mathbf{G}\mathbf{p} \qquad (2.3.2)$$

where $\mathbf{H}$ and $\mathbf{G}$ are coefficient matrices. The above equation may be transformed into a form similar to the FEM dependence between forces and displacements. The matrix $\mathbf{G}$ is eliminated from the right-hand side:

$$\mathbf{G}^{-1}\mathbf{H}\mathbf{u} = \mathbf{p} \qquad (2.3.3)$$

and then the tractions are converted into nodal forces by multiplying Eq. (2.3.3) by the shape function matrix $\mathbf{M}$:

$$\mathbf{M}\mathbf{G}^{-1}\mathbf{H}\mathbf{u} = \mathbf{M}\mathbf{p} \qquad (2.3.4)$$

Assuming that $\mathbf{f}' = \mathbf{M}\mathbf{p}$ and $\mathbf{K}' = \mathbf{M}\mathbf{G}^{-1}\mathbf{H}$, one obtains:

$$\mathbf{K}'\mathbf{u} = \mathbf{f}' \qquad (2.3.5)$$

$\mathbf{K}'$ in the Eq. (2.3.5) can be treated as a FEM element stiffness matrix. Unfortunately, the stiffness matrix $\mathbf{K}'$ is not symmetric. An iterative method should be used to solve the problem because of the presence of nonlinearities in the finite-element region.
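A minimal sketch of the conversion in Eqs. (2.3.3)–(2.3.5), and of adding the resulting matrix into a finite-element system, is given below (illustrative only; $\mathbf{H}$, $\mathbf{G}$, $\mathbf{M}$ and the FE stiffness matrix are assumed to come from separate BEM and FEM codes, and the DOF mapping is a hypothetical helper).

```python
import numpy as np

def bem_region_stiffness(H, G, M):
    """Equivalent stiffness of a BEM region, K' = M G^{-1} H (Eq. 2.3.5).
    Solving G Y = H column-wise avoids forming G^{-1} explicitly."""
    return M @ np.linalg.solve(G, H)

def couple(K_fe, K_bem, dof_map):
    """Add the BEM-region stiffness into the global FE system.
    dof_map[i] is the global DOF of the i-th BEM-region DOF (interface nodes shared)."""
    K = K_fe.copy()
    idx = np.asarray(dof_map)
    K[np.ix_(idx, idx)] += K_bem
    return K
```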
Figure 2.7 presents an infinite body $\Omega$. The structure is divided into a finite-element region near the interior hole, and the boundary elements model the infinite structure. The boundary elements are located on the outer boundary of the finite-element region.
Fig. 2.7 The infinite body discretized using finite and boundary elements

References

1. Banerjee PK (1994) The boundary element method in engineering. McGraw-Hill Book


Company, London
2. Beer G (1983) Finite element, boundary element and coupled analysis of unbounded
problems in elastostatics. Int J Numer Meth Eng 19:567–580
3. Brebbia CA, Dominiguez J (1989) Boundary elements: an introductory course. Comput Mech
4. Brebbia CA, Telles JCF, Wrobel LC (1984) Boundary element techniques. Springer-Verlag,
Berlin
5. Burczyński T (1995) The boundary element method in mechanics. WNT, Warsaw
6. Burczyński T (ed) (2001) Advanced mathematical and computational mechanics aspects of
the boundary element method. Kluwer Publishers, Dordrecht
7. Burczyński T, Grabacki J (1998) The boundary element method. Part IV. In: Kleiber M
(ed) Handbook of computational solid mechanics. Springer-Verlag, Berlin
8. Chandrupatla TR, Belegundu AD (2002) Introduction to finite elements in engineering, 3rd
edn. Pearson
9. Clough RW (1960) The finite element in plane stress analysis. In: Proceedings of 2nd ASCE
and conference on electronic computation, Pittsburgh, PA
10. Dominguez J (1993) Boundary elements in dynamics. Computational Mechanics
Publications, Elsevier Applied Science, Southampton-Boston, London-New York
11. Gaul L, Kögl M, Wagner M (2003) Boundary element methods for engineers and scientists:
an introductory course with advanced topics. Springer-Verlag
12. Ptaszny J (2015) Accuracy of the fast multipole boundary element method with quadratic
elements in the analysis of 3D porous structures. Comput Mech 56(3):477–490
13. Sladek V, Sladek J (1983) Boundary integral equation method in thermoelasticity, Part I:
general analysis. Appl. Math. Modell 7:241–253
14. Zienkiewicz OC (1971) The finite element method in engineering science. McGraw-Hill,
London
15. Zienkiewicz OC, Kelly W, Bettess P (1977) The coupling of finite element method and
boundary solution procedures. Int J Numer Meth Eng 11:355–375
16. Zienkiewicz OC, Taylor RL (2000) The finite element method, vol 1–3. Oxford, Butterworth
Chapter 3
Intelligent Computing Techniques

Abstract The chapter presents various methods that can be qualified as intelligent computing ones. Different bio-inspired methods and techniques in the form of evolutionary algorithms (EAs), artificial neural networks (ANNs), particle swarm optimizers (PSOs) and artificial immune systems (AISs) are described. Moreover, the information granularity approach is introduced to model some uncertainties in material properties, geometry or boundary conditions. Granular computing techniques using interval numbers, fuzzy numbers and random variables are presented. Combinations of EAs and granular computing techniques in the form of fuzzy and stochastic EAs are proposed. Various hybrid computational intelligence algorithms combining different intelligent or conventional techniques (e.g. gradient optimization methods) are described. A brief comparison of the effectiveness of selected bio-inspired optimization methods (PSO, EA and AIS) for the chosen test functions is also included.

3.1 Introduction to Computational Intelligence

Computational intelligence can be considered as a set of biologically inspired


computational methodologies and techniques. The common feature of these classes
of approaches is the fact that they have some learning attributes typical for natural
intelligence. These approaches can be very useful when the application of traditional methods to solving complex problems is difficult or ineffective. Such a situation may occur when the considered problems contain different kinds of uncertainty or have many extrema. Evolutionary algorithms, artificial immune
systems, swarm intelligence and artificial neural networks belong to the most
popular approaches based on biologically inspired methods. They are based on
natural selection, learning procedures and probabilistic rules. Sequential, parallel
and distributed approaches to evolutionary computation are presented in this
chapter. Different models of uncertainties and granular computing and their
applications to fuzzy and stochastic evolutionary algorithms are also considered.
Alternative biologically inspired approaches based on the simulation of immune


phenomena and swarm behaviour of biological individuals are also presented. It is


also possible to create some hybrid computational intelligence algorithms which
can be considered as a special kind of synergy between biologically inspired and
traditional approaches. Comparisons of numerical results of tests based on particle
swarm, evolutionary and immune computations conclude and close the chapter.

3.2 Sequential Evolutionary Algorithms

3.2.1 Introduction

Evolutionary algorithms [4, 38] are algorithms that search the space of solutions based on an analogy with the biological evolution of species. As in biology, the term individual is used, and it represents a single solution. Evolutionary algorithms operate on populations of individuals, so while an algorithm works, we deal with a set of problem solutions all the time. An individual consists of chromosomes. Usually it is assumed that an individual contains only one chromosome. Chromosomes consist of genes, which are equivalent to the design variables in optimization problems. The adaptation of each individual is calculated using the fitness function. Figure 3.1 shows how an evolutionary algorithm works.

Fig. 3.1 Diagram of the evolutionary algorithm

In the first step, an initial population of individuals is created. The values of the genes of particular individuals are usually generated randomly. The evolutionary operators (like crossover and mutation) are used to produce new individuals, creating an offspring population. In the next step, the individuals' fitness function values are computed. Then, the selection procedure is performed to choose individuals for the next iteration, taking into account their fitness values. The process is repeated iteratively until the termination condition is satisfied. In most cases the termination condition of the computation is formulated as a maximum number of iterations.
In evolutionary algorithms a floating-point representation is applied, which means that the genes included in chromosomes hold floating-point numbers. Usually the range of the gene values is limited.
A single-chromosome individual (called a chromosome) $ch_i$, $i = 1, 2, \ldots, N$, where $N$ is the population size, may be presented by means of a column or row matrix whose elements are represented by genes $g_{ij}$, $j = 1, 2, \ldots, n$, where $n$ is the number of genes in a chromosome (Fig. 3.2).
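The loop of Fig. 3.1 can be sketched in a few lines (a minimal real-coded illustration, not the authors' implementation; the Gaussian mutation and rank-based selection used here are simplified versions of the operators discussed in Sect. 3.2.2):

```python
import numpy as np

def simple_ea(fitness, lo, hi, pop_size=20, n_iter=200, sigma=0.1, seed=0):
    """Minimal evolutionary algorithm: initialization, mutation, evaluation, selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))      # initial population
    for _ in range(n_iter):                                   # stop condition: iteration limit
        children = pop + rng.normal(0.0, sigma * (hi - lo), pop.shape)  # Gaussian mutation
        children = np.clip(children, lo, hi)
        both = np.vstack([pop, children])
        fit = np.array([fitness(ind) for ind in both])        # fitness evaluation
        ranks = np.empty(len(both))
        ranks[np.argsort(fit)] = np.arange(len(both), 0, -1)  # best gets the highest rank
        idx = rng.choice(len(both), size=pop_size, p=ranks / ranks.sum())
        pop = both[idx]                                       # rank-based selection
    best = min(pop, key=fitness)
    return best, fitness(best)
```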

3.2.2 Evolutionary Operators

Evolutionary operators change the gene values in a way analogous to the biological mechanisms of mutation and crossover. Different kinds of operators are presented in the literature; the basic ones are:
– uniform mutation,
– mutation with Gaussian distribution,
– boundary mutation,
– simple crossover,
– arithmetical crossover.
A uniform mutation changes the values of randomly chosen genes of a randomly selected individual. The gene takes a random value with a uniform distribution from the variable's range. The diagram of how the uniform mutation operator works is presented in Fig. 3.3.

Fig. 3.2 The structure of an individual

Fig. 3.3 Schema of a uniform mutation

A mutation with Gaussian distribution is an operator changing the values of an individual's genes randomly, similarly to the uniform mutation. New values of the genes are created by means of random numbers with the Gaussian distribution. The operator searches the individual's neighbourhood.
A boundary mutation modifies a selected gene of a selected individual, which can randomly take only one of two values: either the lower or the upper limit of the variable range (Fig. 3.4).
A simple crossover is an operator creating an offspring on the basis of two parent individuals. A cutting position is drawn (Fig. 3.5), and the offspring individuals consist of genes coming partly from the first and partly from the second parent.
An arithmetical crossover has no biological counterpart. A new individual is formed, similarly to a simple crossover, on the basis of two parent individuals; however, the values of the individual's genes are defined as the average values of the parent individuals' genes (Fig. 3.6).
Fig. 3.4 Diagram of boundary mutation

Fig. 3.5 Diagram of a simple crossover

Fig. 3.6 Diagram of an arithmetical crossover

An important element of an evolutionary algorithm is the mechanism of selection. The probability of an individual's survival depends on the value of the fitness function. The ranking selection is performed in a few steps. First, the individuals are classified according to the values of the fitness function; then a rank value is attributed to each individual. It depends on the individual's number and the rank function. The best individuals obtain the highest rank values; the worst obtain the lowest ones. In the final step, individuals for the offspring generation are drawn, but the probability of drawing particular individuals is closely related to their rank value.
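The operators and the ranking selection described above can be written compactly as follows (a minimal sketch under the usual real-coded assumptions, not the authors' implementation; lo and hi are the lower and upper gene limits):

```python
import numpy as np
rng = np.random.default_rng()

def uniform_mutation(ind, lo, hi):
    """Replace one randomly chosen gene with a uniformly drawn value from its range."""
    j = rng.integers(ind.size)
    ind = ind.copy()
    ind[j] = rng.uniform(lo[j], hi[j])
    return ind

def boundary_mutation(ind, lo, hi):
    """Set one randomly chosen gene to either its lower or its upper limit."""
    j = rng.integers(ind.size)
    ind = ind.copy()
    ind[j] = lo[j] if rng.random() < 0.5 else hi[j]
    return ind

def simple_crossover(a, b):
    """Exchange gene tails of two parents at a randomly drawn cutting position."""
    c = rng.integers(1, a.size)
    return np.r_[a[:c], b[c:]], np.r_[b[:c], a[c:]]

def arithmetical_crossover(a, b):
    """Offspring genes are the averages of the parents' genes."""
    return 0.5 * (a + b)

def ranking_selection(pop, fitness, n_offspring):
    """Draw individuals with probabilities related to their rank (minimization)."""
    order = np.argsort(fitness)                    # best (lowest fitness) first
    ranks = np.empty(len(pop))
    ranks[order] = np.arange(len(pop), 0, -1)      # best individual gets the highest rank
    prob = ranks / ranks.sum()
    idx = rng.choice(len(pop), size=n_offspring, p=prob)
    return pop[idx]
```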

3.3 Parallel and Distributed Evolutionary Algorithms

3.3.1 Introduction

The sequential evolutionary algorithms are well-known tools for global optimization [4, 38]. The number of fitness function evaluations during optimization typically reaches thousands or even hundreds of thousands. The fitness function evaluation for most real-life problems connected with mechanics or mechanical engineering takes a lot of time (from seconds to hours). The long computations can be shortened when a parallel or distributed evolutionary algorithm is used. The fitness function evaluation is done in a parallel way when the parallel evolutionary algorithms are used. The distributed evolutionary algorithms operate on many subpopulations. The parallelization of the distributed evolutionary algorithm leads to two cases: the first one, when each subpopulation uses a different processor; and the second one, when different processors can be used by each chromosome of the subpopulations.

3.3.2 The Parallel Evolutionary Algorithm

The parallel evolutionary algorithms [18] perform an evolutionary process in the same manner as the sequential evolutionary algorithm. The difference is in the fitness function evaluation. The parallel evolutionary algorithm evaluates fitness function values in a parallel way. Theoretically, the maximum reduction in the time needed to solve the optimization problem using parallel evolutionary algorithms is equal to the number of processing units used. The maximum number of processing units which can be used is constrained by the number of chromosomes in the population. The flowchart of the parallel evolutionary algorithm is shown in Fig. 3.7. The starting population of chromosomes is created randomly. The evolutionary operators change chromosomes, and the fitness function value for each chromosome is computed. The server/master transfers chromosomes to clients/workers. The workers compute the fitness function and send it to the server. The workers operate on different processing units. The selection is performed after computing the fitness function value for each chromosome. The selection decides which chromosomes will be in the new population. The selection is done randomly, but the fitter chromosomes have a higher probability of being in the new population. The next iteration is performed if the stop condition is not fulfilled. The stop condition can be expressed as the maximum number of iterations.

Fig. 3.7 The parallel evolutionary algorithm
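The master/worker evaluation step can be sketched as follows (an illustration only, not the authors' code; the fitness function here is a cheap placeholder standing in for an expensive structural analysis):

```python
import numpy as np
from multiprocessing import Pool

def fitness(chromosome):
    # Placeholder for an expensive evaluation (e.g. an FEM or BEM analysis).
    return float(np.sum(chromosome ** 2))

def evaluate_population_parallel(population, n_workers=4):
    """Master/worker evaluation: chromosomes are sent to workers,
    fitness values are collected by the server (cf. Fig. 3.7)."""
    with Pool(processes=n_workers) as pool:
        return np.array(pool.map(fitness, list(population)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pop = rng.uniform(-5.12, 5.12, size=(20, 10))
    print(evaluate_population_parallel(pop))
```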

3.3.3 The Distributed Evolutionary Algorithm

The distributed genetic algorithms [1, 55] and the distributed evolutionary algorithms (DEA) work similarly to a set of evolutionary algorithms operating on subpopulations. The evolutionary algorithms exchange chromosomes between subpopulations during a migration phase. The flowchart of the DEA is presented in Fig. 3.8. When the DEA is used, the number of fitness function evaluations can be lower in comparison with sequential and parallel evolutionary algorithms. The DEA usually works in a parallel manner. Each of the evolutionary algorithms in the DEA works on a different processing unit. The theoretical reduction of the calculation time can be bigger than the number of processing units. The starting subpopulations of chromosomes are created randomly. The evolutionary operators change chromosomes, and the fitness function value for each chromosome is computed. The migration exchanges a part of the chromosomes between subpopulations. The selection decides which chromosomes will be in the new population. The selection is done randomly, but the fitter chromosomes have a higher probability of being in the new population. The selection is performed on chromosomes changed by operators and on immigrants. The next iteration is performed if the stop condition is not fulfilled. The stop condition can be expressed as the maximum number of iterations.
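The migration phase can be sketched as follows (an illustration only; the ring topology and the replacement of the worst individuals are assumptions made here — the choice of topology and migration parameters is discussed in Sect. 3.3.5):

```python
import numpy as np

def migrate(subpops, fits, n_migrants=1):
    """Ring-topology migration between subpopulations: each subpopulation sends
    copies of its best chromosomes to the next one, replacing its worst individuals.
    (Topology and replacement rule are illustrative assumptions.)"""
    n = len(subpops)
    migrants = [subpops[i][np.argsort(fits[i])[:n_migrants]].copy() for i in range(n)]
    for i in range(n):
        dst = (i + 1) % n
        worst = np.argsort(fits[dst])[-n_migrants:]   # minimization: highest fitness is worst
        subpops[dst][worst] = migrants[i]
    return subpops
```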

3.3.4 The Improved Distributed Evolutionary Algorithm

To improve the scalability of the distributed evolutionary algorithm, mechanisms


from the parallel evolutionary algorithm can be used. The simplest improvement is
to compute fitness function values in a parallel way. The maximum number of
processing units which can be used is equal to the sum of chromosomes in sub-
populations instead of the number of subpopulations. The flowchart of the modified
distributed evolutionary algorithm is presented in Fig. 3.9.
Fig. 3.8 The distributed evolutionary algorithm (one subpopulation)

3.3.5 Optimal Parameters of the Distributed Evolutionary Algorithm

It is hard to find the optimal parameters of an evolutionary algorithm. One of the methods is to perform optimization runs with different parameters of the evolutionary algorithm and to compare the results. The parameter values can also be found with the use of another, "master" evolutionary algorithm. The parameters are coded into the master algorithm chromosomes. The following parameters can be taken into account:
– the probabilities of operators,
– the number of subpopulations,
– the number of chromosomes in subpopulations,
– migration topology,
– the frequency and number of migrating chromosomes.
Fig. 3.9 The improved distributed evolutionary algorithm

The fitness function of the master algorithm can be formulated as:

$$F(\mathbf{x}) = \frac{1}{m}\sum_{t=1}^{m} EV_t \qquad (3.3.1)$$

where $m$ is the number of tests performed for parameters $\mathbf{x}$, and $EV_t$ is the number of fitness function evaluations in test $t$. The tested algorithm stops after achieving a prescribed optimal value of the fitness function. Another fitness function can be used when the stop criterion of the tested algorithm is the number of fitness function evaluations:

$$F(\mathbf{x}) = \frac{1}{m}\sum_{t=1}^{m} BE_t \qquad (3.3.2)$$

where $BE_t$ denotes the best fitness function value found in test $t$. The number of tests $m$ is important because evolutionary algorithms are stochastic and the quality of the algorithm with parameters $\mathbf{x}$ should be determined as the average of many runs. The flowchart of the algorithm is presented in Fig. 3.10.

Fig. 3.10 The flowchart of the algorithm used to optimize the parameters of the evolutionary algorithm

This method is very time-consuming and can be applied to mathematical functions and simple mechanical problems only, but it can indicate parameter values for other real problems.
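A compact sketch of evaluating Eq. (3.3.1) for one candidate parameter set is shown below (illustrative; run_tuned_ea is a hypothetical placeholder standing in for a full run of the tuned algorithm with the given parameters):

```python
import numpy as np

def run_tuned_ea(params, seed):
    """Hypothetical placeholder: run the tuned EA with the given parameters and
    return the number of fitness evaluations needed to reach the target value."""
    rng = np.random.default_rng(seed)
    return 5000 + rng.integers(0, 5000)          # dummy value for illustration

def master_fitness(params, m=30):
    """Eq. (3.3.1): average number of evaluations EV_t over m independent tests."""
    return np.mean([run_tuned_ea(params, seed=t) for t in range(m)])
```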

3.3.6 Numerical Examples

The Rastrigin function with 20 design variables is considered. The function is defined by the equation:

$$F(\mathbf{x}) = 200 + \sum_{i=1}^{20}\left(x_i^2 - 10\cos(2\pi x_i)\right) \qquad (3.3.3)$$

The minimum value of the function is equal to zero. The constraints imposed on the design variables are:

$$-5.12 \le x_i \le 5.12 \qquad (3.3.4)$$
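For reference, Eq. (3.3.3) can be implemented directly (a short sketch; the 20-variable case and the bound constraint of Eq. (3.3.4) follow the text):

```python
import numpy as np

def rastrigin(x):
    """Rastrigin test function, Eq. (3.3.3); global minimum F = 0 at x = 0."""
    x = np.asarray(x)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

x = np.zeros(20)                      # design variables, -5.12 <= x_i <= 5.12
print(rastrigin(x))                   # 0.0 at the global minimum
```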

The fitness function for the “master” evolutionary algorithm is expressed by (3.3.1). The number of tests m is equal to 30. The “master” algorithm chromosome contains information about:
– the probability of the uniform mutation (0–1);
– the probability of the mutation with Gaussian distribution (0–1);
– the probability of the simple crossover (0–1);
– the probability of the arithmetic crossover (0–1);
– the number of chromosomes in subpopulations (2–30);
– the frequency of the migration (1–10);
– the number of migrating chromosomes (1–30).
The tests were performed for several numbers of subpopulations. The obtained values of the evolutionary algorithm's parameters suggest the following: using only the mutation with Gaussian distribution, a small number of chromosomes, migration every generation and migration of all chromosomes.
After these tests it can be checked how the number of fitness function evaluations depends on the number of chromosomes in one population (only the mutation with the Gaussian distribution was used). In Fig. 3.11 the average (for 100 tests), maximal and minimal numbers of fitness function evaluations as a function of the number of chromosomes are presented. The lowest average is obtained for six chromosomes, and the average was 7891.9 evaluations.
Next, the tests for the number of subpopulations varying from 2 to 16 were performed. Two chromosomes in each subpopulation and only the mutation with the Gaussian distribution were used. The migration occurs in every generation and all the chromosomes migrate. The topology of migration is “fully connected”. The results are shown in Fig. 3.12. The average is computed for 100 tests.

Fig. 3.11 The number of fitness function evaluations as a function of the number of chromosomes for one population (average, minimal, maximal)
Fig. 3.12 The number of fitness function evaluations as a function of the number of subpopulations (average, minimal, maximal)

Fig. 3.13 The maximal speedup of computation

Figure 3.13 presents the speedup computed as:

$$k = \frac{t_1}{t_n} \qquad (3.3.5)$$

where $t_1$ is the time needed for optimization when one population is used, and $t_n$ is the time needed when $n$ subpopulations are used. Because these tests were performed by means of a single computer, we compute the maximal speedup which can be achieved when the communication time between subpopulations is equal to zero. The number of fitness function evaluations for one population was used as the time $t_1$, and the average number of fitness function evaluations for $n$ subpopulations was used as the time $t_n$. Such assumptions can be made when the time needed for each fitness function evaluation is constant.

3.3.7 Concluding Remarks

The distributed evolutionary algorithm outperforms one-population algorithms. The time needed for computation can be shortened using such an algorithm. The optimal algorithm parameters can be found using another evolutionary algorithm, but this is very time-consuming. The speedup achieved for the mathematical function is nearly linear, and for the real problem it is much bigger. This difference may be due to the parameters of the evolutionary algorithm. In the test for the mathematical function, the parameters of the evolutionary algorithm were optimized by means of another "master" evolutionary algorithm. During the mechanical optimization, such optimization of the parameters cannot be done. It is possible that there are parameters which would give a worse speedup, but the time needed for finding such parameters is very long.

3.4 Information Granularity and Granular Computing

3.4.1 Introduction

The topic of information granularity was presented in the early works of Lotfi
Zadeh in the context of fuzzy sets [59]. Information granules can be defined as
elements grouped together on the basis of their similarity, indistinguishability,
proximity or functionality [6]. The concept of information granules is more
philosophical than technical and it can be diversely interpreted. Information gran-
ules are parts of abstraction encountered everywhere in the surrounding reality and
they represent the human, intuitive way of thinking. They constitute a connection
between the real world and its digital representation. Granularity can be considered
spatial as well as temporal. Granules may be in the form of classes, objects, subsets
and clusters or different elements of the reality [57]. The process of preparation of
information granules is called granulation. Information granules can exhibit dif-
ferent granularity levels according to the required accuracy [48]. Granules of a particular level may be formed of lower-level subgranules, and they can also constitute subgranules for granules of a higher level.
Granular computing is knowledge-oriented, in contrast to numeric calculations based on data. It denotes the idea of processing data in the form of information granules. Granular computing, as a concept of “a subset of computing with words”, was introduced at the end of the twentieth century by Zadeh [60]. Granular
computing is also considered as the semantical transformation of data in the
granulation process and verification of information abstractions in the noncompu-
tational way [7]. According to Yao [57], granular computing may be considered as
an integration of three elements (granular computing triangle, Fig. 3.14):

Fig. 3.14 Granular computing triangle



(i) philosophy (structured thinking): an attempt to extract and formalize the


human way of understanding;
(ii) methodology (structured problem-solving): determines the methods and
techniques of systematic solving of various problems;
(iii) computation (structural information processing): refers to the problems of
abstract information processing, both in the human mind and using
machines.
Significant in technical sciences are models of information granules related to
the uncertainties in systems which result from the manufacturing of mechanical
structures.
In many engineering problems it is necessary to identify some parameters, such as material properties, geometric parameters or boundary conditions. If such parameters cannot be determined precisely, they can be considered as uncertain and modelled as information granules. Granular computing is then treated as a set of methods and algorithms for the numerical processing of uncertain data. The most
commonly used uncertainty models are: interval numbers, fuzzy sets and rough sets.

3.4.2 Interval Numbers and Interval Arithmetic

The history of intervals and interval arithmetic goes back to 1950s of the twentieth
century. It was introduced by Ramon Moore in 1959 as a tool for automatic control
of the computational errors that arise from input error, rounding errors during
computation and truncation errors while using a numerical approximation to the
mathematical problem [19]. Nowadays, this idea leads to very powerful technique
with many applications in mathematics, computer science and engineering.
A real interval [a] is a connected portion of the real line. It is defined as a bounded, closed subset of real numbers [39]:

$$[a] = [\underline{a}, \overline{a}] = \{x \in \mathbb{R} : \underline{a} \le x \le \overline{a}\}, \qquad \underline{a}, \overline{a} \in \mathbb{R}, \ \underline{a} \le \overline{a} \qquad (3.4.1)$$

where $\underline{a}$, $\overline{a}$ are the lower and upper bounds of the interval, respectively; x is any element belonging to the interval [a].
The midpoint m([a]), the radius r([a]), the magnitude |[a]| and the width w([a]) of the interval are defined as:

$$m([a]) = \frac{1}{2}(\underline{a} + \overline{a}), \quad r([a]) = \frac{1}{2}(\overline{a} - \underline{a}), \quad |[a]| = \max\{|\underline{a}|, |\overline{a}|\}, \quad w([a]) = \overline{a} - \underline{a} = 2\,r([a]) \qquad (3.4.2)$$

A degenerate interval [a] for which $\underline{a} = \overline{a}$ is called a singleton.

Interval arithmetic is the arithmetic of quantities that lie within specified ranges (intervals) instead of having exact values [2]. Interval arithmetic is typically limited to real intervals. If two intervals [a] and [b] are bounded and closed, arithmetical interval operators can be defined as:

$$[a] \circ [b] = \{a \circ b \mid a \in [a] \wedge b \in [b]\} \qquad (3.4.3)$$

where $\circ \in \{+, -, \cdot, /\}$.
The endpoints of the interval $[a] \circ [b]$ can be determined as:

$$\begin{aligned}
[a] + [b] &= [\underline{a} + \underline{b},\ \overline{a} + \overline{b}] \\
[a] - [b] &= [\underline{a} - \overline{b},\ \overline{a} - \underline{b}] \\
[a] \cdot [b] &= \big[\min\{\underline{a}\underline{b}, \underline{a}\overline{b}, \overline{a}\underline{b}, \overline{a}\overline{b}\},\ \max\{\underline{a}\underline{b}, \underline{a}\overline{b}, \overline{a}\underline{b}, \overline{a}\overline{b}\}\big] \\
[a] / [b] &= [a] \cdot [b]^{-1}, \quad \text{where } [b]^{-1} = \{1/b : b \in [b]\} \text{ if } 0 \notin [b]
\end{aligned} \qquad (3.4.4)$$
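The endpoint formulas (3.4.4) and the descriptors (3.4.2) translate directly into code. The following minimal Python sketch is only an illustration (not a library implementation) of the four operators on bounded, closed intervals:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):            # addition, Eq. (3.4.4)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):            # subtraction
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):            # multiplication: min/max of the four products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __truediv__(self, other):        # division, defined only if 0 is not in [b]
        assert not (other.lo <= 0.0 <= other.hi), "0 in divisor interval"
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

    def midpoint(self):                  # Eq. (3.4.2)
        return 0.5 * (self.lo + self.hi)

    def width(self):
        return self.hi - self.lo

# example: [1, 2] * [-1, 3] = [-2, 6]
print(Interval(1, 2) * Interval(-1, 3))
```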

If $I(\mathbb{R})$ denotes the set of real intervals, an interval vector $[\mathbf{v}] \in I(\mathbb{R}^n)$ has n components, the components being intervals $[v_i] \in I(\mathbb{R})$, $i = 1, \ldots, n$. The midpoint, the radius and the magnitude of the interval vector are defined componentwise, similarly to (3.4.2), as:

$$m([\mathbf{v}])_i = m([v_i]), \qquad r([\mathbf{v}])_i = r([v_i]), \qquad |[\mathbf{v}]|_i = |[v_i]| \qquad (3.4.5)$$

The width and norm of the interval vector are scalars defined, respectively, as:

$$w([\mathbf{v}]) = \max_i \{w([v_i])\}, \qquad \|[\mathbf{v}]\| = \max_i |[v_i]| \qquad (3.4.6)$$

An interval matrix $[A] = ([a_{ij}]) \in I(\mathbb{R}^{m \times n})$ has m rows and n columns; each element of the interval matrix is an interval $[a_{ij}] \in I(\mathbb{R})$, $i = 1, \ldots, m$, $j = 1, \ldots, n$. The width and norm of the interval matrix are scalars defined as:

$$w([A]) = \max_{i,j} w([a_{ij}]), \qquad \|[A]\| = \max_i \Big\{ \sum_j |[a_{ij}]| \Big\} \qquad (3.4.7)$$

3.4.3 Fuzzy Sets and Fuzzy Numbers

Fuzzy sets are an extension of the classical (crisp) notion of set introduced by
Lotfi A. Zadeh in 1965 [61]. In classical set theory, any element either belongs or does not belong to a given set, so only two logical states, 0 or 1, are admissible. In the fuzzy set theory, an arbitrary element can also partially belong to the set.
If X is a nonempty set of elements (objects, points) denoted by x, a fuzzy set A is
a set of ordered pairs defined as:

$$A = \{(x, \mu_A(x)),\ \forall x \in X\} \qquad (3.4.8)$$

where $\mu_A(x)$ is the membership function which associates with each element x in X a real non-negative number whose supremum is finite.
There exist many standard membership functions; some of them are presented in
Fig. 3.15.
The support of the fuzzy set A is defined as the crisp subset of X whose elements have nonzero membership function values:

$$\operatorname{supp} A = \{x \in X : \mu_A(x) > 0\} \qquad (3.4.9)$$

An α-level set (α-cut) is the crisp set of elements that belong to the fuzzy set A at least to the degree α (Fig. 3.16):

$$A_\alpha = \{x \in X : \mu_A(x) \ge \alpha\}, \qquad \forall \alpha \in [0, 1] \qquad (3.4.10)$$

Each fuzzy set can be uniquely represented by the family of all its α-cuts.
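For a triangular membership function the α-cut of (3.4.10) has a simple closed form. The following Python sketch is only an illustration, under the assumption of a triangular fuzzy number with support [a, b] and peak m:

```python
def alpha_cut_triangular(a, m, b, alpha):
    """Alpha-cut of a triangular fuzzy number with support [a, b] and peak m.

    Returns the closed interval A_alpha = {x : mu_A(x) >= alpha}, Eq. (3.4.10).
    Assumes 0 < alpha <= 1 and a <= m <= b.
    """
    lo = a + alpha * (m - a)   # left boundary rises linearly towards the peak
    hi = b - alpha * (b - m)   # right boundary falls linearly towards the peak
    return (lo, hi)

# example: the 0.5-cut of the triangular number (0, 1, 4) is [0.5, 2.5]
print(alpha_cut_triangular(0.0, 1.0, 4.0, 0.5))
```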

Fig. 3.15 Exemplary membership functions: a triangular; b trapezoidal; c Gaussian

Fig. 3.16 An α-cut of a triangular fuzzy set

Fig. 3.17 Operations on fuzzy sets: a intersection; b union; c complement

If $\sup(\mu_A(x)) = 1$, the fuzzy set is called normal; otherwise the fuzzy set A is subnormal. Depending on the membership function value, the element x can be not included in the fuzzy set A ($\mu_A(x) = 0$), fully included ($\mu_A(x) = 1$), or x is called a fuzzy member ($0 < \mu_A(x) < 1$).
The following operations on fuzzy sets were proposed by Zadeh (Fig. 3.17) [61]:
(i) intersection (logical and):

$$\mu_{A \cap B}(x) = \min\{\mu_A(x), \mu_B(x)\}, \quad \forall x \in X \qquad (3.4.11)$$

(ii) union (logical or):

$$\mu_{A \cup B}(x) = \max\{\mu_A(x), \mu_B(x)\}, \quad \forall x \in X \qquad (3.4.12)$$

(iii) complement (negation):

$$\mu_{\bar{A}}(x) = 1 - \mu_A(x) \qquad (3.4.13)$$

Intersection and union definitions were later extended to T-norm and S-norm
(T-conorm) definitions, respectively:

$$\mu_{A \cap B}(x) = T(\mu_A(x), \mu_B(x)), \qquad \mu_{A \cup B}(x) = S(\mu_A(x), \mu_B(x)) \qquad (3.4.14)$$

Table 3.1 Selected T-norms and corresponding S-norms

Name                 | T(a, b)                                 | S(a, b)
Zadeh's norms        | min(a, b)                               | max(a, b)
Algebraic norms      | a·b                                     | a + b − a·b
Łukasiewicz's norms  | max(a + b − 1, 0)                       | min(a + b, 1)
Weber's norms        | a if b = 1; b if a = 1; 0 if a, b ≠ 1   | a if b = 0; b if a = 0; 1 if a, b ≠ 0

Each T-norm corresponds to an S-norm:

$$a \,T\, b = 1 - \big[(1 - a) \,S\, (1 - b)\big] \qquad (3.4.15)$$

where $a = \mu_A(x)$ and $b = \mu_B(x)$.


Some of the T-norms and corresponding S-norms are tabulated in Table 3.1.
A fuzzy number is a fuzzy set of the real line with a normal, convex and
continuous membership function of bounded support. Applying the extension
principle to arithmetic operations, it is possible to define fuzzy arithmetic operations
as [30]:

$$\mu_C(z) = \max_{z = x \circ y} \{\min[\mu_A(x), \mu_B(y)]\} \qquad (3.4.16)$$

where $\circ \in \{+, -, \cdot, /\}$.

Representing fuzzy numbers by α-cuts very often allows using simple interval arithmetic instead of the extension principle.
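As a hedged illustration of this remark (a minimal Python sketch, assuming both fuzzy numbers are discretized on the same α levels), fuzzy addition then reduces to interval addition performed level by level instead of applying the extension principle (3.4.16) directly:

```python
def add_fuzzy(cuts_a, cuts_b):
    """Add two fuzzy numbers represented by their alpha-cuts.

    Each argument is a dict {alpha: (lo, hi)}; both numbers are assumed to be
    discretized on the same alpha levels. Interval addition is applied level
    by level, which is equivalent to the extension principle for addition.
    """
    return {alpha: (cuts_a[alpha][0] + cuts_b[alpha][0],
                    cuts_a[alpha][1] + cuts_b[alpha][1])
            for alpha in cuts_a}

# example: triangular numbers (1, 2, 3) and (2, 3, 5) on three alpha levels
A = {0.0: (1.0, 3.0), 0.5: (1.5, 2.5), 1.0: (2.0, 2.0)}
B = {0.0: (2.0, 5.0), 0.5: (2.5, 4.0), 1.0: (3.0, 3.0)}
print(add_fuzzy(A, B))   # {0.0: (3.0, 8.0), 0.5: (4.0, 6.5), 1.0: (5.0, 5.0)}
```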

3.4.4 Rough Sets

Rough set theory was introduced in 1982 by Zdzisław Pawlak as a formal


approximation of conventional sets in terms of a pair of sets which gives the lower
and upper approximation of the original set [45]. The idea of rough sets shows
another approach to imprecision, which is expressed by a boundary region of the
set, not by a partial membership like in fuzzy sets.
The idea of rough sets can be described by means of approximations (interior
and closure) being topological operations.
For a given set of objects U (the universe), any set $X \subseteq U$ may be characterized, with respect to an indiscernibility relation $R \subseteq U \times U$, by the following approximations [46]:

(i) the lower approximation of a set X with respect to R is the set of all objects which can be certainly classified as members of X with respect to R:

$$R_*(X) = \bigcup_{x \in U} \{R(x) : R(x) \subseteq X\} \qquad (3.4.17)$$

where R(x) denotes the equivalence class of R determined by element x;


(ii) the upper approximation of a set X with respect to R is the set of all objects which can possibly be classified as members of X with respect to R:

$$R^*(X) = \bigcup_{x \in U} \{R(x) : R(x) \cap X \ne \emptyset\} \qquad (3.4.18)$$

The boundary region of a set X with respect to R is the set of all objects, which
can be neither ruled in nor ruled out as members of the set X with respect to R:

$$RN_R(X) = R^*(X) - R_*(X) \qquad (3.4.19)$$

The indiscernibility relation, being an equivalence relation, divides U into disjoint, nonempty abstraction classes. This relation describes the lack of knowledge about the universe. Information (knowledge) granules generated by R are the equivalence classes of the indiscernibility relation. They represent the elementary portions of knowledge that one is able to perceive due to R.
The idea of rough sets is depicted in Fig. 3.18.
A set X is rough if the boundary region $RN_R(X)$ is not empty; otherwise it is crisp (exact with respect to R).

Fig. 3.18 Rough set idea



Approximations have the following properties:

$$\begin{aligned}
& R_*(X) \subseteq X \subseteq R^*(X) \\
& R_*(\emptyset) = R^*(\emptyset) = \emptyset \\
& R_*(U) = R^*(U) = U \\
& R_*(X \cap Y) = R_*(X) \cap R_*(Y) \\
& R^*(X \cap Y) \subseteq R^*(X) \cap R^*(Y) \\
& R_*(X \cup Y) \supseteq R_*(X) \cup R_*(Y) \\
& X \subseteq Y \rightarrow R_*(X) \subseteq R_*(Y) \ \& \ R^*(X) \subseteq R^*(Y) \\
& R_*(-X) = -R^*(X) \\
& R^*(-X) = -R_*(X) \\
& R_* R_*(X) = R^* R_*(X) = R_*(X) \\
& R^* R^*(X) = R_* R^*(X) = R^*(X)
\end{aligned} \qquad (3.4.20)$$

Rough sets can also be defined by means of the rough membership function, which expresses the conditional probability that x belongs to X in terms of the information about x expressed by R [47]:

$$\mu_X^R : U \rightarrow [0, 1], \qquad \mu_X^R(x) = \frac{|X \cap R(x)|}{|R(x)|} \qquad (3.4.21)$$

where |X| denotes the cardinality of X.


Rough membership can be treated as a generalization of fuzzy membership. The main difference is that for rough sets the membership of the union and intersection of sets cannot, in general, be calculated from the memberships of the constituents.
Approximations and the boundary region of a set can be defined using rough
membership functions as:

$$\begin{aligned}
R_*(X) &= \{x \in U : \mu_X^R(x) = 1\} \\
R^*(X) &= \{x \in U : \mu_X^R(x) > 0\} \\
RN_R(X) &= \{x \in U : 0 < \mu_X^R(x) < 1\}
\end{aligned} \qquad (3.4.22)$$
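A compact Python sketch of the approximations and the rough membership function follows (an illustration only, assuming the indiscernibility classes are supplied as a lookup from an object to its equivalence class):

```python
def rough_approximations(universe, X, R_class):
    """Lower/upper approximation and boundary of X, Eqs. (3.4.17)-(3.4.19).

    universe : iterable of objects U
    X        : set of objects, a subset of U
    R_class  : function mapping an object x to its equivalence class R(x) (a set)
    """
    lower, upper = set(), set()
    for x in universe:
        cls = R_class(x)
        if cls <= X:             # R(x) contained in X -> certainly in X
            lower |= cls
        if cls & X:              # R(x) intersects X   -> possibly in X
            upper |= cls
    boundary = upper - lower     # Eq. (3.4.19)
    return lower, upper, boundary

def rough_membership(x, X, R_class):
    """Rough membership function, Eq. (3.4.21)."""
    cls = R_class(x)
    return len(cls & X) / len(cls)

# example: U = {1..6}, classes {1,2}, {3,4}, {5,6}, X = {1, 2, 3}
U = {1, 2, 3, 4, 5, 6}
classes = {1: {1, 2}, 2: {1, 2}, 3: {3, 4}, 4: {3, 4}, 5: {5, 6}, 6: {5, 6}}
print(rough_approximations(U, {1, 2, 3}, classes.__getitem__))
```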

3.5 Fuzzy and Stochastic Evolutionary Algorithms

3.5.1 Introduction

Systems and processes in physical problems are expressed by some parameters, like
material properties, geometry or boundary conditions. If it is not possible to
describe such parameters precisely, they can be treated as uncertain ones. There
exist different models which describe granular (imprecise) character of data: interval
numbers, fuzzy sets, rough sets and random variables. In the present work it is
assumed that the granularity of information is represented in the form of the fuzzy
numbers and random variables.

Evolutionary algorithms are global optimization methods which process a set


(population) of candidate solutions, so the search is multidirectional. As the only information required is the fitness (objective) function value, evolutionary algorithms can also be applied to discrete optimization tasks.
In the proposed approach the fuzzy and stochastic versions of the evolutionary
algorithm are used. The general scheme of such algorithms is similar to typical,
real-coding evolutionary algorithm [38]—the main difference is that the chromo-
somes consist of uncertain genes, which are represented by fuzzy numbers or
random variables. The evolutionary operators (mutation operators, crossover
operators and selection procedure) are modified to work with uncertain types of
data.

3.5.2 The Fuzzy Evolutionary Algorithm

The fuzzy evolutionary algorithm (FEA) works on fuzzy chromosomes consisting


of fuzzy genes. In the FEA the data representation, the evolutionary operators and
the selection procedure are the fuzzy ones [13, 14]. Each chromosome represents
one candidate fuzzy solution of the optimization task. After the evaluation of the
solution, a fuzzy fitness function value of the chromosome is obtained. The scheme
of the FEA is presented in Fig. 3.19.
The jth fuzzy chromosome chj in the population consists of N fuzzy genes and
has the following form:

Fig. 3.19 Scheme of the fuzzy evolutionary algorithm



Fig. 3.20 The fuzzy number and corresponding a-cuts

 
$$\mathbf{ch}_j(x) = \left(x_1^j, x_2^j, \ldots, x_i^j, \ldots, x_N^j\right) \qquad (3.5.1)$$

The standard representation of the fuzzy number can be problematic from the point of view of fuzzy number arithmetic. To reduce this inconvenience, it is possible to represent the fuzzy number x as a set of interval values $[\underline{x}, \overline{x}]$ lying on adequate α-cut levels, as shown in Fig. 3.20. The number of α-cuts can be arbitrary. Figure 3.20 shows an example of the replacement of the fuzzy value using five interval values. This approach allows the use of interval arithmetic operators instead of fuzzy ones. It is also possible to obtain different (symmetrical and asymmetrical) forms of the fuzzy values, as presented in Fig. 3.21.
To simplify evolutionary operations, a central value cv is introduced and it is assumed that each gene is represented by a trapezoidal fuzzy number described by means of five real values (Fig. 3.22):

$$x_i^j = \left(a_L(x_i^j),\ a_U(x_i^j),\ cv(x_i^j),\ b_L(x_i^j),\ b_U(x_i^j)\right) \qquad (3.5.2)$$

where $cv(x_i^j)$ is the central value of the fuzzy number; $a_k(x_i^j)$ and $b_k(x_i^j)$, $k \in \{L, U\}$, are the distances between the central value and the left and right boundaries of the interval on the lower (L) and upper (U) α-cuts, respectively.

Fig. 3.21 Selected symmetrical and asymmetrical forms of the fuzzy values

Fig. 3.22 The fuzzy gene

Special evolutionary operators have been proposed to work with fuzzy repre-
sentations of genes: two mutation operators, one crossover operator and one
selection operator [14]. In the first type of mutation, the central value $cv(x_i^j)$ of the ith gene of the jth chromosome is modified:

$$cv(x_i^j)^* = cv(x_i^j) + G_{cv} \qquad (3.5.3)$$

where $G_{cv}$ is a random value with Gaussian distribution.


The second type of mutation operator changes the distances $a_k(x_i^j)$ or $b_k(x_i^j)$ in the following way:

$$a_k(x_i^j)^* = a_k(x_i^j) + G_a, \qquad b_k(x_i^j)^* = b_k(x_i^j) + G_b \qquad (3.5.4)$$

where $G_a$ and $G_b$ are random values with Gaussian distribution.

This operator changes the distances only for a selected α-cut and it can be symmetric ($G_a = -G_b$) or asymmetric.
The fuzzy arithmetic crossover operator creates two offspring chromosomes $\mathbf{ch}_1(x)^*$ and $\mathbf{ch}_2(x)^*$ on the basis of the two parents $\mathbf{ch}_1(x)$ and $\mathbf{ch}_2(x)$. The selected parameters of the offspring chromosomes' genes are expressed as:

$$\begin{aligned}
cv(x_i^1)^* &= k\, cv(x_i^1) + (1 - k)\, cv(x_i^2) \\
cv(x_i^2)^* &= k\, cv(x_i^2) + (1 - k)\, cv(x_i^1) \\
a(x_i^1)^* &= k\, a(x_i^1) + (1 - k)\, a(x_i^2) \\
a(x_i^2)^* &= k\, a(x_i^2) + (1 - k)\, a(x_i^1) \\
b(x_i^1)^* &= k\, b(x_i^1) + (1 - k)\, b(x_i^2) \\
b(x_i^2)^* &= k\, b(x_i^2) + (1 - k)\, b(x_i^1)
\end{aligned} \qquad (3.5.5)$$

where $k \in [0, 1]$ is a random value with uniform distribution.
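A minimal Python sketch of the operator defined by (3.5.5) is given below; it is only an illustration, assuming each fuzzy gene is stored as the five-tuple of (3.5.2) and that the same random weight k is applied to every parameter of the gene:

```python
import random

def fuzzy_arithmetic_crossover(gene1, gene2):
    """Arithmetic crossover of two trapezoidal fuzzy genes, cf. Eq. (3.5.5).

    Each gene is a tuple (aL, aU, cv, bL, bU) as in Eq. (3.5.2).
    Returns two offspring genes (a sketch, not the authors' implementation).
    """
    k = random.random()   # k ~ U[0, 1]
    child1 = tuple(k * p1 + (1.0 - k) * p2 for p1, p2 in zip(gene1, gene2))
    child2 = tuple(k * p2 + (1.0 - k) * p1 for p1, p2 in zip(gene1, gene2))
    return child1, child2

# example: cross two fuzzy genes
print(fuzzy_arithmetic_crossover((0.1, 0.05, 2.0, 0.05, 0.1),
                                 (0.2, 0.10, 3.0, 0.10, 0.2)))
```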



The fuzzy selection is based on the tournament selection method. The fuzzy fitness function values of the chromosomes chosen for the tournament are compared in order to select the best individual in the tournament. The better chromosome wins with a probability depending on the introduced parameter β.
It is assumed that a minimization problem is considered. Consider the fitness functions for two different fuzzy chromosomes having the same number of α-cuts: eval1 = F[ch1(x)] and eval2 = F[ch2(x)]. In the first step the central values cv are compared. If they have identical values, the width condition (a_i and b_i) is checked for each α-cut. If both widths are identical, both β1 and β2 take the value 0.5. Otherwise, the fuzzy value which has the bigger width takes a value smaller than 0.5 (e.g. β1 = 0.4) and the second fuzzy value takes a value greater than 0.5 (e.g. β2 = 0.6). Such an approach promotes more concentrated fuzzy numbers.
Assuming that the central values are different and cv(eval1) < cv(eval2), the parameter β1 takes a value close to 1 (e.g. β1 = 0.95) while the parameter β2 takes a value close to 0 (e.g. β2 = 0.05, β1 + β2 = 1). In the next step the following conditions are checked:

$$\begin{aligned}
cv[\mathbf{ch}_2(x)] - a(x_i^2) &\le cv[\mathbf{ch}_1(x)] + b(x_i^1) \\
cv[\mathbf{ch}_2(x)] - a(x_i^2) &\le cv[\mathbf{ch}_1(x)] \\
cv[\mathbf{ch}_2(x)] - a(x_i^2) &\le cv[\mathbf{ch}_1(x)] - a(x_i^1) \\
cv[\mathbf{ch}_2(x)] &\le cv[\mathbf{ch}_1(x)] + b(x_i^1) \\
cv[\mathbf{ch}_2(x)] + b(x_i^2) &\le cv[\mathbf{ch}_1(x)] + b(x_i^1)
\end{aligned} \qquad (3.5.6)$$

If any of the foregoing conditions is fulfilled, the parameter β1 is decreased by Δβ1 and the parameter β2 is increased by Δβ2 (e.g. Δβ1 = Δβ2 = 0.05). The presented procedure is repeated for all α-cuts.
The fuzzy finite-element method (FFEM) is used in order to calculate fuzzy
fitness function values [23].

3.5.3 The Stochastic Evolutionary Algorithm

Stochastic optimization problems appear when some parameters of the objective


function or constraints have probabilistic nature. The application of evolutionary
algorithms to solve such problems requires some modifications of the traditional
evolutionary algorithms. All steps of the algorithm are modified to work with the
stochastic data and their moments. Stochastic evolutionary algorithm (SEA) [42] is
based on the stochastic representation of the data—each chromosome is a multi-
dimensional vector consisting of random variables (genes) with the Gaussian
density probability function:

 
$$\mathbf{ch}_j[\mathbf{x}(\gamma)] = \left(x_1^j(\gamma), x_2^j(\gamma), \ldots, x_i^j(\gamma), \ldots, x_N^j(\gamma)\right) \qquad (3.5.7)$$

The aim of the optimization is to find a vector $\mathbf{x}(\gamma)$ minimizing the objective function $F(\gamma) = F(\mathbf{x}(\gamma))$ with constraints $P[g_k(\mathbf{x}) \ge 0] \ge p_k$, $k = 1, 2, \ldots, m$. Each gene $x_i^j(\gamma)$ in the jth chromosome is represented by a random variable, which is a real function $x_i^j(\gamma)$, $\gamma \in \Gamma$, defined on a sample space $\Gamma$ and measurable with respect to P. It is assumed that each jth random chromosome has an n-dimensional Gaussian probability density function, given as follows:

$$p\left(x_1, x_2, \ldots, x_i^j, \ldots, x_N\right) = \frac{1}{(2\pi)^{N/2}\sqrt{|K|}} \exp\left[-\frac{1}{2|K|} \sum_{i,l=1}^{N} |K_{il}|\,(x_i - m_i)(x_l - m_l)\right] \qquad (3.5.8)$$

where $|K| \ne 0$ is the determinant of the covariance matrix K and $|K_{il}|$ is the cofactor of the element $k_{il}$ of the matrix K.
It is assumed that the random genes are independent random variables. The joint probability density function is expressed by the probability density functions of the random genes:

$$p\left(x_1^j, x_2^j, \ldots, x_i^j, \ldots, x_N^j\right) = p_1\left(x_1^j\right) p_2\left(x_2^j\right) \cdots p_i\left(x_i^j\right) \cdots p_N\left(x_N^j\right) \qquad (3.5.9)$$

where:

$$p_i(x_i^j) = N\left(m(x_i^j), \sigma(x_i^j)\right) = \frac{1}{\sigma(x_i^j)\sqrt{2\pi}} \exp\left[-\frac{\left(x_i^j - m(x_i^j)\right)^2}{2\,\sigma(x_i^j)^2}\right] \qquad (3.5.10)$$

is the probability density function of the random gene $x_i^j(\gamma)$. If the random genes are independent Gaussian random variables, two parameters, the mean value $m(x_i^j)$ and the standard deviation $\sigma(x_i^j)$, describe the probability density function of each gene $x_i^j(\gamma)$ in the chromosome $\mathbf{ch}_j[\mathbf{x}(\gamma)]$:

$$x_i^j = \left(m(x_i^j),\ \sigma(x_i^j)\right) \qquad (3.5.11)$$

As a result, the stochastic gene is transformed into an equivalent deterministic one represented by the two moments $m(x_i^j)$ and $\sigma(x_i^j)$.
Two kinds of constraints are imposed on each gene $x_i^j(\gamma)$, i = 1, 2, …, N:

$$m(x_i^j)_{\min} \le m(x_i^j) \le m(x_i^j)_{\max}, \qquad \sigma(x_i^j)_{\min} \le \sigma(x_i^j) \le \sigma(x_i^j)_{\max} \qquad (3.5.12)$$

Fig. 3.23 Scheme of the stochastic evolutionary algorithm

Each chromosome in SEA represents a potential stochastic solution of the


optimization problem. The fitness function value for each individual is evaluated
and a stochastic value of the fitness function is obtained as a result of calculations.
To calculate the stochastic fitness function value, the stochastic finite-element
method (SFEM) or the stochastic boundary element method (SBEM) can be used
[17, 35]. The block diagram of the SEA is presented in Fig. 3.23.
Dedicated evolutionary operators have been proposed to work with stochastic
representations of genes: two mutation operators, one crossover operator and one
selection operator [43].
Mutation operators modify the mean value or standard deviation of randomly
chosen genes:

$$m(x_i^j)^* = m(x_i^j) + G_m, \qquad \sigma(x_i^j)^* = \sigma(x_i^j) + G_\sigma \qquad (3.5.13)$$

where $G_m$ and $G_\sigma$ are random values with Gaussian distribution.

If the new values ($m(x_i^j)^*$ or $\sigma(x_i^j)^*$) do not fulfil the constraints, the adequate version of the mutation operation is repeated. Both kinds of mutation can work simultaneously or individually.
The stochastic arithmetic crossover operator creates two offspring chromosomes $\mathbf{ch}_1[\mathbf{x}(\gamma)]^*$ and $\mathbf{ch}_2[\mathbf{x}(\gamma)]^*$ on the basis of the two parents $\mathbf{ch}_1[\mathbf{x}(\gamma)]$ and $\mathbf{ch}_2[\mathbf{x}(\gamma)]$.

The selected parameters of the offspring chromosomes' genes are expressed as:

$$\begin{aligned}
m(x_i^1)^* &= k\, m(x_i^1) + (1 - k)\, m(x_i^2) \\
m(x_i^2)^* &= k\, m(x_i^2) + (1 - k)\, m(x_i^1) \\
\sigma(x_i^1)^* &= k\, \sigma(x_i^1) + (1 - k)\, \sigma(x_i^2) \\
\sigma(x_i^2)^* &= k\, \sigma(x_i^2) + (1 - k)\, \sigma(x_i^1)
\end{aligned} \qquad (3.5.14)$$

where $k \in [0, 1]$ is a random value with uniform distribution.


The selection is based on the tournament selection used in the traditional EA. Consider the fitness functions for two different random chromosomes: $F_1(\gamma) = F[\mathbf{ch}_1(\gamma)]$ and $F_2(\gamma) = F[\mathbf{ch}_2(\gamma)]$. The random values $F_1(\gamma)$ and $F_2(\gamma)$ are described by the moments $F_1(\gamma) \rightarrow (m_{F1}, \sigma_{F1})$ and $F_2(\gamma) \rightarrow (m_{F2}, \sigma_{F2})$, respectively. The parameters β1 and β2, which decide about the probability of survival of the chromosomes $\mathbf{ch}_1(\gamma)$ and $\mathbf{ch}_2(\gamma)$, correspondingly, are introduced. At the beginning, the parameters β1 and β2 have equal and small values, for example, β1 = β2 = 0.1. In the next step, the conditions $m_{F1} < m_{F2}$ and $\sigma_{F1} < \sigma_{F2}$ are checked. If these conditions are fulfilled, the probability of survival of the first chromosome is increased by Δβ_m and Δβ_σ, respectively (e.g. Δβ_m = 0.7, Δβ_σ = 0.3). In the contrary cases, the probability of survival of the second chromosome is increased by Δβ_m and Δβ_σ, respectively. If both stochastic values of the fitness functions are identical, the probabilities of survival of both individuals are the same. Finally, the surviving individual is sampled with respect to the survival parameters β1 and β2.

3.6 Artificial Immune Systems and Algorithms

The artificial immune systems (AIS) are developed on the basis of a mechanism
discovered in biological immune systems [49]. An immune system is a complex
system which contains distributed groups of specialized cells and organs. The main
purpose of the immune system is to recognize and destroy pathogens—fungi,
viruses, bacteria and improperly functioning cells. The lymphocytes cells play a
very important role in the immune system. The lymphocytes are divided into
several groups of cells. There are two main groups: B and T cells; both contain
some subgroups (like B-T dependent or B-T independent). The B cells contain
antibodies, which could neutralize pathogens and are also used to recognize
pathogens (Fig. 3.24).
There is a big diversity between antibodies of the B cells, allowing the recog-
nition and neutralization of many different pathogens. The B cells are produced in
bone marrow of long bones. They undergo a mutation process to achieve a large diversity of antibodies. T cells mature in the thymus. Only T cells recognizing
non-self-cells are released to lymphatic and blood systems. There are also other

Fig. 3.24 An immune system: a B cell and pathogen

Fig. 3.25 An immune system: the recognition of a pathogen using B and T cells

cells, like macrophages with presenting properties. The pathogens are processed by
a cell and presented by major histocompatibility complex (MHC) proteins. The
recognition of a pathogen is performed in a few steps. First, the B cells or mac-
rophages present the pathogen to a T cell using MHC (Fig. 3.25).
A T cell decides whether the presented antigen is a pathogen. It gives a chemical signal to the B cells to release antibodies.
A part of the stimulated B cells goes to a lymph node and proliferates (clones) (Fig. 3.26).
A part of the B cells changes into memory cells; the rest of them secrete antibodies into the blood. The secondary response of the immune system in the presence of known pathogens is faster because of the memory cells. The memory cells created during the primary response proliferate and the antibodies are secreted into the blood (Fig. 3.27).
The antibodies bind to pathogens and neutralize them. Other cells like macro-
phages destroy pathogens (Fig. 3.28). The number of lymphocytes in the organism

Fig. 3.26 An immune system: the proliferation of activated B cells

Fig. 3.27 An immune system: the proliferation of a memory cell (secondary response)

Fig. 3.28 An immune system: pathogen absorption by a macrophage

changes: it increases in the presence of pathogens, but after the attack a part of the lymphocytes is removed from the organism.
The artificial immune systems [5, 20, 21] take only a few elements from the
biological immune systems. The most frequently used are the mutation of the B
cells, proliferation, memory cells and recognition by using B and T cells. The
artificial immune systems have been used for optimization problems by de Castro and Von Zuben [22], and for classification and computer virus recognition by Balthrop et al. [5]. The cloning algorithm presented by von Zuben and de Castro [20, 21] uses some mechanisms similar to biological immune systems for global optimization problems. The unknown global optimum is the searched pathogen. The memory cells contain design variables and proliferate during the optimization process. The B cells created from the memory cells undergo mutation. The B cells are evaluated and the better ones replace the memory cells. In the Wierzchoń [56] version of Clonalg, a crowding mechanism is used: diversity between memory cells is enforced. A new memory cell is randomly created and substitutes the old one if two memory cells have similar values of design variables. The crowding mechanism
memory cells have similar values of design variables. The crowding mechanism
allows finding not only the global optimum but also other local ones. The presented
approach is based on the Wierzchoń [56] algorithm, but the mutation operator is
changed. The Gaussian mutation is used instead of the nonuniform mutation in the
presented approach.
The flowchart of an artificial immune system is presented in Fig. 3.29. The memory cells are created randomly. They proliferate and mutate, creating the B cells. The number $n_c$ of clones created by each memory cell is determined by the memory cell's objective function value. The objective functions for the B cells are evaluated. The selection process exchanges some memory cells for better B cells. The selection is performed on the basis of the geometrical distance between each memory cell and the B cells (measured by using the design variables). The crowding mechanism removes similar memory cells. The similarity is also determined as the geometrical distance between memory cells. The process is iteratively repeated until the stop condition is fulfilled. The stop condition can be expressed as a maximum number of iterations.
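The loop of Fig. 3.29 can be sketched in Python as follows. This is a simplified illustration, not the authors' implementation: a fixed number of clones per memory cell is assumed (instead of a number driven by the objective value), and the numeric parameter values are assumptions.

```python
import math
import random

def immune_optimize(objective, bounds, n_memory=10, n_clones=5,
                    sigma=0.1, crowding_dist=0.05, iterations=100):
    """Minimal clonal-selection sketch: minimize `objective` over box `bounds`."""
    rand_cell = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    memory = [rand_cell() for _ in range(n_memory)]

    for _ in range(iterations):
        for i, mem in enumerate(memory):
            # proliferation: create clones and mutate them with Gaussian noise
            clones = [[min(max(x + random.gauss(0.0, sigma) * (hi - lo), lo), hi)
                       for x, (lo, hi) in zip(mem, bounds)]
                      for _ in range(n_clones)]
            # selection: the best B cell replaces the memory cell if it is better
            best = min(clones, key=objective)
            if objective(best) < objective(mem):
                memory[i] = best
        # crowding: replace memory cells that are too similar to another one
        for i in range(len(memory)):
            for j in range(i + 1, len(memory)):
                if math.dist(memory[i], memory[j]) < crowding_dist:
                    memory[j] = rand_cell()
    return min(memory, key=objective)

# example: minimize a simple quadratic in two variables
print(immune_optimize(lambda x: x[0]**2 + x[1]**2, [(-5, 5), (-5, 5)]))
```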

Fig. 3.29 The block diagram of an artificial immune system



3.7 Particle Swarm Optimizer

The particle swarm algorithms [34], like the evolutionary and immune algorithms, are developed on the basis of mechanisms discovered in nature. The swarm algorithms are based on models of the animals' social behaviours: moving and living in groups. The animals relocate in three-dimensional space in order to change their place of stay or feeding ground, to find a good place for reproduction or to evade predators. We can distinguish many species of insects living in swarms, fish swimming in shoals, birds flying in flocks or animals living in herds (Fig. 3.30).
A simulation of bird flocking was published by Reynolds [50]. It was assumed
that this kind of coordinated motion is possible only if three basic rules are fulfilled:
collision avoidance, velocity matching of neighbours and flock centring. The
computer implementation of these three rules showed very realistic flocking behaviour: flying in three-dimensional space, splitting in front of an obstacle and rejoining after passing it. Similar observations concerned fish shoals. Further observations and simulations of bird and fish behaviour gave more accurate and more precisely formulated conclusions [31, 54]. The results of this biological examination were used by Kennedy and Eberhart [33], who proposed
particle swarm optimizer (PSO). This algorithm realizes directed motion of particles
in n-dimensional space to search for the solution for an n-variable optimization
problem. PSO works in an iterative way. The location of one individual (particle) is
determined on the basis of its earlier experience and the experience of the whole
group (swarm). Moreover, the ability to memorize and, in consequence, returning to
the areas with convenient properties, known earlier, enables the adaptation of the
particles to the life environment. The optimization process using PSO is based on
finding better locations in the search space (in the natural environment that are, for
example, hatching or feeding grounds).

Fig. 3.30 Particles swarms: a fish shoal, b bird flock



Several variants of PSO can be distinguished depending on:


• the design variables representation: discrete or continuous PSO [34],
• the mechanism used to avoid particle dispersion and to guarantee convergence: constricted PSO with constant inertia weight [51] or linearly decreasing weight [25],
• the mechanism used to avoid premature convergence to the local optima: col-
lision avoiding swarm [52] or predator prey [8].
Some new conceptions and developments of PSO have recently appeared. They
make use, among other things, of additional operators or hybridization with other
optimization methods, for example: genetic algorithms, gradient algorithms, sim-
ulated annealing.
The algorithm with the continuous representation of design variables and con-
stant constriction coefficient (constricted continuous PSO) has been used in the
presented research. In this approach, each particle oscillates in the search space
between its previous best position and the best position of its neighbours, with
expectation to find new best locations on its trajectory. When the swarm is rather
small (it consists of several or tens particles), it can be assumed that all the particles
stay in neighbourhood with currently considered one. In this case, the global
neighbourhood version can be assumed and the best location found by swarm so far
is taken into account—current position of the swarm leader (Fig. 3.31).
The position of the ith particle di is changed by stochastic velocity vi, which is
dependent on the particle distance from its earlier best position and position of the
swarm leader. This approach is given by the following equations:
   
$$v_{ij}(k+1) = w\, v_{ij}(k) + \phi_{1j}(k)\left[q_{ij}(k) - d_{ij}(k)\right] + \phi_{2j}(k)\left[\hat{q}_{ij}(k) - d_{ij}(k)\right] \qquad (3.7.1)$$

Fig. 3.31 The idea of the particle swarm



$$d_{ij}(k+1) = d_{ij}(k) + v_{ij}(k+1), \qquad i = 1, 2, \ldots, m; \ j = 1, 2, \ldots, n \qquad (3.7.2)$$

where:
$\phi_{1j}(k) = c_1 r_{1j}(k)$, $\phi_{2j}(k) = c_2 r_{2j}(k)$;
m: the number of particles,
n: the number of design variables (problem dimension),
w: the inertia weight,
c1, c2: acceleration coefficients,
r1, r2: random numbers with uniform distribution [0, 1],
di(k): the position of the ith particle in the kth iteration step,
vi(k): the velocity of the ith particle in the kth iteration step,
qi(k): the best position of the ith particle found so far,
q̂i(k): the best position found so far by the swarm (the position of the swarm leader),
k: the iteration step.
The velocity of the ith particle is determined by the three components of the sum in Eq. (3.7.1). The first component $w v_i(k)$ plays the role of a constraint to avoid excessive oscillation in the search space. The inertia weight w controls the influence of the particle velocity from the previous step on the current one. In this way this factor controls exploration and exploitation. A higher value of the inertia weight facilitates global searching, and a lower value facilitates local searching. The inertia weight plays the role of the constraint applied to the velocities to avoid particle dispersion and to guarantee convergence of the optimization process. The second component $\phi_1(k)[q_i(k) - d_i(k)]$ realizes the cognitive aspect. This component represents the particle distance from its best position found earlier. It is related to the natural inclination of the individuals (particles) to the environments where they had the best experience (the best value of the fitness function). The third component $\phi_2(k)[\hat{q}_i(k) - d_i(k)]$ represents the particle distance from the position of the swarm leader. It refers to the natural inclination of the individuals to follow the one which achieved a success.
The flowchart of the particle swarm optimizer is presented in Fig. 3.32. At the beginning of the algorithm the particle swarm of the assumed size is created randomly. The starting positions and velocities of the particles are also created randomly. The objective function values are evaluated for each particle. In the next step, the best positions of the particles are updated and the swarm leader is chosen. Then, the particles' velocities are modified by means of Eq. (3.7.1) and the particles' positions are modified according to Eq. (3.7.2). The process is iteratively repeated until the stop condition is fulfilled. The stop condition is typically expressed as a maximum number of iterations.
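For reference, the whole loop can be condensed into a short Python sketch of Eqs. (3.7.1) and (3.7.2); it is only an illustration, and the numeric parameter values (w, c1, c2, swarm size) are assumptions, not recommendations taken from the book:

```python
import random

def pso_minimize(objective, bounds, n_particles=20, iterations=200,
                 w=0.7, c1=1.5, c2=1.5):
    """Constricted continuous PSO sketch (global neighbourhood, minimization)."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    best_pos = [p[:] for p in pos]                      # q_i: personal bests
    best_val = [objective(p) for p in pos]
    leader = min(range(n_particles), key=lambda i: best_val[i])

    for _ in range(iterations):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # Eq. (3.7.1): inertia + cognitive + social components
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (best_pos[i][j] - pos[i][j])
                             + c2 * r2 * (best_pos[leader][j] - pos[i][j]))
                # Eq. (3.7.2): position update, clamped to the admissible box
                pos[i][j] = min(max(pos[i][j] + vel[i][j], bounds[j][0]), bounds[j][1])
            val = objective(pos[i])
            if val < best_val[i]:                       # update the personal best
                best_val[i], best_pos[i] = val, pos[i][:]
        leader = min(range(n_particles), key=lambda i: best_val[i])
    return best_pos[leader], best_val[leader]

# example: minimize the sphere function in three variables
print(pso_minimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 3))
```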
The general effect is that each particle oscillates in the search space between its previous best position (the position with the best fitness function value) and the best position of its best neighbour (here, the swarm leader), finding new best positions (solutions) on its trajectory, which for the whole swarm leads to the optimal solution.

Fig. 3.32 The block diagram of particle swarm optimizer

3.8 Artificial Neural Networks

3.8.1 Introduction

An artificial neural network (ANN) is a set of simple elements (artificial neurons) processing information and communicating with other neurons. Artificial neurons and neural networks draw inspiration from the mammalian (e.g. human) nervous system [29]. The main element of the human nervous system is the neuron; its simplified schema is presented in Fig. 3.33.
The major elements of the neuron are: a cell body (soma), dendrites and an axon.
The role of the dendrites is to collect signals from other neurons and send them to
the soma. The soma aggregates signals and sends the resulting signal through the
axon to axon terminals. A connection between the axon terminal and the dendrite of
another neuron is called a synapse. In the synapse the signal is transmitted in the
electro-chemical way. A typical neuron has about 1000–10,000 synapses of different sizes and with different amounts of neurotransmitters. As a result, the same impulse can activate different neurons to a different degree.

Fig. 3.33 A schema of a human neuron (http://en.wikipedia.org/wiki/Neuron)



The artificial neural networks are simplified models of the human nervous system. They are applied to complex problems, especially when the criteria for a classical computer program cannot be clearly specified. The main advantages of the ANNs are:
• they are trained, not programmed;
• they have the ability of generalizing;
• they are highly resistant to a noise and distortion in the signal;
• they help to detect significant data connections.
Typical applications of the ANNs are:
• prediction (prediction of n + 1 value on the basis of n function values without
defining the relation between input and output data);
• classification and pattern recognition;
• approximation (interpolation, extrapolation);
• control;
• medical diagnosis, financial applications;
• data mining;
• signal filtering;
• optimization problems.

3.8.2 Artificial Neuron and Artificial Neural Network

The notion of artificial neuron was developed by Warren McCulloch and Walter
Pitts in 1943. The artificial neuron is a structure with one or more inputs and one
output. The input values are multiplied by weights (synaptic weights) and then
summed up. The sum is passed through a function known as an activation function
or transfer function. The weights are modified during learning (training) phase.
Weights represent a memory of the neuron. Exemplary artificial neuron is presented
in Fig. 3.34.
The output signal y of the neuron is calculated as:
$$y = \varphi(e) = \varphi\left(\sum_{j=0}^{N} w_j x_j\right) = \varphi\left(\sum_{j=1}^{N} w_j x_j + B\right) \qquad (3.8.1)$$

Fig. 3.34 A schema of the


artificial neuron

Fig. 3.35 The artificial neural network: a single-layer ANN, b multilayer ANN

where $w_j$ is the weight for the jth input, $x_j$ is the jth input signal, e is the net value, $\varphi$ is the activation function, and B is the bias input of constant value equal to 1.
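Equation (3.8.1), combined with a sigmoid activation function (introduced below as Eq. (3.8.4)), reduces to a few lines of Python; the following minimal sketch is only an illustration, with the argument names chosen for readability:

```python
import math

def neuron_output(inputs, weights, bias, beta=1.0):
    """Single-neuron forward pass, Eq. (3.8.1), with the sigmoid of Eq. (3.8.4).

    `inputs` and `weights` are equal-length sequences; `bias` plays the role of B.
    """
    e = sum(w * x for w, x in zip(weights, inputs)) + bias   # net value
    return 1.0 / (1.0 + math.exp(-beta * e))                 # activation

# example: a neuron with two inputs
print(neuron_output([0.5, -1.0], [0.8, 0.2], bias=0.1))
```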
Artificial neural network (ANN) consists of an interconnected group of artificial
neurons, typically organized in layers (Fig. 3.35).
The output signals of neurons are sent to the next layer neurons. Single-layer
networks consist of an input layer (not counted) and an output layer (Fig. 3.35a).
Multilayer network is a network which has an input layer, at least the one so-called
hidden layer, and the output layer (Fig. 3.35b).
The four main classes of ANNs are: (i) feedforward networks with signals
flowing in one direction (most often used); (ii) recurrent (feedback) networks (e.g.
Hopfield networks); (iii) Kohonen self-organizing networks; and (iv) radial basis
function (RBF) networks.

3.8.3 Activation Functions

There exist different types of activation functions [27]. Their choice depends on the
problem being solved. The activation function typically has one of the following
forms:
• a linear activation function;
• a threshold activation function;
• a nonlinear activation function.
In the linear neuron the net value e becomes the output signal y. Typical
modification of the linear activation function is truncation of their values to the
range 〈0, 1〉 (for so-called unipolar functions) or 〈–1, 1〉 (for bipolar functions).
Examples of the truncated linear activation functions are presented in Fig. 3.36.

Fig. 3.36 Truncated linear activation functions: a unipolar, b bipolar

The threshold activation function is discontinuous and it is defined as (the unipolar function):

$$\varphi(e) = \begin{cases} 1 & \text{if } \sum_{j=1}^{N} w_j x_j + B > 0 \\ 0 & \text{otherwise} \end{cases} \qquad (3.8.2)$$

or (for the bipolar function):

$$\varphi(e) = \begin{cases} 1 & \text{if } \sum_{j=1}^{N} w_j x_j + B > 0 \\ -1 & \text{otherwise} \end{cases} \qquad (3.8.3)$$

The threshold functions are presented in Fig. 3.37. The neuron with the threshold activation function is called a perceptron.

Fig. 3.37 Threshold activation functions: a unipolar, b bipolar



In the nonlinear activation functions group, the sigmoid activation function is commonly used. The sigmoid activation function has the form:

$$\varphi(e) = \frac{1}{1 + \exp(-\beta e)} \qquad (3.8.4)$$

where β is a coefficient responsible for the slope inclination.

An important feature of the sigmoid activation function (for the learning process) is its differentiability. The derivative of the sigmoid activation function is given as:

$$\varphi'(e) = \beta \cdot \varphi(e) \cdot [1 - \varphi(e)] \qquad (3.8.5)$$

The diagram of the sigmoid activation function for different values of β is presented in Fig. 3.38.
The bipolar equivalent of the sigmoid function is the hyperbolic tangent activation function of the form (Fig. 3.39):

$$\varphi(e) = \frac{\exp(\beta e) - \exp(-\beta e)}{\exp(\beta e) + \exp(-\beta e)} = \tanh(\beta e) \qquad (3.8.6)$$

Fig. 3.38 The sigmoid activation function for different values of β

Fig. 3.39 The hyperbolic tangent activation function (for β = 0.5)

and its derivative is:

$$\varphi'(e) = \beta \cdot [1 + \varphi(e)] \cdot [1 - \varphi(e)] \qquad (3.8.7)$$

3.8.4 Learning Methods

The artificial neural network usually has its weights initialized randomly, typically with values from the range 〈–0.1, 0.1〉. The aim of the training process is to modify the weights so as to obtain the desired reaction (output values) to given inputs. The training set is presented many times (the number of repetitions typically depends on the type and topology of the ANN and on the complexity of the problem).
There are three main groups of ANN learning methods [36]: (i) a supervised
(associative) learning; (ii) an unsupervised learning (self-organization); and (iii) a
reinforcement learning. The supervised learning, as the most popular one, is
described in more detail in Sect. 3.8.4.1.
In unsupervised learning, the ANN is trained to respond to clusters of patterns
within the inputs and with no desired outputs. The ANN is supposed to discover
statistically important features of the input signals. There is no a priori set of
categories into which the patterns have to be classified.
In the reinforcement learning, which can be considered as an intermediate form
of the previous learning methods, the ANN is only provided with a grade, or score,
which indicates network performance.

3.8.4.1 Supervised Learning

In the supervised learning, the neural network is trained by presenting input and
matching output patterns. These input–output pairs can be provided by an external
teacher or by the system which contains the neural network.
The weights correction for the perceptron is performed according to the following rule (the delta rule):

$$\nabla w_i^{(j)} = \eta\, \delta^{(j)} x_i^{(j)} \qquad (3.8.8)$$

where z is the required output value; y is the obtained output value; $\delta^{(j)} = z^{(j)} - y^{(j)}$; $x_i$ is the ith input value; η is the learning rate.

For nonlinear neurons the learning rule has the form:

$$\nabla w_i^{(j)} = \eta\, \delta^{(j)} \frac{\partial \varphi(e)}{\partial e^{(j)}}\, x_i^{(j)} \qquad (3.8.9)$$

The learning rate η is a parameter which strongly influences the learning process. Typically it takes values from the range 〈0.01, 5.0〉. A small value of η results in a slow learning procedure, while a large η value causes big weight modifications; as a result, the learning process may not be stable (the network is unable to learn).
The momentum learning method makes use of an additional component added to Eq. (3.8.9), which makes the weight correction also dependent on the correction in the previous step:

$$\nabla w_i^{(j)} = \eta\, \delta^{(j)} \frac{\partial \varphi(e)}{\partial e^{(j)}}\, x_i^{(j)} + \eta_2 \nabla w_i^{(j-1)} \qquad (3.8.10)$$

where $\eta_2$ is a momentum parameter taking values from the range 〈0, 1〉 (usually $\eta_2$ = 0.9).
The direct application of formula (3.8.9) to calculate the weight corrections for neurons in hidden layers is not possible, as there is no information about the desired outputs of such neurons. The backpropagation method [28] allows estimating the δ value for hidden-layer neurons on the basis of the errors calculated for the next layer:

$$\delta_m^{(j)} = \sum_{k=1}^{n} w_{mk}^{(j)} \delta_k^{(j)} \qquad (3.8.11)$$

where m denotes a neuron in the hidden layer; n is the number of neurons in the next layer; j is the learning step; $\delta_m^{(j)}$ is the error of neuron m; $\delta_k^{(j)}$ is the error of neuron k in the next layer (Fig. 3.40).
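A minimal Python sketch of how (3.8.11) and (3.8.9) combine in one backpropagation step is given below; it is only an illustration for a single hidden layer, and the function and argument names are chosen here for readability:

```python
def hidden_deltas(weights_next, deltas_next):
    """Estimate hidden-layer errors from next-layer errors, Eq. (3.8.11).

    weights_next[m][k] : weight from hidden neuron m to neuron k of the next layer
    deltas_next[k]     : error of neuron k in the next layer
    Returns delta_m for every hidden neuron m.
    """
    return [sum(w_mk * d_k for w_mk, d_k in zip(weights_next[m], deltas_next))
            for m in range(len(weights_next))]

def weight_correction(delta, dphi_de, x, eta=0.1):
    """Weight correction of Eq. (3.8.9): eta * delta * dphi/de * x."""
    return eta * delta * dphi_de * x

# example: two hidden neurons connected to a single output neuron with error 0.25
print(hidden_deltas([[0.4], [-0.6]], [0.25]))   # [0.1, -0.15]
```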

Fig. 3.40 A schema of the backpropagation method

3.8.5 Radial Basis Function Neural Networks

A radial basis function (RBF) neural network consists of the input layer, one hidden
layer and the output layer with one output neuron (Fig. 3.41). The input signals
x are transmitted to all neurons in the hidden layer [11].
The number of neurons in the hidden layer is equal to or lower than the number
of training vectors x. The hidden layer neurons implement the following mapping:

$$\mathbf{x} \rightarrow \varphi(\|\mathbf{x} - \mathbf{c}\|), \qquad \mathbf{x} \in \mathbb{R}^n \qquad (3.8.12)$$

where $\|\cdot\|$ denotes the Euclidean norm.


Functions $\varphi(\|\mathbf{x} - \mathbf{c}\|)$, whose values change radially around the centre c, are called radial basis functions. Typically, the radial basis functions have the Gaussian form:

$$\varphi(\|\mathbf{x} - \mathbf{c}\|) = \exp\left(-\frac{\|\mathbf{x} - \mathbf{c}\|^2}{2\sigma^2}\right) \qquad (3.8.13)$$

where σ is a parameter describing the width of the function.


The other common variants of the radial basis functions are [9]:

$$\begin{aligned}
\varphi(\|\mathbf{x} - \mathbf{c}\|) &= \|\mathbf{x} - \mathbf{c}\| / \sigma^2 \\
\varphi(\|\mathbf{x} - \mathbf{c}\|) &= \left(\sigma^2 + \|\mathbf{x} - \mathbf{c}\|^2\right)^{\alpha}, \quad 0 < \alpha < 1 \\
\varphi(\|\mathbf{x} - \mathbf{c}\|) &= \left(\sigma^2 + \|\mathbf{x} - \mathbf{c}\|^2\right)^{-\beta}, \quad \beta > 0 \\
\varphi(\|\mathbf{x} - \mathbf{c}\|) &= (\sigma \|\mathbf{x} - \mathbf{c}\|)^2 \ln(\sigma \|\mathbf{x} - \mathbf{c}\|)
\end{aligned} \qquad (3.8.14)$$

where σ > 0.

Fig. 3.41 RBF neural network scheme

The output value y is calculated as a weighted sum of k hidden neurons’ output


signals:

$$y = \sum_{i=1}^{k} w_i\, \varphi(\|\mathbf{x} - \mathbf{c}_i\|) \qquad (3.8.15)$$
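The forward pass of the RBF network, Eqs. (3.8.13) and (3.8.15), fits in a few lines of Python; the sketch below is only an illustration, assuming Gaussian basis functions of a common width σ:

```python
import math

def rbf_output(x, centres, weights, sigma=1.0):
    """Output of an RBF network, Eqs. (3.8.13) and (3.8.15).

    x       : input vector
    centres : list of centre vectors c_i of the hidden neurons
    weights : output-layer weights w_i
    """
    y = 0.0
    for c, w in zip(centres, weights):
        dist2 = sum((xj - cj) ** 2 for xj, cj in zip(x, c))   # ||x - c_i||^2
        y += w * math.exp(-dist2 / (2.0 * sigma ** 2))        # Gaussian basis, Eq. (3.8.13)
    return y                                                  # weighted sum, Eq. (3.8.15)

# example: two hidden neurons in a two-dimensional input space
print(rbf_output([0.5, 0.5], [[0.0, 0.0], [1.0, 1.0]], [1.0, -0.5]))
```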

RBF neural networks are applied for classification, approximation of functions


of several variables and prediction tasks—which are the same problems that can be
solved by means of the sigmoidal neural networks. The main difference is the way
of data processing which results in reduction of the learning time for RBF neural
networks. The other difference is that different learning methods can be applied for
particular layers (the hidden layer and the output layer), while all neurons in
multilayered sigmoidal neural networks are trained by means of backpropagation
method.

3.9 Hybrid Computational Intelligence Algorithms

3.9.1 Introduction

Computational intelligence methods, like the ones presented in the previous sections, are normally applied individually. When several methods are applied together in one algorithm, the obtained results can be better than when the methods are applied separately. Hybrid computational methods couple at least two different algorithms to increase the efficiency of finding a solution to the considered problem (higher precision or lower computational effort). They can combine different classical or computational intelligence methods.
In many engineering problems the calculation of the objective function value is
the most time-consuming part of the optimization procedure. The objective function
value is usually calculated by means of numerical methods, like finite-element
method or boundary element method. Populational methods, like evolutionary
algorithms, artificial immune systems or particle swarm optimizers, process a set of
candidate solutions in each iteration, and it is required to calculate the objective
function value for each of them. As a result, these algorithms work relatively slowly. An application of parallel and distributed computation can lead to a reduction of the computational effort, as presented in Sect. 3.3. Hybridization of algorithms is the second way which allows reducing the cost of calculations. Different types of hybrid algorithms are presented in the subsequent subsections.

3.9.2 The Evolutionary Algorithm Coupled with Gradient Methods

Evolutionary algorithms and gradient methods are usually used separately. Gradient
methods are local optimization methods, which are fast and precise, but their
application is restricted due to their limitations: information about fitness function
gradient is often hard or even impossible to obtain and they have the tendency to
find local minima. Gradient methods process one point in each iteration and they
are very sensitive to the location of the starting point.
Evolutionary algorithms are global optimization methods and they do not require
information about fitness (objective) function gradient. As they process the popu-
lation of candidate solutions they are relatively slow. Also, the precision of such
algorithms in finding the optimal value is lower, which is caused by the manner in
which they work (new candidate solutions are generated by means of evolutionary
operators).
An alternative is coupling both methods into hybrid algorithms, taking advantage of global and local algorithms and reducing the drawbacks of both of them. As a result, one can obtain algorithms with a lower cost of calculation than using only evolutionary optimization and a higher probability of finding the global optimum than using only gradient methods. Both methods can be applied in a parallel manner or sequentially. In the first approach, a gradient mutation operator is introduced (Fig. 3.42). The gradient mutation is a single-argument operator which modifies any chromosome (especially the best one in the generation) using information about the fitness function gradient [40].

Fig. 3.42 Parallel model of a hybrid algorithm



The gradient mutation operator converts the jth chromosome $\mathbf{ch}_j(x)$ in a population into the chromosome $\mathbf{ch}_j(x)^*$ in the following way:

$$\begin{aligned}
\mathbf{ch}_j(x) &= \left(x_1^j, x_2^j, \ldots, x_i^j, \ldots, x_N^j\right) \\
\mathbf{ch}_j(x)^* &= \left(x_1^{j*}, x_2^{j*}, \ldots, x_i^{j*}, \ldots, x_N^{j*}\right)
\end{aligned} \qquad (3.9.1)$$

where:

$$x_i^{j*} = \begin{cases} x_i^j & \text{if } l = 0 \\ x_i^j + \Delta x_i^j & \text{if } l = 1 \end{cases} \qquad (3.9.2)$$

and l is a random value equal to 0 or 1. The correction values $\Delta x_i^j$ form the vector $\Delta \mathbf{x}$, which takes the form:

$$\Delta \mathbf{x} = \alpha\, \mathbf{n}(\mathbf{x}) \qquad (3.9.3)$$

where α is a coefficient describing the step size and n(x) is a vector describing the search direction.
The vector n(x) can be constructed on the basis of the gradient and the Hessian of the objective function. In the hybrid algorithm three kinds of gradient mutation are proposed:
• the steepest descent mutation:

$$\mathbf{n}(\mathbf{x}) = -\nabla F^j(\mathbf{x}) \qquad (3.9.4)$$

where $F^j(\mathbf{x})$ denotes the fitness function value for the chromosome $\mathbf{ch}_j(x)$;
• the conjugate gradient mutation:

$$\mathbf{n}(\mathbf{x}) = -\nabla F^j(\mathbf{x}) + \beta\, \mathbf{n}(\mathbf{x} - 1) \qquad (3.9.5)$$

where β is a coefficient describing the influence of the gradient in the previous step; n(x) and n(x − 1) are conjugate directions;
• the variable metric gradient mutation:

$$\mathbf{n}(\mathbf{x}) = -\mathbf{D}_k \nabla F^j(\mathbf{x}) \qquad (3.9.6)$$

where $\mathbf{D}_k$ is the approximation of the inverse Hessian matrix calculated on the basis of the gradient.
It is necessary to determine the values of three gradient mutation parameters:
• the probability of mutation (like in regular evolutionary algorithms);
• the number of iterations of gradient method;
• the step size α.
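As a rough illustration of the first variant, the steepest descent mutation of Eqs. (3.9.1)–(3.9.4) can be sketched in Python as follows; this is only an illustration, in which the random value l of (3.9.2) is assumed to be drawn independently for every gene and a single gradient step is performed:

```python
import random

def steepest_descent_mutation(chromosome, grad_f, alpha=0.01, p_mut=0.5):
    """Gradient mutation sketch, steepest descent variant.

    chromosome : list of design variables x_i
    grad_f     : function returning the fitness gradient at the chromosome
    alpha      : step size; p_mut : probability that a gene is corrected (l = 1)
    """
    g = grad_f(chromosome)
    # n(x) = -grad F(x); each gene is corrected with probability p_mut
    return [x - alpha * gi if random.random() < p_mut else x
            for x, gi in zip(chromosome, g)]

# example: one mutation step on F(x) = x1^2 + x2^2
print(steepest_descent_mutation([1.0, -2.0], lambda x: [2 * x[0], 2 * x[1]], alpha=0.1))
```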

Fig. 3.43 Linear model of a hybrid algorithm (global method, then, when the transition condition is fulfilled, local method supported by an ANN)
These parameters can be estimated experimentally for given optimization


problems or computational intelligence methods can be used. The application of the
artificial neural network for the determination of the number of gradient mutation
iterations is described in Orantek [41].
The second approach to the hybridization of global and local optimization methods is their sequential application [10]. This approach takes into consideration the usefulness of both kinds of algorithms at different stages of the calculations. The evolutionary algorithm often generates chromosomes near the global optimum, but after that it converges slowly. Gradient methods are local optimization methods which are able to find the closest optimum fast and precisely. In the linear model of the hybrid algorithm the evolutionary algorithm is used in the first stage. If the transition condition is satisfied, the gradient method is used to complete the calculations (Fig. 3.43).
The determination of the transition moment is a crucial problem—too early
transition between two steps typically leads to the local optima, while too late
transition significantly increases the computational effort. The transition moment
can be determined, for example, on the basis of:
• the changes of the fitness function of the best chromosome;
• the size of the clusters of chromosomes;
• the diversification of the population;
• the exploring capabilities of evolutionary algorithms in each generation.

3.9.3 Local Optimization Method Supported by ANN

In order to reduce the problems with calculation of the objective function gradient,
it is possible to employ the artificial neural network (ANN) to calculate the
approximate value. The same ANN can be applied to approximate the
boundary-value problem, which reduces the computational effort [15]. The most
important advantage of artificial neural networks is processing the data simulta-
neously. The approximation problem is one of the typical applications of artificial
neural networks.
An artificial neural network with sigmoid activation functions is considered (see:
Sect. 3.8). The fitness function is modelled closely to the optimum by the parabolic
function for each design variable; therefore one hidden layer in the ANN is suffi-
cient. The number of neurons in the input layer is equal to the number of design
variables of the objective function. In the output layer there is one neuron for which
the output value plays the role of the objective function value (Fig. 3.44).
The number of neurons in hidden layer depends on the optimization problem
and, as usual, on the number of design variables. The backpropagation training
method has been used to modify weights of the ANN.
The output value of a neuron k in the layer i (hidden or output) is expressed by:

$$e_{ik} = \frac{1}{1 + e^{-s_{ik}}} \qquad (3.9.7)$$

Fig. 3.44 The artificial neural network

where:

$$s_{ik} = e_{i-1,1} w_{i-1,1,ik} + e_{i-1,2} w_{i-1,2,ik} + \cdots + e_{i-1,N_{i-1}} w_{i-1,N_{i-1},ik} + ww_{ik} = \sum_{n=1}^{N_{i-1}} e_{i-1,n} w_{i-1,n,ik} + ww_{ik} \qquad (3.9.8)$$

$e_{i-1,n}$ are the output values of the neurons in the previous layer, $w_{i-1,n,ik}$ is the weight value for the connection between the nth neuron of the previous layer and the kth neuron of layer i, and $ww_{ik}$ is the bias weight.
The sensitivity of the output signal $e_{21}$ of the network with respect to an input value $e_{0z}$ is expressed as:

$$\frac{de_{21}}{de_{0z}} = \sum_{n_1=1}^{I_1} \frac{ds_{1n_1}}{de_{0z}} \cdot \frac{de_{1n_1}}{ds_{1n_1}} \cdot \frac{ds_{21}}{de_{1n_1}} \cdot \frac{de_{21}}{ds_{21}} \qquad (3.9.9)$$

where:

$$\begin{aligned}
\frac{ds_{1n_1}}{de_{0z}} &= \frac{d}{de_{0z}} \left( \sum_{n_0=1}^{I_0} e_{0n_0} w_{0n_0,1n_1} + ww_{1n_1} \right) = w_{0z,1n_1} \\
\frac{de_{1n_1}}{ds_{1n_1}} &= \frac{d}{ds_{1n_1}} \left( \frac{1}{1 + e^{-s_{1n_1}}} \right) = \frac{e^{-s_{1n_1}}}{\left(1 + e^{-s_{1n_1}}\right)^2} \\
\frac{ds_{21}}{de_{1n_1}} &= \frac{d}{de_{1n_1}} \left( \sum_{n_1=1}^{I_1} e_{1n_1} w_{1n_1,21} + ww_{21} \right) = w_{1n_1,21} \\
\frac{de_{21}}{ds_{21}} &= \frac{d}{ds_{21}} \left( \frac{1}{1 + e^{-s_{21}}} \right) = \frac{e^{-s_{21}}}{\left(1 + e^{-s_{21}}\right)^2}
\end{aligned} \qquad (3.9.10)$$

As a result, the sensitivity can be expressed as:

$$\frac{de_{21}}{de_{0z}} = \sum_{n_1=1}^{I_1} \frac{w_{0z,1n_1}\, e^{-s_{1n_1}}}{\left(1 + e^{-s_{1n_1}}\right)^2} \cdot \frac{w_{1n_1,21}\, e^{-s_{21}}}{\left(1 + e^{-s_{21}}\right)^2} \qquad (3.9.11)$$

3.9.4 Two-Step Optimization Strategy

Two-step optimization strategy (TSOS) combines algorithms and techniques pre-


sented in Sects. 3.9.2 and 3.9.3, namely global optimization methods, gradient
methods and artificial neural networks. The aim is to combine the advantages of all
methods and to avoid their limitations. The block diagram of the TSOS is the same
as presented in Fig. 3.45.

Fig. 3.45 Block diagram of the local optimization stage (initial points after the 1st stage, training vectors creation, ANN training, local optimization, repeated until the optimum is found)

In the first stage, the global optimization method (evolutionary algorithm, artificial immune system or particle swarm optimizer) is used. As a result, a set (cluster) of solution points is obtained. It can be assumed that some of the solutions are located close to a global optimum. There is also a high probability that the points are situated in the basins of attraction of more than one optimum. In this case the second stage (local method) may work unstably. This problem can be reduced by introducing a parameter which describes the maximum size of the cluster [16]. The parameter can be described by the radius of a region in the domain. The centre of the region is equal to the best solution of the global method. All points inside the region belong to a cloud of points. This approach is characterized by a variable number of training vectors. In this case an alternative parameter is introduced. The parameter defines the maximum number of points in the cloud.
In the second stage, a local (gradient) method supported by the ANN is used. The
local optimization procedure is the iterative procedure presented in Fig. 3.45. The
local optimization stage starts by forming a cloud of points from the best solutions
of the previous stage. These points are used to construct the training vectors for the
ANN. It is assumed that the initial number of training vectors is equal to 3m, where
m is the number of design variables. In the next step the local

optimization is performed by means of the steepest descent method. The ANN is


used to calculate the objective function gradient and to approximate the objective
function.
The objective function value for the solution obtained during the local optimization
is calculated by means of the finite-element method. The new solution is added to
the training vectors. The presented procedure is repeated until a termination
condition (typically a maximum number of iterations or a satisfactory value of the
objective function) is reached.
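The loop below is a minimal sketch of this second (local) stage, with a quadratic least-squares surrogate standing in for the ANN described above; the function expensive_objective is a placeholder for the FEM-based evaluation and all names are illustrative assumptions.

import numpy as np

def expensive_objective(x):
    # placeholder for the FEM-based objective function evaluation
    return np.sum((x - 0.3)**2) + 0.1 * np.sum(x**4)

def quad_features(x):
    # features of a full quadratic model: 1, x_i, x_i*x_j
    return np.concatenate(([1.0], x, np.outer(x, x)[np.triu_indices(len(x))]))

def fit_surrogate(X, y):
    A = np.array([quad_features(x) for x in X])
    coeff, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: quad_features(x) @ coeff

def surrogate_gradient(model, x, h=1e-5):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (model(x + e) - model(x - e)) / (2 * h)
    return g

# cloud of points delivered by the global stage (here: random, for illustration)
rng = np.random.default_rng(1)
m = 2                                     # number of design variables
cloud = list(rng.uniform(-1, 1, size=(3 * m, m)))
values = [expensive_objective(x) for x in cloud]

x = cloud[int(np.argmin(values))]         # best point found by the global stage
for it in range(20):
    model = fit_surrogate(cloud, np.array(values))
    x = x - 0.2 * surrogate_gradient(model, x)               # steepest-descent step on the surrogate
    cloud.append(x); values.append(expensive_objective(x))   # verify with the true model, enlarge training set
print(x, values[-1])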

3.9.5 The Fuzzy-Neural Network

Global optimization methods are population-based algorithms and require many
evaluations of the objective function. Each evaluation requires solving the
boundary-value problem, typically using FEM or BEM, for each potential solution.
It is possible to reduce the computational cost of global optimization methods by
approximating the objective function by means of an artificial neural network, as
presented in Sect. 3.9.4. It is also possible to use neuro-fuzzy computations to
speed up global optimization algorithms [12]. The neuro-fuzzy inference system
(NFIS) can be used to approximate the boundary-value problem for different
optimization and identification tasks.
A fuzzy inference system is a system that uses a collection of fuzzy membership
functions and rules, instead of conventional (Boolean) logic, to infer about data.
The rules typically have the form [37]:

IF x_1 = A_1 AND x_2 = A_2 … AND x_n = A_n THEN y = B \qquad (3.9.12)

where x_i is the input variable, y is the output variable, A_i is the fuzzy subset of the rule premise, and B is the fuzzy set of the rule conclusion.
The rules are collected in a set called a rule base or a knowledge base. A typical
fuzzy system consists of four parts, as presented in Fig. 3.46 [44].
The inference process consists of three steps: fuzzification, inference and
defuzzification. Fuzzification block determines the degree of membership of each
(typically crisp) input variable x for each fuzzy set A′. Inference block uses the
membership values determined during fuzzification to evaluate the rules according

Fig. 3.46 Block diagram of a fuzzy system



to the compositional rule of inference. The result is an output fuzzy set B′. The
defuzzification block is responsible for the transformation of the fuzzy set into a
crisp value y. The centre of gravity (COG) method is usually used [32]. As a result,
the output value can be calculated as:

y = \frac{\sum_{l=1}^{M} c_l\, \mu_{A^{(l)}}(x)}{\sum_{l=1}^{M} \mu_{A^{(l)}}(x)} \qquad (3.9.13)

where c_l is the centre of the output set for the rule A^{(l)}, \mu_{A^{(l)}}(x) is the membership
function calculated in the inference step, and l = 1, 2, …, M is the rule number.
The NFIS realizes a multivariable function using the sum of single-variable
fuzzy functions [53]. The fuzzy functions are characterized by the membership
function μ(x). A Gaussian membership function is assumed for each input in each
rule:

\mu_A(x; c, \sigma) = \exp\left[-\left(\frac{x - c}{\sigma}\right)^2\right] \qquad (3.9.14)

If the fuzzy subset A is calculated in the inference block by the following formula:

\mu_A(x) = \mu_{A_1}(x_1)\, \mu_{A_2}(x_2) \cdots \mu_{A_n}(x_n) \qquad (3.9.15)

an arbitrary continuous function can be approximated as:

f(x) = \frac{\sum_{l=1}^{M} W_l \prod_{i=1}^{N} \exp\left[-\left(\frac{x_i - c_i^{(l)}}{\sigma_i^{(l)}}\right)^{2}\right]}{\sum_{l=1}^{M} \prod_{i=1}^{N} \exp\left[-\left(\frac{x_i - c_i^{(l)}}{\sigma_i^{(l)}}\right)^{2}\right]} = \frac{f_1}{f_2} \qquad (3.9.16)

where W_l corresponds to the c_l value; c_i^{(l)} and \sigma_i^{(l)} are the centres and widths of the
"IF" part of each rule, and W_l is the centre of the "THEN" part of each rule.
The function f(x) can be realized by a multilayer fuzzy-neural network (Fig. 3.47).
The parameters W_l, c_i^{(l)} and \sigma_i^{(l)} are sought during the training process. The aim
of the training is the minimization of the mean-square error E by means of gradient
optimization methods. The mean-square error is defined as:

E = \frac{1}{2}\left[f(x) - d\right]^{2} \qquad (3.9.17)

where x is the input vector, f(x) is the function value approximated by the
fuzzy-neural network and d is the desired answer of the NFIS for the input vector x.

Fig. 3.47 The schema of the NFIS

The components of the gradient vector ∇E are expressed in the following forms:

\frac{\partial E}{\partial W_l} = \left[f(x) - d\right] \frac{y_l}{f_2}

\frac{\partial E}{\partial c_i^{(l)}} = 2\,\frac{f(x) - d}{f_2}\, y_l \left[W_l - f(x)\right] \frac{x_i - c_i^{(l)}}{\left(\sigma_i^{(l)}\right)^{2}} \qquad (3.9.18)

\frac{\partial E}{\partial \sigma_i^{(l)}} = 2\,\frac{f(x) - d}{f_2}\, y_l \left[W_l - f(x)\right] \frac{\left(x_i - c_i^{(l)}\right)^{2}}{\left(\sigma_i^{(l)}\right)^{3}}

for each input i = 1, 2, …, N and each rule l = 1, 2, …, M, where y_l denotes the activation (the product of the Gaussian memberships) of the lth rule.
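A compact numerical sketch of Eqs. (3.9.16)–(3.9.18) is given below (Python; the training pair, the parameter shapes and the learning rate are illustrative assumptions), performing one gradient-descent update of W_l, c_i^{(l)} and σ_i^{(l)}.

import numpy as np

def nfis(x, W, c, sig):
    """Forward pass of Eq. (3.9.16): rule activations y_l, output f = f1/f2."""
    y = np.exp(-np.sum(((x - c) / sig)**2, axis=1))   # y_l, one value per rule
    f1, f2 = np.sum(W * y), np.sum(y)
    return f1 / f2, y, f2

def gradients(x, d, W, c, sig):
    """Components of the gradient of E = 1/2 (f(x) - d)^2, Eq. (3.9.18)."""
    f, y, f2 = nfis(x, W, c, sig)
    err = f - d
    dW = err * y / f2
    dc = 2 * err / f2 * (y * (W - f))[:, None] * (x - c) / sig**2
    dsig = 2 * err / f2 * (y * (W - f))[:, None] * (x - c)**2 / sig**3
    return dW, dc, dsig

rng = np.random.default_rng(0)
M, N = 4, 2                             # numbers of rules and of inputs
W = rng.normal(size=M)                  # centres of the "THEN" parts
c = rng.uniform(-1, 1, size=(M, N))     # centres of the "IF" parts
sig = np.full((M, N), 0.7)              # widths of the "IF" parts

x, d = np.array([0.2, -0.4]), 1.0       # one training pair (input vector, desired answer)
dW, dc, dsig = gradients(x, d, W, c, sig)
eta = 0.1                               # learning rate
W, c, sig = W - eta * dW, c - eta * dc, sig - eta * dsig
print(nfis(x, W, c, sig)[0])            # approximation of d after one training step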

3.10 Comparison of Particle Swarm Optimizer to Evolutionary Algorithms and Artificial Immune Systems

In order to choose the most effective optimization algorithm, a comparison between
the particle swarm optimizer (PSO), the artificial immune system (AIS), and the
sequential (SEA) and distributed (DEA) evolutionary algorithms has been drawn.
There are many publications devoted to the performance and efficiency of different
optimization methods and to comparisons between them according to various
criteria. The results are strongly dependent on the declared optimization parameters:
the simulation results depend on the velocity value and the number of particles in
the case of the PSO algorithm, and on the mutation mechanism and other algorithm
parameters in the case of the evolutionary and immune algorithms. Thus, the
optimization parameters have to be tuned, which requires examining a large number
of parameter combinations. In this chapter, after the choice of the optimal parameters
for the specified optimization problem, the comparison of effectiveness is performed.
In the literature one can find papers which usually indicate a better effectiveness of
PSOs in comparison with AISs and GAs for different global optimization tasks [3,
24, 26, 58, 62].

3.10.1 The Choice of the Optimization Parameters

The comparison has been performed on the basis of the optimization of the known
mathematical functions, that is, the Branin function with two design variables
(Fig. 3.48), the Goldstein-Price function with two design variables (Fig. 3.49), the
Rastrigin function with 20 design variables (Fig. 3.50), and the Griewangk function
with 20 design variables (Fig. 3.51), for the best parameters of the algorithms
(found earlier for these functions).

Fig. 3.48 Tested mathematical functions: Branin function



Fig. 3.49 Tested mathematical functions: Goldstein-Price function

In order to find the optimal parameters of the particle swarm optimizer, the
algorithm has been tested by changing the number of particles, inertia weight w and
acceleration coefficients c1, c2. The range of the changes of the particular param-
eters of the particle swarm optimizer is presented in Table 3.2. The results of the
stage of the optimal parameters selection for particular mathematical functions are
included in Table 3.3.
The parameters of the artificial immune system are: the number of memory cells,
the number of clones, range of the Gaussian mutation and the crowding factor. The
ranges of the change in the artificial immune system parameters are included in
Table 3.4 and the optimal values of the parameters in Table 3.5.
The sequential and distributed evolutionary algorithms, applied in comparison
with the particle swarm optimizer, use evolutionary operators like a simple cross-
over and a Gaussian mutation. The selection is performed by means of the ranking
method. The optimal probabilities of the evolutionary parameters for particular
mathematical functions are presented in Table 3.6.

Fig. 3.50 Tested mathematical functions: Rastrigin function

3.10.2 The Results of the Effectiveness Comparison

The results of the comparison of the particle swarm optimizer to the artificial
immune system and the sequential and distributed evolutionary algorithms are
presented in Figs. 3.52, 3.53, 3.54, 3.55. The criterion of the comparison was the
effectiveness of the tested algorithms measured by the average number of objective
function evaluations. Ten tests have been performed for each change in the
parameters of the algorithm, and the average number of objective function evalu-
ations for this representation has been computed. The numbers of the objective
function evaluations needed to achieve the value near the global optimum for each
of the tested functions were computed (Figs. 3.52, 3.53, 3.54, 3.55). For example,
the number of objective function evaluations for the Branin function, with the global
minimum 0.397887, was computed when the algorithm reached a value below 0.5, which was used as

Fig. 3.51 Tested mathematical functions: Griewangk function

Table 3.2 The range of the changes of the PSO parameters

Particles number:               2, 3, 4, …, 200
Inertia weight w:               0.1; 0.2; …; 1.0
Acceleration coefficient c1:    0.1; 0.2; …; 2.0
Acceleration coefficient c2:    0.1; 0.2; …; 2.0

Table 3.3 The optimal parameters of the PSO for particular functions

Function          Particles number   Inertia weight w   Acceleration c1   Acceleration c2
Branin            4                  0.5                1.8               1.7
Goldstein-Price   5                  0.5                1.5               1.8
Rastrigin         74                 1                  1.9               1.9
Griewangk         10                 1                  1.7               1.7

Table 3.4 The range of the changes of the AIS parameters

Number of memory cells:   2, 4, 6, …, 100
Number of clones:         2, 4, 6, …, 100
Crowding factor:          0.01; 0.02; …; 1.0
Gaussian mutation:        0.1; 0.2; …; 1.0

Table 3.5 The optimal parameters of the AIS for particular functions

Function          Memory cells   Clones   Crowding factor   Gaussian mutation
Branin            2              2        0.48              0.1
Goldstein-Price   12             2        0.45              0.5
Rastrigin         2              4        0.45              0.4
Griewangk         2              2        0.45              0.1

Table 3.6 The optimal parameters of SEA and DEA for particular functions

Function          Number of        Chromosomes in       Probability of simple   Probability of Gaussian
                  subpopulations   each subpopulation   crossover (%)           mutation (%)
Branin            1                20                   100                     100
                  2                10                   100                     100
Goldstein-Price   1                20                   100                     100
                  3                7                    100                     100
Rastrigin         1                20                   100                     100
                  2                10                   100                     100
Griewangk         1                10                   100                     100
                  2                5                    100                     100

Fig. 3.52 Comparison of PSO to AIS and EAs for Branin function

Fig. 3.53 Comparison of PSO to AIS and EAs for Goldstein-Price function

Fig. 3.54 Comparison of PSO to AIS and EAs for Rastrigin function

the stop condition for the optimization process. Then, a new optimization process was
started. Similarly, for the Goldstein-Price function with the global minimum 3.0,
the stop condition was set to 3.1; for the Rastrigin and Griewangk functions with the
global minimum 0.0, the stop condition was equal to 0.1.

Fig. 3.55 Comparison of PSO to AIS and EAs for Griewangk function

References

1. Alander JT (2000) An indexed bibliography of distributed genetic algorithms. University of


Vaasa, Report 94-1-PARA, Vaasa
2. Alefeld G, Mayer G (2000) Interval analysis: theory and applications. J Comput Appl Math
121:421–464
3. Ali MM, Khompatraporn C, Zabinsky ZB (2005) A numerical evaluation of several stochastic
algorithms on selected continuous global optimization test problems. J Global Optim 31:635–
672
4. Back T, Fogel DB, Michalewicz Z (1997) Handbook of evolutionary computation. IOP
Publishing Ltd., Bristol
5. Balthrop J, Esponda F, Forrest S, Glickman M (2002) Coverage and generalization in an
artificial immune system. In: Proceedings of the genetic and evolutionary computation
conference GECCO 2002. Morgan Kaufmann, New York, pp 3–10
6. Bargiela A, Pedrycz W (2002) Granular computing as an emerging paradigm of information
processing. Granular computing. Kluwer Academic Publishers, Boston, pp 1–18
7. Bargiela A, Pedrycz W (2008) Toward a theory of granular computing for humancentred
information processing. IEEE Trans Fuzzy Syst 16(2):320–330
8. Blackwell T, Bentley PJ (2002) Don’t push me! Collision-avoiding swarms. In: Proceedings
of congress on evolutionary computation
9. Buhmann MD (2009) Radial basis functions: theory and implementations. Cambridge
University Press, Cambridge
10. Burczyński T, Orantek P (1999) Coupling of genetic and gradient algorithms. In: Proceedings
of conference on evolutionary algorithms and global optimization, Złoty Potok, pp 112–114
11. Burczyński T, Orantek P, Skrobol A (2003) Application of computational intelligence system
for defect identification. In: Proceedings of ECCOMAS symposium on artificial intelligence
AI-METH 2003, Gliwice
12. Burczyński T, Skrobol A, Orantek P (2004) Fuzzy-neural and evolutionary computation in
identification of defects. J Appl Mech 42(3):445–460
13. Burczyński T, Orantek P (2005) The fuzzy evolutionary algorithm in optimization problems.
In: Arabas J (ed) Proceedings of eight national conference on evolutionary computation and
global optimization, Warsaw, pp 23–30

14. Burczyński T, Orantek P (2005) The fuzzy evolutionary algorithm in structural optimization
and identification problems. In: 16th international conference on computer methods in
mechanics CMM-2005, Full Paper, CD-ROM, Czestochowa
15. Burczyński T, Orantek P (2005) Sigmoid and radial neural networks in sensitivity analysis:
comparisons and applications in defect identification. In: Proceedings of 16th international
conference on computer methods in mechanics CMM-2005, CD-ROM, Czestochowa
16. Burczyński T, Orantek P (2007) The identification of stochastic parameters in mechanical
structures. In: Proceedings of 17th international conference on computer methods in
mechanics CMM-2007, CD-ROM, Łódź-Spała
17. Burczyński T, Skrzypczyk J (1999) Theoretical and computational aspects of the stochastic
boundary element method. Comput Methods Appl Mech Eng 168:321–344
18. Cantu-Paz E (1998) A survey of parallel genetic algorithms. Calculateurs Paralleles, Reseaux
et Systems Repartis, 10, 2, Paris, pp 141–171
19. Caprani O, Madsen K, Nielsen HB (2002) Introduction to interval analysis. Lecture Notes,
Department of Informatics and Mathematical Modelling, Technical University of Denmark,
Lyngby, Denmark
20. de Castro LN, Timmis J (2003) Artificial immune systems as a novel soft computing
paradigm. Soft Comput 7(8):526–544
21. de Castro LN, Von Zuben FJ (2001) Immune and neural network models: theoretical and
empirical comparisons. Int J Comput Intell Appl (IJCIA) 1(3):239–257
22. de Castro LN, Von Zuben FJ (2002) Learning and optimization using the clonal selection
principle. IEEE Trans Evol Comput Spec Issue Artif Immune Syst 6(3):239–251
23. Chen L, Rao SS (1977) Fuzzy finite element approach for vibrating analysis of imprecisely
defined systems. Finite Elem Anal Des 27:69–83
24. Cheng YM, Li L, Chi SC (2007) Performance studies on six heuristic global optimization
methods in the location of critical slip surface. Comput Geotech 34:462–484
25. Clerc M, Kennedy J (2002) The particle swarm-explosion, stability and convergence in a
multidimensional complex space. IEEE Trans Evol Comput 6
26. Eberhart RC, Shi Y (1998) Comparison between genetic algorithms and particle swarm
optimization. In: Proceedings of the seventh annual conference on evolutionary programming.
Springer: New York, pp 611–616
27. Fausett LV (1993) Fundamentals of neural networks: architectures, algorithms and
applications. Prentice Hall, Upper Saddle River
28. Freeman JA, Skapura DM (1991) Neural networks—Algorithms, applications and program-
ming techniques. Addison-Wesley Pub, Reading
29. Gurney K (1997) An introduction to neural networks. UCL Press, London
30. Hanss M (2005) Applied fuzzy arithmetic. Springer, Berlin
31. Heppner F, Grenander U (1990) A stochastic nonlinear model for coordinated bird flocks. In:
Krasner S (ed) The ubiquity of chaos. AAAS Publications, Washington, DC
32. Jang JSR, Sun CT, Mizutani E (1997) Neuro-fuzzy modeling and soft computing. Prentice
Hall, Upper Saddle River
33. Kennedy J, Eberhart RC (1995) Particle swarm optimisation. In: Proceedings of IEEE
international conference on neural networks. Piscataway, NJ, pp 1942–1948
34. Kennedy J, Eberhart RC (2001) Swarm intelligence. Morgan Kauffman, San Francisco
35. Kleiber M, Hien TD (1992) The stochastic finite element method. Wiley, New York
36. Mehrotra K, Mohan CK, Ranka S (1997) Elements of artificial neural networks. MIT Press,
Cambridge
37. Mendel JM (2001) Uncertain rule-based fuzzy logic systems: introduction and new directions.
Prentice Hall, Upper Saddle River
38. Michalewicz Z (1996) Genetic algorithms + data structures = evolution programs. Springer,
Berlin
39. Moore RE (1966) Interval analysis. Prentice-Hall, Englewood Cliff

40. Orantek P (2002) Hybrid evolutionary algorithm in optimization of structures under


dynamical loads. In: IUTAM symposium on evolutionary methods in mechanics, Kraków,
pp 297–308
41. Orantek P (2002) Application of the hybrid algorithms in optimization and identification
problems for dynamic structures. Dissertation, Silesian University of Technology
42. Orantek P, Burczyński T (2006) The evolutionary algorithm in stochastic optimization and
identification problems. In: Arabas J (ed) Evolutionary computation and global optimization,
Warsaw, pp 309–320
43. Orantek P, Burczyński T (2006) The identification of stochastic parameters in mechanical
structures. In: Proceedings of the CMM-2007 conference, CD-ROM, Lodz-Spala
44. Passino KM, Yurkovich S (1998) Fuzzy control. Addison-Wesley, Longman
45. Pawlak Z (1982) Rough sets. Int J Comput Inf Sci 11(5):341–356
46. Pawlak Z (2012) Rough sets: theoretical aspects of reasoning about data. Springer Science &
Business Media, Dordrecht
47. Pawlak Z, Skowron A (1994) Rough membership function. Advances in the Dempster–
Shafer theory of evidence. Wiley, New York, pp 251–271
48. Pedrycz W (2001) Granular computing: an introduction. In: Proceedings of joint IFSA world
congress on 20th NAFIPS international conference, vol 3, pp 1349–1354
49. Ptak M, Ptak W (2000) Basics of immunology. Jagiellonian University Press, Cracow (in
Polish)
50. Reynolds CW (1987) Flocks, herds, and schools, a distributed behavioral model. Comput
Graph 21:25–34
51. Shi Y, Eberhart RC (1999) Empirical study of particle swarm optimization. In: Proceedings of
congress of evolutionary computation, Piscatay
52. Silva A, Neves A, Costa E (2002) Chasing the swarm: a predator pray approach to function
optimisation. In: Proceedings of MENDEL2002—8th international conference on soft
computing. Brno, Czech Republic
53. Skrobol A (2005) Coupling of evolutionary algorithms and artificial neural network in defect
identification. In: International symposium on neural networks and soft computing in
structural engineering, Full Papers, CD-ROM, Kraków
54. Toner J, Tu Y (1999) Flocks, herds, and schools: a quantitative theory of flocking. Phys Rev E
58:4828–4858
55. Tanese R (1989) Distributed genetic algorithms. In: Schaffer JD (ed) Proceedings of 3rd
ICGA, San Mateo, pp 434–439
56. Wierzchoń ST (2001) Artificial immune systems, theory and applications. EXIT. (in Polish)
57. Yao YY (2007) The art of granular computing. In: Proceeding of the international conference
on rough sets and emerging intelligent systems paradigms. LNAI, vol 458, pp 101–112
58. Yap DFW, Koh SP, Tiong SK (2009) A comparative analysis on the performance of particle
swarm optimization and artificial immune systems for mathematical test functions. Aust J
Basic Appl Sci 3(4):4344–4350
59. Zadeh LA (1979) Fuzzy sets and information granularity. In: Gupta M, Ragade RK,
Yager RR (eds) Advances in fuzzy set theory and applications. North-Holland Publishing
Company, Amsterdam, pp 3–18
60. Zadeh L (1997) Toward a theory of fuzzy information granulation and its centrality in human
reasoning and fuzzy logic. Fuzzy Sets Syst 90:111–127
61. Zadeh LA (1965) Fuzzy sets. Inf Control 8(3):338–353
62. Zhang X, Srinivasan R, Zhao K, Van Liew M (2008) Evaluation of global optimization
algorithms for parameter calibration of a computationally intensive hydrologic model. Hydrol
Process. Wiley InterScience
Chapter 4
Structural Intelligent Optimization

Abstract This chapter is devoted to the single and multiobjective optimization


problems. The chapter contains formulation of diverse mechanical and
thermo-mechanical problems as well as practical applications of optimal design for
the problems considered. The shape and topology optimization for various types of
problems is considered. Several objectives for optimization problems are proposed,
formulated and implemented. The bio-inspired algorithms and hybrid algorithms
coupled with FEM or BEM are used in numerical examples. The formulation and
solutions of sample problems for linear elastic, nonlinear elastoplastic, composites
and structures with crack are described in detail. Optimization problems with more
than one criterion are presented in the context of optimization for coupled fields
problems.

4.1 Formulation of Single- and Multiobjective Optimization Problems

4.1.1 Introduction

The optimization problem can be defined as an action leading to an increase in
effectiveness and, ultimately, to the optimal solution. In practical cases, the optimal solution
is unknown, so the main goal of the optimization is improvement. The achievement
of the optimum can be difficult to verify and is sometimes impossible. Three
different classes of optimization methods can be distinguished [22, 74]:
• analytical;
• enumerative;
• random.
The indirect analytical methods are based on moving along the graph of the function in
a direction calculated from the local gradient of the function (climbing
the steepest slope), whereas the direct methods search for the local minima by solving
a set of equations. The idea of the enumerative methods consists in searching all


Fig. 4.1 Function with local minima

points in the considered domain. The algorithm is very simple, but effective only for
small finite domains; usually checking all possibilities is impossible in a reasonable
time. The goal of the random methods is to randomly explore the whole
search space (without any additional parameters). Such a search is very
time-consuming, though less so than for the enumerative methods.
Analytical optimization methods are widely applied and have good mathematical
foundations, but unfortunately for multimodal functions they usually get stuck in
local optima (Fig. 4.1). Intelligent computing techniques, for example EAs, AISs
and PSOs, offer a compromise between the computational effort and the quality of
the obtained solution; in many practical optimization tasks, they are the only
possible choice.

4.1.2 Formulation of the Optimization Problem

The solution of the optimization problem is given by the vector of design variables,
which represents the shape of the structure, internal defects, boundary conditions,
and so on. Depending on the computing technique, the vector of design variables is
represented by:
• a chromosome for genetic or evolutionary algorithms [90, 91],
• B-cells for the artificial immune systems [48],
• a particle for the particle swarm optimizers [76].
The design vector consists of N design variables:

ch = \left[x_1, \dots, x_i, \dots, x_N\right] \qquad (4.1.1)

Box constraints are imposed on each design variable:

x_i^{L} \le x_i \le x_i^{R}, \quad i = 1, 2, \dots, N \qquad (4.1.2)

where x_i^{L} and x_i^{R} are the left and right admissible values of x_i.
For the single optimization problem, the task consists in finding a set of design
variables x which minimizes or maximizes the objective function f(x) and simul-
taneously satisfies a set of constraints. For the multiobjective optimization problem
instead of one objective function, a set of objective functions is considered:

JðxÞ ¼ ½f1 ðxÞ; f2 ðxÞ; . . .; fk ðxÞT ð4:1:3Þ

Multiobjective optimization deals with multiple conflicting objectives, and


usually, the optimal solution for one of the objectives is not the optimum for any of
the other objectives. In such an approach, instead of one optimal solution a number
of solutions are optimal. These solutions are called pareto-optimal solutions or
nondominated solutions.
Pareto-optimality defines a set F_P in which every element f_P is a solution
of the problem for which no other solution is better with regard to all
objective functions. In other words, a solution is pareto-optimal if there exists no
feasible vector that would decrease some criterion without causing a simultaneous
increase in another criterion. Figure 4.2 presents an example of dominated solutions,
nondominated solutions and the pareto front for a bi-objective problem.
Considering two solution vectors x and y for a minimization problem, x is
contained in the pareto front if:

\forall i \in \{1, 2, \dots, k\}: f_i(x) \le f_i(y) \quad \text{and} \quad \exists j \in \{1, 2, \dots, k\}: f_j(x) < f_j(y) \qquad (4.1.4)

Fig. 4.2 An example of the bi-objective problem



The pareto optimum does not always give a single solution, but a set of solutions
called nondominated solutions or efficient solutions.
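The dominance relation of Eq. (4.1.4) can be stated directly in code; the sketch below (Python, minimization, names and data purely illustrative) extracts the nondominated subset of a finite set of objective vectors.

import numpy as np

def dominates(fx, fy):
    """True if the solution with objectives fx dominates the one with fy (Eq. 4.1.4, minimization)."""
    return np.all(fx <= fy) and np.any(fx < fy)

def nondominated(F):
    """Indices of the pareto-optimal (nondominated) objective vectors in the array F (one row per solution)."""
    keep = []
    for i, fi in enumerate(F):
        if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i):
            keep.append(i)
    return keep

# illustrative bi-objective values for five candidate solutions
F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0], [2.5, 2.5]])
print(nondominated(F))   # [0, 1, 3, 4]: solution 2 is dominated by solution 1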
Most of the bio-inspired multiobjective algorithms are based on the pareto concept.
Some earlier implementations of such algorithms, as well as gradient-based techniques, use
scalarization methods that involve a priori preferences. Parameters, coefficients,
constraint limits, and so on, have to be specified in order to obtain the
pareto-optimal set [5, 51]. The most popular methods are:
• global criterion method,
• min–max method,
• weighting min–max method,
• weighting sum method,
• e-constraint method,
• lexicographic method,
• goal programming.
The authors have solved different types of engineering multiobjective opti-
mization problems with the use of:
• weighting sum method,
• e-constraint method,
• MOEA—multiobjective evolutionary algorithm based on Fonseca Fleming idea,
• MOOPTIM—multiobjective optimization library based on pareto concept and
EAs,
• NSGA-II—nondominated sorting genetic algorithm.
Except for NSGA-II, all implementations are the authors' own codes written in C++.

4.1.2.1 Scalarization Methods

The weighting sum method and the ε-constraint method belong to the scalarization
methods and do not require modification of the core evolutionary algorithm (the
sequential evolutionary algorithm is used). For the former, the problem is
transformed into a single-objective one (Fig. 4.3) by applying the following formula:

f(x) = \sum_{i=1}^{k} w_i f_i(x) \qquad (4.1.5)

where k is the number of objective functions and w_i are the weights of each criterion.

Fig. 4.3 Scalarization of the multiobjective problem

In order to obtain pareto-optimal solutions, the optimization task has to be per-
formed many times; moreover, it is not possible to obtain solutions lying on a non-
convex part of the pareto-optimal front. Another important problem is how to choose the
weights.
For the ε-constraint method one criterion is arbitrarily chosen and the other criteria
are treated as constraints. As for the weighting sum method, the optimization
procedure has to be performed many times to obtain the pareto-optimal solutions,
and choosing appropriate bounds for the other criteria may be difficult.
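Both scalarization approaches reduce to wrapping the vector of objectives in a single scalar function, as in this short sketch (Python; the criteria, the weights, the bound eps and the penalty factor are illustrative choices):

import numpy as np

def f1(x): return x[0]**2 + x[1]**2            # illustrative criteria
def f2(x): return (x[0] - 1.0)**2 + x[1]**2

def weighted_sum(x, w=(0.5, 0.5)):
    """Eq. (4.1.5): single objective as a weighted sum of the criteria."""
    return w[0] * f1(x) + w[1] * f2(x)

def eps_constraint(x, eps=0.3, penalty=1e6):
    """Minimize f1 while treating f2 as a constraint f2 <= eps (penalty formulation)."""
    return f1(x) + penalty * max(0.0, f2(x) - eps)

x = np.array([0.4, 0.1])
print(weighted_sum(x), eps_constraint(x))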

4.1.2.2 MOEA—Multiobjective Evolutionary Algorithm Based on the Fonseca Fleming Idea

MOEA is the authors' improved implementation of the Fonseca and Fleming multi-
objective evolutionary algorithm. This algorithm uses the pareto concept, so it
differs from the typical sequential version of the EA in more respects.
The flowchart of the MOEA is shown in Fig. 4.4. The proposed evolutionary
algorithm starts with a population of chromosomes randomly generated. Two kinds
of mutation are applied: a uniform mutation and a Gaussian mutation. The operator
of the uniform mutation replaces a randomly chosen gene of the chromosome with a
new random value. This value corresponds to the design parameter with its con-
strains. For the Gaussian mutation a new value of the gene is created with the use of
Gaussian distribution. The probability of the mutation decides how many genes will
be modified in each population. The operator of the simple crossover creates two
new chromosomes from the two randomly selected chromosomes.
Both chromosomes are cut in a randomly position and merged together. In order
to compute k objective functions, the proper boundary-value problem is solved. The
selection is performed on the basis of a ranking method proposed by Fonseca and
Fleming [69], information about pareto-optimal solutions and the similarity of
solutions.
The pareto set is determined in the current population by means of Eq. (4.1.4).
The Euclidean distance between chromosomes is defined as follows:

ED\left(x_i, x_j\right) = \sqrt{\sum_{n=1}^{popsize} \left[x_i(n) - x_j(n)\right]^{2}} \qquad (4.1.6)

The rank of each chromosome depends on the number of individuals by which it
is dominated and on the scaled value of the Euclidean distance. This scheme helps to
preserve the diversity in the population: the most similar chromosomes have a lower
probability of surviving.
The next iteration is performed if the stop condition is not fulfilled. The stop
condition is expressed as the maximum number of iterations. The pareto set of each
generation is stored in a file. On the basis of such files, the collective pareto set of
optimal solutions is generated.
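A sketch of the ranking used in the selection is given below (Python): the classical Fonseca–Fleming rank (one plus the number of individuals that dominate a given solution), combined with an Euclidean-distance niching measure in the spirit of Eq. (4.1.6); the population and the objectives are illustrative assumptions.

import numpy as np

def dominates(fa, fb):
    return np.all(fa <= fb) and np.any(fa < fb)

def ff_rank(F):
    """Fonseca-Fleming rank: 1 + the number of individuals that dominate each solution."""
    return np.array([1 + sum(dominates(fj, fi) for j, fj in enumerate(F) if j != i)
                     for i, fi in enumerate(F)])

def mean_distance(X):
    """Average Euclidean distance of each chromosome to the rest of the population (cf. Eq. 4.1.6)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    return D.sum(axis=1) / (len(X) - 1)

rng = np.random.default_rng(2)
X = rng.uniform(size=(6, 3))                                              # population: 6 chromosomes, 3 genes
F = np.column_stack([np.sum(X**2, axis=1), np.sum((X - 1)**2, axis=1)])   # two illustrative objectives
ranks, dist = ff_rank(F), mean_distance(X)
# a lower rank is better; among equal ranks, a larger distance (a less similar chromosome) is preferred
order = sorted(range(len(X)), key=lambda i: (ranks[i], -dist[i]))
print(order)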

Fig. 4.4 The flowchart of MOEA

4.1.2.3 MOOPTIM—Multiobjective Optimization Tool

MOOPTIM is an improved version of the multiobjective evolutionary algorithm


which uses a nondominated sorting procedure in the selection. Some ideas are
inspired by Deb’s algorithm (NSGA-II).

The algorithm consists of two parts: an initialization and a main loop. Figure 4.5
shows the flowchart of the multiobjective evolutionary algorithm. In the initial-
ization step, all the settings of the algorithm are determined, populations Qi and Pi
are generated and the fitness functions are evaluated for population Qi. In the main
loop, after the evaluation of fitness function values for Pi and checking stop

Fig. 4.5 The flowchart of MOOPTIM



conditions, populations Qi and Pi are combined. The selection is performed on the
set Ri, which is twice as large as Pi. The nondominated sorting procedure is
used for the classification of the individuals in the population. Moreover, to preserve
diversity in the population, a crowding coefficient is calculated for each solution
[54]. The individuals from the population Ri are put into Pi+1 on the basis of the
nondomination level and the crowding coefficient. Individuals from Pi+1 are copied
to Qi+1 and then the evolutionary operators modify the population Pi+1. Two types of
mutation (uniform and Gaussian) and two types of crossover (simple and arithmetic)
are used. The algorithm works until the stop condition is fulfilled. In comparison to
NSGA-II, the proposed implementation has more evolutionary operators. The other
There is no binary tournament selection operator in MOOPTIM. The algorithm was
tested on several benchmark problems and some engineering problems. The results
obtained by means of the proposed library in most cases are better than the results
obtained by using NSGA-II [57, 85].

4.1.2.4 NSGA-II—Nondominated Sorting Genetic Algorithm

NSGA-II (Non-Dominated Sorting Genetic Algorithm II), proposed by Srinivas and
Deb [53, 54], is a successor of the NSGA algorithm. NSGA-II is more effi-
cient, uses elitism and preserves diversity without requiring any additional parameters,
in comparison to the previous NSGA implementation. NSGA-II and SPEA2 (Strength
Pareto Evolutionary Algorithm 2) are the most widely used evolutionary algorithms for
solving different practical multiobjective optimization problems. The code is open
source, written in C.

4.1.3 Intelligent Optimization System

Different intelligent computing techniques coupled with an engineering optimiza-


tion problem create an optimization system. The applied intelligent computing
techniques are described in Chap. 3 and can be used for solving particular problems
(considering their pros and cons). Boundary-value problems are solved by means of
appropriate numerical technique, such as FDM, FEM, BEM [8–10, 126]. These
problems can be solved by means of in-house implementations or commercial codes.
In-house codes can easily be adapted to the optimization loop, but their efficiency is usually
lower than that of commercial codes. The commercial FEM programmes are
highly optimized and are able to solve a wide range of problems. The most
popular FEM packages are:
• MSC.Nastran;
• MSC.Marc;
• MSC.Dytran;

• ANSYS Multiphysics;
• ABAQUS;
• COMSOL.
Coupling of the computational techniques and the optimization problem requires
the creation of the proper interface (Fig. 4.6). Communication is usually performed
through files. These interfaces should read values of the design variables and
prepare appropriate information for the solution of the boundary-value problem.
The flowchart of the fitness function evaluation is presented in Fig. 4.7. The
optimization algorithm (EA, MOEA, AIS, PSO) sends the values of the design variables.
On the basis of the design variables, the geometry of the structure is created.
The next steps are the generation of the finite-element mesh, the boundary and initial
conditions, and the definition of all necessary settings of the analysis. After solving the
boundary-value problem, the results are read from the output files generated by the
FEM or BEM packages. On the basis of these results, the fitness function (for a
single-objective optimization task) or the fitness functional (for a multiobjective
optimization task) is calculated.
It should be mentioned that the preparation of the model, the mesh generation, and
so on, can be done by means of in-house codes or an appropriate pre-processor of a
commercial system. It is very convenient to use pre-processors (e.g. there is no need to
use an external meshing procedure), but this requires the use of the internal script languages
implemented in the pre-processors and may be more time-consuming.

Fig. 4.6 Coupling of the technique and the problem

Fig. 4.7 The flowchart of the fitness function evaluation
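A minimal sketch of such a file-based interface is shown below (Python); the input/output file formats, the solver command "fem_solver" and the result parsing are purely hypothetical placeholders, since they depend on the FEM/BEM package actually used.

import subprocess
import numpy as np

def evaluate_fitness(design_variables):
    """One fitness evaluation through a file-based interface (hypothetical solver and file formats)."""
    # 1. write the design variables for the pre-processor / model generator
    np.savetxt("design.txt", design_variables)

    # 2. run the external solver; "fem_solver" and its arguments are placeholders
    subprocess.run(["fem_solver", "--input", "design.txt", "--output", "results.txt"],
                   check=True)

    # 3. read the results written by the solver (here: one value per line, an assumed format)
    results = np.loadtxt("results.txt")

    # 4. compute the fitness value from the results, e.g. the maximum equivalent stress
    return float(np.max(results))

# usage inside the optimization loop (EA, AIS, PSO, ...):
# fitness = evaluate_fitness(chromosome)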

4.1.4 Geometry Modelling

The choice of the geometry modelling method and the design variables has a great
influence on the final solution of the optimization process. There are a lot of
methods for geometry modelling. In the proposed approach, nonuniform rational
B-spline (NURBS) and Bezier curves are used to model the geometry of the
structure [98]. The use of these curves in optimization makes the reduction of the
number of design parameters possible. It provides the flexibility to design a large
variety of shapes by manipulating the control points.
An nth-degree Bezier curve is defined by:

C(u) = \sum_{i=0}^{n} B_{i,n}(u)\, P_i \qquad (4.1.7)

where u is a parameter varying in the range ⟨0, 1⟩ and P_i are the control points.
The basis function B_{i,n} is given by:

B_{i,n}(u) = \frac{n!}{i!\,(n-i)!}\, u^{i} (1-u)^{n-i} \qquad (4.1.8)

The fourth-degree Bezier curve is described by the following equation:

C(u) = (1-u)^{4} P_0 + 4u(1-u)^{3} P_1 + 6u^{2}(1-u)^{2} P_2 + 4u^{3}(1-u) P_3 + u^{4} P_4 \qquad (4.1.9)

An example of a fourth-degree Bezier curve is shown in Fig. 4.8. The flexibility to
design a large variety of shapes is provided by manipulating the control points.

Fig. 4.8 An example of modelling the shape of the structure by a fourth-degree Bezier curve

Successive points of the curve are obtained by changing the value of u between 0
and 1. For u = 0, C(u) = P_0 and for u = 1, C(u) = P_4. The shape of the Bezier
curve depends on the position of the control points. In order to obtain more complicated
shapes, it is necessary to raise the degree of the Bezier curve and introduce more
control points.
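The curve of Eqs. (4.1.7)–(4.1.9) can be evaluated directly; the sketch below (Python, with illustrative control points) reproduces the fourth-degree case and confirms that the endpoints coincide with P0 and P4.

import numpy as np
from math import comb

def bezier(u, P):
    """Point on an nth-degree Bezier curve, Eqs. (4.1.7)-(4.1.8); P is an array of control points."""
    n = len(P) - 1
    B = np.array([comb(n, i) * u**i * (1 - u)**(n - i) for i in range(n + 1)])
    return B @ P

# five control points define a fourth-degree curve (Eq. 4.1.9)
P = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, -1.0], [3.0, 2.0], [4.0, 0.0]])
print(bezier(0.0, P), bezier(0.5, P), bezier(1.0, P))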
A NURBS curve is more adjustable and flexible in comparison to the Bezier
curve. The curve is defined by the following formula:

C(u) = \frac{\sum_{j=0}^{r} N_{j,n}(u)\, w_j\, P_j}{\sum_{k=0}^{r} N_{k,n}(u)\, w_k}, \quad a \le u \le b \qquad (4.1.10)

where P_j are the control points, w_j are the weights of the control points, and N_{j,n} are the
nth-degree B-spline basis functions defined on the knot vector

U = \{\underbrace{a, \dots, a}_{n+1},\ u_{n+1}, \dots, u_{m-n-1},\ \underbrace{b, \dots, b}_{n+1}\} \qquad (4.1.11)

When the position and the weight of the control points are changed, it is possible
to manipulate the curve precisely. From the practical point of view, a very
important feature of NURBS curves is the local approximation property. It means that
if the control point P_j is moved and/or the weight w_j is changed, only the part of the
curve on the interval u ∈ [u_j, u_{j+p+1}) is modified.
In the case of 3D structures the boundaries are modelled with NURBS surfaces (Fig. 4.9).
Due to the use of NURBS curves and surfaces, the number of optimized
parameters can be decreased.

Fig. 4.9 The modelling of the boundary by means of the NURBS surface

4.2 Shape, Topology, Material and Size Optimization and Their Parameterization

Shape and topology optimization have been an active research area for some time.
Recently, several innovative approaches for topology optimization have been
developed. One of the simplest optimization approaches is the method based on
removing inefficient material from a structure. This method is named evolutionary
structural optimization [120]. However, this method is not based on the application
of the evolutionary algorithm but on different rejection criteria for removing
material which depends on the types of design constraints.
One of the most famous topology optimization approaches is based on the
material homogenization method [16, 18]. It has been applied to various opti-
mization problems. The homogenization design method assumes the introduction of
the periodic microstructures of a particular shape into the finite elements of the
discretized domain. The size and orientation of microstructures in the elements
determine the density and structural characteristics of the material they are made of.
An optimization process consisting of application of the mathematical program-
ming techniques leads to the minimization of the structure compliance by changing
the orientation and size of the microstructures. As a result of the optimization
process, composite structures emerge. As a variation and simplification of the
homogenization method, the solid isotropic microstructure with penalization
(SIMP) method [17, 18] has been introduced. In this approach, the densities of the
basic element play the role of the design variables. The convergence of this method
is strongly dependent on the value of the penalization term. Another interesting
approach assumes the discretization of the domain into binary material/void ele-
ments introduced by Anagnostou et al. [4]. This approach was developed by
Kirkpatrick et al. [78], who proposed finding the optimal material configuration
within the design domain by using simulated annealing. Jensen and Sandgren [103]
proposed the application of the genetic algorithm in order to solve similar opti-
mization problems. This approach has been developed by Chapman et al. [49].
Another interesting approach to the structural optimization problem is the method
named the multi-GA system, introduced by Woon et al. [119], which assumes the
application of two genetic algorithms operating simultaneously and in parallel. The
first (external) genetic algorithm is used to define the optimum shape of the structure
by operating on the external boundary, while the second (internal) is used to
optimize the internal topology. This method does not require the application of the
post-processing or additional algorithms to generate smooth boundaries. Another
approach to the structural optimization is based on generating a new void (so-called
bubble) inside a domain on the basis of special criteria and next on performing
simultaneous shape and topology optimization. This approach was originated by
Eschenauer and Schumacher [67]. Coupling of this approach, the boundary ele-
ments and the genetic algorithms, was considered by Burczyński and Kokot [31].
From the mathematical point of view, this approach is based on replacing
a simply connected domain with a multiply connected domain. The topology optimization

method of Sokołowski and Żochowski [108] is based on the original concept of


topological derivative which measures the influence of small holes introduced to the
design domain and thus allowing the consideration of the topology changes. In the
original works, the authors introduced the topological derivative which was
applicable to domain functionals defined as integrals of some functions depending
on solutions of the Poisson equation or of the elasticity boundary-value problems.
Novotny et al. [95] modified this method by calculating the topological derivative
on the basis of the theory of shape derivative which measures the sensitivity of
boundary perturbations. Sethian and Wiegmann [105] introduced the level set
method. In this approach a level set function determines the addition and removal of
material in the structure domain by merging or splitting the holes uniformly dis-
tributed in the structure domain at the beginning of the optimization process. The
material is eliminated at a removal rate, which represents a percentage of the
maximal initial stress, and is added at a residual rate, without using meshes.
Allaire et al. [3] applied the shape derivative, topological derivative and the level set
method for topology optimization. The topological derivative prompts the initial
holes distribution and facilitates creating new holes during the optimization process.
Recently, bio-inspired global optimization methods like the particle
swarm optimizer (PSO) [76], the artificial immune system (AIS) [118] and the
evolutionary algorithm (EA) [90] have been applied to optimization problems. In
this chapter, a method based on the application of soft computing methods and the
finite-element method to the simultaneous optimization of the topology, shape and
material of the structure is presented. An important feature of this approach is the
high probability of finding globally optimal solutions, provided by the soft computing
methods. The described approach is free from the limitations connected with classic
gradient optimization methods. Coupling the finite-element method with the
bio-inspired algorithms gives an effective and efficient alternative optimization tool,
which enables solving a large class of optimization problems of mechanical
structures. The main feature of the proposed optimization method is the evolutionary
redistribution of the material in the structure by changing its material properties. This
process leads to the elimination of a part of the material from the structure and, as a
result, a new shape and topology of the structure emerge [43, 45, 109, 111, 112].

4.2.1 Formulation of the Problem

Consider a structure which, at the beginning of an evolutionary process, occupies a
domain Ω_0 in E^d, d = 2 or 3, bounded by a boundary Γ_0. The domain Ω_0 is filled
with an elastic, homogeneous and isotropic material of Young's modulus E_0, mass
density ρ_0 and Poisson's ratio ν. The structures are considered in the framework
of the linear theory of elasticity. During the evolutionary process the domain Ω_t, its
boundary Γ_t and the field of mass densities ρ(X) = ρ_t, X ∈ Ω_t (or thicknesses
g(X) = g_t, X ∈ Ω_t for 2D) can change in each generation t (for t = 0, ρ_0 = const).
The evolutionary process proceeds in an environment in which the structure
fitness is described by various criteria:

(a) the minimization of the stress of the structure

J = \int_{\Omega} \psi(\sigma)\, d\Omega \qquad (4.2.1)

where ψ is an arbitrary function of the stress tensor σ, with a constraint imposed
on the volume of the structure

V \equiv |\Omega| \le V_{max} \qquad (4.2.2)

(b) the minimization of the mass of the structure

J = \int_{\Omega} \rho\, d\Omega \qquad (4.2.3)

with constraints imposed on the equivalent stresses σ_eq and the displacements u of the
structure

\sigma_{eq}(x, y, z) \le \sigma_{ad}, \quad (x, y, z) \in \Omega \qquad (4.2.4)

|u(x, y, z)| \le u_{ad}, \quad (x, y, z) \in \Omega \qquad (4.2.5)

(c) the minimization of the elastic strain energy

J = \frac{1}{2} \int_{\Gamma} u\, p\, d\Gamma \qquad (4.2.6)

where u and p are the boundary displacement and traction fields, respectively,
with a constraint imposed on the volume of the structure (4.2.2).

4.2.2 Concept of Generalized Evolutionary Optimization of Structures

The distribution of the mass density ρ(X), X ∈ Ω_t, or the thickness g(X), X ∈ Ω_t
(Fig. 4.10) in the structure is described by a surface Ψ_ρ(X), Ψ_g(X), X ∈ H^2 (for
2D) or a hypersurface Ψ_ρ(X), X ∈ H^3 (for 3D). The surface (hypersurface)
Ψ_α(X), α = ρ, g, is stretched over H^d ⊂ E^d (d = 2, 3) and the domain Ω_t is
included in H^d, that is, Ω_t ⊆ H^d.

Fig. 4.10 The illustration of the idea of evolutionary generation for a 2D structure

The shape of the surface (hypersurface) Ψ_α(X), α = ρ, g, is controlled by genes
d_j, j = 1, 2, …, G, which create a chromosome

ch = \left[d_1, d_2, \dots, d_j, \dots, d_G\right] \qquad (4.2.7)

d_j^{min} \le d_j \le d_j^{max} \qquad (4.2.8)

where d_j^{min}, d_j^{max} are the minimum and maximum values of the gene, respectively.
Genes are the values of the function Ψ_α(X), α = ρ, g, at the control points (X)_j of
the surface (hypersurface), that is, d_j = Ψ_α[(X)_j], j = 1, 2, …, G.
The finite-element method [126] is applied in the analysis of the structure. The
domain Ω of the structure is discretized by means of finite elements,
Ω = ∪_{e=1}^{E} Ω_e.
The mass density and the thickness are assigned to each finite element Ω_e, e =
1, 2, …, E, by the mappings:

\rho_e = \Psi_\rho\left[(X)_e\right], \quad (X)_e \in \Omega_e, \quad e = 1, 2, \dots, E \qquad (4.2.9)

g_e = \Psi_g\left[(X)_e\right], \quad (X)_e \in \Omega_e, \quad e = 1, 2, \dots, E \qquad (4.2.10)

It means that each finite element can have a different mass density or thickness.

When the value of the mass density or thickness for the eth finite element is
included in the interval 0  qe \qmin (or 0  ge \gmin ), the finite element is elim-
inated and the void is created, and in the interval qmin  qe \qmax (or
gmin  ge \gmax ), the finite element remains.
In the next step, the Young’s modulus for the eth finite element is evaluated by
means of the following equation:

r
qe
Ee ¼ Emax ð4:2:11Þ
qmax

where Emax ; qmax are Young’s modulus and mass density for the same material,
respectively, r is a parameter which can change from 1 to 9.
The dependence between Young’s modulus and mass density in the topology
optimization was proposed for the first time by Bendsøe [15]. For the topology
optimization of 2D structures the expression (4.2.11) was applied by Kutyłowski
[87]. The material properties or the thickness of finite elements change evolution-
ally and some of them are eliminated by means of the proposed method. As a result,
the optimal shape, the topology and the material or the thickness of the structures
are obtained.
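The element-wise decision and the material interpolation of Eq. (4.2.11) can be written compactly; in the sketch below (Python) the threshold values, the exponent r and the element densities are illustrative assumptions.

import numpy as np

def assign_material(rho_e, rho_min=0.2, rho_max=1.0, E_max=2.1e5, r=3):
    """Eliminate under-dense elements and assign Young's moduli by Eq. (4.2.11)."""
    active = rho_e >= rho_min                  # elements with rho_e < rho_min are removed (voids)
    E_e = np.where(active, E_max * (rho_e / rho_max)**r, 0.0)
    return active, E_e

rho_e = np.array([0.05, 0.35, 0.80, 1.00])     # mass densities mapped onto four finite elements
active, E_e = assign_material(rho_e)
print(active)   # [False  True  True  True] -> the first element becomes a void
print(E_e)      # Young's moduli of the remaining elements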

4.2.3 Parameterization

Parameterization is a key stage in structural optimization. A large number of
design variables makes the optimization process ineffective, and a direct connection
between the design variables (genes) and the finite elements leads to poor results.
Better results can be obtained when the surface (or hypersurface) of the mass density
distribution is interpolated from a suitable number of values given at control points
(X)_j. This number, on the one hand, should provide a good interpolation and, on
the other hand, should keep the number of design variables small.
Two different types of interpolation procedures were applied. First, the multi-
nomial interpolation described below for a 3D structure was introduced (the proce-
dure for a 2D structure is a particular case of it). The hypersurface Ψ_α is interpolated as
follows:

\Psi_\alpha(X) = U\left(D^{-1} \otimes E^{-1} \otimes F^{-1}\right)\left[d_1, d_2, \dots, d_{27}\right]^{T}, \quad \alpha = \rho, g \qquad (4.2.12)

where

U ¼ ½1; x; x2 ½1; y; y2 ½1; z; z2 ¼½1; z; z2 ; y; yz; yz2 ; y2 ; y2 z;


y2 z2 ; x; xz; xz2 ; xy; xyz; xyz2 ; xy2 ; xy2 z; xy2 z2 ; x2 ; x2 z; x2 z2 ; x2 y; x2 yz; ð4:2:13Þ
x2 yz2 ; x2 y2 ; x2 y2 z; x2 y2 z2 

and D, E and F are matrices described as follows:


2 3
1 0 0
D ¼ E ¼ F ¼ 41 1 15 ð4:2:14Þ
1 2 4

The structure which is under the optimization process is inserted into a cube H 3
whose edges have length A = 2, B = 2, C = 2, and 27 control points are arranged
regularly (Fig. 4.11). In this case the number of control points is fixed. In the case
when the body has a complex geometry whose overall dimensions are considerably
different from the space H 3 , this approach can lead to the lower accuracy of the
interpolation process. Then, the domain X does not cover the working space
(Fig. 4.12).
In order to overcome these difficulties, a second interpolation procedure, based
on control nodes coinciding with selected FEM nodes, has been introduced. This pro-
cedure (Table 4.1) is based on the analysis of the neighbourhoods of the individual
nodes and enables the introduction of an arbitrary number of control points at any
nodes of the finite-element mesh.
This interpolation procedure works in an iterative way:

I^{k+1} = f\left(I^{k}\right), \quad k = 0, 1, 2, \dots, K \qquad (4.2.15)

Fig. 4.11 Arrangement of control points

Fig. 4.12 Working space for two different interpolations

Table 4.1 Interpolation procedure in the optimization of 2D and 3D structures

Load nodes i = 1, 2, ..., N and elements e = 1, 2, ..., E
For i = 1, 2, ..., N load the initial vector of interpolation parameters
For k = 0, 1, 2, ..., K                    "k – step of iteration"
{
  For i = 1, 2, ..., N                     "for all the nodes"
  {
    If Ti = 0                              "i-th node does not contain a control point"
    {
      For l = 1, 2, ..., M                 "for all neighbouring nodes of the i-th node"
        Calculate max(pl^k)
        Calculate min(pl^k)
      Calculate pi^(k+1) = 1/2 [max(pl^k) + min(pl^k)]
    }
    If Ti = 1  pi^(k+1) = dj, j = j + 1    "i-th node contains a control point"
  }
}

where the approximations of the interpolation vector in the following steps k are
given by the expression

I^{k} = \left[p_1^{k}, p_2^{k}, \dots, p_i^{k}, \dots, p_N^{k}\right], \quad i = 1, 2, \dots, N, \quad k = 0, 1, 2, \dots, K \qquad (4.2.16)

and the interpolation parameters p_i^{k} are the values of the function \Psi_\alpha^{k}, α = ρ, g, at
the interpolation nodes (X)_i (nodes of the finite-element mesh):

p_i^{k} = \Psi_\alpha^{k}\left[(X)_i\right], \quad i = 1, 2, \dots, N, \quad k = 0, 1, 2, \dots, K, \quad \alpha = \rho, g \qquad (4.2.17)

The number and the arrangement of the control points of the interpolation
function Ψ_α^k, α = ρ, g, are input data of the optimization programme. The control
points are located in selected nodes of the finite-element mesh, and the inequality

G \le N \qquad (4.2.18)

is satisfied. The number of control points equals the number of design variables. The number
and the locations of the control points are arbitrarily declared by the user of the opti-
mization programme, who introduces the value 1 in the additional
vector T_i, i = 1, 2, …, N, at the position which corresponds to the number of the
chosen node. The vector T_i is thus used to distinguish the nodes which play the role
of control points. If T_i = 1, the node p_i^k = d_j, j = 1, 2, …, G, plays the role of a
control point. Otherwise T_i = 0 and the interpolation parameters are calculated by the equation

p_i^{k+1} = \frac{1}{2}\left[\max\left(p_l^{k}\right) + \min\left(p_l^{k}\right)\right], \quad l = 1, 2, \dots, M \qquad (4.2.19)

where M is the number of neighbours S_l, l = 1, 2, …, M, of the ith node R_i, i =
1, 2, …, N; p_i^{k+1} is the value of the interpolation parameter for the ith node in step
k + 1; p_l^{k} is the value of the interpolation parameter for the lth node which is a
neighbour of node i in step k; max(p_l^k) and min(p_l^k) are the maximal and minimal
values of the interpolation parameter over the neighbours of node i in step k.
The number of iteration steps depends on the density of the finite-element mesh
and on the number and arrangement of the control points.
The value of the optimization parameter for each finite element is computed on the
basis of the values at its nodes.
The interpolation procedure is presented in Table 4.1.
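The procedure of Table 4.1 and Eq. (4.2.19) can be sketched as follows (Python); the mesh connectivity, the control-node flags and the iteration count are illustrative assumptions.

import numpy as np

def interpolate(p, T, neighbours, K=50):
    """Iterative interpolation of Table 4.1 / Eq. (4.2.19).
    p          - initial interpolation parameters at the N mesh nodes
    T          - T[i] = 1 if node i is a control point (its value d_j is kept fixed)
    neighbours - list of neighbouring node indices for every node
    """
    p = p.copy()
    for _ in range(K):
        p_new = p.copy()
        for i in range(len(p)):
            if T[i] == 0:                        # ordinary node: average of min and max of neighbours
                pl = p[neighbours[i]]
                p_new[i] = 0.5 * (pl.max() + pl.min())
        p = p_new
    return p

# a 1D chain of 7 nodes; nodes 0, 3 and 6 carry control values (genes)
neighbours = [np.array(n) for n in ([1], [0, 2], [1, 3], [2, 4], [3, 5], [4, 6], [5])]
T = np.array([1, 0, 0, 1, 0, 0, 1])
p = np.array([0.2, 0.0, 0.0, 1.0, 0.0, 0.0, 0.4])   # genes at the control nodes, zeros elsewhere
print(interpolate(p, T, neighbours))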

4.2.4 Additional Procedure Supporting the Bio-Inspired Optimization

In order to improve the optimization process, an additional procedure is introduced
(Fig. 4.13). The values σ_min and p (the minimum stress and the stress increment,
respectively) are input data of the procedure. The accuracy of the obtained solutions
depends on the prescribed values of σ_min and p: small values of σ_min and p guarantee
a more precise solution, but at the cost of a longer computation time.
Implementation of this procedure increases:
• the number of chromosomes that fulfil the imposed constraints,
• the effectiveness of the optimization algorithm, by removing unnecessary
material which is not strained enough.
Moreover, the additional procedure facilitates a smooth shape of the structure
boundary.

Table 4.2 Smoothing procedure in the optimization of 2D and 3D structures

For k = 0, 1, 2, ..., K                    "k – step of iteration"
{
  For i = 1, 2, ..., W                     "for all nodes"
  {
    If T[i] = 0
    {
      determine_number_of_neighbours();
      search max(xl^k), search min(xl^k)
        xi^(k+1) = 1/2 [max(xl^k) + min(xl^k)]                      (4.2.20)
      search max(yl^k), search min(yl^k)
        yi^(k+1) = 1/2 [max(yl^k) + min(yl^k)]                      (4.2.21)
      search max(zl^k), search min(zl^k)   (additionally for 3-D)
        zi^(k+1) = 1/2 [max(zl^k) + min(zl^k)]                      (additionally for 3-D) (4.2.22)
    }
  }
}

Two different types of the procedure have been introduced. The first one
(Fig. 4.13a) is applied to the task of minimization of the stress functional, and the
second one (Fig. 4.13b) to the task of minimization of the mass functional. In the
first case the procedure is performed until the volume constraint is fulfilled. In the
second one, the material is eliminated until the admissible stress limit is
exceeded. Then the procedure of adding the material (finite elements)
around the regions with high stresses is performed. The structure is analysed by the
FEM and the stress constraint is checked. If the constraint is satisfied, the procedure
is finished and the fitness function is computed. If not, the last structure which has
fulfilled the stress constraint is analysed by the FEM, and the fitness function is
evaluated and transferred to the evolutionary algorithm.

4.2.5 Smoothing Procedure

The final structure obtained after the optimization process has rough external and
internal segments of the boundary. In order to get a smooth shape of the
boundary, the smoothing procedure has to be used. The procedure can be used
during or after the optimization process.

Fig. 4.13 The additional procedure aiding evolutionary optimization: a for the minimization of
the stress functional; b for the minimization of the mass functional

If the procedure is used during the optimization process, smooth structures which
fulfil all the imposed constraints are obtained. If the procedure is used after the
optimization process, the resulting smooth structures do not have to fulfil the imposed
constraints; they must therefore be analysed by the finite-element method once again
and it must be checked whether they fulfil the constraints. The procedure smoothes the
boundaries of the structures by changing the coordinates of the nodes in an iterative
way (Table 4.2).

Table 4.3 Characteristic dimensions of a car wheel
Diameter of the wheel LW            355.6 mm
Width of a tyre LF                  175 mm
Diameter of the wheels spacing LK   110 mm
Diameter of a wheel hub LP          60 mm
Thickness of the wheel hub          30 mm
Thickness of a tyre                 8 mm

Fig. 4.14 A base node and its neighbour nodes

The function determine_number_of_neighbours() is performed in the following way:
The neighbour nodes are searched for each base node (Fig. 4.14). Then, the new
coordinates of the base node are calculated on the basis of the coordinates of the
neighbour nodes (Table 4.2—(4.2.20), (4.2.21) and (4.2.22)). The nodes with
boundary conditions are fixed.
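
A minimal Python sketch of this iterative boundary smoothing is given below. It assumes a
simple neighbour-averaging rule for the new base-node coordinates; the exact update used by
the authors is the one defined by (4.2.20)–(4.2.22) in Table 4.2, and the data structures here
are illustrative.

def smooth_boundary(coords, neighbours, fixed, iterations):
    """coords: {node: (x, y)}; neighbours: {node: [neighbour nodes]};
    fixed: nodes with boundary conditions (kept in place)."""
    for _ in range(iterations):                          # K steps of the iteration
        new_coords = dict(coords)
        for node, nbrs in neighbours.items():            # loop over the base nodes
            if node in fixed or not nbrs:                 # nodes with boundary conditions stay fixed
                continue
            xs = [coords[n][0] for n in nbrs]
            ys = [coords[n][1] for n in nbrs]
            # new position of the base node computed from its neighbours' coordinates
            new_coords[node] = (sum(xs) / len(xs), sum(ys) / len(ys))
        coords = new_coords
    return coords

# Toy usage: a jagged boundary node between two fixed neighbours is pulled onto the line.
coords = {0: (0.0, 0.0), 1: (1.0, 0.8), 2: (2.0, 0.0)}
neighbours = {1: [0, 2]}
print(smooth_boundary(coords, neighbours, fixed={0, 2}, iterations=3))
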

4.3 Optimization of Elastic Structures Under Static Loads

The intelligent computing methods are widely applied in different problems of


science and engineering, and also in mechanics and structural optimization [2, 52,
75, 97, 101, 106, 121]. The chapter is devoted to an application of the intelligent
computing methods, like evolutionary algorithms, artificial immune systems, par-
ticle swarm optimizer and the finite-element method to the optimization of 2D
structures (plane stress, bending plates and shells), 3D structures and combination
of 2D and 3D structures [43, 45, 109, 111, 112]. The optimization method of the
shape, the topology and the material with constraints imposed on the mass, stresses
or displacements of the structure, described in Sect. 4.2 is considered. The
numerical examples demonstrate that the methods based on intelligent computing
are an effective technique for solving computer-aided optimal design.

4.3.1 Evolutionary Optimization of Shape, Topology and Thickness or Mass Density
of Structures

4.3.1.1 Evolutionary Optimization of a Car Wheel

The task of the optimization of shape, topology and thickness of a car wheel by the
minimization of the stress functional and with the volume constraint is considered.
A car wheel geometry with characteristic dimensions, included in Table 4.3, is built
of three surfaces of revolution (Fig. 4.15): the central surface with the holes for the

Fig. 4.15 Geometry and characteristic dimensions of a car wheel

fastening bolts, the surface of the ring of the wheel and the surface connecting the
two mentioned earlier. The last one is subjected to the optimization process. The
shell structure is loaded with the tangent force s0 (torsion of the wheel) and with a
pressure c0 (pressure in the tyre). The loadings are applied to the ring of the wheel
(Fig. 4.16b). The structure is rigidly supported around the holes destined for the
fastening bolts and is also supported on the central surface in the direction of the
rotation axis of the wheel (Fig. 4.16b). In the considered task, the symmetry of
the car wheel (revolution of the 1/5 part of the structure) has been exploited during
the distribution of the control points of the interpolation hypersurface (Fig. 4.16a). In
this way the number of design variables (genes) could be decreased and sym-
metrical results could be obtained, which is important because the car wheel must
be balanced. Input data for the optimization task and the
parameters of evolutionary algorithm are included in Tables 4.3 and 4.4, respec-
tively. The results of the optimization are presented as the maps of thickness and the
maps of stresses for the best obtained solutions in the 100th generation (Fig. 4.17).

Fig. 4.16 A car wheel: a the distribution of the control points of the interpolation hypersurface;
b boundary conditions

Table 4.4 Input data to the optimization task
Tangent force s0 (N)   Pressure c0 (MPa)   Number of design variables   Number of control points   Number of the chromosomes
500                    0.22                23                           86                         100
Material    Range of the change of the genes (mm)   Existence of an element   Elimination of an element   Vmax (cm3)
Aluminium   4.0–20.0                                4 ≤ ge < 10               10 ≤ ge ≤ 20                5500

Fig. 4.17 The results of the car wheel optimization: a, b the best solution from the first
population; c, d the best obtained solution; a, c maps of thicknesses; b, d maps of stresses

4.3.1.2 Evolutionary Optimization of a Tank Supporting Structure

The task of the optimization of the shape, the topology and thickness of a tank
supporting a structure by the minimization of the stress functional and with the
volume constraint is considered. The considered construction is stiffly supported on
Fig. 4.18 The geometry and the dimensions of a tank supporting structure: a = 480 mm,
b = 600 mm, c = 450 mm, d = 500 mm, e = 800 mm

the lower boundary. The tank is loaded with pressure c0 and the construction is
loaded with deadweight. The geometry and dimensions of the construction are
presented in Fig. 4.18. The tank supporting structure presented in Fig. 4.19a is
subjected to the optimization process. In order to reduce the number of design
variables and to get the symmetrical results, a quarter of the construction has been
analysed. The distribution of the control points of the interpolation hypersurface is
shown in the Fig. 4.19b. The input data for the optimization task and the parameters
of the distributed evolutionary algorithm are included in Tables 4.5 and 4.4,
respectively. The results of the optimization process with the application of three
metal plates of different thicknesses are presented as the maps of thickness
(Fig. 4.19c) and the maps of stresses (Fig. 4.19d) for the best obtained solution.

4.3.1.3 Evolutionary Optimization of L-like 3D Structure

In the next example an “L” structure (Fig. 4.20a) is optimized. The criterion of
optimization is the minimization of the mass. Computational results obtained after
73 generations are presented in the form of a map distribution of mass density
(Fig. 4.20b, c). The structure after smoothing is presented in Fig. 4.21. Table 4.6
contains input data. The dimensions, loading of 3D structure and constraint are
included in Tables 4.7 and 4.8.

Fig. 4.19 Tank supporting structure: a the finite element mesh; b the distribution of the control
points of the interpolation hypersurface; c, d the results of the evolutionary optimization of the
tank supporting structure (the best individual in the t = 100th generation); c the map of
thicknesses; d the map of stresses

Table 4.5 The input data to the optimization task of a tank supporting structure
The number of design variables   The number of control points   σmin; p (MPa)   Pressure c0 (MPa)   Vmax (cm3)
29                               29                             1.0; 1.0        5.0                 17 000
Range of ge (mm); the existence or elimination of the finite element   The thickness of the metal plates (mm)
2.5 ≤ ge < 7.5    elimination                                          1. g = 10 for 7.5 ≤ ge < 12.5
7.5 ≤ ge ≤ 22.5   existence                                            2. g = 15 for 12.5 ≤ ge < 17.5
                                                                       3. g = 20 for 17.5 ≤ ge ≤ 22.5

Fig. 4.20 L-like structure: a the scheme of loading, b the distribution of mass density after first
generation, c the distribution of mass density after optimization

Fig. 4.21 L-like structure after smoothing



Table 4.6 Input data
Minimal mass density                    0.4 · 7.85 g/cm³
Numbers of chromosomes                  80
Step of iteration in smooth procedure   25

Table 4.7 The dimensions and loading of 3D structure
Dimensions (mm)
a   48
b   48
c   24
d   24
e   24
Loading (kN)
Q   8.45

Table 4.8 Constraints
Maximal displacement   Maximal stress   Genes 1–27
0.08 mm                600 MPa          0–1

4.3.2 Immune Optimization of the Shape, the Topology and Mass Density of Structures

4.3.2.1 Immune Optimization of a Plate Structure in Plane Stress

A rectangular 2D structure (plane stress) of dimensions 100 × 200 mm, loaded
with the concentrated force P at the centre of the lower boundary and fixed at the
bottom corners is considered. In order to obtain symmetrical results, half of the
structure has been analysed. The input data for the optimization programme are
included in Table 4.9. The geometry and the distribution of the control points of the
interpolation surface are shown in Fig. 4.22a and b, respectively. The results of the
optimization process are presented in Fig. 4.23.

Table 4.9 The input data to the optimization task of a plate in plane stress
σad (MPa)   The thickness (mm)   σmin; p (MPa)   P (kN)   Range of ρe (g/cm³)
80.0        4.0                  1.0; 1.0        2.0      7.3 ≤ ρe < 7.5 elimination
                                                          7.5 ≤ ρe ≤ 7.86 existence

Fig. 4.22 The plate (Example 1); a the geometry; b the distribution of the control points of the
interpolation surface

Fig. 4.23 The results of the immune optimization of the plate: a the solution of the optimization
task; b the map of mass densities; c the map of stresses; d the map of displacements

4.3.2.2 Immune Optimization of 3D Solid Body

A 3D structure with dimensions and loading is presented in Fig. 4.24a and b. The
input data for the optimization procedure are included in Table 4.10. The geometry
and the distribution of the control points of the interpolation hypersurface are
shown in Fig. 4.24c. The results of the optimization process are presented in
Figs. 4.25 and 4.26.

Fig. 4.24 Two cases of loading with the hypersurface: a first case (compression), b second case
(tension), c the distribution of the control points of the interpolation hypersurface

Table 4.10 Input data: geometry, loading and constraints
Dimensions (mm)    Loading Q                    Constraints
a     b     c      Compression    Tension       Compression        Tension
100   100   100    −36.3 kN       36.3 kN       σeq = 33 MPa       σeq = 22 MPa
                                                u = 0.06 mm        u = 0.03 mm

Fig. 4.25 a The distribution of mass density for the first case (compression), b structure after 50
iterations (the best solution) and c structure after smoothing

4.3.2.3 Immune Optimization of a Shell-Solid Structure

The structure is stiffly supported at the bottom boundary of a solid body. The upper
surface is loaded with pressure. The geometry, the boundary conditions and the

Fig. 4.26 a The distribution of mass density for the second case (tension), b structure after 50
iterations (the best solution) and c structure after smoothing

Fig. 4.27 The shell-solid structure (Example 3); a the geometry; b the boundary condition; c the
distribution of the control points of the interpolation hypersurface

Table 4.11 The input data to the optimization task of the shell-solid structure
σad (MPa)   The thickness (mm)   σmin; p (MPa)   Pressure (MPa)   Range of ρe (g/cm³)
150.0       15.0                 2.0; 2.0        3.0              7.3 ≤ ρe < 7.5 elimination
                                                                  7.5 ≤ ρe ≤ 7.86 existence

distribution of the control points of the interpolation hypersurface are presented in


Fig. 4.27. The structure is discretized by tetrahedron finite elements for
3D structure and by triangular elements for 2D structure. The special elements
which combine 2D finite element with 3D finite element (MSC
NASTRAN RSSCON shell-to-solid element connector) are used. The input data for
the optimization task and the parameters are included in Table 4.11. The results of
the optimization process are presented in Fig. 4.28.

Fig. 4.28 The results of the immune optimization of the shell-solid structure: a the solution of the
optimization task (the map of mass densities); b the map of stresses; c the map of the displacements

4.3.3 Evolutionary Optimization of a Bending Plate

A square plate loaded with the concentrated force Q applied at the centre of the
structure and fixed at the boundary is considered. In order to obtain the symmetrical
results, a quarter of the structure has been analysed. The input data to the opti-
mization programme are included in Table 4.12. The results of the optimization
process with different values of the stress constraint are presented in Table 4.13.

Table 4.12 The input data to the optimization task of a bending plate
a × b (mm)   Thickness (mm)   σmin; p (MPa)   Q (N)   Range of ρe (g/cm³)
200 × 200    4.0              5.0; 1.0        200.0   7.3 ≤ ρe < 7.5 elimination
                                                      7.5 ≤ ρe ≤ 7.86 existence

Table 4.13 The influence of the value of the stress constraint
The value of the stress constraint
100 MPa
150 MPa
200 MPa

4.3.4 Swarm Optimization of a Shell Bracket

The task of optimization of a shell bracket structure is considered. The considered


construction is stiffly supported around the holes destined for the clamping screw.
The geometry, the dimensions and the loading with the concentrated forces F1, F2
of the construction are presented in Fig. 4.29a. In order to reduce the number of
design variables and to get the symmetrical results, half of the construction has been
analysed. The distribution of the control points of the interpolation hypersurface is
shown in Fig. 4.29b. The input data for the optimization task are included in
Table 4.14. The results of the optimization process are presented in Fig. 4.30.

Fig. 4.29 The shell bracket (Example 3): a the geometry; b the distribution of the control points
of the interpolation hypersurface

Table 4.14 The input data to the optimization task of a shell bracket
σad (MPa)   Thickness (mm)   σmin; p (MPa)   Q1; Q2 (kN)   Range of ρe (g/cm³)
110.0       5.0              2.0; 2.0        1.0; 1.0      7.3 ≤ ρe < 7.5 elimination
                                                           7.5 ≤ ρe ≤ 7.86 existence

4.4 Optimization of Elastic Structures Under Dynamical Loads

Structures are frequently subjected to dynamic loads and it is very important to


analyse their transient dynamic response. The important properties of vibrating
structures are eigenfrequencies [77]. The dynamic response or natural frequencies

Fig. 4.30 The results of the swarm optimization of the shell bracket structure: a the map of mass
densities; b the map of stresses; c the map of the displacement, for the best obtained solution

of structures can be established by changing the shape, topology and material


properties of structures [92, 93, 116, 122, 125]. Another possibility of the response
improvement is applying stiffeners [80]. Dynamic response of structures with an
arbitrary geometry, material properties and boundary conditions can be obtained by
carrying out laboratory tests but they are usually very expensive and
time-consuming. In order to reduce costs and time, computer simulations are per-
formed instead of experimental investigations. As a result, dynamic quantities of
interest, like displacements, velocities, accelerations, forces, stresses, can be

determined. This chapter is devoted to new computational techniques in structural
dynamics, where one tries to study, model, analyse and optimize very complex
phenomena for which the more precise scientific tools of the past were incapable of
giving a low-cost and complete solution. Intelligent computing methods differ from
conventional (hard) computing in that they are tolerant of
imprecision, uncertainty, partial truth and approximation. In effect, the role model
for such intelligent computing is the human mind. This chapter deals with the appli-
cation of the bio-inspired methods, like evolutionary algorithms (EA), artificial
immune systems (AIS) and particle swarm optimizers (PSO), to optimization
problems. The bio-inspired methods are applied to optimize the shape, topology
and material properties of 3D structures modelled by FEM and to optimize location
of stiffeners in 2D-reinforced plates modelled by the coupled BEM/FEM [46, 100,
110, 113]. The structures are optimized by means of the criteria dependent on
frequencies, displacements or stresses. Numerical examples demonstrate that the
methods based on the intelligent computation are an effective technique for solving
computer-aided optimal design problems.

4.4.1 Evolutionary Generalized Optimization of Structures Modelled by the FEM

Consider a structure which, at the beginning of a bio-inspired process, occupies a
domain Ω0 (in E³), bounded by a boundary Γ0. The domain Ω0 is filled with an
elastic, homogeneous and isotropic material of a Young's modulus E0 and a
Poisson's ratio ν. The 3D structures are considered in the framework of the linear
theory of elasticity. During the evolutionary process, the domain Ωt, its boundary Γt
and the field of Young's modulus E(x, y, z) = Et, (x, y, z) ∈ Ωt, can change for each
generation t (for t = 0, E0 = const). The evolutionary process proceeds in an
environment in which the structure fitness is described by the maximization of the
objective functions:
(a) the maximization of the first eigenfrequency

    max(ω₁)                                                       (4.4.1)

    with a constraint imposed on the volume of the structure

    V = |Ω|,   V ≤ Vmax                                           (4.4.2)

(b) the maximization of the difference between the first, second and third
    eigenfrequencies

    max[(ω₂ − ω₁) + (ω₃ − ω₂)]                                    (4.4.3)

    with a constraint imposed on the volume of the structure (4.4.2)

(c) the maximization of the difference between the first, second, third eigenfre-
    quencies and the forced vibration frequency ω_forced

    max[|ω₁ − ω_forced| + |ω₂ − ω_forced| + |ω₃ − ω_forced|]      (4.4.4)

    with a constraint imposed on the volume of the structure (4.4.2).


The distribution of Young's modulus E(x, y, z), (x, y, z) ∈ Ωt, in the structure is
described by a hypersurface W(x, y, z), (x, y, z) ∈ H³. The hypersurface W(x, y, z)
is stretched over H³ ⊂ E³ and the domain Ωt is included in H³, that is, Ωt ⊂ H³.
The shape of the hypersurface W(x, y, z) is controlled by genes dj, j = 1, 2, …,
N, which create a chromosome vector

ch = (d1, d2, …, dj, …, dN)                                        (4.4.5)

Gene values are described by the function W(x, y, z) at the interpolation nodes
(control points) (x, y, z)j, that is dj = W[(x, y, z)j], j = 1, 2, …, N.
The following constraints are imposed on the genes

dj^min ≤ dj ≤ dj^max                                               (4.4.6)

where dj^min is the minimum value of the gene and dj^max is the maximum value of
the gene.
The assignment of Young's moduli to each finite element Ωe, e = 1, 2, …, R, is
performed by the mapping:

Ee = W[(x, y, z)e], (x, y, z)e ∈ Ωe, e = 1, 2, …, R                (4.4.7)

It means that each finite element can have different material properties.
If the value of Young's modulus for the eth finite element is included in the
interval 0 ≤ Ee < Emin, the finite element is eliminated and a void is created; in the
interval Emin ≤ Ee ≤ Emax, the finite element remains with this value of Young's
modulus. As a result, the shape, topology and material properties of the structure
change simultaneously and this procedure is called
evolutionary generalized optimization.
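
The mapping (4.4.7) together with the elimination rule can be illustrated by a short Python
sketch. The inverse-distance interpolation used here is only a stand-in for the interpolation
hypersurface W(x, y, z); the function names and the toy data are assumptions made for
illustration.

import math

def young_modulus_field(control_points, genes, element_centres, E_min, E_max):
    """Returns {element id: Young's modulus} for kept elements; eliminated elements are omitted."""
    field = {}
    for e, (x, y, z) in element_centres.items():
        weights, value = 0.0, 0.0
        for (cx, cy, cz), d in zip(control_points, genes):
            dist = math.dist((x, y, z), (cx, cy, cz)) or 1e-12
            w = 1.0 / dist**2                 # inverse-distance weight (illustrative interpolation)
            weights += w
            value += w * d
        E_e = value / weights                 # W evaluated at the element, cf. Eq. (4.4.7)
        if E_e < E_min:                       # 0 <= E_e < E_min: element eliminated (void)
            continue
        field[e] = min(E_e, E_max)            # otherwise the element keeps this modulus
    return field

# Toy usage: two control points, two elements; the element near the low gene value is eliminated.
cps = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
genes = [0.1e5, 2.0e5]                        # gene values d_j (MPa)
centres = {1: (1.0, 0.0, 0.0), 2: (9.0, 0.0, 0.0)}
print(young_modulus_field(cps, genes, centres, E_min=0.8e5, E_max=2.0e5))
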
Example 1: The maximization of the first eigenfrequency of a 3D bracket
A structure in the form of a 3D bracket (Fig. 4.31a) is optimized. The criterion of
optimization is the maximization of the first eigenfrequency. The best solution
obtained after 88 generations is presented in Fig. 4.31b. Table 4.15 contains input
data.

Fig. 4.31 A 3D bracket: a geometric dimensions, b distribution of Young's moduli

Table 4.15 Input data
Minimal Young's modulus   0.4 · 2 × 10⁵ MPa
Maximal volume            4000 mm³
Numbers of chromosomes    100

Example 2: The maximization of the difference between the first, second and
third eigenfrequencies of a rectangular prism
A 3D structure in the form of a rectangular prism (Fig. 4.32a) is optimized. The
criterion of optimization is the maximization of the difference between the first,
second and third eigenfrequencies. The best solution in the form of the distribution
of Young's moduli obtained after 169 generations is presented in Fig. 4.32b. Input
data are included in Table 4.16.
Example 3: The maximization of the difference between the first, second and
third eigenfrequencies and the forced vibration frequency of a rectangular prism
The last example concerns the optimization of a 3D structure from the previous
example (Fig. 4.32a). The criterion of optimization is the maximization of the

Fig. 4.32 A rectangular prism: a dimensions, b distribution of Young's moduli

Table 4.16 Input data
Minimal Young's modulus   0.4 · 2 × 10⁵ MPa
Maximal volume            4.8 × 10⁴ mm³
Numbers of chromosomes    100
Dimensions of cuboid      200 × 80 × 12 mm

difference between the first, second, and third eigenfrequencies and forced vibration
frequency. The best solution obtained after 134 generations is presented in
Fig. 4.33. Input data are included in Table 4.17.

4.4.2 Bio-Inspired Optimization of Reinforced Structures by the Coupled BEM/FEM

A two-dimensional, homogeneous, isotropic and linear elastic deformable body with
boundary Γ1 and occupying domain Ω1 is considered. The body is modelled as a
plate in plane stress or strain and it is reinforced by the stiffener occupying the
domain Ω2. The body is supported (displacements u(x, τ) are known at a part of the
outer boundary) and subjected to dynamic tractions t(x, τ) (where τ is time), applied
at the outer boundary, as shown in Fig. 4.34.

Fig. 4.33 The distribution of Young's moduli for a rectangular prism obtained for the resonance
criterion

Table 4.17 Input data
Minimal Young's modulus   0.4 · 2 × 10⁵ MPa
Maximal volume            80,000 mm³
Numbers of chromosomes    100
Dimensions of cuboid      200 × 80 × 12 mm

The plate is modelled by the boundary element method (BEM) [65] and the
stiffener by the finite-element method (FEM) by means of beam finite elements,
attached along the boundary Γ12 (the interface). A perfect bonding between the
plate and the stiffener is assumed. The whole structure is analysed by the coupled
BEM/FEM and the subregion method [68]. The method allows modelling of bodies
with many plate subdomains and stiffeners of different properties. The numerical
equations, which are written for each plate and beam subdomain separately, are
coupled using displacement compatibility conditions and traction equilibrium
conditions at all nodes along the common boundaries.
A set of algebraic equations for the plate in Fig. 4.34 has the following form:

\begin{bmatrix} M^1 & M^{12} \end{bmatrix}
\begin{Bmatrix} \ddot{u}^1 \\ \ddot{u}^{12} \end{Bmatrix} +
\begin{bmatrix} H^1 & H^{12} \end{bmatrix}
\begin{Bmatrix} u^1 \\ u^{12} \end{Bmatrix} =
\begin{bmatrix} G^1 & G^{12} \end{bmatrix}
\begin{Bmatrix} t^1 \\ t^{12} \end{Bmatrix}                                  (4.4.8)

Fig. 4.34 A reinforced plate subjected to dynamic loads

where M is the mass matrix, H and G are the BEM coefficient matrices, u and ü are
displacement and acceleration vectors, respectively, t is a vector of tractions applied
at the outer boundary or the interface. The superscripts denote the matrices, which
correspond to the outer boundary or the interface.
The equation of motion for the stiffener in Fig. 4.34 in a matrix form is:

M²¹ü²¹ + K²¹u²¹ = T²¹t²¹                                                     (4.4.9)

where K is the FEM stiffness matrix and T is the matrix, which expresses the
relationship between the FE nodal forces and the BE tractions. The latter matrix
allows treatment of the finite-element region as an equivalent boundary element
region.
If the structure is subjected to time-dependent boundary conditions, the dynamic
interaction forces between the plate and the stiffener act along the interface. These
tractions are treated as body forces distributed along the attachment line and they
are unknowns of the problem. The displacement compatibility conditions and the
traction equilibrium conditions at the nodes along the interface are:

u¹² = u²¹,   t¹² = −t²¹                                                      (4.4.10)

If the above conditions are taken into account in the equations for the plate
(4.4.8) and stiffener (4.4.9), the following system of equations for the whole
structure is obtained:

\begin{bmatrix} M^1 & M^{12} \\ 0 & M^{21} \end{bmatrix}
\begin{Bmatrix} \ddot{u}^1 \\ \ddot{u}^{12} \end{Bmatrix} +
\begin{bmatrix} H^1 & H^{12} & -G^{12} \\ 0 & K^{21} & T^{21} \end{bmatrix}
\begin{Bmatrix} u^1 \\ u^{12} \\ t^{12} \end{Bmatrix} =
\begin{Bmatrix} G^1 t^1 \\ 0 \end{Bmatrix}                                   (4.4.11)

The unknowns are displacements and tractions on the external boundary and at
the interface in each time step.
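
The block structure of (4.4.11) can be sketched with NumPy as below. The matrix
dimensions, the random toy data and the function name are illustrative assumptions;
only the block layout follows the equation above, and the time-integration scheme is
not shown.

import numpy as np

def assemble_coupled_blocks(M1, M12, M21, H1, H12, K21, G12, T21):
    n1 = H1.shape[1]                          # outer-boundary displacement DOFs
    # inertia block multiplying {u1_dd, u12_dd}
    M = np.block([[M1, M12],
                  [np.zeros((M21.shape[0], n1)), M21]])
    # block multiplying the unknowns {u1, u12, t12}
    A = np.block([[H1, H12, -G12],
                  [np.zeros((K21.shape[0], n1)), K21, T21]])
    return M, A

# Toy usage with random matrices of consistent (assumed) sizes.
rng = np.random.default_rng(0)
n1, n12 = 4, 2
M, A = assemble_coupled_blocks(rng.random((n1, n1)), rng.random((n1, n12)),
                               rng.random((n12, n12)), rng.random((n1, n1)),
                               rng.random((n1, n12)), rng.random((n12, n12)),
                               rng.random((n1, n12)), rng.random((n12, n12)))
print(M.shape, A.shape)   # (6, 6) and (6, 8): the unknowns are u1, u12 and t12
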
Example 4: Reinforced rectangular plate
The optimization of a reinforced rectangular plate (Fig. 4.35) is performed by
means of AIS, PSO and EA. The plate is dynamically loaded and it is reinforced by
the frame-like structure composed of straight beams. The plate and the stiffeners are
modelled by the boundary elements and frame finite elements, respectively.
Different kinds of load and support are considered. The structure before opti-
mization (the reference plate) is shown in Fig. 4.35.
The length and height of the plate are L = 10 cm and H = 5 cm, respectively.
The thickness of the plate is g = 0.25 cm; the dimensions of beams cross-section
are 2a = 0.5 cm and b = 0.5 cm.
The material of the plate and frame is aluminium, and the mechanical properties
are: the Young's modulus E = 70 GPa, Poisson's ratio ν = 0.34 and density
ρ = 2700 kg/m³. The material is homogeneous, isotropic and linear elastic and
plane stress is assumed.

Fig. 4.35 A reinforced rectangular plate

The uniformly distributed load is applied at the upper edge of the plate. Two
kinds of time-dependent loads are considered (see Fig. 4.36): (a) the sinusoidal load
p(τ) = p₀ sin(2πτ/T) with the period of time T = 20π μs, and (b) the Heaviside load
p(τ) = p₀H(τ). The value of the load in both cases is p₀ = 10 MPa. The time of
analysis is 600 μs and the time step Δt = 2 μs.
Three different supports are considered (see Fig. 4.37):
(a) support A—the plate is fixed on the left and right edges,
(b) support B—the plate is supported at two segments, each of 0.5 cm long,
(c) support C—the plate is fixed at the bottom edge.
The optimal positions of stiffeners are searched in order to maximize the stiffness
of the plate. The maximal dynamic vertical displacement on the loaded edge is

Fig. 4.36 Dynamic loadings: a sinusoidal, b Heaviside

Fig. 4.37 Types of supports: a support A, b support B and c support C



Fig. 4.38 Design variables and constraints

minimized. Because of the symmetry of the structure and boundary conditions, only
half of the structure is considered. The number of design variables defining the
position of the frame is 4: X1, X2, Y1 and Y2 (see Fig. 4.38). The longer beams are
parallel to x-axis. The end points of beams can move along the edges of the plate
within the constraints, as shown in Fig. 4.38. The constraints imposed on design
variables are: X1 and X2 variables are within the range from 0.5 to 4.75 cm, Y1
from 0.5 to 2.25 cm and Y2 from 2.75 to 4.5 cm. The parameters of the AIS are: the
number of memory cells and clones is 6, and the crowding factor and the Gaussian
mutation probability are 0.5. The parameters of the EA are: the number of chromosomes
is 20, the probability of the Gaussian mutation is 0.5, and the probability of a simple and
arithmetic crossover is 0.05. The parameters of the PSO are: the number of particles is
20, the inertia weight is 0.73 and the two acceleration coefficients are 1.47.
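
The particle swarm update with the parameters quoted above (w = 0.73, c1 = c2 = 1.47) can
be sketched as follows. The clamping of the particles to the box constraints on the design
variables and the toy data are assumptions; the authors' exact implementation may differ.

import random

def pso_step(positions, velocities, personal_best, global_best, bounds,
             w=0.73, c1=1.47, c2=1.47):
    for i, (x, v) in enumerate(zip(positions, velocities)):
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            v[d] = (w * v[d]
                    + c1 * r1 * (personal_best[i][d] - x[d])    # cognitive component
                    + c2 * r2 * (global_best[d] - x[d]))        # social component
            x[d] += v[d]
            lo, hi = bounds[d]
            x[d] = min(max(x[d], lo), hi)                       # keep X1, X2, Y1, Y2 in their ranges
    return positions, velocities

# Toy usage: two particles, four design variables (X1, X2, Y1, Y2) with the ranges above.
bounds = [(0.5, 4.75), (0.5, 4.75), (0.5, 2.25), (2.75, 4.5)]
pos = [[1.0, 2.0, 1.0, 3.0], [4.0, 1.0, 2.0, 4.0]]
vel = [[0.0] * 4 for _ in pos]
pbest = [p[:] for p in pos]
gbest = [4.75, 1.81, 0.57, 2.75]
print(pso_step(pos, vel, pbest, gbest, bounds)[0])
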
The total number of boundary and finite elements in the BEM/FEM analysis is
120 and 120, respectively (each horizontal and vertical beam is discretized into 40
and 20 finite elements, respectively). The number of boundary and finite elements
during the optimization is constant.
The values of the design variables obtained by AIS, PSO and EA for the plate
subjected to the sinusoidal load, the Heaviside load and for three kinds of supports
are presented in Table 4.18. The results obtained by three different methods are

Table 4.18 Values of design variables, J and R (AIS, PSO and EA)
Load         Support   X1 (cm)   X2 (cm)   Y1 (cm)   Y2 (cm)   Jo (10⁻⁴ cm)   J (10⁻⁴ cm)   R (%)
Sinusoidal   A         4.75      2.86      0.88      2.75      89             76            15
             B         4.75      1.81      0.57      2.75      92             73            21
             C         1.20      1.82      0.50      2.75      82             62            24
Heaviside    A         0.50      4.75      0.50      4.50      112            91            19
             B         4.75      1.41      0.50      4.50      211            149           29
             C         0.50      2.20      1.70      2.80      49             42            14

almost the same. The values of Jo and J (where Jo and J are the objective functions
for the reference and the optimal plate, respectively) and the reduction
R = (Jo − J)/Jo · 100% are also presented.
A significant reduction of R resulting in the improvement of the dynamic
response of the optimal plates in comparison with the initial designs can be
observed. The optimal structures for different kinds of supports and for the sinu-
soidal and the Heaviside loads are shown in Fig. 4.39a and b, respectively. It can be
seen that in the present example most of the constraints are active.
The efficiency of bio-inspired methods EA, AIS and PSO measured by number
of fitness function evaluations is presented in Table 4.19.

Fig. 4.39 Optimal plates subjected to dynamic loads: a sinusoidal, b Heaviside

Table 4.19 The efficiency of bio-inspired methods (number of fitness function evaluations)
Load         Support   EA     AIS   PSO
Sinusoidal   A         2515   336   360
             B         3705   408   440
             C         1952   432   520
Heaviside    A         303    276   60
             B         1526   252   120
             C         2797   528   580

Example 5: Reinforced plate with a hole


The optimization of a rectangular reinforced plate with a hole (Fig. 4.40) is
performed by means of the PSO with the same parameters as in Example 4. The
plate is dynamically loaded and it is reinforced by eight symmetrically distributed
rods of circular cross-section. The plate and the reinforcing rods are modelled by
the boundary elements and beam finite elements, respectively. The structure before
optimization (the reference plate) is shown in Fig. 4.40.
The plate is stretched by a uniformly distributed load applied at its left and right
edges. The dynamical load is defined by the Heaviside impulse p(t) = p₀H(t) and
the value of the load is p₀ = 10 MPa. The time of analysis is T = 300 μs and the
time step Δt = 3 μs.
The length and the height of the plate and the hole radius are L = 10 cm,
H = 5 cm and R = 1 cm, respectively. The thickness of the plate is g = 1 cm and
the diameter of each rod is d = 0.3 cm. The distance between the rod axes for the
reference plate is 1 cm; the length of the shorter and longer rods is 3 cm and 4 cm,
respectively. The distance between the end points of the rods to the left or right
edge of the plate is 0.5 cm.
The plane stress is assumed. The materials of the plate (p) and stiffeners (s) are
epoxy and steel, respectively. They are homogeneous, isotropic and considered in
the framework of the linear theory of elasticity. The values of mechanical properties
are: the Young’s modulus Ep = 4.5 GPa and Es = 210 GPa, Poisson’s ratio
νp = 0.37 and νs = 0.3, density ρp = 1160 kg/m³ and ρs = 7860 kg/m³.
The optimal location of reinforcement in the interior of the plate is searched and
the following objective function J is minimized:
J = \int_0^T \left[ \frac{\sigma_x^A(t)}{\sigma_o} \right]^2 dt              (4.4.12)

where σ_x^A(t) is the x-component of stress at the point A (see Fig. 4.40), σ_o is
a nominal stress at the weakened cross-section, defined as the ratio of the applied
load to the area of this cross-section; T is the time of analysis.

Fig. 4.40 Reinforced plate with a hole



The objective function (4.4.12) is minimized with respect to design variables


(Xij, Yij, i, j = 1, 2), defining the coordinates of the jth end point of the ith rod. It is
assumed that during the optimization the reinforcement is symmetrical with respect
to two symmetry axes. Thus, only a quarter of the plate with two rods is modelled
(the appropriate boundary conditions at the symmetry axes are assumed) and the
number of design variables is 8.
The constraints on design variables are imposed. The distance between the rods
and the outer boundary (of the quarter of the plate) cannot be lower than 0.5 cm.
The intersection of rods is not allowed.
The total number of boundary and finite elements in the BEM/FEM analysis is
92 and 64, respectively (each rod is discretized into 32 finite elements).
For this example five tests were performed and similar results were obtained.
The values of design variables for the optimal solutions, rounded off to two decimal
places, are: X11 = 0.97 cm, Y11 = 1.03 cm, X12 = 4.50 cm, Y12 = 1.50 cm, X21
= 1.57 cm, Y21 = 2.00 cm, X22 = 4.50 cm and Y22 = 2.00 cm. The optimal
structure is shown in Fig. 4.41.
Example 6: A reinforced cantilever plate
The optimization of a reinforced cantilever plate (Fig. 4.42) is performed by
means of the PSO with the same parameters as in Example 4. The dynamically
loaded plate is reinforced at the whole nonfixed outer boundary and between two
holes (at the interface between two BE regions). The reinforcement has a rectan-
gular cross-section. The plate and the reinforcement are modelled by the boundary
elements and frame finite elements, respectively. The structure before optimization
(the reference plate) is shown in Fig. 4.42.
The uniformly distributed load is applied at the upper edge. The plate is sub-
jected to the sinusoidal load p(t) = p₀ sin(2πt/T). The amplitude of the load is
p₀ = 1 MPa and the period of time is T = 5 ms. The time of analysis is 12 ms and
the time step Δt = 0.02 ms.
The length and the height of the plate are L = 50 cm and H = 40 cm, respec-
tively. The other dimensions are: a = 5 cm, b = 1 cm, c = 5 cm and g = 1 cm. The
L1, L2 and H1, H2 defining the shape of the cantilever are design variables of the
problem and they are within the range from 15 to 35 cm and 0 to 25 cm,
respectively.

Fig. 4.41 The optimal location of rods in the plate

Fig. 4.42 Reinforced cantilever plate

The plane stress is assumed. The cantilever is made of steel and considered as
a homogeneous and isotropic material in the framework of linear theory of elas-
ticity. The values of mechanical properties are: the Young's modulus E = 210 GPa,
Poisson's ratio ν = 0.3 and density ρ = 7860 kg/m³.
The optimal shape of the cantilever is searched and the following objective
function J is minimized:

J = \int_0^T \left[ \frac{u_y^A(t)}{u_o} \right]^2 dt                        (4.4.13)

where u_y^A(t) is the vertical displacement at the point A (see Fig. 4.42), u_o is an
admissible displacement and T is the time of analysis.
The objective function (4.4.13) is minimized with respect to the design variables (Li,
Hi, i = 1, 2), defining the dimensions of the structure.
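
Objectives of the form (4.4.12) and (4.4.13) are evaluated from a sampled response history.
The short sketch below uses the trapezoidal rule for the time integral; the quadrature rule
and the synthetic displacement history are assumptions made for illustration.

import math

def time_integral_objective(history, dt, reference):
    """history: samples of the response (e.g. u_y at point A) taken every dt;
    reference: the admissible value u_o in (4.4.13)."""
    squared = [(u / reference) ** 2 for u in history]
    # trapezoidal rule: J ~ dt * (0.5*f0 + f1 + ... + f_{n-1} + 0.5*fn)
    return dt * (0.5 * squared[0] + sum(squared[1:-1]) + 0.5 * squared[-1])

# Toy usage: a decaying oscillation sampled with dt = 0.02 ms over 12 ms, u_o = 1 mm.
dt, T, u_o = 0.02, 12.0, 1.0
samples = [math.exp(-0.2 * k * dt) * math.sin(2 * math.pi * k * dt / 5.0)
           for k in range(int(T / dt) + 1)]
print(time_integral_objective(samples, dt, u_o))
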
The total number of boundary and finite elements in the BEM/FEM analysis is
84 and 72, respectively. The quadratic elements (with two degrees of freedom per
node) are used for the BEM mesh. The frame elements (with three degrees of
freedom per node) are used for the FEM mesh. During the optimization the number
of boundary and finite elements is constant.
For this example five tests were performed and similar results were obtained.
The values of design variables for the optimal solutions are (rounded off to two
decimal places): L1 = 30.62 cm, L2 = 35.00 cm, H1 = 25.00 cm and
H2 = 25.00 cm. The optimal structure is shown in Fig. 4.43.

Fig. 4.43 The optimal shape of the cantilever

4.5 Optimization of Structures with Stiffeners

Reinforced structures are often used in practice because they are resistant, stiff and
stable. A typical area of application of such structures is an aircraft industry where
light, stiff and highly resistant structures are required. Many aircraft elements are
made as thin panels reinforced by stiffeners. The choice of the optimal shape of the
structure, or of the proper stiffener arrangement in the domain of the structure, determines
the effectiveness of the construction or of the reinforcement. Optimal properties of
structures can be searched for by means of computer-aided
optimization tools. The stiffener layout is usually obtained by modifying the
thickness of each element of the finite element mesh or using the homogenization
method. However, the results obtained by means of these approaches do not give
clear stiffeners layout. Bendsoe and Kikuchi [16] analysed composites with per-
forated microstructures using the homogenization method. As a result of topology
optimization, the grey-scaled structures emerged. Cheng and Olhoff [50] considered
the problem of stiffener layout using the method based on thickness distribution to
maximize the stiffness of rectangular and axisymmetric plates. Ding and Yamazaki
[56] generated stiffener layout patterns introducing a growing and branching tree
model and topology optimization method. Diaz and Kikuchi [55] searched for the
optimal reinforcement layout for the plates by adding a declared amount of rein-
forcing material to increase the fundamental frequency. Bojczuk and Szteleblak
[21] proposed a heuristic algorithm in order to find the optimal reinforcement
layout. This algorithm consists of two stages: first, the initial localization of new
fibre or rib is determined by the information from sensitivity analysis (analogous to
the topological derivative approach of Sokołowski and Zochowski [108]); next, the
gradient optimization method is performed to correct their positions. Another
method is based on the optimization of the layout of isogrid stiffeners applied as
special triangular patterns. Due to their efficiency, these isogrid members have been
applied for example in launch vehicles and spacecraft components [107]. In the
present chapter, coupling FEM with bio-inspired methods, like the distributed
evolutionary algorithm [115] and the particle swarm optimizer [76], in optimization
of statically loaded reinforced structures is presented. The structures are optimized
by means of the criteria dependent on displacements or stresses. Numerical

examples demonstrate that the method based on soft computing is an effective
technique for solving computer-aided optimal design problems.

4.5.1 Formulation of the Optimization Problem

Consider a 2D structure (a plate in plane stress, a bending plate or a shell) which is


stiffened by several bars. The domain of the 2D structure and the domains of the
bars are filled by a homogeneous and isotropic material of Young’s modulus E and
Poisson's ratio ν. The location and shape of the bars can change for each iteration
t of the evolutionary process. The stiffened structures are considered within the
framework of the theory of elasticity. The evolutionary process proceeds in an
environment in which the structure fitness is described by the minimization of the
stress functional

J = \int_\Omega \psi(\sigma)\, d\Omega                                       (4.5.1)

where ψ is an arbitrary function of the stress tensor σ, or by the maximization of
the structure stiffness through the minimization of the displacement functional

J = \int_\Omega \xi(u)\, d\Omega                                             (4.5.2)

where ξ is an arbitrary function of the displacements u.


Two different types of optimization tasks are considered:
• optimization of the location of the straight stiffeners (Fig. 4.44a),
• optimization of the location and shape of curved stiffeners (Fig. 4.44b).
The locations and shapes of the stiffeners in the domain of 2D structures are
controlled by genes which create a chromosome. In order to reduce the number of
the genes, the chromosome representation, presented in Fig. 4.44, has been

Fig. 4.44 Chromosome representation: a straight stiffeners in 2D structure geometry, b curved


stiffeners in 2D structure geometry

introduced. It is assumed that the ends of the stiffeners are connected to the boundary
of the 2D structure; therefore the location of the stiffener in the 2D
structure domain is determined by two points Pi: the beginning and the end of the stiff-
ener (Fig. 4.44a).
In order to minimize the number of design parameters, the curved stiffener is
defined by means of a nonuniform rational B-spline (NURBS) curve [98]. The shape
of this curve is defined by the control points Ck, k = 1, 2, …, L; Ck ∈ Ω2D (L is the
number of control points).
The location of the stiffeners in the domain of 2D structures is controlled by
genes hi, i = 1, …, N, and their shape by genes gj, j = 1, …, M (Fig. 4.44b). The set
of the genes creates a chromosome

ch = (h1, h2, …, hi, …, hN, g1, g2, …, gj, …, gM)                            (4.5.3)

hmin ≤ hi ≤ hmax,   gmin ≤ gj ≤ gmax                                          (4.5.4)

where hmin is the minimum value of the gene h, hmax is the maximum value of the
gene h, gmin is the minimum value of the gene g, and gmax is the maximum value of
the gene g.
In order to solve the formulated problems, the finite-element models of the
structures are considered [126]. The 2D structure domain Ω2D is divided into
triangular finite elements Ωs, s = 1, 2, …, R (for plane stress, bending plate or
shell), according to the geometry mapped on the basis of the chromosome. The
edges of the triangular finite elements which belong to the curves mapped on the
basis of the chromosome and playing the role of the stiffeners create the bar
elements Ωb, b = R + 1, R + 2, …, C (Fig. 4.45).
After the geometry discretization, finite-element analysis is performed and node
displacements are calculated by solving a system of linear algebraic equations

KU = F                                                                        (4.5.5)

where U is a column matrix of unknown displacements, F is a known column


matrix of acting forces and K is a known global stiffness matrix of the structure
whose elements are given as follows:

Fig. 4.45 Mesh of 2D structure and bar finite elements

k_s = \int_A B_s^T D_s B_s \, dA                                              (4.5.6)

for 2D structure elements, and

k_b = \int_l B_b^T D_b B_b \, dV                                              (4.5.7)

for the bar elements, where Ds, Bs and Db, Bb are the known elasticity and geo-
metrical matrices for the 2D structure and bar elements, respectively; l represents
the length of the bar element and A represents the area of the finite element.
After the finite-element analysis, the value of the fitness function, given for
example by:

J = \int_{\Omega_{2D}} \sigma_{eq} \, d\Omega_{2D}                            (4.5.8)

is evaluated and the evolutionary algorithm is applied.
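
A fitness evaluation of the type (4.5.8) can be sketched as follows: the FEM system K U = F
is solved for the current stiffener layout, and the element equivalent stresses are summed
with element-area weights. The assembled matrices and the stress-recovery routine would be
provided by the FEM code; here they are simple stand-ins.

import numpy as np

def stress_fitness(K, F, element_areas, equivalent_stress):
    """K, F: assembled stiffness matrix and load vector; equivalent_stress(e, U) -> sigma_eq
    for element e given the nodal displacements U."""
    U = np.linalg.solve(K, F)                       # Eq. (4.5.5)
    return sum(equivalent_stress(e, U) * area       # J ~ integral of sigma_eq over the 2D domain
               for e, area in element_areas.items())

# Toy usage: a two-DOF "structure" with a fake stress-recovery function.
K = np.array([[4.0, -1.0], [-1.0, 3.0]])
F = np.array([1.0, 2.0])
areas = {0: 0.5, 1: 0.5}
fake_stress = lambda e, U: abs(U[e]) * 100.0
print(stress_fitness(K, F, areas, fake_stress))
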


The formulation of the optimization task, which assumes the possibility of the
stiffeners intersection, causes some problems connected with the impossibility of
the proper discretization of the structures geometry mapped on the genes basis. The
problems appear when the distance between the ends of the two stiffeners or
between the end of the stiffener and a corner of the 2D structure is too small. Then,
the angles between the stiffeners and the boundary appear very small and the
automatic mesh generator [104] has difficulties in creating the proper mesh and
generates errors which cause breaks in the optimization programme. The intro-
duction of additional constraints imposed on the genes values is necessary. It was
assumed that the distance between the ends of two stiffeners or between the end of
the stiffener and a corner of the 2D structure could not be less than the declared
value. Another possibility of solving the problem is the improvement of the
geometry, mapped on the basis of the chromosome, by connecting the stiffeners
ends when the distance between them is less than the declared value. The problem
can be easily solved by the introduction of the proper constraints, but it will be more
complex in the case of optimization task of many stiffeners’ locations. Then, many
intersection points and many small angles between the stiffeners appear. The
implementation of a very robust mesh generator would be the best solution to
the problem.

4.5.2 Examples of the Optimization of the Stiffeners Location

Four numerical examples of the optimization of the stiffeners location in the


geometry of 2D structures are considered [30, 43, 109, 112]. Example 1—the

Table 4.20 Parameters of distributed evolutionary algorithm


Number of subpopulations 2
Number of chromosomes in each subpopulation 10
Probability of Gaussian mutation 100%
Probability of simple crossover 100%
Selection method Rank selection

Table 4.21 Parameters of particle swarm optimizer
Number of chromosomes         20
Inertia weight w              0.73
Acceleration coefficient c1   1.47
Acceleration coefficient c2   1.47

evolutionary optimization of a plate in plane stress stiffened with three ribs,


Example 2—the evolutionary optimization of a bending plate stiffened with four
ribs. Example 3—the swarm optimization of a shell structure stiffened with five
ribs. Example 4—the evolutionary optimization of a plate in plane stress stiffened
with two curved ribs. The domain of 2D structures and domains of the bars in each
example are filled by a homogeneous and isotropic material of a Young’s modulus
E0 = 2 * 105 MPa and a Poisson ratio m = 0.3. The value of the maximal stress
rmax ¼ 100 MPa: The stiffened structures are considered within the framework of
the theory of elasticity. The results for the examples are obtained by the use of
optimization method based on evolutionary or swarm algorithm with parameters
included in Tables 4.20 and 4.21, respectively. The stiffeners in each of the
numerical examples have rectangular cross-section of dimensions w
h.

4.5.2.1 Example 1

The optimization task of location of three stiffeners by the minimization of the


stress functional in a plate in plane stress with boundary conditions shown in
Fig. 4.46 is considered. Input data to the optimization programme and the
parameters of the evolutionary algorithm are included in Tables 4.22 and 4.20,
respectively. The results of the optimization process are presented in Fig. 4.47.

4.5.2.2 Example 2

The optimization task of location of four stiffeners by the minimization of the stress
functional in a bending plate loaded with the pressure p and fixed at the boundary
(Fig. 4.48) is considered. Input data to the optimization programme and the
parameters of the evolutionary algorithm are included in Tables 4.23 and 4.20,
respectively. The results of the optimization process are presented in Fig. 4.49.

Fig. 4.46 Geometry and boundary conditions for the plate in plane stress (Example 1)

Table 4.22 Input data to the optimization programme for Example 1
a × b (mm)   F (N)   Number of stiffeners   Number of genes   Cross-section w × h (mm)   Thickness of the plate (mm)
400 × 600    300     3                      6                 10 × 20                    8

Fig. 4.47 The location of three stiffeners in the plate in plane stress and the map of stresses: a 1st
iteration, b 54th iteration

Fig. 4.48 Geometry and boundary conditions for the bending plate (Example 2)

Table 4.23 Input data to the optimization programme for Example 2
a × a (mm)   p (MPa)   Number of stiffeners   Number of genes   Cross-section w × h (mm)   Thickness of the plate (mm)
400 × 400    0.1       4                      8                 25 × 35                    10

4.5.2.3 Example 3

The optimization task of location of five stiffeners by the minimization of the stress
functional in a cylindrical shell is considered. The structure is stretched with con-
tinuous load q and is fixed, as presented in Fig. 4.50. Input data to the optimization
programme and the parameters of the swarm algorithm are included in Tables 4.24
and 4.21, respectively. The results of the optimization process are presented in
Fig. 4.51.

4.5.2.4 Example 4

The optimization task of location and shape of two stiffeners in a plate in plane
stress with boundary conditions shown in Fig. 4.52 is considered. The optimal
positions of stiffeners are searched in order to maximize the stiffness of the plate.
The maximal nodal displacement in the structure is minimized. The stiffeners are
modelled using three-point NURBS curves. The value of weight of each control
point is 1 (no influence on distance between the control point and the NURBS
curve). Input data to the optimization programme and the parameters of the evo-
lutionary algorithm are included in Tables 4.25 and 4.26, respectively. The results
of the optimization process are presented in Fig. 4.53.
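
The three-control-point NURBS parameterization of the curved stiffeners can be sketched
as below. With all weights equal to 1, as stated for Example 4, the rational curve reduces
to a plain quadratic Bezier curve; the sampling scheme and the toy control points are
assumptions made for illustration.

def rational_quadratic(p0, p1, p2, weights=(1.0, 1.0, 1.0), samples=5):
    """Returns points of the curve defined by control points p0, p1, p2."""
    pts = []
    for k in range(samples + 1):
        t = k / samples
        basis = [(1 - t) ** 2, 2 * t * (1 - t), t ** 2]       # quadratic Bernstein basis
        num_x = num_y = den = 0.0
        for (x, y), b, w in zip((p0, p1, p2), basis, weights):
            num_x += w * b * x
            num_y += w * b * y
            den += w * b
        pts.append((num_x / den, num_y / den))
    return pts

# Toy usage: a stiffener starting and ending on the plate boundary, bent by the middle point.
print(rational_quadratic((0.0, 0.0), (200.0, 150.0), (400.0, 0.0)))
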

Fig. 4.49 The location of four stiffeners in the bending plate and the map of stresses: a 1st
iteration, b 527th iteration

Fig. 4.50 Geometry and boundary conditions for the cylindrical shell (Example 3)

Table 4.24 Input data to the optimization programme for Example 3
a × b (mm)   q (N/mm)   Number of stiffeners   Number of genes   Cross-section w × h (mm)   Thickness of the plate (mm)
300 × 200    450        5                      10                10 × 20                    10

Fig. 4.51 The location of five stiffeners in the plate in plane stress and the map of stresses: a 1st
iteration, b 186th iteration

4.6 Optimization of Structures Under Thermo-Mechanical Loading

4.6.1 Introduction

Temperature changes cause thermal effects in materials. A material expands when
thermal energy is added to it. Positive temperature changes cause thermal expansion,
while negative changes cause thermal contraction. In many practical problems thermal
stresses play a significant role and may overtake stresses caused by mechanical loads [9,
102].

Fig. 4.52 Geometry and boundary conditions for the plate in plane stress (Example 4)

Table 4.25 Input data to the optimization programme for Example 4
a × b (mm)   F (N)   Number of stiffeners   Number of genes   Cross-section w × h (mm)   Thickness of the plate (mm)
400 × 600    1000    2                      8                 10 × 20                    8

Table 4.26 Parameters of the boundary conditions
T01   300 °C
T02   20 °C
q0    0
p0    100 kN/m
α1    1000 W/m²K
α2    20 W/m²K
u0    0

In order to optimize a structure under thermo-mechanical loading, a
thermo-elasticity analysis has to be performed. The proper boundary conditions
(mechanical and thermal) also have to be taken into account.
When the coupling between thermal and mechanical fields is considered, there are
two types of thermo-elasticity analysis [47]:
• weakly coupled,
• strongly coupled.
For the weakly coupled analysis (also called uncoupled thermo-elasticity) the
strain field depends on the temperature field but the temperature field does not
depend on the strain field. For the strongly coupled analysis, the coupling between the
thermal and mechanical fields is mutual. In this book, the authors consider uncoupled
linear thermo-elasticity.

Fig. 4.53 The location of two stiffeners in the plate in plane stress and the map of stresses: a 1st
iteration, b 339th iteration

To solve practical engineering thermo-elasticity problems, proper numerical
methods have to be chosen. The authors use their own implementation of the BEM and
the commercial FEM software MSC.Marc. Details concerning BEM and
FEM in thermo-elasticity are described by Zienkiewicz and Taylor [126] and
Burczyński [27]. Evolutionary algorithms are used in the optimization of
thermo-elastic structures [90].

4.6.2 Objective Functions for Thermo-Mechanical Problems

The optimization task for structures under thermo-mechanical loading requires a proper
definition of the objective functions (functionals). Generally, for the thermo-elasticity
problem, the functionals may depend on the following quantities:
• mechanical (displacements, strains, stresses, forces, etc.),
• thermal (temperatures, heat fluxes, heat sources, etc.),
• others (area, volume, weight, cost of the structure, etc.).
In the chapter, the following functionals are considered:
• minimization of the displacements on a selected part of the boundary:

  min_X \int_\Gamma \left[ \frac{u(X)}{u_0} \right]^{2n} d\Gamma              (4.6.1)

  where u is a field of boundary displacements, u_0 is a reference displacement and
  n is a natural number,
• the minimum volume of the structure:

  min_X V(X)                                                                  (4.6.2)

  with constraints imposed on the maximal value of the temperature (T − T^ad ≤ 0) and
  the maximal value of the equivalent stress (σeq − σeq^ad ≤ 0),
• the minimization of the maximal value of the equivalent stress:

  min_X σeq^max(X)                                                            (4.6.3)

• the minimization of the maximal value of the temperature in the structure:

  min_X T^max(X)                                                              (4.6.4)

  with a constraint imposed on the maximal value of the volume of the structure
  (V − V^ad ≤ 0),
• the maximization of the total dissipated heat flux:

  max_X q(X)                                                                  (4.6.5)

  with constraints imposed on the costs (c − c^ad ≤ 0) and the maximal value of the
  equivalent stress (σeq − σeq^ad ≤ 0).

X is the vector of design parameters, which is represented by a chromosome with
the floating-point representation. The fitness function is created by the penalty
function method, taking into account the volume of the structure, the equivalent
stress, the temperature, the heat flux, and so on, as well as the box constraints
imposed on each design variable.
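
A minimal sketch of such a penalty-based fitness for the volume-minimization task (4.6.2)
is given below. The additive penalty form and the penalty factor are assumptions made for
illustration, not the authors' exact formulation.

def penalized_fitness(volume, sigma_eq_max, temp_max, sigma_ad, temp_ad, penalty=1.0e6):
    # constraint violations on equivalent stress and temperature are added to the volume
    violation = max(0.0, sigma_eq_max - sigma_ad) + max(0.0, temp_max - temp_ad)
    return volume + penalty * violation        # feasible designs are ranked by volume alone

# Toy usage with the constraint values quoted for heat exchanger Type 2.
print(penalized_fitness(volume=140000.0, sigma_eq_max=14.2, temp_max=68.0,
                        sigma_ad=15.0, temp_ad=70.0))          # feasible: returns the volume
print(penalized_fitness(volume=120000.0, sigma_eq_max=17.0, temp_max=75.0,
                        sigma_ad=15.0, temp_ad=70.0))          # infeasible: heavily penalized
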

4.6.3 Numerical Examples

Example 1: Shape optimization of the cooling gap in the square plate with circular
void
A square plate with a circular void is considered (Fig. 4.54). For the sake of
symmetry, only a quarter of the structure is taken into consideration. The consid-
ered quarter of the structure contains the internal boundary shown in Fig. 4.55.
The values of the boundary conditions are contained in Table 4.26.
To solve the boundary-value problem, the BEM is used. The model consists of 90
boundary elements. The objective of the shape optimization is the minimization of the
radial displacements, given by the functional (4.6.1), on the boundary where trac-
tions p0 are prescribed. The optimization problem consists in searching for an optimal:
• shape of the internal boundary;
• width of the gap;
• distribution of the temperature T0 on the internal boundary.
The shape of the internal boundary is modelled by means of a Bezier curve which
consists of seven control points, whereas the width of the gap and the temperature T0
are modelled by means of Bezier curves consisting of six control points (Fig. 4.56).
For the sake of symmetry along line AB (Fig. 4.55), the total number of design
parameters is equal to 13. The range of the variability of each control point for the
width of the gap is between 0.2 and 0.8, whereas for the temperature it is between 5
and 80 °C. Table 4.27 and Fig. 4.57 contain the results of the optimization [29].

Fig. 4.54 A square plate with circular void

Fig. 4.55 Boundary conditions for the structure

Fig. 4.56 Modelling the shape, width of the gap and distribution of the temperature on the
boundary
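
The Bezier modelling of the internal boundary can be sketched with de Casteljau's
algorithm, shown below for a degree-six curve defined by seven control points. The
control point values are placeholders for illustration, not the ones reported in Table 4.27.

def de_casteljau(control_points, t):
    pts = list(control_points)
    while len(pts) > 1:                         # repeated linear interpolation between points
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts[:-1], pts[1:])]
    return pts[0]

# Toy usage: seven control points describing one quarter of an internal boundary.
cps = [(0.0, 14.0), (1.0, 13.0), (2.5, 10.0), (2.0, 7.0), (1.5, 5.0), (0.5, 2.0), (0.0, 0.0)]
print([de_casteljau(cps, k / 4) for k in range(5)])
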
Example 2: Shape optimization of the three types of heat exchangers
The aim of the optimization is to find the optimal shape of the heat exchangers
used to dissipate heat from the electrical devices shown in Fig. 4.58a, b.
The optimal distribution of the material in the radiator (Fig. 4.58c) is also
considered. The fixed dimensions and the values of boundary conditions along Z

Table 4.27 The results of the optimization
The shape of the internal boundary
  x1 coordinate of control point 1        1.1124
  x2 coordinate of control point 1        13.3259
  x1 coordinate of control point 2        2.4609
  x2 coordinate of control point 2        7.0000
  x1 coordinate of control point 3        1.6232
  x2 coordinate of control point 3        5.0000
  x1 = x2 coordinate of control point 4   −4.0853
The width of the gap
  Control point 1                         0.4313
  Control point 2                         0.2752
  Control point 3                         0.8000
The distribution of the temperature on the internal boundary
  Control point 1                         48.9698
  Control point 2                         41.8679
  Control point 3                         5.0000

Fig. 4.57 The optimal shape, width and distribution of the temperature on the gap

axis are assumed. Due to the above reasons, the problem is modelled as
two-dimensional (2D). The fitness function is computed by means of the FEM soft-
ware MSC.Mentat/Marc.

Fig. 4.58 The three types of considered heat radiators

In the modelled structure, mechanical as well as thermal boundary conditions are


applied. Besides the applied heat and convection, radiative boundary conditions are
also taken into account. Two radiators (Fig. 4.58a, b) are made of copper whose
material properties are shown in Table 4.28. Table 4.29 contains the values of the
parameters of the parallel evolutionary algorithm.

Table 4.28 Material properties
Parameter                  Value
Young's modulus            120,000 MPa
Poisson's ratio            0.3
Thermal expansion coef.    16.5 × 10⁻⁶ 1/K
Heat conductivity          400 W/mK
Emissivity                 0.8

Table 4.29 The parameters of the parallel evolutionary algorithm
Parameter                                   Value
Number of chromosomes in each population    20
Number of generations                       500
Probability of Gaussian mutation            1
Probability of simple crossover             0.5
Rank selection pressure                     0.8

Heat exchanger: Type 1


The first type of heat radiator is considered (Fig. 4.58a). The geometry of the
cross-section and the boundary conditions are shown in Fig. 4.59a. This problem is
solved by the minimization of the volume of the structure (4.6.2). On the edge of
each fin the force P equal to 10 N is applied. The radiator dissipates 80 W, so the
heat flux applied on the bottom side depends on the width of the radiator. The
ambient temperature is 25 °C, heat convection coefficient is 2 W/m2K and emis-
sivity is 0.8. Five design variables are assumed. The method of modelling the shape
of the radiator is shown in Fig. 4.59b.
Several tests have been performed with a constraint imposed on the maximal value of the equivalent stress, σ_eq^ad = 20 MPa, and with the maximal admissible value of the temperature equal to 70 °C, 80 °C and 90 °C, respectively.
Table 4.30 contains the admissible values of the design parameters, whereas
Table 4.31 and Fig. 4.60 show the results of optimization [20, 30, 44, 58, 59, 64].

Fig. 4.59 Heat radiator: a the geometry and design variables, b the boundary conditions

Table 4.30 The admissible values of the design variables

Design variable    Range (mm)
Z1                 20–100
Z2                 2–10
Z3, Z4, Z5         4–10

Table 4.31 The results of optimization

              Z1 (mm)  Z2 (mm)  Z3 (mm)  Z4 (mm)  Z5 (mm)  Volume (mm³)
T^ad = 90 °C  24.19    10       4.129    4        4        10,367
T^ad = 80 °C  31.42    10       4        4.844    4        14,107
T^ad = 70 °C  41.46    9.817    4        5.698    4        19,649

Fig. 4.60 The optimal shape of the radiator for: a T^ad = 90 °C; b T^ad = 80 °C; c T^ad = 70 °C

Heat exchanger: Type 2


The problem of the optimal shape of the second type of a radiator is considered
(Fig. 4.58b). The geometry of the cross-section, fixed dimensions (in mm) and
boundary conditions are shown in Fig. 4.61. The values of boundary conditions are
presented in Table 4.32.
This problem is solved by the minimization of three proposed functionals
(4.6.2), (4.6.3) and (4.6.4). In the case of the minimization of the volume of the structure, constraints on the maximal value of the equivalent stress σ_eq^ad = 15 MPa and on the maximal value of the temperature in the structure T^ad = 70 °C are applied, whereas for the minimization of the maximal value of the equivalent stress and of the temperature, a constraint on the maximal volume of the structure V^ad = 150,000 mm³ is applied.
The constant number of fins, equal to ten, is assumed. The height and width of
the fins can vary during the optimization process. It is modelled using Bezier curves
consisting of six control points (Fig. 4.62).
The control polygon of the height (P0–P5) and the control polygon of the width (N0–N5) of the fins are shown in Fig. 4.62. The values of the control points P0–P5 are responsible for the shape of the radiator, whereas the values of the control points
Fig. 4.61 The geometry and boundary conditions of the heat radiator

Table 4.32 The values of the boundary conditions

Boundary condition             Value
Heat flux                      1000 W/m²
Heat convection coefficient    2 W/m²K
Ambient temperature            25 °C
Emissivity                     0.8
Pressure                       5000 Pa

Fig. 4.62 The method of modelling the shape of the radiator

Table 4.33 The admissible values of the design parameters

Design variable               Range (mm)
P0, P1, P2, P3, P4, P5        30–200
N0, N1, N2, N3, N4, N5        4–12
H                             7–15

N0–N5 are responsible for the width of the fins. The height of the bottom part of the structure can also vary. Due to symmetry (P0 = P5, P1 = P4, P2 = P3, N0 = N5, N1 = N4, N2 = N3), the total number of design parameters is equal to 7.
The admissible values of the design parameters are shown in Table 4.33. Several
numerical tests have been performed for each case. The best results of the opti-
mization are presented in Table 4.34 and Fig. 4.63 [60].

Table 4.34 The results of optimization

                   P0 = P5  P1 = P4  P2 = P3  N0 = N5  N1 = N4  N2 = N3  H      Fitness function
                   (mm)     (mm)     (mm)     (mm)     (mm)     (mm)     (mm)   value
min T^max(X)       200      99.13    138.9    4.49     4        4        7      49.48 °C
min V(X)           110.6    30       30       4.2      4        4        7      0.0073
min σ_eq^max(X)    80.5     51.7     71.3     11.4     5.6      10.3     8.85   0.97 MPa

Fig. 4.63 The optimal shape of the radiator: a the minimization of the maximal value of the
temperature; b the minimization of the volume of the structure; c the minimization of the maximal
value of equivalent stresses
Fig. 4.64 The geometry of the third type heat radiator (in mm)

Heat exchanger: Type 3


The problem of the optimal distribution of the material in the third type of heat
radiator is considered. The proposed geometry of the radiator is fixed during the
optimization (Fig. 4.64). Each of 15 fins is made of aluminium, copper or silver,
whereas the remaining part of the structure is made of aluminium. Table 4.35
contains the values of the material parameters. The symmetry along the horizontal
axis is assumed. Owing to the above reason, the design vector contains
eight variables. The value of each gene corresponds to the selection of the material.
The optimal distribution of the material is found by the maximization of the total heat flux dissipated by the radiator (4.6.5). Constraints are imposed on the cost of the radiator and on the maximal value of the equivalent stress (σ_eq^ad = 20 MPa). The relative costs of the materials are 0.1, 0.2 and 1 for aluminium, copper and silver, respectively. The cost of the radiator c is the sum of these factors over all fins.
The radiator subjected to the thermo-mechanical boundary conditions and the
values of the boundary conditions are presented in Fig. 4.65 and Table 4.36.
Several tests have been performed for the different values of admissible cost. The
results of the optimal distribution of the material are shown in Fig. 4.66.

Table 4.35 The material properties for aluminium, copper and silver

Parameter                        Aluminium   Copper      Silver
Young's modulus (MPa)            68,000      110,000     76,000
Poisson's ratio                  0.34        0.35        0.39
Thermal expansion coef. (1/K)    24·10⁻⁶     16.5·10⁻⁶   19.5·10⁻⁶
Heat conductivity (W/mK)         210         380         420

Fig. 4.65 The boundary conditions for the third type heat radiator

Table 4.36 The values of the boundary conditions

Boundary condition             Value
Fixed temperature              80 °C
Heat convection coefficient    40 W/m²K
Ambient temperature            25 °C
Pressure                       1000 Pa

Fig. 4.66 The optimal distribution of the material in the radiator for different maximum cost constraints: a c = 2.5; b c = 4; c c = 9

4.6.4 Concluding Remarks

Several types of fitness functions can be formulated. The minimized (maximized) feature of the domain may come from elasticity (minimum stress), heat transfer (maximum dissipation of heat) or geometry (minimum volume), or it may depend on the distribution of the material. Several constraints (geometrical, thermal, mechanical) can also be imposed for each formulation of the fitness function. The preparation of the model may be aided by parametric curves; this approach also allows the number of design parameters to be reduced.
Besides typical thermal boundary conditions, radiation can also be taken into
account. The radiating portion of the surface can be concave, thus a mutual irra-
diation of the boundaries may take place. Applying the internal script language
implemented in MENTAT can be useful for calculating the view factors responsible
for the irradiation of the boundaries. This script language also makes the production
of the geometry, mesh, boundary conditions and settings of the analysis possible.
One of the disadvantages of the application of the evolutionary algorithm is the time-consuming calculations. This is connected with solving a boundary-value problem for each chromosome in each generation of the evolutionary process in order to calculate the fitness function value. The application of the parallel evolutionary algorithm can partly eliminate this disadvantage, but it remains a major barrier in the optimization of structures with a large number of degrees of freedom.
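Since every chromosome of a generation can be evaluated independently, the parallelization mentioned above amounts to distributing the fitness evaluations over several processes. The following is a minimal sketch, not the authors' implementation; in the real workflow each evaluation would launch an FEM analysis (e.g. MSC.Marc) instead of the toy objective used here.

from multiprocessing import Pool

def fitness(chromosome):
    # Placeholder objective: sum of squared genes. In practice this function would
    # write the model for the given design, run the FEM solver and read the result.
    return sum(g * g for g in chromosome)

def evaluate_generation(population, workers=4):
    """Evaluate all chromosomes of one generation in parallel."""
    with Pool(processes=workers) as pool:
        return pool.map(fitness, population)

if __name__ == "__main__":
    population = [[1.0, 2.0, 3.0], [0.5, 0.1, 2.2], [4.0, 0.0, 1.0]]
    print(evaluate_generation(population))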

4.7 Optimization of Structures with Cracks

4.7.1 Introduction

Many spectacular accidents and catastrophes have been caused by fracture. Cracks can occur in structural elements because of imperfections in the material or the manufacturing process, or they can come into existence under cyclic loading. To some extent cracks are present in all structures, but they become dangerous if they extend to a critical length. The ability to identify cracks during the exploitation of the structure is essential. There are different methods of nondestructive crack identification,
based mainly on the measurements of the responses of the structure. Cracks and
other defects identification problems are presented in Sect. 5.3.
The reduction of the crack negative influence on the structure can be obtained by
means of the shape optimization methods. Publications devoted to the shape
optimization of the structures with cracks divide problems into two general groups:
• the minimization of the stress intensity factors (e.g. Vrbka and Knésl [117]),
• the maximization of the fatigue life-time of the structure (e.g. Gani and Rajan
[71]).

If a cyclic load occurs, it is important to calculate the life-time of the structure.


The life-time of the structure can be increased if the shape of a structural element is
optimized.
In order to solve the optimization task, a boundary-value problem has to be
solved. This problem can be solved by means of finite-element method (FEM) or
the boundary element method (BEM). Since the crack is a part of the boundary, the
boundary element method seems to be especially convenient. The boundary ele-
ment mesh generation in shape optimization does not cause difficulties because
boundary element discretization is able to adapt itself to a new configuration
without major distortion of the boundary element nets. It is convenient to imple-
ment an adaptive grid refinement and incorporate an automatic generator of
boundary element meshes to increase the efficiency and accuracy.
Two approaches to the shape optimization are considered:
• optimization during the design phase: if the probability of crack occurrence is high, it is possible to re-design the structure,
• optimization connected with the necessity of repairing a working structure, especially after a crack has been identified. The second approach is typically connected with an increase of the element volume.
The shape optimization problems in most cases result in a large number of
design variables. In order to reduce the number of design variables, parametric
curves, like Bézier curves, B-splines or NURBS curves are used. They allow
modelling complicated shapes with a relatively small number of control points.
NURBS parametric curves are used in the present chapter.
Global optimization methods in the form of evolutionary algorithm presented in
Sect. 3.3 are used to solve the optimization problem.

4.7.2 Formulation of the Optimization Task

The optimization problem is formulated as the minimization of the objective


function J0 with respect to design variables vector x:

$\min_{\mathbf{x}} (J_0)$    (4.7.1)

with limiting conditions and variable limitations:

$J_\alpha(\mathbf{x}) = 0,\quad \alpha = 1, 2, \ldots, m$
$J_\beta(\mathbf{x}) \le 0,\quad \beta = 1, 2, \ldots, n$    (4.7.2)
$x_i^{\max} \ge x_i \ge x_i^{\min},\quad i = 1, 2, \ldots, k$

where Jα, Jβ are constraint functionals; n, m, k are constants.



The aim is to develop an application of evolutionary algorithms and the


boundary element method to the shape optimization of cracked structures. The
following optimization criteria are examined [11, 28]:
1. The minimization of the maximum crack opening (MCO):

   $\min_{\mathbf{x}}(J_0) = \min\left(MCO_{red}\right),\qquad MCO_{red} = \sum_{i=1}^{n} w_i\, MCO_i$    (4.7.3)

   where MCO = max(u⁺ − u⁻); u⁺, u⁻ are the displacement values of the coincident nodes lying on the opposite sides of the crack; w_i = MCO_i/ΣMCO_i are weight factors (Σw_i = 1); n is the number of cracks.

2. The minimization of the reduced J-integral:

   $\min_{\mathbf{x}}(J_0) = \min\left(J_{red}\right),\qquad J_{red} = \sum_{i=1}^{2n} w_i\, J_i$    (4.7.4)

   where J_i is the J-integral for the i-th crack tip; w_i = J_i/ΣJ_i.

3. The minimization of the reduced stress intensity factor in the form:

   $\min_{\mathbf{x}}(J_0) = \min\left(K_{red}\right),\qquad K_{red} = \sum_{i=1}^{4n} w_i\, K_i$    (4.7.5)

   where K_i are stress intensity factors; w_i = K_i/ΣK_i.

4. The maximization of the loading cycle number N necessary to extend the crack:

   $\min_{\mathbf{x}}(J_0) = \min_{\mathbf{x}}(-N)$    (4.7.6)

Traction-free and unconstrained parts of the external boundary are modified during the optimization. Restrictions on the maximum value of the von Mises reduced stresses on the boundary and on the volume of the structure are employed.

4.7.3 Fatigue Crack Growth

Cracks arising may significantly reduce the life-time of real structures. The most
common fracture case is caused by fatigue crack growth. It is extremely dangerous
for structures, as a crack grows from a very small size to a critical one with no
visible effect. As a result, damage of the structure occurs.
The possibility of predicting the element life-time is a crucial problem. The
life-time of structure can be described in a general form by the velocity of the crack
growth [6]:
$\frac{dl}{dN} = f(\sigma, l, C, Y, R, v)$    (4.7.7)

where N is the number of loading cycles, l is the current crack length, σ is the stress expressed by the stress amplitude, C are material constants, Y are geometrical parameters of the element or crack, R = σ_min/σ_max is the cycle ratio, and v is a functional representing the loading history.
There exist many formulas for the function f(·) describing the velocity of crack growth. One of the most frequently used is the Paris equation in the form:

$\frac{dl}{dN} = c\,(\Delta K)^m$    (4.7.8)

where c, m are experimentally determined material constants and ΔK = K_max − K_min, with K_max and K_min being the maximum and minimum values of the stress intensity factor for single-mode fracture analysis.
The Paris law is suitable for crack propagation velocities between 10⁻⁹ and 10⁻⁶ m/cycle. The number of cycles N necessary to extend the crack from l₁ to l₂ may be obtained by integration of Eq. (4.7.8):

$N = \int_{l_1}^{l_2} \frac{1}{c\,(\Delta K)^m}\, dl$    (4.7.9)

For the mixed-mode fracture analysis ΔK is replaced by ΔK_eff [114]:

$\Delta K_{eff}^{2} = \Delta K_{I}^{2} + 2\,\Delta K_{II}^{2}$    (4.7.10)

The stress intensity factor range for the particular fracture mode i is given by:

$\Delta K_i = K_{i\,max} - K_{i\,min} = K_{i\,max}\,(1 - R)$    (4.7.11)

where R = σ_min/σ_max is the stress amplitude ratio of the loading cycle.
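A minimal sketch (not from the book) of evaluating Eq. (4.7.9) numerically is given below. The stress-intensity range ΔK(l) must in general come from the BEM/FEM analysis; here a textbook centre-crack approximation ΔK = Y·Δσ·√(πl) is assumed only to make the example self-contained, and the crack lengths and stress range are purely illustrative.

import math

def cycles_to_grow(l1, l2, d_sigma, c, m, Y=1.0, steps=2000):
    """Integrate N = ∫ 1 / (c * dK^m) dl from l1 to l2 with the trapezoidal rule."""
    def integrand(l):
        dK = Y * d_sigma * math.sqrt(math.pi * l)   # assumed ΔK(l) model
        return 1.0 / (c * dK ** m)
    h = (l2 - l1) / steps
    s = 0.5 * (integrand(l1) + integrand(l2))
    s += sum(integrand(l1 + i * h) for i in range(1, steps))
    return s * h

# Example with the Paris constants used later in Sect. 4.7.6.3 (c = 4.62e-12, m = 3.3);
# d_sigma is taken in MPa so that dK is in MPa·sqrt(m).
N = cycles_to_grow(l1=1e-3, l2=1e-2, d_sigma=100.0, c=4.62e-12, m=3.3)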


The crack growth process is simulated numerically by incremental analysis.
A boundary-value problem is solved for each step of the crack expansion. The
direction of the crack growth is determined by the maximum principal stress
criterion:

$K_I \sin\theta_t + K_{II}\,(3\cos\theta_t - 1) = 0$    (4.7.12)

where θ_t is the angular coordinate of the tangent to the crack path and K_I, K_II are the mode I and mode II stress intensity factors. The angular coordinate θ_t indicates the direction perpendicular to the maximum principal stress direction.
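The kink angle defined by Eq. (4.7.12) can be obtained numerically. The sketch below (not from the book) solves the criterion by simple bisection; the bracketing interval is an assumption based on the fact that the kink angle of the maximum principal stress criterion never exceeds about 70.5°.

import math

def kink_angle(KI, KII, tol=1e-10):
    """Return theta_t (radians) satisfying KI*sin(t) + KII*(3*cos(t) - 1) = 0."""
    if KII == 0.0:
        return 0.0                       # pure mode I: the crack grows straight on
    f = lambda t: KI * math.sin(t) + KII * (3.0 * math.cos(t) - 1.0)
    a, b = (-1.4, 0.0) if KII > 0 else (0.0, 1.4)   # assumed bracket of the root
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)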
The optimization strategy for the maximization of the number of cycles N necessary to extend the crack is as follows:
• for each generated possible solution (a chromosome representing a modified geometry), the boundary-value problem is solved and the boundary point with the maximum value of the von Mises stresses is located;
• an initial crack is introduced at the found boundary point in the direction perpendicular to the maximum principal stress direction;
• the boundary-value problem is solved once again and the number of cycles N necessary to extend the crack is calculated.
The block diagram of the optimization procedure is presented in Fig. 4.67.

Initial geometry

Shape modification

BEM

Introducing crack in
stress_max position

- N calculation (BEM)
- stress_max
calculation
DEA block

[Termination condiction fulfilled]

Fig. 4.67 Optimization procedure for N maximization


4.7 Optimization of Structures with Cracks 151

4.7.4 The Dual-Boundary Element Method for Crack Problems

To solve the boundary-value problem for cracked structures, one of the numerical
methods has to be used. The most popular and widely applied one is the
finite-element method (FEM), but in the presented case the boundary element
method (BEM) is more convenient. The main reason is that the cracks constitute parts of the boundary, so, assuming the absence of body forces, it is not necessary to discretize the interior of the body. As a result, the dimension of the boundary-value problem is reduced by one. The BEM is also capable of accurately modelling the high stress gradients near the crack tip [25].
An elastic body occupying a domain Ω and having a boundary Γ ≡ ∂Ω is considered (Fig. 4.68). Two fields are prescribed on the boundary Γ: a field of displacements u0(x), x ∈ Γ_u, and a field of tractions p0(x), x ∈ Γ_p, while Γ_u ∪ Γ_p = Γ and Γ_u ∩ Γ_p = ∅. The body contains internal traction-free cracks. Displacements are allowed to jump across the crack surfaces:

$[\![u]\!] \equiv u^{+} - u^{-} \neq 0$    (4.7.13)

Assuming the absence of body forces, the displacement of an arbitrary point x can be represented by the boundary displacement integral equation:

$c(x)\,u(x) = \int_{\Gamma} U(x,y)\,p(y)\, d\Gamma(y) - \int_{\Gamma} P(x,y)\,u(y)\, d\Gamma(y),\quad x \in \Gamma$    (4.7.14)

where U(x, y), P(x, y) are fundamental solutions of elastostatics; c(x) is a constant
depending on the position of the collocation point; x; y are boundary points.
If the foregoing equation is applied on both surfaces of the same crack, two
identical equations are formed. As a result, the set of algebraic equations obtained
after the discretization of the body becomes singular. There are a few techniques
allowing overcoming this problem. The most versatile seems to be the dual
boundary element method (dual BEM). In this technique an additional equation—a
tractions integral equation—is introduced [99]:

Fig. 4.68 An elastic body containing cracks

" #
1 Z Z
pðxÞ ¼ n Dðx; yÞpðyÞdCðyÞ  Sðx; yÞuðyÞdCðyÞ ; x2C ð4:7:15Þ
2 C C

where D(x, y), S(x, y) are the third-order fundamental solution tensors, n is the unit
outward normal vector at the collocation point x.
The tractions integral equation is applied on one surface of each crack, while the
displacements integral equation is applied on the opposite side of each crack and the
remaining boundary.

4.7.5 NURBS Parametric Curves

The nonuniform rational B-splines (NURBS) curves are used to model the modified
part of the boundary. Such an approach allows reducing the number of design variables of the optimization procedure. NURBS can be treated as a generalization of nonrational B-splines and of nonrational and rational Bezier curves. They are industrial standard
tools for the geometry representation and design. The main advantages of NURBS
curves are:
• one mathematical form for standard analytical shapes and for free-form shapes;
• flexibility to design a large variety of shapes;
• fast evaluation by numerically stable and accurate algorithms;
• invariance under transformations (affine and perspective).
The NURBS curve is defined as [98]:

$C(t) = \frac{\sum_{j=0}^{r} N_{j,n}(t)\, w_j\, P_j}{\sum_{k=0}^{r} N_{k,n}(t)\, w_k},\quad a \le t \le b$    (4.7.16)

where P_j are the control points, w_j are the weights of the control points, and N_{j,n} are the nth-degree B-spline basis functions defined by the knot vector:

$T = \{\underbrace{a, \ldots, a}_{n+1},\; t_{n+1}, \ldots, t_{m-n-1},\; \underbrace{b, \ldots, b}_{n+1}\}$    (4.7.17)
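The sketch below (not the authors' code) shows how a point of such a curve can be evaluated: the B-spline basis functions are computed with the Cox–de Boor recursion and then combined rationally according to Eq. (4.7.16). The control points, weights and knot vector in the example are arbitrary.

def bspline_basis(j, n, t, T):
    """N_{j,n}(t) for a non-decreasing knot vector T."""
    if n == 0:
        return 1.0 if T[j] <= t < T[j + 1] else 0.0
    left = 0.0 if T[j + n] == T[j] else \
        (t - T[j]) / (T[j + n] - T[j]) * bspline_basis(j, n - 1, t, T)
    right = 0.0 if T[j + n + 1] == T[j + 1] else \
        (T[j + n + 1] - t) / (T[j + n + 1] - T[j + 1]) * bspline_basis(j + 1, n - 1, t, T)
    return left + right

def nurbs_point(t, P, w, n, T):
    """C(t) for control points P (list of (x, y) tuples), weights w and degree n."""
    num_x = num_y = den = 0.0
    for j in range(len(P)):
        b = bspline_basis(j, n, t, T) * w[j]
        num_x += b * P[j][0]
        num_y += b * P[j][1]
        den += b
    return (num_x / den, num_y / den)

# Example: a quadratic NURBS arc; a weight below 1 pulls the curve away from P1.
P = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
w = [1.0, 0.7, 1.0]
T = [0, 0, 0, 1, 1, 1]                    # clamped knot vector for degree n = 2
pt = nurbs_point(0.5, P, w, n=2, T=T)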

An example of the NURBS curve is presented in Fig. 4.69.


The precise manipulation of the NURBS curve is possible by changing the positions of the control points and/or their weights. A feature of NURBS which is very significant from the practical point of view is the local approximation property: only the part of the curve on the interval t ∈ [t_j, t_{j+n+1}] is modified if the control point P_j is moved and/or the weight w_j is changed.

Fig. 4.69 An example of a closed NURBS curve

The application of such curves results in a relatively small number of design variables and in simple data preparation in comparison with other methods, for example those in which the coordinates of boundary nodes (in BEM) or mesh nodes (in FEM) are taken as the design variables.
The vector x = (x_r), r = 1, …, R, represents the coordinates of the control points of the NURBS curves. The design variables are limited by geometrical constraints, and limitations on the maximum von Mises stresses on the boundary and on the element volume are introduced.

4.7.6 Numerical Examples

4.7.6.1 Numerical Example 1: Minimization of Kred

A boundary of a 2D structure containing two cracks Γ1 and Γ2 (Fig. 4.70) is optimized. The objective of the optimization is to minimize the reduced stress intensity factor K_red. The material constants of the structure are: E = 2·10⁵ MPa, ν = 0.25. The structure is fixed at the bottom edge and loaded by three tractions (p = 10 MN/m²). The remaining parts of the boundary are modified during the optimization.
Constraints on the equivalent von Mises stresses are imposed on the boundary.
Two variants are considered: (i) the maximum element area is equal to the initial
area; (ii) the maximum element area can be increased by 20%.
Final shapes for constant and increased areas of the optimized structures are
presented in Fig. 4.71. Initial and final values of Kred, J and MCO are collected in
Table 4.37.

4.7.6.2 Numerical Example 2: Minimization of MCO

A boundary of a 2D structure containing one crack (Fig. 4.72) is optimized. The


objective of the optimization is to minimize the reduced maximum crack opening

Fig. 4.70 A structure with two cracks—initial shape, cracks localization and boundary conditions

(a) (b)

Fig. 4.71 The structure after optimization a constant area; b increased area (max. 20%)

MCO. The material constants of the structure are: E = 2·10⁵ MPa, ν = 0.25. The structure is fixed and loaded as presented in Fig. 4.72 (p = 10 MN/m²). Free parts of the boundary are modified during the optimization. Constraints on the equivalent von Mises stresses are imposed on the boundary.

Table 4.37 Numerical Example 1: optimization results

Parameter          Initial shape       Constant area       Increased area
K_red              4.2679              3.9072              2.84251
J1, J2             15.6422, 15.0033    11.8879, 9.4548     5.9265, 6.1936
J3, J4             19.2496, 10.3772    15.5943, 8.936      7.8138, 4.9886
MCO_I, MCO_II      1.3821, 0.8829      1.1099, 0.8001      0.8035, 0.5599
σ_red_max (MPa)    75.6664             51.2198             47.3481

Fig. 4.72 A structure with one crack—initial shape, crack localization and boundary conditions

Two variants are considered: (i) the maximum element area is equal to the initial
area; (ii) the maximum element area can be increased by 20%.
Final shapes for constant and increased areas of optimized structures are pre-
sented in Fig. 4.73. Initial and final values of Kred, J and MCO are collected in
Table 4.38.

4.7.6.3 Numerical Example 3: Maximization of N

A 2D structural element loaded and fixed as shown in Fig. 4.74 is optimized. Two
cases are considered: nonsymmetrical and symmetrical. In the nonsymmetrical case
each chromosome consists of 24 design variables representing coordinates of 12
control points (three control points for each of four NURBS curves). In the sym-
metrical case each chromosome consists of 12 design variables representing
coordinates of six control points (three control points for each of three NURBS
curves). The vertical axis is the symmetry axis.

(a) (b)

Fig. 4.73 The structure after optimization of a constant area; b increased area (max. 20%)

Table 4.38 Numerical Example 2: optimization results

Parameter      Initial shape       Constant area      Increased area
K_I1, K_II1    4.7027, 1.2300      1.9777, 0.5727     1.8358, 0.5821
K_I2, K_II2    4.7572, 0.9730      1.8634, 0.3902     1.9084, 0.2843
J1, J2         12.3060, 12.2800    2.2081, 1.8877     1.9318, 1.9391
J_red          12.3816             2.0604             1.9279
MCO            3.1656              1.9207             1.67

(a) (b)

Fig. 4.74 The optimized structure a loaded and fixed b modified parts of the boundary and ranges
of the control points

(a) (b)

Fig. 4.75 A structural element—optimal shapes for nonsymmetrical case: a fixed area, b increased
area

The parameters of Paris equation are assumed as: c = 4.62E−12, m = 3.3 and
the amplitude ratio of the cyclic load is R = 2/3. To obtain the number of cycles
N for the initial shape, the position of the maximum von Mises stress is located and
the reference crack is introduced in the proper direction. Then, the boundary-value
problem is solved and N value is calculated.
Two cases are considered:
– the final area of the element is not bigger than the area of the initial element;
– the final area of the element can be increased by 10%.
The maximum von Mises reduced stress value is limited to σ_p = 120 MPa.
Shapes obtained after optimization are presented in Fig. 4.75 for the nonsym-
metrical case and Fig. 4.76 for the symmetrical case. The initial and final values of
cycle numbers, maximum stresses and areas of the element are collected in
Table 4.39.

4.7.7 Concluding Remarks

The chapter is devoted to the application of computational intelligence methods and


the boundary element method to the shape optimization of structural elements containing cracks. The aim was to reduce the influence of cracks in static and dynamic cases. An evolutionary algorithm has been used as the global optimization method. The boundary element method allows the reduction of the dimension of the boundary-value problem. Owing to the impossibility of using the "pure" BEM for fracture mechanics problems, the dual boundary element method has been used to

(a) (b)

Fig. 4.76 A structural element—optimal shapes for symmetrical case: a fixed area, b increased
area

Table 4.39 Numerical Example 3: optimization results

                                                   Number of      σ_max    Allow. area   Area
                                                   cycles N       (MPa)    (m²)          (m²)
Primary shape with a ref. crack                    1.9324·10⁷     115.05   –             0.0708
Nonsymmetrical case, final shape, constant area    2.4739·10⁸     118.63   0.0708        0.07078
Nonsymmetrical case, final shape, increased area   1.1097·10¹⁰    112.57   0.0779        0.07514
Symmetrical case, final shape, constant area       1.7073·10⁸     114.79   0.0708        0.06289
Symmetrical case, final shape, increased area      1.6019·10¹⁰    108.98   0.0779        0.07129

solve the boundary-value problem for cracked structures. To reduce the number of design variables, parametric NURBS curves have been used. The enclosed numerical examples illustrate the efficiency of the presented approach.

4.8 Optimization of Structures with Nonlinearities

The shape optimization problem of structures with elastoplastic nonlinearities can


be solved by means of methods based on sensitivity analysis information [19, 89] or
nongradient methods based on genetic algorithms [32, 81–85]. This chapter is
devoted to the method based on the parallel and distributed evolutionary algo-
rithms. Applications of evolutionary algorithms in optimization need only

information about values of an objective (fitness) function. The fitness function is


calculated for each chromosome in each generation by the solution of the
boundary-value problem of elastoplasticity by means of the finite-element method
(FEM) [79, 126] or the boundary element method (BEM) [26]. This approach does
not need information about the gradient of the fitness function and gives a high probability of finding the global optimum. The main drawback of this approach is the long computation time. The application of parallel and distributed evolutionary algorithms can shorten the time of calculations, but additional resources are needed: a multiprocessor computer or a cluster of computers is necessary.
material and geometrical nonlinearities. Two types of analysis are presented, static
nonlinear and time-dependent—forging process optimization. The evolutionary
optimization of structures with nonlinearities using distributed and parallel evolu-
tionary algorithms can be found in papers [32–37, 39, 41, 42]. The optimization of
forging processes based on gradient algorithms is presented in Badrinarayanan [24],
Zabaras et al. [123], Zhao et al. [124]. The evolutionary approach was considered in
António and Douardo [7], Burczyński and Kuś [38, 40]. The numerical examples of
plates, shells and axisymmetrical structures optimizations are shown.

4.8.1 Objective Functions for the Evolutionary Optimization of Structures with Nonlinearities

4.8.1.1 Structures Made of Nonlinear Material with Hardening

A body which occupies the domain Ω bounded by the boundary Γ is considered. The body is made of an elastoplastic material with hardening. Boundary conditions in the form of displacements and tractions are prescribed and body forces are given. One should find the optimal shape of the body minimizing the areas of the plastic zones in the domain Ω. Such an optimization criterion can be achieved by minimizing the fitness function:

$F = \int_{\Omega} \frac{\sigma_a}{\sigma_0}\, d\Omega \quad \text{where} \quad \sigma_a = \begin{cases} \sigma_{eq} & \text{when } \sigma_{eq} \ge \sigma_p \\ 0 & \text{when } \sigma_{eq} < \sigma_p \end{cases}$    (4.8.1)

where σ_eq denotes the Huber–von Mises equivalent stress, σ_p is the yield stress and σ_0 is the reference stress.
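In practice the integral (4.8.1) is approximated from the FEM solution. The sketch below (not the authors' code) sums the contributions of the element integration points; the names of the inputs are assumptions for illustration.

def plastic_zone_fitness(sigma_eq, weights, sigma_p, sigma_0):
    """Approximate F = ∫ (sigma_a / sigma_0) dΩ by a weighted sum over integration points.

    sigma_eq : iterable of Huber-von Mises equivalent stresses at the points
    weights  : corresponding integration weights (element area/volume contributions)
    """
    F = 0.0
    for s, w in zip(sigma_eq, weights):
        sigma_a = s if s >= sigma_p else 0.0   # only plastified points contribute
        F += (sigma_a / sigma_0) * w
    return F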
The shape optimization of structures with geometrical nonlinearities is per-
formed by minimizing structure displacements. The fitness function can be for-
mulated in the following form:
$F = \int_{\Omega} \left(\frac{q}{q_0}\right)^{2} d\Omega$    (4.8.2)

where q is the displacement and q0 is the reference displacement.


Constrains in the form of admissible volume of the structure and the boundary
values of design variables are imposed. The shape of the optimized structure can be
defined using NURBS (non-uniform rational B-splain). The curves had to be
converted into line segments and then the structure was meshed using triangle finite
elements (FEM) or boundary elements and cells (BEM). The triangle [104] code
was used for body meshing. Coordinates of NURBS curve control points played the
role of genes in the chromosome.

4.8.1.2 Forging Process Optimization

The forging process is highly nonlinear. Three different fitness functions were used during the optimization. The first one is a measure of the distance between the axisymmetric shape of the forged detail and the desired one:

$F = \int_{y} \Delta r(y)\, dy$    (4.8.3)

The meaning of Δr(y) is shown in Fig. 4.77. The optimal fitness function value is known and is equal to zero.
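A minimal sketch (not the authors' code) of approximating the shape-error fitness (4.8.3) is shown below: the radial deviation Δr is interpreted here as the absolute difference between the forged and the desired profile, sampled at the boundary nodes of the FEM model and integrated over the height y with the trapezoidal rule.

def shape_error(y_nodes, r_forged, r_desired):
    """Trapezoidal approximation of F = ∫ |r_forged(y) - r_desired(y)| dy."""
    F = 0.0
    for i in range(len(y_nodes) - 1):
        dy = y_nodes[i + 1] - y_nodes[i]
        d0 = abs(r_forged[i] - r_desired[i])
        d1 = abs(r_forged[i + 1] - r_desired[i + 1])
        F += 0.5 * (d0 + d1) * dy
    return F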
The MSC.Marc was used to solve the forging problem. The axisymmetrical
bodies were considered. The forging process was modelled with the use of two
bodies: rigid for an anvil and elastoplastic for a preform. The contact with Coulomb
friction was used. The isothermal conditions were considered. The material was
modelled as viscoplastic using the following equation:

$\sigma = A(\varepsilon_0 + \varepsilon)^m + B\,\dot{\varepsilon}^n$    (4.8.4)

where σ is the stress, ε is the strain, ε̇ is the strain rate, ε₀ is the preliminary strain, and A, B, n, m are material coefficients.

Fig. 4.77 The obtained and desired shape of the forged detail

The second and third fitness functions depend on the plastic strain values. The idea of using these functions is to equalize the plastic strain distribution in the body. The second fitness function is expressed as a double integral, over time and over the area of the structure, of the difference between the plastic strain ε_p and the mean plastic strain ε_sr:

$F = \int_{0}^{T} \int_{\Omega} \left| \varepsilon_p - \varepsilon_{sr} \right| d\Omega\, dt$    (4.8.5)

The third fitness function is a double integral over time and over the area of the structure of the plastic strains:

$F = \int_{0}^{T} \int_{\Omega} \varepsilon_p\, d\Omega\, dt$    (4.8.6)

4.8.2 Numerical Examples

A material with the characteristic presented in Fig. 4.78 is used in the test problems of Sects. 4.8.2.1–4.8.2.3. E1 and E2 are Young's moduli, ε_p is the yield strain and σ_p is the yield stress.

4.8.2.1 The Elastoplastic Plate Modelled by Means of FEM

A 2D structural element is considered (Fig. 4.79a). The material data and the parameters of the distributed evolutionary algorithm are: E1 = 20 GPa, E2 = 0.5 GPa, σ_p = 250 MPa, ν = 0.3, thickness 5 mm, load value 110 N/mm, maximum body area 8000 mm², number of chromosomes 500, number of generations 250, and number of populations 4.

Fig. 4.78 Uniaxial stress–strain curve for the material used in tests

Fig. 4.79 Optimized plate: a geometry, b the best after 1st generation, c the best after 196th
generation

The external boundary and the hole boundary undergo shape optimization. The
external boundary was modelled by using the NURBS curve with three control
points (one of them can be moved—two design variables) and the internal hole was
modelled by using NURBS curve with four control points (each can be moved—
eight design variables). The fitness function was computed by using FEM. The
shape of the boundary after the first and 196th generations is shown in Fig. 4.79b,
c. The plastic areas are coloured in grey.
In order to examine the DEA for the various number of computers, the com-
puting time was measured for 15,000 fitness function evaluations. The computers
had AMD Duron 750 processors. The computing time versus the number of
computers is given in Table 4.40. The number of computed fitness functions as the
function of the number of computers is shown in Fig. 4.80. The starting population
was the same for each test. The problem was simpler than the one shown above; the finite-element mesh had a lower number of elements.

Table 4.40 The computing time as a function of the number of computers

Number of computers    Computing time (s)
1                      745
2                      374
3                      258
4                      195

Fig. 4.80 The speedup of computations
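As a short check (not from the book), the speedup implied by Table 4.40 can be worked out directly with the definition s = t1/tn introduced later in Eq. (4.8.7):

times = {1: 745, 2: 374, 3: 258, 4: 195}
speedup = {n: times[1] / t for n, t in times.items()}
# speedup ≈ {1: 1.00, 2: 1.99, 3: 2.89, 4: 3.82}, i.e. close to linear scaling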

4.8.2.2 The Elastoplastic Plate Modelled Using BEM

The problem of shape optimization of a half K-structure is considered (Fig. 4.81a).


The material data and the parameters of the DEA are: E1 = 20 GPa, E2 = 0.5 GPa, σ_p = 150 MPa, ν = 0.3, thickness = 5 mm, load value = 50 N/mm, maximum body area = 30,000 mm², the number of chromosomes = 200, the number of generations = 500, the number of populations = 4. The traction-free boundary is modelled
by two NURBS curves with three control points each. The fitness function was
evaluated by the BEM. The shape of the structure after the first and 476th gener-
ations is shown in Fig. 4.81b, c. Grey colour was used to mark the plastic areas.

4.8.2.3 The Elastoplastic Shell Structure

A shell is considered (Fig. 4.82a). The shell has 10 holes with constant radii. The
holes can be moved. The optimization criterion is to minimize the integral of the displacements over the shell. The fitness function was evaluated using MSC.Nastran.

Fig. 4.81 A half of K-structure: a geometry, b the best after 1st generation, c the best after 476th
generation

Fig. 4.82 A shell: a geometry, b the best after 1st generation, c the best after 500th generation

The shape of the shell after the first and 500th generations is shown in
Fig. 4.82b, c.

4.8.2.4 The Preform Optimization

The shape optimization of the preform was considered. The open die forging was
simulated. The flat anvil was used. The goal of the optimization is to find the shape
of the preform which leads to the cylindrical shape after forging. The geometrical
parameters are shown in Fig. 4.83. The material parameters used for aluminium at 350 °C were: A = 26.478, B = 24.943, m = 0.1629, n = 3.4898. The friction coefficient was equal to 0.5. The time step was 0.002 s, the number of steps was 200, and the speed of the anvil was 75 mm/s. The fitness function (4.8.3) was used during the optimization.

Fig. 4.83 The desired shape of the preform after forging

Fig. 4.84 The geometry of the preform

Table 4.41 The constraints on the gene values

Gene    Minimum (mm)    Maximum (mm)
g1      50              250
g2      50              250
g3      50              300
g4      10              100
g5      50              300
g6      110             190

The geometry of the preform (Fig. 4.84) was modelled by NURBS curve with
four control points. The coordinates of the control points were defined by six genes
values (g1–g6).
The constraints on the genes values are shown in Table 4.41.
The number of chromosomes was 25, the probability of uniform mutation 25%,
the probability of Gaussian mutation 62.5%, the probability of simple crossover
6.25%, and the probability of arithmetic crossover 6.25%.
The best result was achieved after 638 generations (15,362 fitness function
computations). The best found shape of the preform is presented in Fig. 4.85a, and
the shape after forging in Fig. 4.85b.

4.8.2.5 Optimization of Anvil Shape in Forging

The goal of the example is to perform evolutionary optimization in two-stage


axisymmetric preform forging. The optimization of the shape of the anvils was
performed in the first stage. The first stage is open die forging and the second is
closed die forging. The optimization criteria were expressed as (4.8.5) and (4.8.6).

Fig. 4.85 a The best found shape of the preform, b the shape of the preform after forging

Fig. 4.86 The shape of the anvil

The results obtained for both criteria are very close to each other. The shape of the
anvil described using NURBS curve is shown in Fig. 4.86. Eight parameters of the
NURBS curve were searched.
The preform had a cylindrical shape. The material parameters were the same as
in example Sect. 4.8.2.4. The friction coefficient was equal to 0.3. The model was
discretized by quadrilateral elements. The evolutionary algorithm with 10 chro-
mosomes was used. The Gaussian mutation and simple crossover operators were
applied.
Figure 4.87a shows the results obtained after flat anvil forging in the first stage
and Fig. 4.87b after closed die forging in the second stage. The best found result is
presented in Fig. 4.88.
The speedup of computation was measured for the presented example and was
expressed as:

$s = \frac{t_1}{t_n}$    (4.8.7)

where t1 is the computing time using one processor, and tn is the computing time for
n processors. The speedup of computations is presented in Fig. 4.89.

Fig. 4.87 The shape of the preform after a first stage, b second stage of forging

Fig. 4.88 The shape of the preform obtained by means of the best anvils after a first stage,
b second stage of forging

Fig. 4.89 Speedup of optimization of anvil shape



4.8.3 Concluding Remarks

The application of distributed evolutionary algorithms to the optimization of


structures with nonlinearities was presented. The optimization of plates, shells and
axisymmetric bodies was considered. The shape optimization of preform or anvils
during forging process has been performed. The speedup of computations for
selected problems was measured.

4.9 Optimization of Composites

4.9.1 Introduction

Composite materials play an important role in modern industry. Composites are


materials constructed of two or more materials joined together on the macroscopic
level. Most of the composite materials consist of two phases: a continuous (matrix)
phase and reinforcement. Properties of composite materials can be designed by the
appropriate selection of their parameters. The application of optimization methods to such problems is a natural approach.
An important group of composites is constituted by laminate materials, which are fibre-reinforced composites made of several layers. The optimal design process of
laminates typically involves the optimization of the following four parameters [88]:
(i) plies (or laminas) materials; (ii) ply thicknesses; (iii) ply orientations; and
(iv) stacking (or lay-up) sequence of the laminate.
The optimization of laminate material seems to be the most complex problem
which can lead to the designing of hybrid laminates [23, 71]. The application of
different materials in different plies allows obtaining new materials with properties
which are hard or impossible to obtain for simple laminates (with all plies made of the same material).
As the optimization of laminates is a global optimization problem, computa-
tional intelligence methods in the form of evolutionary algorithms (EAs) and
artificial immune systems (AISs) have been employed. These algorithms do not
require the information about objective function gradient, which often can be hard
or impossible to obtain. Ply angles and ply thicknesses in laminates are typically
treated in literature as continuous design variables, while from the industrial point
of view they should be usually treated as discrete ones (e.g. angles of fibres in
particular plies are often limited to a small set of admissible angles). Applied global
optimization algorithms could also deal with such kind of design variables.
Simple and hybrid laminates have been considered. Different optimization cri-
teria connected with the dynamic behaviour of laminate structures have been taken into account. To solve the boundary-value problem for laminates, the commercial finite-element method (FEM) software MSC.Patran/Nastran has been applied. The appropriate software interfaces have been developed to couple the optimization algorithms with the FEM software.

4.9.2 Laminates and Laminate Mechanics

A laminate is a set of a certain number of stacked plies (laminas) composed of usually unidirectional fibres permanently joined with a matrix. The direction of the fibres in the plies can be identical (one-directional laminates) or different (multidirectional laminates). Generally, only the fibre directions in the plies and the ply thicknesses differ, while the materials remain the same.
If the layers are distributed symmetrically to the mid-plane, the laminate is called
symmetrical. For the symmetrical laminates the ply angles have to satisfy the relation:

$\theta_i = \theta_{K+1-i},\quad i = 1, 2, \ldots, \frac{K}{2}$    (4.9.1)

where K is the total number of plies in a laminate.


In general, composites are anisotropic materials. In the fully anisotropic material,
the number of the independent material constants is equal to 21. This number is
reduced if the material is symmetric with respect to specified planes. Multilayered
laminates can be usually treated as orthotropic materials. The constitutive equation for
a single layer of the laminate in the in-axis orientation has the following form [73]:
\[
\begin{Bmatrix} \sigma_{11}\\ \sigma_{22}\\ \sigma_{33}\\ \sigma_{23}\\ \sigma_{31}\\ \sigma_{12} \end{Bmatrix} =
\begin{bmatrix}
\frac{1-\nu_{23}\nu_{32}}{E_2 E_3 \Delta} & \frac{\nu_{21}+\nu_{31}\nu_{23}}{E_2 E_3 \Delta} & \frac{\nu_{31}+\nu_{21}\nu_{32}}{E_2 E_3 \Delta} & 0 & 0 & 0\\
\frac{\nu_{21}+\nu_{31}\nu_{23}}{E_2 E_3 \Delta} & \frac{1-\nu_{13}\nu_{31}}{E_1 E_3 \Delta} & \frac{\nu_{32}+\nu_{12}\nu_{31}}{E_1 E_3 \Delta} & 0 & 0 & 0\\
\frac{\nu_{31}+\nu_{21}\nu_{32}}{E_2 E_3 \Delta} & \frac{\nu_{32}+\nu_{12}\nu_{31}}{E_1 E_3 \Delta} & \frac{1-\nu_{12}\nu_{21}}{E_1 E_2 \Delta} & 0 & 0 & 0\\
0 & 0 & 0 & G_{23} & 0 & 0\\
0 & 0 & 0 & 0 & G_{31} & 0\\
0 & 0 & 0 & 0 & 0 & G_{12}
\end{bmatrix}
\begin{Bmatrix} \varepsilon_{11}\\ \varepsilon_{22}\\ \varepsilon_{33}\\ \varepsilon_{23}\\ \varepsilon_{31}\\ \varepsilon_{12} \end{Bmatrix}
\]    (4.9.2)

where σ_ij is the stress vector; ε_ij is the strain vector; E1, E2, E3 are Young's moduli in the main material axes 1, 2 and 3; G23, G31 and G12 are the shear moduli in the planes (2, 3), (1, 3) and (1, 2); ν_ij are Poisson's ratios corresponding to the strain in direction "j" when the loading acts in direction "i"; and

$\Delta = \frac{1 - \nu_{12}\nu_{21} - \nu_{23}\nu_{32} - \nu_{31}\nu_{13} - 2\nu_{21}\nu_{32}\nu_{13}}{E_1 E_2 E_3}$    (4.9.3)

Assuming the thin-plate Kirchhoff–Love hypothesis, the constitutive equation for a single layer of the laminate contains four independent elastic constants:

\[
\begin{Bmatrix} \sigma_{11}\\ \sigma_{22}\\ \sigma_{12} \end{Bmatrix} =
\begin{bmatrix}
\frac{E_1}{1-\nu_{12}\nu_{21}} & \frac{\nu_{21} E_1}{1-\nu_{12}\nu_{21}} & 0\\
\frac{\nu_{12} E_2}{1-\nu_{12}\nu_{21}} & \frac{E_2}{1-\nu_{12}\nu_{21}} & 0\\
0 & 0 & G_{12}
\end{bmatrix}
\begin{Bmatrix} \varepsilon_{11}\\ \varepsilon_{22}\\ \varepsilon_{12} \end{Bmatrix}
\]    (4.9.4)

There is the following relation between the fifth elastic constant in the foregoing equation and the other elastic constants:

$\nu_{21} = \nu_{12}\,\frac{E_2}{E_1}$    (4.9.5)

The resultant laminate forces N and moments M referred to the unit


cross-section width of the laminate can be obtained by the integration of the
Eq. (4.9.4) and presented in the matrix form:
    
$\begin{Bmatrix} N \\ M \end{Bmatrix} = \begin{bmatrix} A & B \\ B & D \end{bmatrix} \begin{Bmatrix} \varepsilon_0 \\ \kappa_0 \end{Bmatrix}$    (4.9.6)

where A = [A_ij], B = [B_ij], D = [D_ij] are the in-plane, coupling and out-of-plane stiffness matrices, respectively; ε0 are the strains at the mid-plane; κ0 are the curvatures at the mid-plane.
In symmetrical laminates the coupling matrix B is a null matrix (B_ij = 0). As a result, there is no coupling between the membrane (in-plane) and bending states; the membrane state is fully described by the A matrix, while the D matrix fully describes the bending state. The foregoing equation takes the uncoupled form:

$\{N\} = [A]\{\varepsilon_0\},\qquad \{M\} = [D]\{\kappa_0\}$    (4.9.7)

Another important feature of the symmetrical laminates is that the resultant


thermal moments do not exist and there is no buckling tendency during the lami-
nating process.
The dynamical behaviour of the structure can be determined by the modal
analysis methods [70] which are useful for the diagnostics and optimization of the
structures. The modal model of the dynamic structure is an ordered set of eigen-
frequencies, damping coefficients and vibration forms. The modal analysis of the
structure is carried out in two ways:
• the theoretical one—typically by means of the finite-element method (FEM).
After such analysis it is possible to modify the considered structure to reduce the
propagation of vibrations,
• the experimental one—carried out on real structures to verify the numerical
results. It consists of the excitation of the structure and measurements of the
structure response, characterizing the dynamic behaviour of the structure.
The eigenvalue problem for a laminate plate of length a, width b and thickness
h in directions x, y and z, respectively can be presented in the form [1]:

$\rho h \omega^{2} w = D_{11} w_{xxxx} + 4 D_{16} w_{xxxy} + 2(D_{12} + 2 D_{66}) w_{xxyy} + 4 D_{26} w_{xyyy} + D_{22} w_{yyyy}$    (4.9.8)
where w is the deflection in the z direction; ω is the eigenfrequency; D_ij are the bending stiffnesses; ρ is the mass density.
The bending stiffness D_ij can be calculated as:

$D_{ij} = \int_{-h/2}^{h/2} z_{(k)}^{2}\, \bar{Q}_{ij}^{(k)}\, dz$    (4.9.9)

where z_(k) is the distance from the middle plane to the top of layer k and $\bar{Q}_{ij}^{(k)}$ is the plane stress reduced stiffness component of layer k.
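The sketch below (not the authors' code) evaluates the D_ij of Eq. (4.9.9) layer by layer in the usual classical-laminate-theory manner, with the transformed reduced stiffnesses of each ply computed from its angle. The material values in the example are the graphite-epoxy data quoted in Sect. 4.9.4.1; the stacking sequence and ply thickness are only illustrative.

import math

def reduced_stiffness(E1, E2, G12, v12):
    v21 = v12 * E2 / E1                        # Eq. (4.9.5)
    d = 1.0 - v12 * v21
    return E1 / d, E2 / d, v12 * E2 / d, G12   # Q11, Q22, Q12, Q66

def bending_stiffness(angles_deg, t_ply, E1, E2, G12, v12):
    """Return the bending stiffnesses D11, D12, D22, D16, D26, D66 as a dict."""
    Q11, Q22, Q12, Q66 = reduced_stiffness(E1, E2, G12, v12)
    h = len(angles_deg) * t_ply
    z = [-h / 2 + k * t_ply for k in range(len(angles_deg) + 1)]   # ply interfaces
    D = dict.fromkeys(("11", "12", "22", "16", "26", "66"), 0.0)
    for k, a in enumerate(angles_deg):
        c, s = math.cos(math.radians(a)), math.sin(math.radians(a))
        Qb = {
            "11": Q11*c**4 + 2*(Q12 + 2*Q66)*s**2*c**2 + Q22*s**4,
            "22": Q11*s**4 + 2*(Q12 + 2*Q66)*s**2*c**2 + Q22*c**4,
            "12": (Q11 + Q22 - 4*Q66)*s**2*c**2 + Q12*(s**4 + c**4),
            "66": (Q11 + Q22 - 2*Q12 - 2*Q66)*s**2*c**2 + Q66*(s**4 + c**4),
            "16": (Q11 - Q12 - 2*Q66)*s*c**3 + (Q12 - Q22 + 2*Q66)*s**3*c,
            "26": (Q11 - Q12 - 2*Q66)*s**3*c + (Q12 - Q22 + 2*Q66)*s*c**3,
        }
        for ij in D:
            D[ij] += Qb[ij] * (z[k + 1]**3 - z[k]**3) / 3.0
    return D

D = bending_stiffness([0, 15, -15, 45, -45, -45, 45, -15, 15, 0], 0.002,
                      E1=181e9, E2=10.3e9, G12=7.17e9, v12=0.28)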
If different materials are used for distinct layers, the laminate is called the hybrid
one. There exist different groups of hybrid laminates [96]: (i) interply hybrids with
layers made of different materials; (ii) intraply hybrids with at least two types of
reinforcement in the same layer; (iii) intermingled hybrids with constituent fibres
mixed as randomly as possible to avoid their concentrations; (iv) selective place-
ment hybrids with extra reinforcement placed in the critical regions; (v) superhy-
brids composed of metal foils or metal composite plies stacked in a specified
sequence and orientation.
The cost of laminates increases rapidly with their properties (e.g. strength). Consequently, it is sometimes advantageous to couple a very stiff and expensive material in the surface layers with a cheaper, lower-stiffness material in the core layers (interply hybrids). Such an approach has been applied in the present chapter to reduce the structure cost while ensuring a high performance of the laminate.

4.9.3 Formulation of the Optimization Task

The optimization task is generally formulated as the minimization (or maximiza-


tion) of the objective function J0 with respect to design variables vector x:

$\min_{\mathbf{x}} (J_0)$    (4.9.10)

with constraints:

$J_\alpha(\mathbf{x}) = 0,\quad \alpha = 1, 2, \ldots, m$
$J_\beta(\mathbf{x}) \le 0,\quad \beta = 1, 2, \ldots, n$    (4.9.11)
$x_i^{\max} \ge x_i \ge x_i^{\min},\quad i = 1, 2, \ldots, k$

where Jα, Jβ are constraint functionals; n, m, k are constants.


The aim is to find the optimal set of ply angles for structures made of multi-
layered, symmetrical laminates for given criteria. Simple and hybrid laminates are
considered. Two variants of the design variables are taken into account: (i) with
continuous design variables; (ii) with discrete design variables.

The following optimization criteria connected with laminates’ modal properties


are proposed [12]:
1. the maximization of the first eigenfrequency:

   $\arg\max \{\omega_1(\mathbf{x});\ \mathbf{x} \in D\}$    (4.9.12)

2. the maximization of the distance between two consecutive eigenfrequencies:

   $\arg\max \{\omega_i(\mathbf{x}) - \omega_{i-1}(\mathbf{x});\ \mathbf{x} \in D\}$    (4.9.13)

3. the maximization of the distance between the external excitation frequency ω_ex and the closest eigenfrequency ω_i_cl (illustrated in the sketch below):

   $\arg\max \{\,|\omega_{ex}(\mathbf{x}) - \omega_{i\_cl}(\mathbf{x})|;\ \mathbf{x} \in D\}$    (4.9.14)
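A minimal sketch (not the authors' code) of evaluating criterion (4.9.14) from the list of eigenfrequencies returned by the FEM modal analysis of one candidate design:

def distance_to_closest_eigenfrequency(eigenfrequencies, omega_ex):
    """Objective to be maximized: |omega_ex - omega_i_cl| for the closest mode."""
    omega_cl = min(eigenfrequencies, key=lambda w: abs(w - omega_ex))
    return abs(omega_ex - omega_cl)

# Example with the excitation frequency of 120 Hz used in Sect. 4.9.4.1
# (the list of modes below is purely illustrative).
value = distance_to_closest_eigenfrequency([99.7, 253.0, 310.4, 402.2, 515.8], 120.0)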

To solve the laminates’ optimization task the computational intelligence meth-


ods have been used:
• a distributed evolutionary algorithm (DEA) described in Sect. 3.3;
• a parallel artificial immune system (PAIS) presented in Sect. 3.6.
To calculate the objective function value for each candidate solution, the com-
mercial FEM software has been used in both cases.

4.9.4 Numerical Examples

4.9.4.1 Numerical Example 1: Evolutionary Optimization of Hybrid Laminates

A symmetric hybrid laminate plate made of two materials is considered. The


external plies of the laminate are made of material Me, the core of the plate is made
of the material Mi [13] (Fig. 4.90).
The properties of materials are:
• material Me (graphite-epoxy, T300/5280): E1 = 181 GPa, E2 = 10.3 GPa, G12 = 7.17 GPa, ν12 = 0.28, ρ = 1600 kg/m³;
• material Mi (glass-epoxy, Scotchply 1002): E1 = 38.6 GPa, E2 = 8.27 GPa, G12 = 4.14 GPa, ν12 = 0.26, ρ = 1800 kg/m³.
The aim of the optimization is to find the optimum ply angles of the hybrid
laminate for the given number and thicknesses of the laminates. It is assumed that
the number of laminates made of particular materials is constant.
The DEA is used to solve the optimization problem. Each population of the DEA
is divided into two subpopulations consisting of the same number of chromosomes
Fig. 4.90 The hybrid laminate plate: a dimensions and bearing; b location of materials (for the 10-plies case)

(individuals). Each chromosome is composed of genes representing ply angles. Due


to symmetry, the number of genes in each chromosome is equal to half the number of plies. The parameters of the DEA are:
• the number of subpopulations: Nsp = 2;
• chromosomes in each subpopulation: Ne = 20;
• termination condition: no. of generations (gn = 70);
• selection method: rank selection;
• simple crossover probability: pc = 0.9;
• uniform mutation probability: pmu = 0.1;
• Gaussian mutation probability: pmg = 1/(individual length).
Two cases are considered: with K1 = 10 and K2 = 20 plies. The thickness of the
plate is assumed to be constant and equal to h = 0.02 m. The thicknesses of parts
made of particular materials are also the same. Each ply of the laminate in case i has the same thickness h_i = h/K_i, i = 1, 2.
The initial (arbitrarily chosen) stacking sequences for the 10-plies and 20-plies variants are (0/15/-15/45/-45)s and (0/0/15/15/-15/-15/45/45/-45/-45)s, respectively.
Two optimization criteria: (4.9.13) and (4.9.14) are considered. Different vari-
ants are taken into account:
• each ply angle can take real values from the range 〈−90°, 90°〉 (continuous
variant);
• each ply angle can take discrete values from the range 〈−90°, 90°〉 varying every 5°, 15° and 45° (discrete variants; see the sketch below).
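For the discrete variants a chromosome can simply index a fixed set of admissible angles. The sketch below is not the authors' encoding, only an illustration of one possible mapping of integer genes to ply angles of the symmetric half of the laminate.

def admissible_angles(step_deg):
    """All angles in <-90°, 90°> varying every step_deg degrees."""
    return list(range(-90, 90 + step_deg, step_deg))

def decode(chromosome, step_deg):
    """Map integer genes to ply angles of the symmetric half of the laminate."""
    angles = admissible_angles(step_deg)
    return [angles[g % len(angles)] for g in chromosome]

half_stack = decode([7, 12, 3, 25, 0], step_deg=5)   # 10-ply symmetric laminate
full_stack = half_stack + half_stack[::-1]           # symmetry: theta_i = theta_(K+1-i)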
The results for the maximization of the distance between first and second
eigenfrequencies are collected in Table 4.42.
In the considered case the best optimization results have been obtained for
continuous and 5° variants (with the largest searching space). Results for 20-plies
case are significantly better for each variant.
Table 4.42 Numerical Example 1: optimization results for the ω2 − ω1 maximization

Variant      Plies no.   Stacking sequence                                            ω2 − ω1 (Hz)
Initial      10          (0/15/-15/45/-45)s                                           153.256
             20          (0/0/15/15/-15/-15/45/45/-45/-45)s                           153.256
Continuous   10          (35.9/-31.5/-32.1/-32.2/32.0)s                               282.129
             20          (34.1/-37.1/-31.4/-19.5/31.9/27.7/58.1/-34.2/-22.9/-61.1)s   341.375
5°           10          (35/-35/30/-30/-30)s                                         281.820
             20          (35/-35/-30/25/30/35/-30/55/35/30)s                          341.597
15°          10          (30/-45/-30/-30/30)s                                         275.589
             20          (30/-30/45/-45/45/-45/-45/45/-45/-45)s                       326.971
45°          10          (45/-45/0/0/0)s                                              258.414
             20          (45/-45/0/0/0/0/0/0/0/0)s                                    318.208

Table 4.43 Numerical Example 1: optimization results for the |ωex − ωcl| maximization

Variant      Plies no.   Stacking sequence                                        |ωex − ωcl| (Hz)   ωcl (Hz)
Initial      10          (0/15/-15/45/-45)s                                       20.268             99.732
             20          (0/0/15/15/-15/-15/45/45/-45/-45)s                       20.268             99.732
Continuous   10          (76.9/88.9/61.1/6.1/61.3)s                               86.874             33.126
             20          (80.4/-76.3/61.9/87.5/-48.1/71.6/12.1/53.8/85.9/45.6)s   86.873             33.127
5°           10          (90/60/-45/50/90)s                                       86.845             33.155
             20          (-80/90/65/55/65/25/-65/-85/85/15)s                      86.880             206.880
15°          10          (-75/90/-60/15/-15)s                                     86.739             33.261
             20          (90/75/45/90/60/-60/-45/90/-30/90)s                      86.813             33.639
45°          10          (90/-45/90/45/90)s                                       86.362             33.638
             20          (90/90/90/45/45/45/90/90/90/90)s                         86.802             33.198

The optimization results for the maximization of the distance between the external excitation frequency and the closest eigenfrequency are gathered in Table 4.43. It is assumed that the external excitation frequency ω_ex is equal to 120 Hz. The first five eigenfrequencies of the laminate plate are considered.
Similar optimization results have been obtained for all cases and variants, but it can be observed that the results for the 20-plies case are slightly better. This can be explained by the larger number of design variables, which gives more possibilities of different stacking sequences. Results obtained for the continuous and 5° variants are typically better than the results achieved for the remaining variants.

4.9.4.2 Numerical Example 2: Immune Optimization

A box-beam with varying cross-section is considered (Fig. 4.91). The wider end of
the structure is fixed. All four walls of the structure are made of the same hybrid,
symmetric laminate with the same stacking sequence. External laminates are made
of graphite-epoxy material Me, while internal layers are built of glass-epoxy
material Mi [14].
The thickness of each ply h_i is constant and equal to 0.2·10⁻³ m.
The properties of materials are:
• material Me (graphite-epoxy): E1 = 141.5 GPa, E2 = 9.80 GPa, G12 = 5.90 GPa, ν12 = 0.42, ρ = 1445.5 kg/m³;
• material Mi (glass-epoxy): E1 = 38.6 GPa, E2 = 8.27 GPa, G12 = 4.14 GPa, ν12 = 0.26, ρ = 1800 kg/m³.
It is assumed that the number of plies on each wall is equal to 14, but the external ply angle is preset to 0. As a result, the number of design variables is equal to 6 (due to the symmetry). The stacking sequence for each wall can be presented as (0/θ1/θ2/θ3/θ4/θ5/θ6)s, where the subscripts denote the design variable number, whilst the superscripts refer to the materials (e—external, i—internal). It is assumed that:
• each ply angle can take real values from the range 〈−90°, 90°〉 (continuous
variant);
• each ply angle can take discrete values from the range 〈−90°, 90°〉 varying
every 5°, 15° and 45° (discrete variants).
The aim of the optimization is to find an optimal stacking sequence to maximize the fundamental eigenfrequency ω1 of the structure. The PAIS is employed to solve
the optimization problem.
The parameters of the PAIS are:
• the number of memory cells nmc = 5;
• the number of clones ncl = 20;
• termination condition: no. of iterations (in = 30);

Fig. 4.91 The box-beam—dimensions and bearing

Table 4.44 Numerical Example 2: optimization results for the maximization of ω1

Variant   ω1 best (Hz)   ω1 worst (Hz)   ω1 average (Hz)   Std. dev.   Stacking sequence for the best solution
Cont.     900.125        893.649         898.790           1.444       (0/89.9/90/89.5/69.7/-44.4/38.5)s
5°        899.973        896.879         898.663           0.902       (0/90/90/85/-70/45/40)s
15°       898.764        867.803         893.512           10.600      (0/90/90/75/75/-45/45)s
45°       894.589        656.044         855.674           91.896      (0/90/90/90/45/-45/-45)s

• the number of design variables nv = 6;


• the minimum crowding distance cdist = 0.2;
• the mutation range mr = 0.5.
To obtain some statistical data, the calculations were repeated 30 times for each
ply angles variant. The results of the optimization are collected in Table 4.44.
The best optimization results were obtained for variants with wide search space
(continuous and 5° variants). Slightly better optimization results have been attained
for 20-plies case. It can also be observed that the repetitiveness of the algorithm
expressed by the standard deviation is much greater for continuous and 5° variants.

4.9.5 Concluding Remarks

The laminate plates’ optimization problem has been presented. Simple and hybrid
(with plies made of different materials) laminates have been taken into account.
Different optimization criteria related to the modal properties of optimized struc-
tures have been considered. To solve this task computational intelligence methods
(evolutionary algorithm, artificial immune system) coupled with the commercial
FEM software have been employed. The continuous as well as discrete optimiza-
tion has been performed. The proposed optimization method gave positive results in
all presented cases.

4.10 Multiobjective Optimization in Coupled Problems

4.10.1 Introduction

In many real-world engineering problems several goals must be satisfied simulta-


neously in order to obtain an optimal solution. In the first phase of the design
process the set of objectives is unclear and the designer has to define them as
precisely as possible. Moreover, for the multiobjective optimization the goals are
usually in conflict with each other. For example, the volume of the heat exchanger
should be minimized while the total dissipated heat flux should be maximized or the
maximal value of the equivalent stress minimized. The common approach
to this sort of problem is to choose one objective (e.g. the volume of the structure)
and incorporate the other objectives as constraints, or to use the weighting method.
Such an approach does not require modification of the core of the algorithm, but it has
some disadvantages (see Sect. 4.1).
The evolutionary algorithms using the pareto approach are more convenient for
solving such problems. A single run of such an algorithm gives the designer a set of
pareto-optimal solutions. In this chapter different algorithms are used to solve
multiobjective problems (MOEA, MOOPTIM, NSGA-II). Details of the algorithms are
described in Sect. 4.1.
Coupled field problems occur when two or more physical systems interact with
each other, with the independent solution of any one system being impossible
without simultaneous solutions of the others. Definition of the coupled systems can
be formulated as [126]:
Coupled systems and formulations are those applicable to multiple domains and dependent
variables which usually (but not always) describe different physical phenomena and in
which
• neither domain can be solved while separated from the other;
• neither set of dependent variables can be explicitly eliminated at the differential
equation level.

The coupling between the systems may be considered as weak or strong.


The first class couples different problems via boundary conditions imposed on
the interface or can be solved by transferring loads (e.g. fluid-structure interaction,
uncoupled thermo-elasticity, etc.).
For the strong-coupled systems, problems overlap totally or partially and cou-
pling occurs through the governing differential equations describing different
physical phenomena (e.g. piezoelectricity) [9].
Three different couplings between mechanical, thermal and electrical fields are
considered (Fig. 4.92):

• thermo-elasticity (M-T),
• thermo-electro-mechanical coupling (M-T-E),
• piezoelectricity (M-E).

Fig. 4.92 Considered coupling between mechanical, thermal and electrical fields
To solve the considered coupled-field problems, the following commercial FEM
codes are used: MSC.Mentat/Marc and Ansys Multiphysics. Thermo-elasticity and
thermo-electro-mechanical problems are solved as a weakly coupled analysis,
whereas piezoelectricity is solved as a strongly coupled one.

4.10.2 Objective Functions for the Multiobjective Evolutionary Optimization

Generally for the considered problems, the definition of the objective functions
(functionals) may use results from each of the physical problems and additional
functionals may be defined as a volume or costs, and so on (see Sect. 4.6). The
multiobjective optimization tasks are solved for the functionals defined as:
• the minimum volume of the structure:

  min_X V(X)    (4.10.1)

• the minimization of the maximal value of the equivalent stress:

  min_X σ_eq^max(X)    (4.10.2)

• the minimization of the maximal value of the temperature in the structure:

  min_X T^max(X)    (4.10.3)

• the maximization of the deflection of the structure:

  max_X U(X)    (4.10.4)

• the maximization of the total dissipated heat flux:

  max_X q(X)    (4.10.5)
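Because these criteria conflict, candidate designs are compared by pareto dominance rather than by a single scalar value. The following sketch is illustrative only (maximized quantities such as (4.10.4)–(4.10.5) are assumed to be negated so that every objective is minimized); it shows the dominance test and the extraction of the non-dominated set that pareto-based algorithms such as MOEA, MOOPTIM and NSGA-II rely on:

# Sketch: pareto dominance and non-dominated filtering for objective vectors.
# All objectives are assumed to be minimized (negate the maximized ones first).

def dominates(f_a, f_b):
    """True if f_a dominates f_b: no worse in all objectives, strictly better in one."""
    return all(a <= b for a, b in zip(f_a, f_b)) and any(a < b for a, b in zip(f_a, f_b))

def nondominated(front):
    """Return the pareto-optimal subset of a list of objective vectors."""
    return [f for f in front
            if not any(dominates(g, f) for g in front if g is not f)]

# e.g. (volume, max equivalent stress) pairs for four candidate designs
designs = [(1.0, 220.0), (1.2, 180.0), (0.9, 260.0), (1.3, 185.0)]
print(nondominated(designs))   # (1.3, 185.0) is dominated by (1.2, 180.0)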

4.10.3 Numerical Examples

Example 1: Shape optimization of the supporting structure under thermo-mechanical loading
A structure under thermo-mechanical loading is considered. The shape of the
structure, boundary conditions and method of modelling optimized part of the
boundary are presented in Fig. 4.93. The multiobjective problem concerns deter-
mining the shape of the structure which minimizes both the volume of the structure
(4.10.1) and the maximal value of the equivalent stress (4.10.2).
In order to minimize the number of design parameters, the optimized boundary is
modelled with the use of a four-point Bézier curve. Geometrical constraints are
imposed on the positions of the control points.
The number of design parameters is equal to six. The structure is made of steel
and it is modelled as a 2D plane stress shell.
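A cubic (four-point) Bézier segment can be evaluated directly from its Bernstein form; the sketch below (with illustrative control-point coordinates, not the values of this example) shows how a candidate set of control points is turned into boundary points for remeshing:

# Sketch: sample points on a cubic Bezier boundary segment defined by 4 control
# points. Coordinates are illustrative; in the example some of them are the
# design variables Z1..Z6.

def bezier4(p0, p1, p2, p3, n=20):
    """Return n+1 points of the cubic Bezier curve with control points p0..p3."""
    pts = []
    for i in range(n + 1):
        t = i / n
        b = [(1 - t) ** 3, 3 * t * (1 - t) ** 2, 3 * t ** 2 * (1 - t), t ** 3]
        x = sum(w * p[0] for w, p in zip(b, (p0, p1, p2, p3)))
        y = sum(w * p[1] for w, p in zip(b, (p0, p1, p2, p3)))
        pts.append((x, y))
    return pts

boundary = bezier4((0.0, 0.03), (0.02, 0.08), (0.07, 0.05), (0.1, 0.04))
print(boundary[0], boundary[-1])   # curve starts and ends at p0 and p3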
Tables 4.45, 4.46 and 4.47 contain admissible ranges of the design parameters
and values of the boundary conditions and material properties. The boundary-value
problem is solved with the use of the MSC.Mentat/Marc software. Several
numerical tests have been performed using the NSGA-II algorithm. The set of
pareto-optimal solutions with examples of the obtained shapes of the structure is
shown in Fig. 4.94 [61].

Fig. 4.93 Geometry of the structure and boundary conditions

Table 4.45 Ranges of design variables

Design variable              Range
Z1—y coord. of A             〈0.005; 0.07〉
Z2, Z3—x, y coord. of B      〈−0.025; 0.1〉, 〈0; 0.09〉
Z4, Z5—x, y coord. of C      〈−0.025; 0.1〉, 〈0.11; 0.02〉
Z6—y coord. of D             〈0.005; 0.07〉

Table 4.46 Values of the boundary conditions

Boundary cond.      Value
Heat flux q0        1000 (W/m)
Core temp. T_ot     25 (°C)
Conv. coeff. α      10 (W/(m² °C))
Force P             500 (N)

Table 4.47 Values of the material properties

Material prop.          Value
Young's modulus         210 (GPa)
Poisson's ratio         0.3
Thermal cond.           30 (W/(m °C))
Thermal exp. coeff.     12.5e−6 (1/°C)

Fig. 4.94 Results of the multiobjective optimization



Fig. 4.95 a The design variables, b the geometry and the boundary conditions

Table 4.48 The admissible values of the design parameters

Design variable    Min value (m)   Max value (m)
Z1, Z2, Z3, Z4     0.01            0.05
Z5                 0.0025          0.006
Z6                 0.0025          0.008

Example 2: Shape optimization of the heat exchanger

Consider a radiator whose cross-section is shown in Fig. 4.95. The structure is
made of copper with the following material properties: Young's modulus E =
110,000 MPa, Poisson's ratio ν = 0.35, thermal expansion coefficient α = 16.5 ×
10⁻⁶ 1/K and thermal conductivity λ = 380 W/mK.


Six design variables are assumed: the length of each fin (Z1–Z4), the width of
the fins (the same for all fins—Z5) and the thickness Z6. The geometry of the radiator is
symmetric. The total width of the radiator is equal to 0.1 m. Table 4.48 contains
the limitations of the design variables. Figure 4.95b shows the thermo-mechanical
boundary conditions. A force P = 10 N is applied on each fin. The temperature T0,
the ambient temperature Tot and the heat convection coefficient α are equal to 100 °C,
25 °C and 20 W/(m²K), respectively. The multiobjective problem is to determine the
specific dimensions of the structure which minimize the set of proposed functionals
(4.10.1), (4.10.2) and (4.10.5).
Several numerical experiments were performed using the MOEA algorithm. The set
of pareto-optimal solutions with an example of the obtained shape for the mini-
mization of both the volume of the radiator (f1) and the maximal value of the
equivalent stresses (f2) is presented in Fig. 4.96. Figure 4.97 contains the results for
the simultaneous maximization of the total dissipated heat flux and the minimiza-
tion of the equivalent stresses (f2). The set of pareto solutions obtained for three
proposed criteria (f1—volume, f2—equivalent stress, f3—heat flux) is presented in
Fig. 4.98 [59].

Fig. 4.96 The set of pareto-optimal solutions

Fig. 4.97 The set of pareto-optimal solutions



Fig. 4.98 The set of pareto-optimal solutions for three criteria

Fig. 4.99 Geometry of the thermal actuator

Example 3: Shape optimization of the thermal actuator

The U-shaped MEMS shown in Fig. 4.99 is considered. The structure is modelled
as a microelectrothermal actuator, fabricated from polycrystalline silicon whose
material properties are shown in Table 4.49.
The deflection of the actuator can be produced when the electrical potential
difference is applied across the two electrical pads. It is possible due to material
properties—high electrical resistivity and different thermal expansion between thin
and wide arms. The device is subjected to the electrical, thermal and mechanical
boundary conditions. Electrical-thermal-mechanical analysis is performed with the
use of MSC.Mentat/Marc software.

Table 4.49 Material properties

Parameter                   Value
Young's modulus             158e3 MPa
Poisson's ratio             0.23
Thermal expansion coeff.    3.0e−6 1/K
Thermal conductivity        140e8 pW/(µm K)
Resistivity                 3.3e−11 TΩ µm

Table 4.50 Limitations for the design variables

Parameter     Value (µm)
Z1, Z2, Z3    〈1.0; 3.0〉
Z4            〈12.0; 18.0〉
Z5            〈30.0; 100.0〉
Z6            〈2.0; 8.0〉

The length of the actuator is 260 microns, and the electrical pads are 20 × 20 microns. The
multiobjective problem concerns determining the specific dimensions of the shape of
the actuator which minimize or maximize the functionals (4.10.1), (4.10.2), (4.10.4). Six
design variables are assumed (Fig. 4.99). Table 4.50 contains limitations for the design
variables, whereas Table 4.51 contains the parameters of the multiobjective evolutionary
algorithm. For the multiobjective optimization task the NSGA-II algorithm is used.
The set of pareto-optimal solutions with examples of the obtained shape are
shown in Figs. 4.100 and 4.101 [57, 62, 63].
Example 4: Shape optimization of the thermal actuator
The same thermal microactuator as in the previous example is considered. In this
numerical example, besides the geometrical features (Z1–Z6), the electric potential on
the electrical pads is an additional design variable (Z7). Hence the total number of design
variables is equal to seven.
The multiobjective problems are solved simultaneously for two functionals,
(4.10.3) and (4.10.4), and for three functionals, (4.10.1), (4.10.3) and (4.10.4). For
three functionals the population size is enlarged to 60. Values of the rest of the
parameters are the same as in the previous example. The sets of the pareto-optimal
solutions with examples of the obtained shapes are shown in Figs. 4.102 and 4.103.

Table 4.51 Parameters of the multiobjective evolutionary algorithm NSGA-II

Parameter of NSGA-II      Value
# of design variables     7
# of objectives           2
# of constraints          7
Population size           30
Maximum generations       100
Crossover probability     0.9
Mutation probability      0.1
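The settings in Table 4.51 map directly onto a generic NSGA-II driver. A minimal sketch is given below, assuming the open-source pymoo library (version 0.6 or later); the coupled FEM analysis of the actuator is replaced by a placeholder, and the bound on the potential Z7 is an illustrative assumption:

# Sketch: NSGA-II run with the population size and generation count of Table 4.51.
# The electro-thermo-mechanical FEM evaluation is only a placeholder here.
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class ActuatorProblem(ElementwiseProblem):
    def __init__(self):
        # 7 design variables (Z1..Z6 geometry + Z7 electric potential), 2 objectives
        super().__init__(n_var=7, n_obj=2,
                         xl=np.array([1, 1, 1, 12, 30, 2, 1.0]),
                         xu=np.array([3, 3, 3, 18, 100, 8, 10.0]))

    def _evaluate(self, x, out, *args, **kwargs):
        # placeholder: a real driver would call the FEM solver for this design
        volume, deflection = float(np.sum(x[:6])), float(x[6] * x[4])
        out["F"] = [volume, -deflection]          # both objectives minimized

res = minimize(ActuatorProblem(), NSGA2(pop_size=30), ("n_gen", 100),
               seed=1, verbose=False)
print(res.F[:3])   # a few points of the obtained pareto front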

Fig. 4.100 The set of pareto-optimal solutions for minimization of the volume and von Mises
stress

Fig. 4.101 The set of pareto-optimal solutions for minimization of the volume and maximization
of the deflection of the actuator

Fig. 4.102 The set of pareto-optimal solutions for minimization of the maximal temperature and
maximization of the deflection of the actuator

Fig. 4.103 The set of pareto-optimal solutions for three functionals (4.10.1), (4.10.3) and (4.10.4)

Fig. 4.104 The optimized piezoelectric actuator

Example 5: Shape optimization of the piezoelectric actuator

An L-shaped piezoelectric structure is considered (Fig. 4.104). The length of the
structure is 10 mm, whereas the thickness of the thin arm is equal to 1 mm. The left
side of the structure (segment AF) is clamped. Electric potentials of −1000 V and
1000 V are applied on the segments AF and CD, respectively.
Four design variables are assumed: the vertical coordinate of point A (range 〈0–6.0〉), the vertical
and horizontal coordinates of point B (ranges 〈1.0–6.0〉 and 〈1.0–5.0〉) and the horizontal
coordinate of point C (range 〈5.0–9.0〉).
The PZT-5 ceramic material is applied. The multiobjective problem concerns
determining the particular dimensions of the structure considering different pairs of
the proposed functionals (4.10.1), (4.10.2), (4.10.4). Linear piezoelectricity is solved
by using FEM software—Ansys Multiphysics.
MOOPTIM and NSGA-II are used for the optimization tasks. The size of the pop-
ulation and the number of iterations are 50 for both algorithms. For NSGA-II
the crossover probability is set to 0.9 and the mutation probability to 0.1, as suggested
in the papers [53, 54]. The probabilities of arithmetic crossover, simple crossover
and uniform mutation were set to 0.1 for MOOPTIM. The probability of Gaussian
mutation is 0.7 and the range of Gaussian mutation is 0.2.
Figures 4.105 and 4.107 show the results of optimization, whereas Fig. 4.106
shows the geometry and stress distribution of the three regions indicated in
Fig. 4.105 [57, 65].

Fig. 4.105 Pareto-optimal solutions for volume minimization and equivalent stress minimization

Fig. 4.106 Geometry and stress distribution for the three indicated regions

Fig. 4.107 Pareto-optimal solutions for equivalent stress minimization and deflection maximization

4.10.4 Concluding Remarks

Multiobjective shape optimization for different coupled problems has been pre-
sented. The proposed method gives the designer the set of optimal solutions based
on more than one criterion. The application of the FEM software requires evalu-
ation in several steps for each single solution (modification of the geometry, cre-
ating finite element mesh, etc.). It can be a very time-consuming task, especially for
more complicated geometry. Solution of the coupled problems such as
thermo-elasticity, electro-thermo-mechanical or piezoelectric analysis is more
time-consuming compared to the single-field problems. In order to reduce the time
of the computation, parallelization of the fitness function evaluation may be
introduced.
The application of MOOPTIM to real-world engineering problems, such
as the multiobjective optimization of MEMS structures, shows its usefulness. The
results obtained using MOOPTIM are slightly better compared to the results
obtained using NSGA-II. For these problems, apart from the convergence, it is
especially the distribution of the pareto-optimal solutions that is more extensive.

References

1. Adali S, Verijenko VE (2001) Optimum stacking sequence design of symmetric hybrid
laminates undergoing free vibrations. Compos Struct 54:131–138
2. Adeli H, Kumar S (1995) Distributed genetic algorithm for structural optimization. J Aerosp
Eng 8(3):156–163
3. Allaire G, de Gournay F, Jouve F, Toader AM (2004) Structural optimization using
topological and shape sensitivity via a level set method. Internal report, 555, CMAP, Ecole
Polytechnique
4. Anagnostou G, Rønquist E, Patera A (1992) A computational procedure for part design.
Comput Methods Appl Mech Eng 97:33–48
5. Andersson J (2000) A survey of multiobjective optimization in engineering design.
Technical Report: LiTH-IKP-R-1097
6. Anderson TL (2004) Fracture mechanics: fundamentals and applications. CRC Press
7. António CAC, Douardo NM (2002) Metal-forming optimization by inverse evolutionary
search. J Mater Process Technol 121:403–413
8. Banerjee PK (1994) The boundary element method in engineering. McGraw-Hill Book
Company, London
9. Beer G (1983) Finite element, boundary element and coupled analysis of unbounded
problems in elastostastics. Int J Numer Meth Eng 19:567–580
10. Belegundu AD, Chandrupatla TR (2011) Optimization concepts and applications in
engineering, 2nd ed. Cambridge University Press
11. Beluch W (2005) Evolutionary shape optimization in fracture problems. CAMES 12:111–
121
12. Beluch W, Burczyński T (2005) Evolutionary optimization of hybrid laminates. Recent Dev
Artif Intell Methods 21–24, Gliwice
13. Beluch W, Burczyński T, Kuś W (2006) Evolutionary optimization and identification of
hybrid laminates. In: Evolutionary computation and global optimization 2006, Oficyna Wyd.
Pol. Warszawskiej 156:39–48
14. Beluch W, Burczyński T, Kuś W (2010) Parallel artificial immune system in optimization
and identification of composite structures. In: Proceedings of parallel problem solving from
nature—PPSN XI, Part 2. Lecture notes on computational sciences, vol 6239.
Springer-Verlag, Berlin, pp 171–180
15. Bendsøe MP (1989) Optimal shape design as a material distribution problem. Struct Optim
1:193–202
16. Bendsøe MP, Kikuchi N (1988) Generating optimal topologies in structural design using a
homogenization method. Comput Methods Appl Mech Eng 71:197–224
17. Bendsøe MP, Sigmund O (1999) Material interpolation schemes in topology optimisation.
Arch Appl Mech 69:635–654
18. Bendsøe MP, Sigmund O (2003) Topology optimisation—theory, methods and applications.
Springer-Verlag, Berlin
19. Bendsøe MP, Sokołowski J (1988) Design sensitivity analysis of elastic-plastic problems.
Mech Struct Mach 16(1):81–102
20. Białecki RA, Burczyński T, Długosz A, Kuś W, Ostrowski Z (2005) Evolutionary shape
optimization of thermolastic bodies exchanging heat by convection and radiation. Comput
Methods Appl Mech Eng (Elsevier) 194:1839–1859
21. Bojczuk D, Szteleblak W (2008) Optimization of layout and shape of stiffeners in 2D
structures. Comput Struct 86:1436–1446
22. Bonnans J-F, Gilbert JC, Lemarechal C, Sagastizábal CA (2006) Numerical optimization,
2nd ed. Springer-Verlag
23. Botelho EC, Silva RA, Pardini LC, Rezende MC (2006) A review on the development and
properties of continuous fiber/epoxy/aluminum hybrid composites for aircraft structures.
Mater Res 9(3):247–256
24. Badrinarayanan S (1997) Preform and die design problems in metalforming. PhD thesis,
Cornell University
25. Brebbia CA, Dominiguez J (1989) Boundary elements. An introductory course. Comput
Mech
26. Brebbia CA, Telles JCF, Wrobel LC (1984) Boundary element techniques. Springer-Verlag,
Berlin
27. Burczyński T (1995) The method of boundary elements in mechanics. WNT, Warsaw (in
Polish)
28. Burczyński T, Beluch W, Długosz A, Nowakowski M, Orantek P (2000) Coupling of the
boundary element method and evolutionary algorithms and optimization and identification
problems. In: Proceedings of the ECCOMAS 2000 European congress on computational
methods in applied sciences and engineering, Barcelona
29. Burczyński T, Długosz A (2002) Evolutionary optimization in thermoelastic problems using
the boundary element method. Comput Mech (Springer) 28(3–4):317–324
30. Burczyński T, Długosz A, Kuś W (2005) Evolutionary computation in shape optimization of
heat radiators. Numer Heat Trans NHT-2005, EUROTHERM 82
31. Burczyński T, Kokot G (1998) Topology optimisation using boundary elements and genetic
algorithms. In: Idelsohn SR, Ońate E, Dvorkin EN (eds) Proceedings of fourth congress on
computational mechanics, new trends and applications. Barcelona, CD-ROM
32. Burczyński T, Kuś W (2001) Shape optimization of elasto-plastic structures using
distributed evolutionary algorithms. In: Proceedings of the 2nd European conference on
computational mechanics, ECCM 2001, Kraków
33. Burczyński T, Kuś W (2001) Distributed evolutionary algorithms in shape optimization of
nonlinear structures. In: Proceedings of the parallel processing and applied mathematics,
PPAM 2001, Nałęczów
34. Burczyński T, Kuś W (2002) Shape optimization of elasto-plastic structures using coupled
BEM-FEM approach and distributed evolutionary algorithm. In: Proceedings of the world
conference on computational mechanics, WCCM 2002, Wien
35. Burczyński T, Kuś W (2002) Distributed evolutionary algorithm—tests and applications.
AI-METH 2002, Gliwice
36. Burczyński T, Kuś W (2003) Distributed and parallel evolutionary algorithms in
optimization of nonlinear structures. In: Proceedings of the 15th international conference
on computer methods in mechanics CMM-2003, Wisła
37. Burczyński T, Kuś W (2003) Distributed evolutionary algorithms with
master-co-evolutionary algorithm and slaves—fitness function evaluators. In: Proceedings
of the VI Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna KAEiOG
2003, 26–28 maja, Łagów
38. Burczyński T, Kuś W (2003) Optimization of structures using distributed and parallel
evolutionary algorithms. In: Proceedings of the PPAM 2003, parallel processing and applied
mathematics, Częstochowa
39. Burczyński T, Kuś W (2003) Improvements of distributed evolutionary algorithm in
optimization of nonlinear structures. In: Symposium on methods of artificial intelligence
AI-METH 2003, Gliwice
40. Burczyński T, Kuś W (2004) Optimization of structures using distributed and parallel
evolutionary algorithms. In: Parallel processing and applied mathematics, PPAM2003,
Revised papers. Lecture notes on computational sciences, vol 3019. Springer, pp 572–579
41. Burczyński T, Kuś W, Długosz A, Orantek P (2004) Optimization and identification using
distributed evolutionary algorithms. Eng Appl Artif Intell 17:337–344
42. Burczyński T, Kuś W, Długosz A, Poteralski A, Szczepanik M (2004) Sequential and
distributed evolutionary computations in structural optimization. In: ICAISC international
conference on artificial intelligence and soft computing. Lecture notes on artificial
intelligence, vol 3070. Springer
43. Burczyński T, Poteralski A (2004) Advanced evolutionary optimization of 3-D structures.
In: 4th European congress on computational methods in applied sciences and engineering,
Jyvaskyla, Finland, full paper, CD-ROM
44. Burczyński T, Szczepanik M, Kuś W (2005) Evolutionary optimization of stiffeners
locations in 2-D structures. In: Proceedings of the CMM 2005 16th international conference
on computer methods in mechanics, Wisła
45. Burczynski T, Poteralski A, Szczepanik M (2007) Evolutionary computing in topology
optimization. In: The eleventh international conference on civil, structural and environmental
engineering computing, Abstract, St. Julians, Malta
46. Burczyński T, Górski R, Poteralski A, Szczepanik M (2011) Soft computing in structural
dynamics. In: III ECCOMAS thematic conference on computational methods in structural
dynamics and earthquake engineering, COMPDYN 2011, Corfu, Greece
47. Carter J, Booker J (1989) Finite element analysis of coupled thermoelasticity. Comput Struct
31(1):73–80
48. de Castro LN, Timmis J (2002) Artificial immune systems as a novel soft computing
paradigm. Soft Comput 7(8):526–544
49. Chapman C, Saitou K, Jakiela M (1995) Genetic algorithms as an approach to configuration
and topology design. ASME J Mech Des 45
50. Cheng KT, Olhoff N (1981) An investigation concerning optimal design of solid elastic
plates. Int J Solids Struct 17:305–323
51. Coello CA (1999) A comprehensive survey of evolutionary-based multiobjective optimiza-
tion techniques. Knowl Inf Syst 1(3):129–156
52. Dasgupta D, Forrest S (1999) Artificial immune systems in industrial applications. In:
Proceedings of the international conference on intelligent processing and manufacturing
material (IPMM). Honolulu, HI
53. Deb K (2007) Evolutionary multi-objective optimization without additional parameters. In:
Kacprzyk J (ed) Studies in computational intelligence, vol 54. Springer-Verlag, Berlin,
pp 241–257
54. Deb K, Agrawal S, Pratap A, Meyarivan T (2002) A fast elitist multiobjective genetic
algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197
55. Diaz AR, Kikuchi N (1992) Solution to shape and topology eigenvalue optimization
problems using a homogenization method. Int J Numer Methods Eng 35:1487–1502
56. Ding X, Yamazaki K (2004) Stiffener layout design for plate structures by growing and
branching tree model (application to vibration-proof design). Struct Multidiscip Optim
96:99–110
57. Długosz A (2010) Multiobjective evolutionary optimization of MEMS structures. Comput
Assist Mech Eng Sci 17(1):41–50
58. Długosz A, Burczyński T (2007) Multiobjective optimization of heat radiators using
evolutionary algorithms. In: Arabas J (ed) Computation and global optimization 2007,
Oficyna Wyd. Pol. Warszawskiej, Prace Naukowe, Elektronika
59. Długosz A, Burczyński T (2007) Shape optimization of thermomechanical bodies using
multiobjective evolutionary algorithms. In: Proceedings of the 8th world congress on
structural and multidisciplinary optimization (WCSMO-8), Lisbon, Portugal
60. Długosz A, Burczyński T (2008) Multicriteria shape optimization of thermoelastic structures
using evolutionary algorithms. In: Proceedings of the 8th world congress on computational
mechanics (WCCM8) 5th. European congress on computational methods in applied sciences
and engineering (ECCOMAS 2008), Venice, Italy
61. Długosz A, Burczyński T (2009) Multiobjective evolutionary optimization of structures
under thermomechanical loading. In: Proceedings of the 18th international conference on
computer methods in mechanics CMM-2009, Zielona Góra, Poland, pp 161–162
62. Długosz A, Burczyński T (2010) Multi-objective design optimization of MEMS devices. In:
Proceedings of the 9th world congress on computational mechanics and 4th asian pacific
congress on computational mechanic (WCCM/APCOM 2010), Sydney, Australia
63. Długosz A, Burczyński T (2011) Shape optimization of electro-thermal-mechanical systems
by using multiobjective evolutionary algorithms. In: Burczyński T, Périaux J
(eds) Evolutionary and deterministic methods for design, optimization and control.
Applications to industrial and societal problems, CIMNE, Barcelona, pp 162–169
64. Długosz A, Burczyński T, Kuś W (2007) Application of multiobjective evolutionary
algorithm in shape optimization of heat exchangers. In: Proceedings of the 17-th
international conference computer methods in mechanics, Lodz-Spala, Poland
65. Dominguez J (1993) Boundary elements in dynamics. Computational Mechanics
Publications, Elsevier Applied Science, Southampton-Boston, London-New York
66. Dziatkiewicz G, Długosz A, Burczyński T (2010) Application of multi-objective
evolutionary algorithms in optimization of piezoelectric models. Book of abstracts. In: IV
European congress on computational mechanics (ECCM IV), Paris, France
67. Eschenauer HA, Schumacher A (1995) Simultaneous shape and topology optimisation of
structures. In: Olhoff N, Rozvany GIN (eds) Proceedings of first world congress of structural
and multidisciplinary optimisation. Pergamon, Oxford, pp 177–184
68. Fedelinski P, Gorski R (2006) Analysis and optimization of dynamically loaded reinforced
plates by the coupled boundary and finite element method. Comput Model Eng Sci 15
(1):31–40
69. Fonseca CM, Fleming PJ (1995) An overview of evolutionary algorithms in multiobjective
optimization. Evol Comput 3(1):1–16
70. Fu ZF, He J (2001) Modal analysis. Butterworth-Heinemann
71. Gani L, Rajan SD (1999) Use of fracture mechanics and shape optimization for component
design. AIAA J 37(2):255–260
72. Grosset RL, Venkataraman S, Haftka R (2001) Genetic optimization of two-material
composite laminates. In: Proceedings of the 16th ASC technical meeting, Blacksburg, VA
73. Gurdal Z, Haftka RT, Hajela P (1999) Design and optimization of laminated composite
materials. Wiley
74. Güler O, Foundation of Optimization (2010) Series: graduate texts in mathematics, vol 258.
Springer
75. Huang X, Xie M (2010) Evolutionary topology optimization of continuum structures:
methods and applications. Wiley
76. Kennedy J, Eberhart RC (2001) Swarm intelligence. Morgamn Kauffman
77. Kenneth FA (1997) Efficient computation of eigenvector sensitivities for structural
dynamics. AIAA 35:1760–1766
78. Kirkpatrick S, Gelatt C, Vecchi M (1983) Optimisation by simulated annealing. Science
220:671–680
79. Kleiber M (ed) (1998) Handbook of computational solid mechanics: survey and comparison
of contemporary methods. Springer
80. Krog LA, Olhoff N (1999) Optimum topology and reinforcement design of disk and plate
structures with multiple stiffness and eigenfrequency objectives. Comput Struct 72:535–563
81. Kuś W (2002) Coupled boundary and finite element methods in optimization of mechanical
structures. PhD thesis, Gliwice (in Polish)
82. Kuś W, Burczyński T (2001) Distributed evolutionary algorithms in optimization of
elasto-plastic structures with use of coupled FEM-BEM method. In: Burczyński T,
Cholewa W (eds) Proceedings of the AI-MECH 2001 symposium on methods of artificial
intelligence in mechanics and mechanical engineering, Gliwice
83. Kuś W, Burczyński T (2002) Evolutionary optimization of structures modeled using coupled
FEM-BEM method. In: Burczyński T (ed) Zeszyty Naukowe KWMiMKM, vol 1.
Computational sensitivity analysis and evolutionary optimization of systems with geomet-
rical singularities, Gliwice
84. Kuś W, Burczyński T (2002) Distributed evolutionary algorithms in optimization of
nonlinear solids. In: Proceedings of the IUTAM symposium on evolutionary methods in
mechanics, Cracow, pp 51–52
85. Kuś W, Burczyński T (2004) Distributed evolutionary algorithms in optimization of
nonlinear solids In: Burczyński T, Osyczka A (eds) IUTAM symposium on evolutionary
methods in mechanics. Kluwer, Dordrecht, pp 229–240
86. Kuś W, Długosz A, Burczyński T (2011) OPTIM—library of bioinspired optimization
algorithms in engineering applications. Comput Methods Mater Sci 11(1):9–15
87. Kutyłowski R (2004) Topology optimization of material continuum (in Polish), monograph.
Oficyna Wydawnicza Politechniki Wrocławskiej, Wrocław
88. Leiva JP, Ghosh DK, Rastogi N (2002) A new approach in stacking sequence optimization
of composite laminates using genesis structural analysis and optimization software. In: 9th
symposium on multidisciplinary analysis and optimization, Atlanta
89. Leu LJ, Mukherjee S (1993) Sensitivity analysis and shape optimization in nonlinear solid
mechanics. Eng Anal Bound Elem 12
90. Michalewicz Z (1996) Genetic algorithms + data structures = evolutionary programs.
Springer-Verlag, Berlin and New York
91. Michalewicz Z, Fogel DB (2004) How to solve it: modern heuristics. Springer-Verlag
92. Min S, Kikuchi N, Park YC, Kim S, Chang S (1999) Optimal topology design of structures
under dynamic loads. Struct Optim 17:208–218
93. Min S, Nishiwaki S, Kikuchi N (2000) Unified topology design of static and vibrating
structures using multiobjective optimization. Comput Struct 75:93–116
94. MSC.MARC (2010) Theory and user information, vol A–D. MSC Software Corporation
95. Novotny AA, Feijoo RA, Taroco E, Padra C (2003) Topological sensitivity analysis.
Comput Methods Appl Mech Eng 192:803–829
96. Pegoretti A, Fabbri E, Migliaresi C, Pilati F (2004) Intraply and interply hybrid composites
based on E-glass and poly(vinyl alcohol) woven fabrics: tensile and impact properties. Poly
Int 53(9):1290–1297
97. Perez R, Behdinan K (2007) Particle swarm approach for structural design optimization.
Comput Struct 85(19–20):1579–1588
98. Piegl L, Tiller W (1995) The NURBS book. Springer-Verlag, Berlin
99. Portela A (1993) Dual boundary element analysis of crack growth. Computational
Mechanics Publications
100. Poteralski A, Szczepanik M, Dziatkiewicz G, Górski R, Kuś W, Burczyński T (2009)
Immune optimization and identification of solids modelled by the boundary element method.
In: Proceedings of the 8th world congress on structural and multidisciplinary optimization
(WCSMO-8), Lisbon, CD-ROM
101. Rajasekaran S, Lavanya S (2007) Hybridization of genetic algorithm with immune system
for optimization problems in structural engineering. Struct Multidisc Optim 34:415–429
102. Rizzo FJ, Shippy DJ (1977) An Advanced boundary integral equation method for
three-dimensional thermoelasticity. Int J Numer Meth Eng 11:1753–1768
103. Sandgren E, Jensen E, Welton J (1990) Topological design of structural components using
genetic optimization methods. In: Proceedings of winter annual meeting of the American
society of mechanical engineers, Dallas, Texas, pp 31–43
104. Shewchuk JR (1996) Triangle: engineering a 2D quality mesh generator and delaunay
triangulator. In: First workshop on applied computational geometry, association for
computing machinery, Philadelphia, Pennsylvania, USA, pp 124–133
105. Sethian JA, Wiegmann A (2000) Structural boundary design via level set and immersed
interface methods. J Comput Phys 163:489–528
106. Seyedpoor SM, Gholizadeh S, Talebian SR (2010) An efficient structural optimization
algorithm using a hybrid version of particle swarm optimization with simultaneous
perturbation stochastic approximation. Civil Eng Environ Syst 27(4):295–313
107. Silverman E, Rhodes M, Dyer M (1999) Composite isogrid structures for spacecraft
components. SAMPE J 35:51–59
108. Sokołowski J, Żochowski A (1999) On topological derivative in shape optimisation. SIAM J
Control Optim 37(4):1251–1272
109. Szczepanik M, Burczyński T (2003) Evolutionary computation in optimisation of 2-D
structures. In: Proceedings of the WCSMO 2003 5th world congress on structural and
multidisciplinary optimization, Italy, Venice
110. Szczepanik M, Kuś W, Burczyński T (2011) Swarm optimization of stiffeners locations in
2-D structures. In: Proceedings of the computer methods in mechanics CMM-2011,
Warszawa
111. Szczepanik M, Poteralski A, Kuś W, Burczyński T (2009) Shape and topology optimization
of shell, solid and shell-solid structures using artificial immune systems. In: Proceedings of
the 8th world congress on structural and multidisciplinary optimization (WCSMO-8),
Lisbon, CD-ROM
112. Szczepanik M, Poteralski A, Kuś W, Burczyński T (2010) Optimal design of shell, solid
and shell-solid structures using particle swarm optimizer. In: 9th world congress on
computational mechanics and 4th asian pacific congress on computational mechanics,
Sydney
113. Szczepanik M, Poteralski A, Górski R, Kuś W, Burczyński T (2011) Shape swarm
optimization of reinforced 2-D structures, full paper, CD-Rom, In: 9th world congress on
structural and multidisciplinary optimization, June 13–17, Shizuoka, Japan
114. Tanaka K (1974) Fatigue crack propagation from a crack inclined to the cyclic tensile axis.
Eng Fract Mech 6:493–507
115. Tanese R (1989) Distributed genetic algorithms. In: Schaffer JD (ed) Proceedings of the 3rd
ICGA, San Mateo, USA, pp 434–439
116. Tcherniak D (2002) Topology optimization of resonating structures using SIMP method.
Int J Numer Methods Eng 54:1605–1622
117. Vrbka J, Knésl Z (1986) Optimized design of a high pressure compound vessel by FEM.
Comput Struct 24(5):809–812
118. Wierzchoń ST (2001) Artificial immune systems, theory and applications. EXIT (in Polish)
119. Woon SY, Tong L, Querin OM, Steven GP (2003) Optimising topologies through a
Multi-GA system. In: Proceedings of 5th world congress on structural and multidisciplinary
optimisation, WCSMO 2003, Italy, Venice
120. Xie YM, Steven GP (1997) Evolutionary structural optimisation. London: Springer;
Zitzler E, Laumanns M, Thiele L (2001) SPEA2: improving the strength pareto evolutionary
algorithm. TIK-Report 103
121. Xie YM, Huang X (2010) Recent developments in evolutionary structural optimization
(ESO) for continuum structures. IOP Conf Ser Mater Sci Eng 10
122. Yamakawa H (1984) Optimum structural designs for dynamic response. In: Atrek A,
Gallagher RG, Ragsdell KM, Zienkiewicz OC (eds) New directions in optimum structural
design. Wiley, New York, pp 249–266
123. Zabaras N, Bao Y, Srikanth A, Frazier WG (2000) A continuum Lagrangian sensitivity
analysis for metal forming processes with applications to die design problems. Int J Numer
Meth Eng 48:679–720
124. Zhao X, Zhao G, Wang G, Wang T (2002) Preform die shape design for uniformity of
deformation in forging based on preform sensitivity analysis. J Mater Process Technol
128:25–32
125. Zhu JH, Zhang WH, Qiu KP (2005) Investigation of localized modes in topology
optimization of dynamic structures. Acta Aeronaut ET Astronaut Sin 26:619–623
126. Zienkiewicz OC, Taylor RL (2000) The finite element method. Butterworth Heinemann,
Oxford
Chapter 5
Intelligent Computing in Inverse
Problems

Abstract This chapter is devoted to the application of the bio-inspired methods in
solving various inverse problems for mechanical systems. Several identification
problems are considered. Various problems such as identification of boundary
conditions for thermo-elastic and cracked structures, defects in elastic,
thermo-elastic and cracked structures, and of material properties are formulated,
numerically implemented and solved. Appropriate identification functionals for
specific problems are formulated and implemented. Real object models were sim-
ulated using finite-element method (FEM) or boundary element method (BEM).
The problems are solved for both ideal, deterministic values of measurements in
sensors and noisy data. Evolutionary algorithms (EAs) and artificial immune sys-
tems (AISs) are used to solve the identification problems. Additionally, identifi-
cation methods and procedures are supported by artificial neural networks (ANNs)
and neuro-fuzzy inference systems (NFISs). The effectiveness of the proposed
method is demonstrated in several numerical examples of identification.

5.1 Formulation of the Inverse Problems

Identification problems belong to the group of inverse problems. To solve such
tasks, one can use optimization methods, provided the optimization problem is
formulated adequately. Identification issues are related to the search for unknown values of
structural parameters. These parameters are identified on the basis of measurable
displacement fields, temperatures, and so on. To solve the identification task, a real
object model, knowledge of the boundary conditions and sensor data are necessary.
The identification problem may consist in determining material parameters of the
structure (for example, Young's modulus), its topology (for example, internal
edges, voids or cracks in the structure) as well as its shape.
The chapter discusses the use of bio-inspired methods to solve identification
problems. Real object models were built using FEM or BEM. Particular sub-
chapters concern issues related to the formulation of identification problems as well
as to solving identification problems when looking for values of boundary conditions,
structural defects and unknown material parameters.


5.2 Identification of Boundary Conditions

5.2.1 Introduction

The identification of boundary conditions plays an important role in many practical
problems. This type of task belongs to inverse problems, where unknowns are
identified using the knowledge of the responses to given excitations on its boundary
[8]. In the present case unknowns are represented by boundary conditions and
responses are represented by displacements; displacements and temperatures are
measured at the sensor points.
The problems consist in finding such values of the boundary conditions that give
the solution to the fields which differ the least from the measured ones known from
the numerical experiment. The identification of the boundary conditions for the
cracked and thermo-elastic structures is considered [3, 19]. The fields on the
boundary such as:
• displacements, for the elastic problems with cracks,
• displacements and temperatures for the thermo-elasticity
are known in a given number of boundary points called sensors. The sensor points
are located on the surface of the structure. The inverse problems are not easy to
solve as they are ill-posed problems from the mathematical point of view.
The problem can be solved by means of conventional optimization methods
(gradient methods). Unfortunately, these methods have many disadvantages and the
main are:
• the objective (fitness) function has to be continuous,
• the information about objective function gradient is necessary,
• the shape variation of the boundary defect should be regular,
• the Hessian of the objective function should be positively defined,
• there is a strong probability of convergence to a local optimum (computations
start from a single point),
• the choice of the starting point may influence the convergence of the method.
In order to avoid the above mentioned drawbacks, the evolutionary algorithm is
applied to solve the presented problems [17]. The only information it needs to work
is the objective (fitness) function value. It also works on the population of
admissible solutions, so the probability of the global optimum finding is very high.
To solve the boundary-value problems, both own implementation and com-
mercial BEM and FEM software are used [2, 21].

5.2.2 Formulation of the Problem

From the mathematical point of view, the identification problem is expressed as the
minimization of the following functional:
J = Σ_j Σ_i w_j (x_i − x̂_i)²    (5.2.1)

where x_i is the measured quantity (temperature, displacement), x̂_i is the quantity
computed for the structure with the parameters generated by the evolutionary
algorithm, and w_j is the weight.
The identification problem is solved by finding the vector of design variables,
minimizing the functional (5.2.1).
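A direct transcription of (5.2.1) for a finite set of sensors is sketched below (the sensor readings, model response and weights are illustrative placeholders):

# Sketch: weighted least-squares identification functional (5.2.1).
# x_meas[j][i]  - measured values of field j at sensor i (displacements, temperatures, ...)
# x_model[j][i] - the same quantities computed from the candidate model
# weights[j]    - weight of field j, chosen so that all fields contribute comparably

def identification_functional(x_meas, x_model, weights):
    return sum(w * sum((m - c) ** 2 for m, c in zip(meas, comp))
               for w, meas, comp in zip(weights, x_meas, x_model))

measured = [[0.12, 0.31], [98.5, 74.2]]     # displacements, temperatures (illustrative)
computed = [[0.10, 0.33], [99.0, 73.0]]
print(identification_functional(measured, computed, weights=[1.0, 1e-3]))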
For the cracked structures the displacement field u(x) is calculated by solving
boundary-value problem by means of boundary element method (BEM). This
method seems to be the most suitable one, because—assuming the lack of the body
forces—it is not necessary to discretize the inside of the body.
The direct application of BEM is not possible—the set of algebraic
equations obtained after discretization of the body becomes singular, since the points on two
sides of a crack have the same coordinates. In the technique called dual BEM [1, 18]
two different equations are used, namely the boundary displacement integral
equation and the hypersingular tractions integral equation. The second one is
applied on one side of each crack, the first one on the opposite side of each crack
and the remaining on the boundary.
For the thermo-elastic structures, the displacement and temperature fields are
calculated by solving the boundary-value problem by means of BEM and FEM.
As mentioned before, evolutionary algorithms are used to solve the identification
problem. For this purpose, a coupling of EA and BEM or FEM is needed [12]. The
block diagram of the applied evolutionary programme coupled with the considered
boundary-value problem is presented in Fig. 5.1.
The evolutionary programme contains two main blocks: the evolutionary algo-
rithm block and the evaluation block, where the fitness function value is being
computed. The floating-point gene representation is used and six evolutionary
operators are applied: uniform mutation, boundary mutation, Gaussian mutation,
simple crossover, arithmetical crossover and heuristic crossover. The ranking
selection is used as the selection method.
The chromosomes ch of the evolutionary algorithm consist of genes ai
describing the boundary condition values:

ch = [a1, a2, …, aR]    (5.2.2)

Fig. 5.1 The block diagram of EA coupled with thermo-elasticity and cracked analysis

with constraints:

a_r^min ≤ a_r ≤ a_r^max,  r = 1…R    (5.2.3)

Depending on the task, the gene in the chromosome can represent either the
value or the position of the boundary condition.
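For illustration, two of the listed operators, Gaussian mutation and arithmetical crossover, acting on such a real-coded chromosome with the box constraints (5.2.3), can be sketched as follows (the parameter values and bounds are illustrative, not those used in the examples):

# Sketch: Gaussian mutation and arithmetical crossover for a real-coded chromosome.
# Bounds correspond to constraints (5.2.3); sigma and the sample values are illustrative.
import random

def gaussian_mutation(ch, lower, upper, sigma=0.1):
    """Perturb each gene with Gaussian noise and clip it back into its admissible range."""
    return [min(u, max(l, g + random.gauss(0.0, sigma * (u - l))))
            for g, l, u in zip(ch, lower, upper)]

def arithmetical_crossover(ch_a, ch_b):
    """Blend two parents; the offspring remains feasible for box constraints."""
    lam = random.random()
    return [lam * a + (1.0 - lam) * b for a, b in zip(ch_a, ch_b)]

lower, upper = [-50.0, -100.0, -50.0], [50.0, 100.0, 50.0]
parent1, parent2 = [15.0, 10.0, 15.0], [-20.0, 40.0, 5.0]
print(gaussian_mutation(parent1, lower, upper))
print(arithmetical_crossover(parent1, parent2))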
Measurements are practically never ideal, so it is reasonable to assume that a
measurement error occurs. The problem has therefore been solved both for ideal
deterministic (no noise) and randomly disturbed (with noise) values of the measured
displacements and temperatures. For the nonideal measured values, the
Gaussian distribution is applied. The density function N(μ, σ) is shown in Fig. 5.2,
where q = û_k or q = T̂_l.
The expected value μ is equal to the ideal deterministic value of the measured
displacements or measured temperatures. The standard deviation σ is equal to 1/3 of
the maximal error. The maximal error of measurement is assumed at 10%.
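A short sketch of this measurement-disturbance model follows (the sensor values are illustrative; the 10% maximal error and the σ = error/3 rule follow the description above):

# Sketch: disturb ideal sensor readings with Gaussian noise, sigma = max_error / 3.
import random

def disturb(values, max_rel_error=0.10):
    """Return noisy copies of ideal measurements; 10% maximal relative error assumed."""
    return [random.gauss(v, abs(v) * max_rel_error / 3.0) for v in values]

ideal_displacements = [0.12, 0.31, 0.27]          # illustrative sensor readings
print(disturb(ideal_displacements))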

Fig. 5.2 The density function of the Gaussian distribution N(μ, σ)

5.2.3 Numerical Examples of Identification of Boundary Conditions
Example 1: Evolutionary identification of traction field
A 2D structural element containing a single crack (Fig. 5.3) is considered. The
assumption is that only the nonzero constant tractions of known positions and
directions (but not known sense) are identified. The aim is to identify the values of
three tractions p1, p2 and p3 having measured displacements at four sensor points
A–D on the boundary. Two cases are considered: measurements are perfect and not
ideal (disturbed by the stochastic Gaussian noise). The actual and final values of
tractions are presented in Table 5.1. All the results are averaged for five indepen-
dent computations.
Example 2: Evolutionary identification of traction field
A 2D structural element containing a single crack (Fig. 5.4) is considered. It is
assumed that only one traction of known (rectangular) shape exists. The aim is to
find the position (surface) of its application (p1) and the value (p2) of the traction
having measured displacements at four boundary sensor points A–D. The influence
of the noise is considered as well. The actual and final values of the tractions are
presented in Table 5.2. All the results are averaged for five independent
computations.
Example 3: Evolutionary identification of a circular hole in rectangular plate
The identification of a circular hole and temperature on the boundary of the hole in
the rectangular plate shown in Fig. 5.5 is considered. The fitness function given by
(5.2.1) is applied. In order to assure the comparable contribution of the displace-
ment and the temperature fields, the appropriate values of weights are chosen. The

Fig. 5.3 Example 1: identification of three tractions

Table 5.1 Example 1: identification of three tractions

Number of individuals: 100          Variable no.   Limitations
Max. number of generations: 100     1              −50; 50
Number of state variables: 3        2              −100; 100
                                    3              −50; 50
Mutation probabilities: uniform 0.1, boundary 0.04
Crossover probabilities: simple 0.15, arithmetic 0.15, heuristic 0.15

Variable values
No.   Actual   Final (no noise)   Error (%)   Final (noise)   Error (%)
1     15.0     15.7546            5.03        15.4583         3.06
2     10.0     10.0484            0.48        10.4867         4.87
3     15.0     15.7516            5.01        15.4428         2.95

Fig. 5.4 Example 2: identification of one traction

Table 5.2 Example 2: identification of one traction

Number of individuals: 50           Variable no.   Limitations
Max. number of generations: 50      1              0, 1, …, 10
Number of state variables: 2        2              0; 50
Mutation probabilities: uniform 0.1, boundary 0.04
Crossover probabilities: simple 0.15, arithmetic 0.15, heuristic 0.15

Variable values
No.   Actual   Final (no noise)   Error (%)   Final (noise)   Error (%)
1     5        5                  0           5               0
2     10.0     10.0534            0.53        10.0665         0.67

Fig. 5.5 Rectangular plate with circular hole

displacements are measured at the sensor points 1 and 2. The temperatures are
measured at the sensor points 3 and 4.
The position and radius of the hole and temperature on the boundary of the hole
are searched for. The boundary-value problem is solved by using the own imple-
mentation of BEM for the linear steady-state thermo-elasticity. The boundary of the
structure is discretized with 48 linear boundary elements. Table 5.3 contains evo-
lutionary parameters which were applied.
Four numerical tests were performed. Table 5.4 contains the results while
Table 5.5 contains relative errors of X and Y coordinates, radius R and temperature
on internal boundary.

Table 5.3 The applied parameters of the evolutionary algorithm and boundary condition values

Number of chromosomes                 100
Number of iterations                  200
Number of design parameters           4
Probability of uniform mutation       0.02
Probability of boundary mutation      0.015
Probability of simple crossover       0.10
Probability of arithmetic crossover   0.10
Probability of heuristic crossover    0.10
T01                                   20 °C
T02                                   500 °C
q0                                    0
p0                                    100 kN/m
α1                                    20 W/m²K
α2                                    1000 W/m²K

Table 5.4 The results of the tests

                              Test 1   Test 2   Test 3   Test 4
X coordinate                  24.96    25.02    24.99    25.07
Y coordinate                  2.86     3.03     2.99     3.09
Radius R                      0.90     1.02     0.99     1.06
T temperature                 501.8    499.1    500.8    498.4
Value of fitness function     0.087    0.99     0.002    0.05
X coordinate error (%)        0.15     0.10     0.06     0.28
Y coordinate error (%)        4.70     0.89     0.41     3.03
R radius error (%)            9.54     1.78     0.42     6.27
T temperature error (%)       0.36     0.18     0.16     0.31

Table 5.5 The average of relative errors

Average of X coordinate error     0.14%
Average of Y coordinate error     2.26%
Average of R radius error         4.50%
Average of T temperature error    0.25%

Example 4: Evolutionary identification of the boundary temperatures on the internal boundaries
The identification of the unknown boundary temperatures is performed by the
minimization of the same functional as in the previous example. The rectangular
plate with five circular holes shown in Fig. 5.6 is considered.
The plate was subjected to the thermal and mechanical boundary conditions. The
plate is supported on the left boundary, whereas the pressure P = 5 MPa is applied
on the opposite side. In the case of thermal boundary conditions, the heat flux
q = 10 W/m on the bottom boundary is given. The boundary temperatures T1 , T2 ,
T3 , T4 , T5 at the five internal boundaries are identified. Six sensor points of dis-
placement and temperatures are located on the external boundary (Fig. 5.6). The
plate is made of steel whose material properties are as follows: Young’s modulus
E = 2 × 10¹¹ Pa, Poisson's ratio ν = 0.3, thermal expansion coefficient
αT = 12.5 × 10⁻⁶ 1/°C and thermal conductivity λ = 25 W/(m °C).
The thermo-elasticity problem is solved using FEM commercial software MSC.
Marc. The plate is divided into 1243 four-node (quad4) elements. The deformed
shape of the plate and distribution of the temperature for the reference structure are
presented in Fig. 5.7.
Five numerical tests are performed for the population size 15 and the number of
generation 50. Table 5.6 contains the best solutions and the fitness function values

Fig. 5.6 Geometry, boundary conditions and location of the sensor points

Fig. 5.7 The distribution of temperature and the deformation of the plate

for the numerical tests, whereas Table 5.7 contains relative errors for these tests.
Figure 5.8 presents the change in objective function during the identification
process.

Table 5.6 The results of five numerical tests

              T1      T2      T3      T4      T5      ff value
Exact value   10      20      30      40      50      0
Test 1        10.14   20.23   29.81   39.71   49.84   0.91
Test 2        9.87    20.13   29.14   39.95   50.60   1.25
Test 3        9.90    20.08   29.81   39.98   50.60   0.87
Test 4        9.94    19.87   30.30   40.20   49.72   0.60
Test 5        9.00    20.20   30.03   40.17   49.94   1.07

Table 5.7 Relative errors for the tests

         T1 (%)   T2 (%)   T3 (%)   T4 (%)   T5 (%)
Test 1   1.39     1.14     0.63     0.73     0.33
Test 2   1.32     0.66     2.87     0.12     1.21
Test 3   0.98     0.39     0.63     0.06     1.19
Test 4   0.56     0.65     1.01     0.51     0.56
Test 5   9.96     1.02     0.11     0.43     0.11

Fig. 5.8 The graph of fitness function

Example 5: Evolutionary identification of the Robin boundary condition in the box structure
The box structure under thermo-mechanical loading presented in Fig. 5.9 is con-
sidered. One surface of the box is supported, whereas on the opposite surface, a
point load is applied at each node (the total load is equal to 224 kN). The

Fig. 5.9 Geometry, boundary conditions and location of the sensor points

temperature T = 10 °C is applied on the supported surface of the structure. The
third-type thermal boundary condition (convection—Robin condition) is specified
on the internal surface, where the ambient temperature T∞ and the
heat convection coefficient α are identified.
The identification has been performed for four sensor points of temperatures and
four sensor points of displacements located on the external surfaces of the structure
(Fig. 5.9). The structure is made of steel with material properties identical to those in
the previous example. Figure 5.10 shows the deformation and distribution of the
temperature in the model, which consists of 5712 hex8 elements.
The exact values of the heat convection coefficient α and the ambient temperature
T∞ are α = 5 W/m²K and T∞ = 50 °C, respectively.
Five numerical tests are performed for the population size 15 and the number of
generation 50. Table 5.8 contains the best solutions, relative errors and the fitness
function values for the numerical tests.
Owing to the poor quality of the results (only for one test the results are satis-
factory), numerical tests are also performed for the population size 50 and the
number of generation 100. Unfortunately, the identification results are also unsat-
isfactory. For the different pairs of values of heat convection coefficient and ambient
temperature, the structure gives similar response of the temperature and displace-
ment field, so the fitness functional is strongly multimodal and the identification
becomes very difficult.

Fig. 5.10 The distribution of the temperature and deformation of the structure

Table 5.8 Results of five numerical tests

         α      Error of α (%)   T∞      Error of T∞ (%)   ff value
Test 1   3.80   23.90            61.56   23.13             0.006799
Test 2   7.04   40.80            39.32   21.35             0.0121
Test 3   4.90   2.06             50.78   1.56              0.001299
Test 4   3.84   23.17            61.10   22.19             0.006701
Test 5   6.34   26.85            42.21   15.57             0.008

5.2.4 Concluding Remarks

The presented evolutionary method of the boundary conditions optimization and
identification gives positive results. For the cracked structures and for the identi-
fication of the boundary temperature, relative errors of the obtained results are very
low. Even for the nonideal deterministic measurements, the results of the identifi-
cation are acceptable. Only for the identification of the Robin boundary condition

the results are unsatisfactory. This shows that inverse problems are not easy to
solve. For some ill-posed problems for the case where the responses of the model
are nonunique, the identification cannot be performed correctly.
The presented approach could be very effective especially when it is not possible
or hard to use classical optimization methods.
Evolutionary calculations are time-consuming, but nowadays this problem
becomes less burdensome because of the very fast increasing efficiency of the
computers.

5.3 Identification of Defects

5.3.1 Introduction

Many real structures contain internal defects in the form of voids, cracks or addi-
tional masses (inclusions), which can reduce the life-time of the structure. The
identification of the defect seems to be a practically important problem.
Nondestructive identification methods have to be employed to identify the internal
defects.
There exist many methods that allow the identification of internal defects on the
basis of knowledge about boundary state fields like displacements, stresses, tem-
perature or natural frequencies. One group of methods is based on the sensitivity
analysis [7]. This approach is very fast and precise but can lead to local optima. In
the present chapter the global optimization methods in the form of evolutionary
algorithms (EAs) are used to solve the identification problems.
The finite-element method (FEM) or the boundary element method (BEM) is
used to solve the boundary-value direct problem. Artificial neural networks (ANNs)
and neuro-fuzzy inference systems (NFISs) are employed to approximate the
boundary-value problem in order to reduce the computational time.
Identification tasks belong to inverse problems which are mathematically
ill-posed [8]. In such problems the kind of measured values and the number of
measurements are significant. The number of useful measurement data in many
practical cases is small, which can lead to an indeterminate set of equations. The set
of equations may also be ill-conditioned. On the other hand, a large number of
measurements (and sensors) can be expensive and also not easy to apply in practice.
The chapter is devoted to application of intelligent techniques for nondestructive
identification of multiple internal defects (crack and voids) in mechanical systems
being under static loads, dynamical loads and in the free vibration state.

5.3.2 Formulation of the Defect Identification Task

The aim of the identification problem is to find the vector of parameters
p, describing the number, shape and position of the defects. The classical approach to
solve the identification problem is the minimization of some measure of distance
between experimentally measured and numerically simulated state fields values
(displacements, stresses, natural frequencies, etc.). Numerical simulation is typi-
cally performed by means of the FEM or the BEM.
The identification problem is expressed as the minimization of the objective functional J0 with respect to a design variable vector x:

$$\min_{\mathbf{x}} \left( J_0 \right) \qquad (5.3.1)$$

If the evolutionary algorithm is used as the optimization method, the vector x is a chromosome ch representing one candidate solution. In the present chapter the minimized objective function has one of the following forms [9]:
• in a static case:

$$J_0 = \int_{\Gamma} \left[ q(\mathbf{y}) - \hat{q}(\mathbf{y}) \right] \mathrm{d}\Gamma \qquad (5.3.2)$$

• in a dynamical case:

$$J_0 = \int_{0}^{t_F} \int_{\Gamma} \left[ q(\mathbf{y},t) - \hat{q}(\mathbf{y},t) \right] \mathrm{d}\Gamma \, \mathrm{d}t \qquad (5.3.3)$$

where q̂ are measured values of the state fields (displacements u, temperatures T or natural frequencies ω), q are the values of the same state fields calculated from the numerical model of the structure, y are sensor boundary points and t is time.
It is also possible to create a combination of different objective functions if more than one state field is taken into account:

$$J_0 = \sum_{i=1}^{m} \eta_i J_{0i} \qquad (5.3.4)$$

where J0i are objective functions for ith state field data, ηi are non-negative weights
indicating the relative importance of each J0i.
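
As a small illustration (not code from the book), the weighted combination (5.3.4) can be evaluated as follows once the individual sub-objectives J0i have been computed; the example weights and values below are arbitrary.

```python
def combined_objective(sub_objectives, weights):
    """Eq. (5.3.4): J0 = sum_i eta_i * J0i over the m considered state fields."""
    if len(sub_objectives) != len(weights):
        raise ValueError("one weight is required per sub-objective")
    return sum(eta * j for eta, j in zip(weights, sub_objectives))

# e.g. a temperature-based and a displacement-based sub-objective, weighted equally
print(combined_objective([0.012, 0.034], [0.5, 0.5]))
```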

Measurements have been obtained numerically (numerical experiment) in all cases, assuming perfect and disturbed measurement data.
To calculate the fitness function value J0, it is necessary to evaluate the values of the state fields q (for the numerical experiment) and q̂. It can be done by means of the finite-element method (FEM) [21] or the boundary element method (BEM) [13]. The block diagram of defect identification by means of the evolutionary algorithm and FEM or BEM is presented in Fig. 5.11.
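
As an illustration only (not the algorithm used in the book, whose operators and parameters are listed with each example), the overall loop of Fig. 5.11 can be sketched as below; here solve_direct_problem stands in for an external FEM/BEM analysis and only a simple Gaussian mutation is applied.

```python
import random

def fitness(chromosome, measured, solve_direct_problem):
    """J0: discrepancy between simulated and measured boundary responses."""
    simulated = solve_direct_problem(chromosome)      # FEM/BEM solution of the direct problem
    return sum((s - m) ** 2 for s, m in zip(simulated, measured))

def identify_defects(measured, solve_direct_problem, bounds, pop_size=100, max_life=200):
    """Minimal EA loop: keep the better half of the population and mutate it."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(max_life):
        pop.sort(key=lambda ch: fitness(ch, measured, solve_direct_problem))
        parents = pop[: pop_size // 2]
        children = [[min(max(g + random.gauss(0.0, 0.05 * (hi - lo)), lo), hi)
                     for g, (lo, hi) in zip(ch, bounds)] for ch in parents]
        pop = parents + children
    return min(pop, key=lambda ch: fitness(ch, measured, solve_direct_problem))
```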

5.3.3 Geometrical Parameterization of Defects

One of the important issues connected with the identification of defects is the selection of design variables which enable the description of the shape, the position, the kind and the number of defects. The defects in 2D structures are modelled as: (i) circular voids, (ii) elliptical voids, (iii) voids of arbitrary shape described by closed NURBS curves, or (iv) cracks (Fig. 5.12). In the case of 3D structures, the defects are modelled as: (i) spherical voids, (ii) ellipsoidal voids, or (iii) voids of arbitrary shape described by closed NURBS surfaces (Fig. 5.13).
If the number of defects is unknown, a few types of chromosomes describing the identified defects are proposed. The maximal number of defects n_max is presumed.
In the 2D case the chromosomes have one of the types presented below.

Fig. 5.11 The block diagram of the identification procedure (EA and BEM or FEM)

Fig. 5.12 The modelled forms of the defects (2D): a, b, c voids; d, e, f cracks

Fig. 5.13 The modelled forms of the defects (3D): a spherical, b ellipsoidal, c arbitrary shape

The first type of the chromosome is constructed as follows:

$$\mathbf{ch} = \left[\, n,\ x_1, y_1, \mathbf{r}_1,\ x_2, y_2, \mathbf{r}_2,\ \ldots,\ x_l, y_l, \mathbf{r}_l,\ \ldots,\ x_{n\_max}, y_{n\_max}, \mathbf{r}_{n\_max} \,\right] \qquad (5.3.5)$$

where n ∈ {0, …, n_max} represents the number of defects.
The second type of the chromosome has the form:

$$\mathbf{ch} = \left[\, x_1, y_1, \mathbf{r}_1,\ x_2, y_2, \mathbf{r}_2,\ \ldots,\ x_l, y_l, \mathbf{r}_l,\ \ldots,\ x_{n\_max}, y_{n\_max}, \mathbf{r}_{n\_max} \,\right] \qquad (5.3.6)$$

where the actual number of defects is controlled by the condition rl < rmin. If this condition is fulfilled, the lth defect does not exist.
The third type of the chromosome is constructed as follows:

$$\mathbf{ch} = \left[\, w_1, w_2, \ldots, w_l, \ldots, w_{n\_max},\ x_1, y_1, \mathbf{r}_1,\ x_2, y_2, \mathbf{r}_2,\ \ldots,\ x_l, y_l, \mathbf{r}_l,\ \ldots,\ x_{n\_max}, y_{n\_max}, \mathbf{r}_{n\_max} \,\right] \qquad (5.3.7)$$

A controlling parameter wl ∈ {true, false} determines whether the lth defect exists (true) or not (false).
In all chromosome types xi and yi denote the coordinates of the centres of the defects. In the case of 3D problems defect centres have three coordinates xi, yi, zi for each i = 1, 2, …, n_max.
The parameter vector rl, l = 1, 2, …, n, depends on the kind of defect and has one of the following forms:
• for circular (Fig. 5.12a) or spherical (Fig. 5.13a) voids, the vector rl = [rl] contains one member which represents the radius rl of the lth void,
• for elliptical (Fig. 5.12b) or ellipsoidal (Fig. 5.13b) voids, the parameter vectors have the forms rl = [rlx, rly, αl] and rl = [rlx, rly, rlz, αlx, αly, αlz], respectively,
• for an arbitrary shape of the void, the parameter vector is described by rl = [rl1, rl2, …, rls, …, rln], where rls are the positions of the NURBS control points on given rays (Fig. 5.12c for 2D and Fig. 5.13c for 3D structures),
• for linear cracks, xi, yi are the coordinates of one tip while rl is defined as rl = [xl2tip, yl2tip] (Fig. 5.12d) or rl = [αl, ll] (Fig. 5.12e), where xl2tip, yl2tip are the coordinates of the second crack tip, αl is a slope angle and ll is the crack length,
• for segmental-straight cracks consisting of R linear segments, rl = [ll1, αl1, ll2, αl2, …, llr, αlr, …, llR, αlR], where llr is the length and αlr is the slope angle of the rth segment (Fig. 5.12f).
The elliptical (for 2D) and the ellipsoidal (for 3D) defects can also represent cracks. If rx → 0 or ry → 0, the elliptical void becomes a plane crack. Similarly, the ellipsoidal void transforms into a spatial crack if rx → 0, ry → 0 or rz → 0. If the actual number of defects n is smaller than n_max, some genes are treated as nonactive ones. This approach allows finding the unknown number of defects.
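
As an illustration of the second chromosome type, the sketch below decodes a candidate solution with circular voids; the threshold R_MIN and the three-gene layout per defect are assumptions made only for this example.

```python
R_MIN = 0.5  # assumed minimal radius below which a defect is treated as non-existent

def decode_type2(ch, n_max, genes_per_defect=3):
    """Decode [x1, y1, r1, ..., xn_max, yn_max, rn_max]; the genes of a defect
    whose size gene satisfies r_l < R_MIN are treated as nonactive."""
    defects = []
    for l in range(n_max):
        x, y, r = ch[l * genes_per_defect:(l + 1) * genes_per_defect]
        if r >= R_MIN:
            defects.append({"x": x, "y": y, "r": r})
    return defects

# example with n_max = 3: the second void is switched off by its small radius gene
print(decode_type2([70.0, 0.0, 3.0, 20.0, 70.0, 0.1, 20.0, 20.0, 6.0], n_max=3))
```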

5.3.4 The Intelligent Identification System

Calculation of the fitness function value is usually the most time-consuming element of evolutionary computations. It is possible to speed up the calculations by replacing the BEM or FEM solutions by their approximations with the help of artificial neural networks (ANNs) [11] or neuro-fuzzy inference systems (NFISs) [10, 15]. The block diagram of the proposed intelligent identification system is
presented in Fig. 5.14. The artificial neural network or the fuzzy inference system works as an approximator of the boundary-value problem for different numbers, shapes and positions of defects. The EA searches for the number, shapes and positions of internal defects on the basis of the results obtained by means of the approximators.
The approximators (the ANN and the NFIS) are trained with the help of gradient
methods, the evolutionary method and the evolutionary method coupled with the
gradient method. The neural network with Gaussian radial basis functions (see
Sect. 3.8.4) and the fuzzy inference system with Gaussian membership functions
(see Sect. 3.9.5) are considered. The input–output pairs for ANN or NFIS are
obtained by means of the BEM or the FEM calculations of the boundary-value
problem.
Then in both cases the networks are trained by means of a gradient method. Parameters are modified in each step according to the formula:

$$w(s+1) = w(s) - \eta(s)\,\frac{\partial E}{\partial w} + \alpha\, \Delta w(s-1) \qquad (5.3.8)$$

Fig. 5.14 The intelligent identification system



where w is the modified parameter, s is the iteration step number, η is the learning rate, α is the momentum rate, and Δw(s−1) is the change of the parameter in the previous iteration step.
In order to reduce the risk of reaching a local minimum of the error function during the training process, the training pairs are randomly chosen from the training set.
The learning rate η is modified during the training phase in the following way:

$$\eta(s) = \frac{\eta_0}{1 + Q\,s} \qquad (5.3.9)$$

where s is the iteration step, η0 is the initial value of the learning rate and Q is a constant.
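
A compact sketch of the update rules (5.3.8) and (5.3.9) is given below; it is only an illustration that assumes the error gradients are supplied by the caller, not the training code used for the ANN/NFIS approximators in the book.

```python
def learning_rate(s, eta0=0.1, Q=0.01):
    """Eq. (5.3.9): eta(s) = eta0 / (1 + Q*s)."""
    return eta0 / (1.0 + Q * s)

def train(weights, grad_fn, steps=1000, eta0=0.1, Q=0.01, alpha=0.9):
    """Gradient descent with momentum, Eq. (5.3.8):
    w(s+1) = w(s) - eta(s)*dE/dw + alpha*dw(s-1)."""
    prev_delta = [0.0] * len(weights)
    for s in range(steps):
        eta = learning_rate(s, eta0, Q)
        grads = grad_fn(weights)                      # dE/dw for every parameter
        delta = [-eta * g + alpha * d for g, d in zip(grads, prev_delta)]
        weights = [w + d for w, d in zip(weights, delta)]
        prev_delta = delta
    return weights
```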

5.3.5 Numerical Examples of Defect Identification

5.3.5.1 The Evolutionary Identification of a Single Void

A 2D elastic structure presented in Fig. 5.15 is considered. The material constants of the structure are: E = 2.1e9 MPa, ν = 0.3. The structure is loaded by a dynamical traction field p = p0 sin(ωf t). It is assumed that the structure contains a void of arbitrary shape which is parameterized by a NURBS curve with six control points. The displacements at 32 boundary points are measured.

Fig. 5.15 The structure with identified void

The chromosome has the form ch = [x, y, r1, …, r6], where x and y are the
coordinates of the centre and rl, l = 1, …, 6, are the positions of control points on
rays. The angle between two neighbouring rays is equal to 60°. The actual position
and shape of the void is described by chr = [15, 50, 1.5, 1.5, 1.5, 4.5, 1.5, 2.5].
The parameters of the evolutionary algorithm are:
– the population size: pop_size = 600;
– the maximum number of generations: max_life = 100;
– the probability of uniform mutation: pum = 0.25;
– the probability of nonuniform mutation: pnm = 0.35;
– the probability of boundary mutation: pbm = 0.05;
– the probability of simple crossover: psc = 0.25;
– the probability of heuristic crossover: phc = 0.25;
– the probability of arithmetical crossover: pac = 0.25;
– the cloning probability: pcl = 0.05.
The identification problem has been solved for two kinds of measurements:
• the ideal data of displacements obtained from numerical simulation for the
actual void by the BEM;
• the disturbed measurements obtained by the additional introduction of the
Gaussian noise.
The identification results are presented in graphical form in Fig. 5.16.

5.3.5.2 The Evolutionary Identification of Multiple Voids—Mechanical Boundary Conditions

A 2D elastic structure presented in the previous numerical example and containing two circular voids and one elliptical void is considered. All voids are parameterized

Fig. 5.16 The identification results: a for ideal measurements; b for disturbed measurements

by the elliptical description. It is assumed that the number of voids is unknown and n_max = 3. The chromosome has the form ch = [x1, y1, r1, x2, y2, r2, x3, y3, r3], rl = [rlx, rly, αl]. The actual position and shape of the voids are described by chr = [70, 0, 3, 3, 0; 20, 70, 2, 2, 0; 20, 20, 6, 3, 1].
The aim of the identification is to find the number of defects, their sizes and coordinates, having measured: (i) eigenvalues ωi, i = 1, 2, 3; (ii) displacements u(x, T) at 21 boundary sensor points. The objective function is given as: J0 = ηω Jω + ηu Ju, with ηω = ηu = 0.5.
The parameters of the evolutionary algorithm are:
– the population size: pop_size = 3000;
– the maximum number of generations: max_life = 100;
– the probability of uniform mutation: pum = 0.25;
– the probability of nonuniform mutation: pnm = 0.35;
– the probability of boundary mutation: pbm = 0.05;
– the probability of simple crossover: psc = 0.25;
– the probability of heuristic crossover: phc = 0.25;
– the probability of arithmetical crossover: pac = 0.25;
– the cloning probability: pcl = 0.05.
The best solutions in chosen generations of EA are shown in Fig. 5.17.

5.3.5.3 The Evolutionary Identification of Multiple Voids—Thermo-Mechanical Boundary Conditions

A 2D structure containing three circular voids is considered (Fig. 5.18a). The boundary conditions and sensor point positions are presented in Fig. 5.18b. All
voids are parameterized by the circular description. The aim of the identification is
to find the number, positions and radii of the voids having measured:
• temperatures at all (56) sensor points;
• displacements at all sensor points;
• temperatures at 30 sensor points (circles in Fig. 5.18b) and displacements at 28 sensor points (squares in Fig. 5.18b). In this case the objective function has the form: J0 = ηT JT + ηu Ju, with ηT = ηu = 0.5.
It is assumed that the number of voids is unknown and n_max = 5. The identification problem has been solved for ideal and disturbed data (Gaussian noise). The boundary conditions are: p0 = 100 MN/m, T0 = 100 °C, T1 = 100 °C, T2 = 100 °C, α0 = 1000 W/(m²K), α1 = 20 W/(m²K). The parameters of the EA are as follows:
– the population size: pop_size = 500;
– the maximum number of generations: max_life = 300;
– the probability of uniform mutation: pum = 0.015;
– the probability of nonuniform mutation: pnm = 0.1;

Fig. 5.17 The identification results for EA generation: a 1st, b 10th, c 50th, d 100th

– the probability of boundary mutation: pbm = 0.01;
– the probability of simple crossover: psc = 0.07;
– the probability of heuristic crossover: phc = 0.10;
– the probability of arithmetical crossover: pac = 0.07;
– the cloning probability: pcl = 0.03.
The identification results for ideal and disturbed data for different types of
measurements are presented in Figs. 5.19 and 5.20, respectively.
It can be observed that the best identification results have been obtained for
simultaneous measurements of different state fields. “Ideal” data give better results
than disturbed ones but this kind of measurements can be obtained only if the
numerical experiment is performed instead of real measurements.

Fig. 5.18 A structure with three voids: a shape; b boundary conditions and sensor points

5.3.5.4 The Neuro-evolutionary Identification of Voids

A 2D elastic rectangular plate of dimensions L = 300 mm, H = 100 mm in a plane stress state is fixed and statically loaded (Fig. 5.21). Two cases, with one and two internal circular holes, are considered. The material constants of the structure are: E = 2.1e9 MPa, ν = 0.3. The upper part of the structure is loaded by the traction

Fig. 5.19 Identification results for ideal data having measured: a temperatures; b displacements;
c temperatures and displacements

Fig. 5.20 Identification results for disturbed data having measured: a temperatures; b displace-
ments; c temperatures and displacements

p = 100 MPa. The displacements are measured in 30 sensors located on the free
part of the boundary.
The EA is employed to identify the number of defects n and their parameters on the basis of the knowledge about F natural frequencies of the structure with defects and displacements at S sensor points on the boundary of the structure. The unknown parameters of the defects are the coordinates of the hole centres (xi, yi) and their radii Ri (i = 1, 2, …, n_max). Defects are described by a chromosome: ch = [x1, y1, R1, …, xi, yi, Ri, …, xn_max, yn_max, Rn_max]. It is also assumed that the number of circular voids is not known and n_max = 2 in both cases. As a result, each chromosome has the form: ch = [x1, y1, R1, x2, y2, R2]. If Ri < Rmin, the ith defect does not exist and the xi, yi, Ri genes are inactive.
In the case of the plate with one defect, the training set consists of 1008 input–output pairs and the testing set consists of 300 pairs. In the case of the plate with two defects, the training set consists of 12,800 pairs and the testing set consists of 1600 input–output pairs.

Fig. 5.21 The rectangular plate: a with one void; b with two voids

The parameters of the EA are as follows:
– the population size: pop_size = 50;
– the maximum number of generations: max_life = 500;
– the probability of Gaussian mutation: pgm = 0.01;
– the probability of arithmetical crossover: pac = 0.8.
Chromosomes consisting of genes with information about the position and shape of defects are sent to the inputs of the approximators. Displacements at three sensor points (no. 23, 25 and 27) and three natural frequencies are the measurement data. The fuzzy inference system is employed to approximate the fitness function value for each chromosome. The obtained computation speedup, compared to the calculations of the fitness function by means of the BEM, was equal to 1.92 (neglecting the NFIS training time).
Approximated displacements at three sensor points (no. 23, 25 and 27) and natural frequencies are obtained as the outputs of the NFIS. They are sent back to the EA and the fitness function for each chromosome is calculated.
The identification results for 1 and 2 voids are collected in Tables 5.9 and 5.10, respectively.

Table 5.9 The identification results for n = 1 and n_max = 2
Void x1 (mm) y1 (mm) R1 (mm) x2 (mm) y2 (mm) R2 (mm)
Actual 130.0 55.0 9.0 0.0 0.0 0.0
Identified 126.8 47.0 9.1 0.0 0.0 0.0
Actual 130.0 35.0 12.0 0.0 0.0 0.0
Identified 130.5 34.6 10.4 0.0 0.0 0.0
Actual 50.0 40.0 7.0 0.0 0.0 0.0
Identified 49.0 36.0 6.7 0.0 0.0 0.0

Table 5.10 The identification results for n = 2 and n_max = 2
Void x1 (mm) y1 (mm) R1 (mm) x2 (mm) y2 (mm) R2 (mm)
Actual 65.0 35.0 8.0 135.0 45.0 11.0
Identified 57.6 43.7 7.5 135.6 40.8 11.4
Actual 83.0 40.0 11.0 103.0 65.0 10.0
Identified 73.5 37.0 10.5 105.2 64.2 10.4
Actual 65.0 35.0 8.0 100.0 45.0 8.0
Identified 75.8 35.9 7.6 102.7 40.9 7.5

5.3.5.5 The Evolutionary Identification of a Single Crack

An elastic structural element of the shape and dimensions presented in Fig. 5.22
containing a single crack consisting of two linear segments is considered. The

Fig. 5.22 The plate with one crack—dimensions, boundary conditions and sensor points

material constants of the structure are: E = 2.0e9 MPa, ν = 0.25. The structure is loaded by a traction field p = 10 MN/m².
The aim of the identification is to find the size and position of the crack, having measured displacements at 37 boundary sensor points. It is assumed that measurements are disturbed by a Gaussian measurement error. The fitness function values for each individual in the population are obtained from the analysis of the structure by means of the dual BEM [18].
The chromosome has the form: ch = [x1, y1, ll1, αl1, ll2, αl2], where x1, y1 are the coordinates of the first crack tip, ll is the segment length, and αl is the segment slope angle.
The parameters of the EA are:
– the population size: pop_size = 100;
– the maximum number of generations: max_life = 1000;
– the probability of uniform mutation: pum = 0.01;
– the probability of boundary mutation: pbm = 0.05;
– the probability of simple crossover: psc = 0.1;
– the probability of heuristic crossover: phc = 0.1;
– the probability of arithmetical crossover: pac = 0.1.
The actual and final positions of the crack are shown in Fig. 5.23. The actual and
final values of design variables are presented in Table 5.11. It can be observed that
the evolutionary algorithm properly identified the size and position of the segmental
crack.

Fig. 5.23 The plate with one crack—the identification results

Table 5.11 The identification results for the plate with one crack
Variable no., dimension Actual value Range Final value Error (%)
1 (m) 0.00 −0.20; 0.90 0.000 –
2 (m) −0.04 −0.20; 0.90 −0.0401 0.25
3 (m) 0.04 0.0; 0.1 0.0413 3.25
4 (°) 0.0 −90; 90 0.0 –
5 (m) 0.058 0.0; 0.1 0.061 5.17
6 (°) 62.0 −90; 90 61.1 1.46

5.3.5.6 The Evolutionary Identification of an Unknown Number of Cracks

An elastic structural element of the shape and dimensions presented in Fig. 5.24 containing linear cracks is considered. The material constants of the structure are: E = 2.0e9 MPa, ν = 0.25. The structure is loaded by a traction field p = 10 MN/m².
The aim of the identification is to find the number, size and positions of the cracks, having measured displacements at 81 boundary sensor points. It is assumed that measurements are disturbed by a Gaussian measurement error. The fitness function values for each chromosome are calculated by means of the dual BEM.
It is assumed that the maximum number of cracks n_max = 5, while the actual number of cracks is 2. A total of 21 design variables represent the number of cracks n and the coordinates of 10 possible crack tips, respectively. The chromosome has the form:

$$\mathbf{ch} = \left[\, n,\ x_1^{tip1}, y_1^{tip1}, x_1^{tip2}, y_1^{tip2},\ \ldots,\ x_{n\_max}^{tip1}, y_{n\_max}^{tip1}, x_{n\_max}^{tip2}, y_{n\_max}^{tip2} \,\right]$$

Fig. 5.24 The plate with 1–5 cracks—dimensions, boundary conditions and sensor points

Fig. 5.25 The plate with 1–5 cracks—the identification results

The parameters of the EA are:
– the population size: pop_size = 100;
– the maximum number of generations: max_life = 1000;
– the probability of uniform mutation: pum = 0.01;
– the probability of boundary mutation: pbm = 0.05;
– the probability of simple crossover: psc = 0.1;
– the probability of heuristic crossover: phc = 0.1;
– the probability of arithmetical crossover: pac = 0.1.
The actual and final positions of the cracks are shown in Fig. 5.25. The actual and final values of the variables (for active genes) are presented in Table 5.12.
The evolutionary algorithm properly found the number of cracks. The positions of the cracks were identified with good precision.

Table 5.12 The identification results for the plate with 1–5 cracks
Variable no., dimension Actual value Range Final value Error (%)
1 (–) 2 1, …, 5 2 0
2 (m) 0.00 −0.20; 0.90 0.01 –
3 (m) −0.04 −0.20; 0.90 −0.041 2.5
4 (m) 0.04 −0.20; 0.90 0.04 0
5 (m) −0.02 −0.20; 0.90 −0.019 5.0
6 (m) 0.04 −0.20; 0.90 0.042 5.0
7 (m) −0.07 −0.20; 0.90 −0.073 4.28
8 (m) 0.07 −0.20; 0.90 0.072 2.85
9 (m) −0.04 −0.20; 0.90 −0.043 7.5

5.3.6 Concluding Remarks

The application of computational intelligence methods for the identification of different internal defects has been presented. Evolutionary computing is a very effective technique for inverse problems. The presented approach enables finding not only the positions and shapes of defects but also their number. The number and location of the sensor points, as well as their type, highly influence the results of the identification. Combining the measured information (application of different types of sensors) makes the identification process faster and more unequivocal. To speed up the evolutionary identification, it is possible to use the intelligent identification system which consists of the evolutionary algorithm (EA) and the artificial neural network (ANN) or the neuro-fuzzy inference system (NFIS). The NFIS and the ANN can be used as approximators of the boundary-value problem in identification tasks.

5.4 Identification of Material Properties

5.4.1 Introduction

Composite materials, especially composite laminates, play an increasingly important role in modern industry due to their properties [14]. It is possible to tailor the material properties to the designer's requirements by manipulating the component materials, stacking sequence, fibre orientations, layer thicknesses, and so on, in order to obtain the optimal material properties for the given application. Composite materials also have a high strength-to-weight ratio in comparison with conventional, usually isotropic, materials.
Because of the anisotropy of laminated structures, it is often necessary to identify the elastic properties of the designed and manufactured structure. As laminate elements are typically produced individually or in short series, nondestructive tests are required. Conventional methods of stiffness parameter identification in composites, based on strain field measurements, are not efficient enough due to sample size dependencies, boundary effects and problems with obtaining homogeneous stress and strain fields. As a consequence of these difficulties, indirect methods, like numerical and mixed numerical-experimental methods, have been developed recently [6].
The identification procedure is based on the comparison of measured state field values and values taken from the numerical model of the structure [20]. In the present chapter measurements of the natural frequencies, accelerations at the sensor points (frequency response of the structure) and static data (displacements, strains, stresses) collected at sensor points are used as the information necessary for the identification procedure. As the composite identification problems are multimodal, global optimization methods in the form of evolutionary algorithms and artificial immune systems are used. To accelerate the computations, a hybrid optimization method coupling global and local methods is used.
Numerical methods like the boundary element method or the finite-element
method can be used to solve the boundary-value problem. The finite-element
method software is employed to solve a boundary-value problem for the laminate
plates.

5.4.2 Formulation of the Materials Identification Task

The aim of the identification procedure is to find the values of the elastic constants for multilayered, symmetrical laminates stacked of many layers with different fibre orientations. Simple and hybrid laminates (interply hybrids with laminas made of different materials) are considered.
The identification is performed having measured: eigenfrequencies, accelerations
at the sensor points (frequency response) and displacements, strains or stresses at
the sensor points. To calculate the objective function value for each candidate
solution, the boundary-value problem for laminates is solved by means of the
commercial finite-element method (FEM) software package MSC.Patran/Nastran.
All measurements are simulated numerically (numerical experiment) assuming
ideal and disturbed responses of the structure.
To solve the laminates’ identification task, the following computational intelli-
gence methods have been used:
• distributed evolutionary algorithm (DEA) described in Sect. 3.3;
• parallel artificial immune systems (PAIS) presented in Sect. 3.6;
• two-step optimization strategy depicted in Sect. 3.9.4.
The identification problem can be treated as the minimization of the objective functional J0 with respect to a design variable vector x:

$$\min_{\mathbf{x}} \left( J_0 \right) \qquad (5.4.1)$$

The functional J0 has one of the following forms:

$$J_0(\mathbf{x}) = \sum_{i=1}^{N} \left| \frac{\hat{q}_i - q_i}{\hat{q}_i} \right| \qquad (5.4.2)$$

or

$$J_0(\mathbf{x}) = \sum_{i=1}^{N} \left( \hat{q}_i - q_i \right)^2 \qquad (5.4.3)$$

where x = (xi) are the parameters representing the identified elastic constants, q̂i are measured values of the state fields, and qi are values of the same state fields calculated from the numerical model.
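
For illustration only, the two discrepancy measures (5.4.2) and (5.4.3) can be written as short functions; the numerical values below are invented, and in practice the simulated responses q come from the FEM model of the laminate.

```python
def j0_relative(q_hat, q):
    """Eq. (5.4.2): sum of absolute relative differences between measured (q_hat)
    and simulated (q) state field values."""
    return sum(abs((qh - qs) / qh) for qh, qs in zip(q_hat, q))

def j0_squared(q_hat, q):
    """Eq. (5.4.3): sum of squared differences."""
    return sum((qh - qs) ** 2 for qh, qs in zip(q_hat, q))

# e.g. three measured and three simulated eigenfrequencies (made-up values)
measured = [120.5, 310.2, 498.7]
simulated = [118.9, 315.0, 490.1]
print(j0_relative(measured, simulated), j0_squared(measured, simulated))
```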
Laminate layers are orthotropic materials with four independent elastic constants (see Sect. 4.9.2): two Young's moduli E1 and E2, the shear modulus G12 and Poisson's ratio ν12. The design variable vector x (a chromosome in the evolutionary algorithm, a B-cell in the artificial immune system) has the form [4]:
• for simple laminates:

$$\mathbf{x} = \left[ E_1, E_2, G_{12}, \nu_{12} \right] \qquad (5.4.4)$$

• for hybrid laminates:

$$\mathbf{x} = \left[ E_1^e, E_2^e, G_{12}^e, \nu_{12}^e, \rho^e,\ E_1^i, E_2^i, G_{12}^i, \nu_{12}^i, \rho^i \right] \qquad (5.4.5)$$

where e = material of external layers, i = material of internal layers, ρ = mass density.

5.4.3 Measurements

Different measurement data are considered to solve the identification problem for simple and hybrid laminates. Static measurements in the form of displacements, strains or stresses require many sensor points, which can be inconvenient in practice. The number of sensor points depends on the complexity of the problem and influences the identification results. Too small a set of sensor points can cause ambiguity of the identification procedure.
To avoid these drawbacks, dynamic properties of the laminate structures can be considered and modal analysis techniques may be applied. The modal model of the dynamic structure is the ordered set of eigenfrequencies, damping coefficients and vibration forms. The modal analysis can be performed theoretically, by using numerical models and the finite or boundary element method, or experimentally. Modal analysis is often used for the optimization of the dynamic properties of a structure, in order to minimize vibration propagation in it, and for machine diagnostics [16].
A typical approach is to measure the natural frequencies of the structure. The results obtained from the eigenfrequency measurements may not be satisfactory due to the insufficient number of data used in the identification. In order to increase the number of measurement data, one can use the frequency response information. This approach may be very effective as there is usually no need to measure frequency responses at more than one sensor point. Acceleration measurements seem to be very useful from the practical point of view because accelerometers have a relatively small mass compared with displacement and velocity sensors. Moreover, it is possible to obtain velocity and displacement signals by integration of the acceleration signal.
In the present chapter static quantities (displacements, strains, stresses) and dynamic quantities (eigenfrequencies, frequency response data) are measured. All measurements are simulated numerically on the basis of the finite-element model of the structures [5].

5.4.4 Numerical Examples

5.4.4.1 Numerical Example 1: Immune Identification of a Simple Laminate

A rectangular plate made of a symmetrical laminate with the known stacking sequence [0/15/-15/45/-45/90/30/-30/0]s (Fig. 5.26) is considered. Each ply of the laminate is made of the same orthotropic epoxy-glass material of thickness hi = 0.002 m. It is assumed that the density of the material is known and equal to ρ = 1600 kg/m³.
The aim is to identify four elastic constants of the laminate having measured: (i) displacements at sensor points, (ii) eigenfrequencies. The minimized functional has the form described by Eq. (5.4.2).
To solve the boundary-value problem the plate is divided into 120 four-node (QUAD4) finite elements. Ideal and disturbed measurements are considered. It is assumed that the measurement error with the Gaussian distribution does not exceed 10%, with the expected value E(q) equal to the ideal value, and standard deviation σ(q) = E(q)/30.
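
A disturbed data set of this kind can be generated, for instance, as in the sketch below; clipping the noise at the 10% bound is an assumption made here only to respect the stated maximum error.

```python
import random

def disturb(q_ideal, rel_sigma=1.0 / 30.0, max_rel_error=0.10):
    """Gaussian noise with E(q) equal to the ideal value and sigma(q) = E(q)/30,
    clipped so that the relative error never exceeds 10%."""
    disturbed = []
    for q in q_ideal:
        noisy = random.gauss(q, rel_sigma * abs(q))
        low, high = sorted((q * (1 - max_rel_error), q * (1 + max_rel_error)))
        disturbed.append(min(max(noisy, low), high))
    return disturbed

print(disturb([120.5, 310.2, 498.7]))   # e.g. three measured eigenfrequencies
```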
To solve the identification task, the parallel artificial immune system is
employed. The parameters of the PAIS are:

Fig. 5.26 Identified simple laminate: a boundary conditions, b sensor points location

– the number of memory cells: n_mc = 10;
– the number of clones: n_c = 10;
– the maximum number of iterations: ni = 25;
– the Gaussian mutation range: mr = 0.5;
– the minimal crowding distance: cdist = 0.2.
The actual values and variation ranges for the identified elastic constants of the
considered laminate are presented in Table 5.13.
In the first case the plate is loaded by three forces: F1 = 500 N, F2 = –1500 N,
F3 = 500 N. The displacements at 11 sensor points are measured. The identification
results for the ideal (no noise) and disturbed (noise) data are collected in
Table 5.14.
In the second case the first 25 eigenfrequencies of the plate are measured. The identification results are collected in Table 5.15.

Table 5.13 Identified elastic constants of the simple laminate
Constant Actual value Range
E1 (MPa) 1.81 × 10⁵ 1.30 × 10⁵ – 2.20 × 10⁵
E2 (MPa) 1.03 × 10⁴ 0.80 × 10⁴ – 1.30 × 10⁴
G12 (MPa) 7.17 × 10³ 5.00 × 10³ – 9.00 × 10³
ν12 0.28 0.22 – 0.32

Table 5.14 Identification results for displacement measurements
Constant Found value (no noise) Error (%) Found value (noise) Error (%)
E1 (MPa) 181033.100 0.02 1.81E+05 0.02
E2 (MPa) 10317.250 0.17 9.88E+03 4.03
G12 (MPa) 7155.125 0.21 6.60E+03 7.88
ν12 0.276 1.47 2.87E−01 2.33

Table 5.15 Identification results for eigenfrequency measurements
Constant Found value (no noise) Error (%) Found value (noise) Error (%)
E1 (MPa) 1.828 × 10⁵ 0.99 1.804 × 10⁵ 0.35
E2 (MPa) 1.053 × 10⁴ 2.28 1.102 × 10⁴ 6.95
G12 (MPa) 7.126 × 10³ 0.61 7.105 × 10³ 0.91
ν12 0.261 6.77 0.248 11.6

5.4.4.2 Numerical Example 2: Evolutionary Identification of a Hybrid Laminate

A rectangular plate made of a symmetrical hybrid laminate is considered (Fig. 5.27). The stacking sequence of the laminate is (0/15/-15/45/-45)s. External plies (with a fibre angle equal to 0°) are made of the glass-epoxy Scotchply 1002 material (Me), while the core layers are made of the graphite-epoxy T300/5280 material (Mi). All layers have the same thickness hi = 0.002 m.
The aim is to identify 10 constants of the hybrid laminate (four elastic constants and the mass density for each of the two materials). Acceleration amplitudes at one sensor point for a varying excitation frequency (frequency response of the structure) are the measurement data. The excitation frequency varied in the range 10–2000 Hz with a step of 10 Hz. The frequency response diagram is presented in Fig. 5.28.
The minimized functional has the form described by Eq. (5.4.3). To solve the
boundary-value problem, the plate is divided into 400 four-node (QUAD4) finite
elements. Ideal measurements are considered. To solve the identification task, the
distributed evolutionary algorithm is employed. The parameters of the DEA are:
– the number of subpopulations n_sp = 2;
– the subpopulation size: sp_size = 50;
– the number of genes: ng = 10;
– the maximum number of generations: max_life = 1000;
– the probability of simple crossover: psc = 1.0;
– the probability of Gaussian mutation: pgm = 1/chromosome_length;
– the probability of uniform mutation: pum = 0.1.
The actual values of the identified parameters, variable ranges and identification results for materials Me and Mi are collected in Tables 5.16 and 5.17, respectively.
The reduction of computing time with the use of more than one processor is also considered. The computing time after 25 generations of the EA is taken into account. The speedup is calculated as the computing time with the use of one processing unit divided by the computing time with the use of n processing units. The results are tabulated in Table 5.18.

Fig. 5.27 Identified hybrid laminate: a geometry, excitation and sensor points, b material location

Fig. 5.28 Frequency response diagram

Table 5.16 Identification results of material Me
Constant E1 (GPa) E2 (GPa) G12 (GPa) ν12 ρ (kg/m³)
Actual value 181 10.3 7.17 0.28 1600
Range 100–250 0.5–30 0.5–30 0.2–0.4 1400–2000
Found value 183.81 10.35 7.63 0.268 1896
Error (%) 1.55 0.44 6.42 4.18 18.55

Table 5.17 Identification results of material Mi
Constant E1 (GPa) E2 (GPa) G12 (GPa) ν12 ρ (kg/m³)
Actual value 38.6 8.27 4.14 0.26 1800
Range 10–80 0.5–30 0.5–30 0.2–0.4 1400–2000
Found value 34.96 8.65 3.93 0.231 1725.32
Error (%) 9.44 4.61 5.04 11.38 4.15

Table 5.18 Computation speedup for 1–3 processors
Number of processing units Time (s) Speedup
1. (1.4 GHz Intel Xeon) 6981 1.00
2. (SMP 2×1.4 GHz Intel Xeon) 4125 1.69
3. (SMP 2×1.4 GHz Intel Xeon + 1×Athlon XP 2.5 GHz) 3048 2.29

5.4.4.3 Numerical Example 3: Two-Stage Identification of a Simple Laminate

A rectangular simple laminate plate presented in Fig. 5.29 is considered. The laminate is made of glass-epoxy material with each lamina having the thickness
hi = 0.002 m. The stacking sequence of the laminate plate is (0/15/45/-15/-45/90/
15/-15)s.
The aim is to identify four elastic constants of the laminate. The minimized
functional has the form described by Eq. (5.4.2). Frequency response data are
considered as measurements. The excitation frequency has varied in the range 10–
2000 Hz with a step of 10 Hz. A total of 200 acceleration amplitudes have been
measured (numerical experiment). To solve the boundary-value problem the plate
has been divided into 400 four-node finite elements (QUAD4). The ideal mea-
surements are considered.
The evolutionary algorithm is used in the first stage of the identification. The
parameters of the EA are as follows:
– the population size: p_size = 100;
– the number of genes: ng = 4;
– the maximum number of generations: max_life = 400;
– the probability of arithmetic crossover: pac = 0.2;
– the probability of Gaussian mutation: pgm = 0.4.
The number of iterations of the local method has been assumed as 500. The
actual values of identified constants, variable ranges and identification results after
1st and after 2nd stage of the strategy are collected in Table 5.19.

Fig. 5.29 Simple laminate for two-stage identification

Table 5.19 Two-step identification results
Constant E1 (GPa) E2 (GPa) G12 (GPa) ν12
Actual value 38.6 8.28 4.14 0.26
Range 20–60 4–15 1–10 0.1–0.4
After 1st stage 39.2 8.14 4.07 0.27
Error (%) 1.55 1.69 1.72 3.85
After 2nd stage 38.6 8.28 4.14 0.26
Error (%) 0.0 0.0 0.0 0.0

5.4.5 Concluding Remarks

In the present chapter the application of different computational intelligence techniques coupled with the finite-element method for composites has been presented. Evolutionary algorithms or artificial immune systems have been used as the global optimization methods, and local optimization methods supported by an artificial neural network have been applied to perform the second stage of the hybrid strategy. Material constants of composites in the form of multilayered, simple and hybrid laminates have been identified.
Different types of measurement data have been considered as the values necessary to determine (identify) the material constants of the laminates: static (displacements, strains or stresses) and dynamical (eigenfrequencies or frequency response data). The frequency response measurements are especially convenient as the measurements are typically performed at one sensor point.
Positive identification results have been obtained in all presented cases, especially for a small number of design variables (simple laminates). The second stage of the hybrid algorithm (two-stage strategy) significantly increases the precision of the identification.
As the numerical experiment has been performed to obtain the measurement data, the ideal as well as disturbed responses of the structure have been considered. The influence of a measurement error not greater than 10% is not very significant: with the same number of iterations of the global method the parameters are identified less precisely (results similar to those for the "ideal" measurements can typically be obtained in a longer time).
The disadvantage of the population-based algorithms (EA, AIS), namely the time-consuming calculations, can be effectively reduced by using parallel versions of the global optimization methods. The application of the ANN as an approximation tool in the second stage of the two-stage strategy also reduces the number of objective function computations.

References

1. Aliabadi MH, Rooke DP (1991) Numerical fracture mechanics. Solid Mechanics and Its Applications, Computational Mechanics Publications, Southampton/Boston
2. Banerjee PK (1994) The boundary element method in engineering. McGraw-Hill Book Company, London
3. Beluch W (2000) Crack identification using evolutionary algorithms. In: Proceedings of the symposium on methods of artificial intelligence in mechanical engineering, AI-MECH 2000, Gliwice, Poland
4. Beluch W, Burczyński T, Kuś W (2004) Distributed evolutionary algorithms in identification of material constants in composites. In: Proceedings of the KAEIOG 2004 conference, Kazimierz, pp 1–8
5. Beluch W, Kuś W, Burczyński T (2003) Evolutionary identification of material constants in composites. In: Full papers, symposium on methods of artificial intelligence, AI-METH 2003, Gliwice, pp 22–23
6. Bledzki AK, Kessler A, Rickards R, Chate A (1999) Determination of elastic constants of glass/epoxy unidirectional laminates by the vibration testing of plates. Compos Sci Technol 59(13):2015–2024
7. Bonnet M, Burczyński T, Nowakowski M (2002) Sensitivity analysis for shape perturbation of cavity or internal crack using BIE and adjoint variable approach. Int J Solids Struct 39:2365–2385
8. Bui HD (1994) Inverse problems in the mechanics of materials: an introduction. CRC Press, Boca Raton
9. Burczyński T, Beluch W, Długosz A, Kuś W, Nowakowski M, Orantek P (2002) Evolutionary computation in optimization and identification. CAMES 9(1):3–20
10. Burczyński T, Orantek P, Skrobol A (2004) Fuzzy-neural and evolutionary computation in identification of defect. J Theoret Appl Mech 42(3):445–460
11. Burczyński T, Skrobol A (2004) Approximation of a boundary-value problem using artificial neural networks. In: Recent developments in artificial intelligence methods, Gliwice, pp 79–84
12. Długosz A (2004) Evolutionary computation in thermoelastic problems. In: Osyczka A, Burczyński T (eds) IUTAM symposium on evolutionary methods in mechanics. Kluwer, Dordrecht, pp 69–80
13. Gaul L, Kögl M, Wagner M (2003) Boundary element methods for engineers and scientists: an introductory course with advanced topics. Springer
14. Gay D, Hoa S (2007) Composite materials: design and applications. CRC Press
15. Jang JR, Sun Ch, Mizutani E (1997) Neuro-fuzzy and soft computing: a computational approach to learning and machine intelligence. Prentice-Hall
16. Mendes M, Silva JM (1997) Theoretical and experimental modal analysis. Research Studies Press Ltd
17. Michalewicz Z (1992) Genetic algorithms + data structures = evolution programs. AI Series. Springer, New York
18. Portela A, Aliabadi MH, Rooke DP (1992) The dual boundary element method: effective implementation for crack problems. Int J Numer Methods Eng 33:1269–1287
19. Sladek V, Sladek J (1983) Boundary integral equation method in thermoelasticity, part I: general analysis. Appl Math Model 7:241–253
20. Trujillo DM, Busby HR (1997) Practical inverse analysis in engineering. CRC Press
21. Zienkiewicz OC, Taylor RL (2000) The finite element method, vol 1–3. Butterworth, Oxford
Chapter 6
Closing Remarks

The presented methodology of intelligent design of structures and its applications show that it is an effective and useful tool for optimization. We have described several bio-inspired intelligent algorithms based on the theory of evolution, immune systems, neural networks and the behaviour of biological systems. Their applications, together with discrete models of systems based on FEM, BEM and coupled FEM/BEM, turned out to be very helpful in creating new structures. The common feature of these algorithms is the ability to learn, which is usually attributed to natural intelligence. The proposed methodology is very flexible and open-ended, and it is also easy to parallelize.
Different kinds of optimization are considered, such as shape, topology, size and material optimization of 2D and 3D structures subjected to static and dynamic mechanical and thermo-mechanical loadings, structures with nonlinearities and cracks, as well as composite structures. Multiobjective optimization for coupled problems is also taken into account. Several numerical examples illustrating these kinds of optimization are presented. A special type of problem is related to solving inverse problems, in which boundary conditions, defects such as voids or cracks, and material characteristics are considered within the framework of the intelligent methodology.
The presented methodology has turned out to be an alternative to methods based on sensitivity analysis and other classical methods. It is resistant to different kinds of uncertainties encountered in systems modelled with various levels of granularity.
On the grounds of the results presented in the book, one can say that the relationship between the mechanisms governing biological systems and the creativity of intelligent optimal design of artefacts undoubtedly exists.

