Tadeusz Burczyński · Wacław Kuś ·
Witold Beluch · Adam Długosz ·
Arkadiusz Poteralski ·
Mirosław Szczepanik
Intelligent Computing in Optimal Design
Solid Mechanics and Its Applications
Volume 261
Founding Editor
G. M. L. Gladwell, University of Waterloo, Waterloo, ON, Canada
Series Editors
J. R. Barber, Department of Mechanical Engineering, University of Michigan,
Ann Arbor, MI, USA
Anders Klarbring, Mechanical Engineering, Linköping University, Linköping,
Sweden
The fundamental questions arising in mechanics are: Why?, How?, and How much?
The aim of this series is to provide lucid accounts written by authoritative
researchers giving vision and insight in answering these questions on the subject of
mechanics as it relates to solids. The scope of the series covers the entire spectrum
of solid mechanics. Thus it includes the foundation of mechanics; variational
formulations; computational mechanics; statics, kinematics and dynamics of rigid
and elastic bodies; vibrations of solids and structures; dynamical systems and
chaos; the theories of elasticity, plasticity and viscoelasticity; composite materials;
rods, beams, shells and membranes; structural control and stability; soils, rocks and
geomechanics; fracture; tribology; experimental mechanics; biomechanics and
machine design. The median level of presentation is the first year graduate student.
Some texts are monographs defining the current state of the field; others are
accessible to final year undergraduates; but essentially the emphasis is on
readability and clarity.
Springer and Professors Barber and Klarbring welcome book ideas from
authors. Potential authors who wish to submit a book proposal should
contact Dr. Mayra Castro, Senior Editor, Springer Heidelberg, Germany,
email: mayra.castro@springer.com
Indexed by SCOPUS, Ei Compendex, EBSCO Discovery Service, OCLC,
ProQuest Summon, Google Scholar and SpringerLink.
Tadeusz Burczyński
Institute of Fundamental Technological Research of the Polish Academy of Sciences, Warsaw, Poland
Cracow University of Technology, Cracow, Poland

Wacław Kuś
Silesian University of Technology, Gliwice, Poland

Adam Długosz
Silesian University of Technology, Gliwice, Poland

Witold Beluch
Silesian University of Technology, Gliwice, Poland

Mirosław Szczepanik
Silesian University of Technology, Gliwice, Poland

Arkadiusz Poteralski
Silesian University of Technology, Gliwice, Poland
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Contents

1 Introduction
2 Computational Models of Structures
  2.1 Finite-Element Models of Structures
    2.1.1 Introduction
    2.1.2 The FEM Formulation for Linear Structures
  2.2 Boundary Element Models of Structures
    2.2.1 Introduction
    2.2.2 BEM for 2D Structures
  2.3 FE and BE Models of Structures
  References
3 Intelligent Computing Techniques
  3.1 Introduction to Computational Intelligence
  3.2 Sequential Evolutionary Algorithms
    3.2.1 Introduction
    3.2.2 Evolutionary Operators
  3.3 Parallel and Distributed Evolutionary Algorithms
    3.3.1 Introduction
    3.3.2 The Parallel Evolutionary Algorithm
    3.3.3 The Distributed Evolutionary Algorithm
    3.3.4 The Improved Distributed Evolutionary Algorithm
    3.3.5 Optimal Parameters of the Distributed Evolutionary Algorithm
    3.3.6 Numerical Examples
    3.3.7 Concluding Remarks
  3.4 Information Granularity and Granular Computing
    3.4.1 Introduction
    3.4.2 Interval Numbers and Interval Arithmetic
2.1.1 Introduction
This section describes the basics of the finite element method (FEM). The name of the method was first used by Clough in the 1960s [9] in a work devoted to plates. Since then, the method has been extended to different types of structures and to fields beyond mechanics. The first book describing the FEM, by Zienkiewicz, was published in 1971 [14] and was later improved and extended [16]. The last edition consists of three volumes describing many aspects of the FEM in detail. The FEM is the most popular method used in academia and industry for solving problems based on partial differential equations [8]. The method works well for both linear and nonlinear problems and for isotropic and anisotropic materials. The FEM formulation for linear isotropic static problems is described in the following sections.
Let us consider a body $\Omega$ bounded by the boundary $\Gamma$, as shown in Fig. 2.1. The body is loaded with body forces $\mathbf{b}$ in the domain $\Omega$, tractions $\mathbf{p}^0$ act on the boundary segment $\Gamma_p$ and the displacements $\mathbf{u}^0$ are prescribed on the boundary segment $\Gamma_u$, whereby $\Gamma_p \cup \Gamma_u = \Gamma$ and $\Gamma_p \cap \Gamma_u = \emptyset$.
The partial differential equation of equilibrium for the body can be expressed as:

$$\sigma_{ij,j} + b_i = 0 \qquad (2.1.1)$$

where $\sigma_{ij}$ is a component of the stress tensor and $(\cdot)_{,j}$ denotes differentiation with respect to the coordinate $x_j$. The boundary conditions are given by equations:
where $\lambda$ and $\mu$ are the Lamé constants and $\delta_{ij}$ is the Kronecker delta. Strains are defined on the basis of the displacements $\mathbf{u}$ as:

$$\varepsilon_{ij} = \frac{1}{2}\left(u_{i,j} + u_{j,i}\right) \qquad (2.1.6)$$

$$\delta\varepsilon_{ij} = \frac{1}{2}\left(\delta u_{i,j} + \delta u_{j,i}\right) \qquad (2.1.8)$$
The stress and strain tensors can, due to their symmetry, be presented in a more compact form as Voigt vectors. The matrix $\mathbf{N}$ contains the shape functions and $\mathbf{u}$ is a column matrix containing the nodal displacements. Virtual strains can be expressed as:
$$\delta\boldsymbol{\varepsilon} = \mathbf{B}\,\delta\mathbf{u} \qquad (2.1.14)$$
The relation between stresses and strains for linear problems, assuming small strains and displacements, can be formulated in matrix form as:

$$\boldsymbol{\sigma} = \mathbf{D}\boldsymbol{\varepsilon} \qquad (2.1.18)$$

$$\boldsymbol{\varepsilon} = \mathbf{B}\mathbf{u} \qquad (2.1.19)$$

$$\mathbf{K}\mathbf{u} = \mathbf{f} \qquad (2.1.22)$$
The equation can be solved by taking into account the boundary conditions. The
vector of nodal displacements u is obtained after solving the system of equations.
Internal stresses can be obtained using Eq. (2.1.23):

$$\boldsymbol{\sigma} = \mathbf{D}\mathbf{B}\mathbf{u} \qquad (2.1.23)$$
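The assembly and solution steps above can be sketched for the simplest possible case: a 1D bar discretized with two linear elements. The material data, element length and load below are illustrative assumptions, and the small Gaussian-elimination helper stands in for a production linear solver:

```python
# A minimal sketch of Ku = f (Eq. 2.1.22) for an axially loaded 1D bar with
# two linear elements; element stiffness k_e = EA / L_e. All numbers are
# assumed for illustration only.

def assemble_K(n_elems, EA, Le):
    """Assemble the global stiffness matrix of a uniform 1D bar."""
    n = n_elems + 1
    K = [[0.0] * n for _ in range(n)]
    ke = EA / Le
    for e in range(n_elems):
        K[e][e] += ke
        K[e][e + 1] -= ke
        K[e + 1][e] -= ke
        K[e + 1][e + 1] += ke
    return K

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

EA, Le = 1.0e6, 0.5                  # axial stiffness and element length (assumed)
K = assemble_K(2, EA, Le)
# Boundary conditions: u(0) = 0 (drop row/column 0), axial force at the free end.
Kr = [row[1:] for row in K[1:]]
u = solve(Kr, [0.0, 1000.0])         # nodal displacements of the free nodes
# Post-processing in the spirit of sigma = DBu (Eq. 2.1.23): element axial force.
u_full = [0.0] + u
axial_force = [EA * (u_full[e + 1] - u_full[e]) / Le for e in range(2)]
```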
2.2.1 Introduction
The boundary element method (BEM) is not as popular as the FEM but offers some advantages, especially when dealing with infinite or semi-infinite structures or bodies with complex geometries. For most problems, only the boundary of the body is discretized, which leads to fewer elements compared with the FEM. For some specific problems, like elastoplasticity, internal elements must be introduced. The main disadvantage of the BEM is its fully populated, nonsymmetric matrices, which cause problems when solving the resulting systems of algebraic equations. Details about the BEM can be found in Banerjee [1], Beer [2], Brebbia and Dominguez [3], Brebbia et al. [4], Dominguez [10], Burczyński [5, 6], Burczyński and Grabacki [7], Gaul et al. [11], and Sladek and Sladek [13]. The BEM is still under development, and new approaches such as the fast multipole BEM [12] have recently been introduced that overcome some of its disadvantages.
The following sections present the BEM for the elastic 2D static problem.
Let us consider the body as in Sect. 2.1.2 with the same boundary conditions and equilibrium equations. The displacement equation of the static elastic problem can be formulated as:

$$\mathbf{L}_s \mathbf{u} = -\mathbf{b}, \quad x \in \Omega \qquad (2.2.1)$$

where the Navier operator for the 2D problem has the form:

$$\mathbf{L}_s = \begin{bmatrix} \mu\Delta + (\lambda+\mu)\dfrac{\partial^2}{\partial x_1^2} & (\lambda+\mu)\dfrac{\partial^2}{\partial x_1\,\partial x_2} \\[2mm] (\lambda+\mu)\dfrac{\partial^2}{\partial x_2\,\partial x_1} & \mu\Delta + (\lambda+\mu)\dfrac{\partial^2}{\partial x_2^2} \end{bmatrix} \qquad (2.2.2)$$
After taking into account the Maxwell–Betti reciprocal work theorem, we can obtain the Somigliana identity:

$$\mathbf{u}(x) = \int_{\Gamma} \mathbf{U}^{*}(x,y)\,\mathbf{p}(y)\,d\Gamma(y) - \int_{\Gamma} \mathbf{P}^{*}(x,y)\,\mathbf{u}(y)\,d\Gamma(y) + \int_{\Omega} \mathbf{U}^{*}(x,y)\,\mathbf{b}(y)\,d\Omega(y) \qquad (2.2.3)$$
where

$$D_{ijk}(x,y) = C_{ijlm}\,\frac{\partial U^{*}_{lk}(x,y)}{\partial x_m} \qquad (2.2.5)$$

$$S_{ijk}(x,y) = C_{ijlm}\,\frac{\partial P^{*}_{lk}(x,y)}{\partial x_m} \qquad (2.2.6)$$

$$U^{*}_{ij}(x,y) = -\frac{1}{8\pi(1-\nu)\mu}\left[(3-4\nu)\ln(r)\,\delta_{ij} - r_{,i}\,r_{,j}\right] \qquad (2.2.7)$$

$$P^{*}_{ij}(x,y) = -\frac{1}{4\pi(1-\nu)r}\left\{\frac{\partial r}{\partial n}\left[(1-2\nu)\,\delta_{ij} + 2\,r_{,i}\,r_{,j}\right] - (1-2\nu)\left(r_{,i}\,n_j - r_{,j}\,n_i\right)\right\} \qquad (2.2.8)$$

and

$$r_{,i} = \frac{\partial r}{\partial x_i(y)} = \frac{r_i}{r}$$
The Somigliana identity (2.2.3) can be used when all displacements and tractions on the boundary of the structure are known. In boundary-value problems, only parts of the displacements and tractions are known. If the point x tends to the boundary, Eq. (2.2.3) becomes a boundary integral equation. The equation can be modified by surrounding the boundary point x with a small boundary $\Gamma_\varepsilon$ of radius $\varepsilon$, as shown in Fig. 2.3. The boundary $\Gamma$ can then be expressed as a sum:

$$\Gamma = (\Gamma - \Gamma_{\varepsilon}) + \Gamma_{\varepsilon} \qquad (2.2.9)$$

where $\Omega_\varepsilon$ is a spherical region near the source point x with radius $\varepsilon$. Due to the singular character of $\mathbf{U}^{*}$, the first and the third integrals are improper. The second integral can be expressed as a sum of two integrals:
$$\mathbf{u}(x) = \int_{\Gamma} \mathbf{U}^{*}(x,y)\,\mathbf{p}(y)\,d\Gamma(y) + \int_{\Omega} \mathbf{U}^{*}(x,y)\,\mathbf{b}(y)\,d\Omega(y) - \lim_{\varepsilon \to 0}\left\{ \int_{\Gamma-\Gamma_{\varepsilon}} \mathbf{P}^{*}(x,y)\,\mathbf{u}(y)\,d\Gamma(y) + \int_{\Gamma_{\varepsilon}} \mathbf{P}^{*}(x,y)\,\mathbf{u}(y)\,d\Gamma(y) \right\} \qquad (2.2.11)$$
where the first integral is equal to zero due to the continuity of displacements. After the rearrangement, Eq. (2.2.12) can be formulated as:

$$\mathbf{c}(x) = \mathbf{I} + \lim_{\varepsilon \to 0} \int_{\Gamma_{\varepsilon}} \mathbf{P}^{*}(x,y)\,d\Gamma(y) \qquad (2.2.13)$$
where $\mathbf{I}$ is the unit matrix. Taking into account Eq. (2.2.13), the Somigliana identity may be written in the form:

$$\mathbf{c}(x)\,\mathbf{u}(x) + \int_{\Gamma} \mathbf{P}^{*}(x,y)\,\mathbf{u}(y)\,d\Gamma(y) = \int_{\Gamma} \mathbf{U}^{*}(x,y)\,\mathbf{p}(y)\,d\Gamma(y) + \int_{\Omega} \mathbf{U}^{*}(x,y)\,\mathbf{b}(y)\,d\Omega(y) \qquad (2.2.14)$$
$$M_1(\xi) = \frac{1}{2}(1-\xi), \qquad M_2(\xi) = \frac{1}{2}(1+\xi) \qquad (2.2.17)$$

$$\mathbf{u}(x(\xi)) = \sum_{w} M_w(\xi)\,(\mathbf{u})^{w}, \qquad \mathbf{p}(x(\xi)) = \sum_{w} M_w(\xi)\,(\mathbf{p})^{w}, \qquad x \in \Gamma_e \qquad (2.2.18)$$

where the Jacobian of the transformation is:

$$J(\xi) = \left[\left(\frac{\partial x_1}{\partial \xi}\right)^{2} + \left(\frac{\partial x_2}{\partial \xi}\right)^{2}\right]^{1/2} \qquad (2.2.20)$$
Equation (2.2.14) can be reformulated taking into account the discretization and shape functions:

$$\mathbf{c}(x)\,\mathbf{u}(x) = -\sum_{e=1}^{n_e} \sum_{w=1}^{W_e} (\mathbf{u})_{e}^{w} \int_{\Gamma_e} \mathbf{P}^{*}[x, y(\xi)]\, M_w(\xi)\, J(\xi)\, d\xi + \sum_{e=1}^{n_e} \sum_{w=1}^{W_e} (\mathbf{p})_{e}^{w} \int_{\Gamma_e} \mathbf{U}^{*}[x, y(\xi)]\, M_w(\xi)\, J(\xi)\, d\xi + \mathbf{B}(x) \qquad (2.2.21)$$
The term $\mathbf{B}(x)$ is present in the case of body loads, and it is the only integral over the domain of the body. In many problems $\mathbf{B}(x)$ vanishes or can be expressed as a boundary integral:

$$\mathbf{B}(x) = \int_{\Omega} \mathbf{U}^{*}(x,y)\,\mathbf{b}(y)\,d\Omega(y) \qquad (2.2.22)$$
$$\mathbf{H}\mathbf{u} = \mathbf{G}\mathbf{p} + \mathbf{B} \qquad (2.2.24)$$
where the matrices $\mathbf{H}$ and $\mathbf{G}$ contain the values from the integrals. Taking into account the boundary conditions, Eq. (2.2.24) is converted into the following form:

$$\mathbf{A}\mathbf{X} = \mathbf{F} \qquad (2.2.25)$$

where the matrix $\mathbf{A}$ contains part of the values from the matrices $\mathbf{H}$ and $\mathbf{G}$, the vector $\mathbf{X}$ contains the unknown displacements and tractions, and the vector $\mathbf{F}$ contains the known values of displacements and tractions multiplied by parts of the $\mathbf{H}$ and $\mathbf{G}$ matrices.
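The rearrangement of Eq. (2.2.24) into Eq. (2.2.25) amounts to a column swap between H and G: for each degree of freedom, the column multiplying the unknown quantity stays on the left-hand side and the column multiplying the known boundary value moves to the right. A sketch (with body loads omitted and a hypothetical 2-DOF system, both assumed for illustration) might look like:

```python
# Convert Hu = Gp into AX = F: columns of unknowns stay left, known boundary
# values are multiplied into the right-hand side. bc[j] describes DOF j:
#   ('u', value) -> displacement prescribed, traction unknown
#   ('p', value) -> traction prescribed, displacement unknown

def rearrange(H, G, bc):
    """Return (A, F) such that A X = F, X holding the unknowns in DOF order."""
    n = len(bc)
    A = [[0.0] * n for _ in range(n)]
    F = [0.0] * n
    for j, (kind, val) in enumerate(bc):
        for i in range(n):
            if kind == 'u':             # u_j known -> p_j is the unknown
                A[i][j] = -G[i][j]      # unknown column comes from -G
                F[i] -= H[i][j] * val   # known H-column moves to the right
            else:                       # p_j known -> u_j is the unknown
                A[i][j] = H[i][j]
                F[i] += G[i][j] * val
    return A, F

# Tiny made-up matrices, for illustration only:
H = [[2.0, 0.0], [0.0, 2.0]]
G = [[1.0, 0.0], [0.0, 1.0]]
A, F = rearrange(H, G, [('u', 1.0), ('p', 4.0)])
```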
A concept of the dual boundary element method is presented in Sect. 4.7.4.
The coupling of finite elements with boundary elements allows using the advantages of both methods [15]. The FEM is a simple and efficient method for both linear and nonlinear problems. The disadvantage of this method is the need for mesh generation in the interior of the structure, which is very inconvenient for the analysis of structures with infinite volume. The use of the FEM in such cases is complicated: one can either add an artificial boundary to the structure and treat it as a structure with a finite boundary, or specialized finite elements can be created. The BEM treats infinite structures by simply defining only the interior boundaries. If the problem is linear, only the boundary of the structure has to be meshed with boundary elements. The coupled finite and boundary element method is a handy tool when dealing with infinite structures with local nonlinearities. In such cases, the infinite part is modelled using boundary elements, while the regions near the nonlinearities are modelled using finite elements [2]. The coupling of boundary and finite elements can be performed in two ways: the boundary element region can be defined as a finite element and included in the FEM analysis, or the finite elements may be recast in a boundary element formulation and included in the BEM analysis. This chapter describes expressing the boundary element region as a finite element.
Using the coupled finite and boundary element method, one should divide the body into finite element and boundary element regions (Fig. 2.6). $\Omega_1$ denotes the region with finite elements, and $\Omega_2$ is the region of boundary elements. The regions discretized using finite elements can contain nonlinearities (e.g. plastic strains). Common nodes are present on the common boundary between the finite and boundary element regions.
We can write the integral equation for the BEM region:

$$\mathbf{c}\mathbf{u} = \int_{\Gamma} \mathbf{U}^{*}\mathbf{p}\,d\Gamma - \int_{\Gamma} \mathbf{P}^{*}\mathbf{u}\,d\Gamma \qquad (2.3.1)$$

After discretization one obtains:

$$\mathbf{H}\mathbf{u} = \mathbf{G}\mathbf{p} \qquad (2.3.2)$$

where $\mathbf{H}$ and $\mathbf{G}$ are coefficient matrices. The above equation may be transformed into a form similar to the FEM dependence between forces and displacements. The matrix $\mathbf{G}$ is eliminated from the right side:

$$\mathbf{G}^{-1}\mathbf{H}\mathbf{u} = \mathbf{p} \qquad (2.3.3)$$

and then the tractions are converted into nodal forces by multiplying Eq. (2.3.3) by the shape function matrix $\mathbf{M}$:

$$\mathbf{M}\mathbf{G}^{-1}\mathbf{H}\mathbf{u} = \mathbf{M}\mathbf{p} \qquad (2.3.4)$$

which can be written as:

$$\mathbf{K}'\mathbf{u} = \mathbf{f}' \qquad (2.3.5)$$
$\mathbf{K}'$ in Eq. (2.3.5) can be treated as an FEM element stiffness matrix. Unfortunately, the stiffness matrix $\mathbf{K}'$ is not symmetric. Because of the presence of nonlinearities in the finite element region, an iterative method should be used to solve the problem.
Figure 2.7 presents an infinite body $\Omega$. The structure is divided into a finite element region near the interior hole, while boundary elements model the infinite part of the structure. The boundary elements are located on the outer boundary of the finite element region.
Abstract The chapter presents various methods that can be qualified as intelligent computing. Different bio-inspired methods and techniques in the form of evolutionary algorithms (EAs), artificial neural networks (ANNs), particle swarm optimizers (PSOs) and artificial immune systems (AISs) are described. Moreover, the information granularity approach is introduced to model uncertainties in material properties, geometry or boundary conditions. Granular computing techniques using interval numbers, fuzzy numbers and random variables are presented. Combinations of EAs and granular computing techniques in the form of fuzzy and stochastic EAs are proposed. Various hybrid computational intelligence algorithms combining different intelligent or conventional techniques (e.g. gradient optimization methods) are described. A brief comparison of the effectiveness of selected bio-inspired optimization methods (PSO, EA and AIS) on chosen test functions is also included.
3.2.1 Introduction
Evolutionary algorithms [4, 38] are algorithms that search the space of solutions based on an analogy with the biological evolution of species. As in biology, the term individual is used to represent a single solution. Evolutionary algorithms operate on populations of individuals, so while an algorithm works we always deal with a set of problem solutions. An individual consists of chromosomes; usually it is assumed that an individual contains only one chromosome. Chromosomes consist of genes, which are equivalent to design variables in optimization problems. The adaptation of each individual is calculated using the fitness function. Figure 3.1 shows how an evolutionary algorithm works.
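The loop of Fig. 3.1 can be sketched in a few lines of code. The fitness function, population size and mutation strength below are illustrative assumptions, not the settings used later in the book:

```python
import random

# A minimal evolutionary loop: a population of one-chromosome individuals,
# Gaussian mutation, and elitist survivor selection, minimising a simple
# fitness function (all parameters assumed for illustration).
random.seed(1)

def fitness(genes):
    """Value to be minimised; here a simple sphere function (assumed)."""
    return sum(g * g for g in genes)

def evolve(n_genes=3, pop_size=20, generations=200, sigma=0.3):
    pop = [[random.uniform(-5, 5) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # mutation: each offspring perturbs the genes of a random parent
        offspring = [[g + random.gauss(0.0, sigma) for g in random.choice(pop)]
                     for _ in range(pop_size)]
        # selection: keep the fittest individuals among parents + offspring
        pop = sorted(pop + offspring, key=fitness)[:pop_size]
    return min(pop, key=fitness)

best = evolve()
```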
Evolutionary operators change gene values in a way analogous to the biological mechanisms of mutation and crossover. Different kinds of operators are presented in the literature; the basic ones are:
– uniform mutation,
– mutation with Gaussian distribution,
– boundary mutation,
– simple crossover,
– arithmetical crossover.
A uniform mutation changes the values of randomly chosen genes of a randomly selected individual. The gene takes a random value with a uniform distribution from the variable's range. A diagram of how the uniform mutation operator works is presented in Fig. 3.3.
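Minimal sketches of these operators can be written as follows; the gene range of [−1, 1] and the mutation strength are assumed for illustration:

```python
import random

random.seed(7)
LOW, HIGH = -1.0, 1.0   # assumed range of the design variables

def uniform_mutation(ind):
    """Replace one randomly chosen gene by a uniform value from the range."""
    out = ind[:]
    out[random.randrange(len(out))] = random.uniform(LOW, HIGH)
    return out

def gaussian_mutation(ind, sigma=0.1):
    """Perturb every gene by a Gaussian-distributed step, clipped to the range."""
    return [min(HIGH, max(LOW, g + random.gauss(0.0, sigma))) for g in ind]

def simple_crossover(a, b):
    """Exchange the gene tails of two parents after a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def arithmetical_crossover(a, b):
    """Blend two parents with one random weight lam."""
    lam = random.random()
    return ([lam * x + (1 - lam) * y for x, y in zip(a, b)],
            [(1 - lam) * x + lam * y for x, y in zip(a, b)])
```

Note that arithmetical crossover conserves the elementwise sum of the two parents, so both children always stay inside the hypercube spanned by them.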
The best individuals obtain the highest rank values; the worst obtain the lowest ones. In the final step, individuals for the offspring generation are drawn, and the probability of drawing a particular individual is closely related to its rank value.
3.3.1 Introduction
Sequential evolutionary algorithms are well-known tools for global optimization [4, 38]. The number of fitness function evaluations during optimization can reach thousands or even hundreds of thousands. For most real-life problems connected with mechanics or mechanical engineering, a fitness function evaluation takes a long time (from seconds to hours). These long computations can be shortened when a parallel or distributed evolutionary algorithm is used. When parallel evolutionary algorithms are used, the fitness function evaluations are performed in a parallel way. The distributed evolutionary algorithms operate on many subpopulations.
The distributed genetic algorithms [1, 55] and the distributed evolutionary algorithms (DEAs) work similarly, as many evolutionary algorithms operating on subpopulations. The evolutionary algorithms exchange chromosomes between subpopulations during a migration phase. The flowchart of the DEA is presented in Fig. 3.8. When the DEA is used, the number of fitness function evaluations can be lower in comparison with sequential and parallel evolutionary algorithms. The DEA usually works in a parallel manner: each of the evolutionary algorithms in the DEA works on a different processing unit, and the theoretical reduction of computation time can then be bigger than the number of processing units. The starting subpopulations of chromosomes are created randomly. The evolutionary operators change the chromosomes, and the fitness function value for each chromosome is computed. The migration exchanges a part of the chromosomes between subpopulations. The selection decides which chromosomes will be present in the new population. The selection is performed randomly, but fitter chromosomes have a bigger probability of entering the new population. The selection operates on chromosomes changed by the operators and on immigrants. The next iteration is performed if the stop condition is not fulfilled. The stop condition can be expressed, for example, as a maximum number of iterations.
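The island scheme described above can be sketched compactly; the ring migration topology, one migrant per island, migration every generation and all numeric parameters are assumptions for illustration:

```python
import random

# A minimal island-model (distributed) EA: several subpopulations evolve
# independently and exchange their best chromosomes in a migration phase
# every generation.
random.seed(3)

def fitness(x):
    """Value to be minimised; a simple sphere function (assumed)."""
    return sum(g * g for g in x)

def step(pop, sigma=0.2):
    """One evolutionary iteration on a subpopulation (mutation + selection)."""
    children = [[g + random.gauss(0.0, sigma) for g in random.choice(pop)]
                for _ in pop]
    return sorted(pop + children, key=fitness)[:len(pop)]

def dea(n_islands=4, pop_size=8, n_genes=2, generations=100):
    islands = [[[random.uniform(-5, 5) for _ in range(n_genes)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for _ in range(generations):
        islands = [step(pop) for pop in islands]
        # migration: each island receives the best chromosome of its neighbour
        bests = [min(pop, key=fitness) for pop in islands]
        for k, pop in enumerate(islands):
            pop[-1] = bests[(k - 1) % n_islands][:]  # replace the worst
    return min((min(p, key=fitness) for p in islands), key=fitness)

best = dea()
```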
It is hard to find the optimal parameters of an evolutionary algorithm. One method is to perform the optimization with different parameter sets and compare the results. The parameter values can also be found with the use of another, “master” evolutionary algorithm, with the parameters coded into the master algorithm's chromosomes. The following parameters can be taken into account:
– the probabilities of operators,
– the number of subpopulations,
– the number of chromosomes in subpopulations,
– migration topology,
– the frequency and number of migrating chromosomes.
$$F(x) = \frac{1}{m}\sum_{t=1}^{m} EV_t \qquad (3.3.1)$$
where m is the number of tests performed for parameters x, and $EV_t$ is the number of fitness function evaluations in test t. The tested algorithm stops after achieving a prescribed optimal value of the fitness function. A different fitness function can be used when the stop criterion of the tested algorithm is the number of fitness function evaluations:
$$F(x) = \frac{1}{m}\sum_{t=1}^{m} BE_t \qquad (3.3.2)$$
where $BE_t$ means the best fitness function value found in test t. The number of tests m is important because evolutionary algorithms are stochastic, and the quality of the algorithm with parameters x should be determined as the average of many runs. The flowchart of the algorithm used to optimize the parameters of the evolutionary algorithm is presented in Fig. 3.10.
This method is very time-consuming and can be applied only to mathematical functions and simple mechanical problems, but it can indicate parameter values for other real problems.
$$F(x) = 200 + \sum_{i=1}^{20} \left[x_i^2 - 10\cos(2\pi x_i)\right] \qquad (3.3.3)$$
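For reference, the test function (3.3.3) can be implemented directly; it is a 20-variable Rastrigin-type function whose global minimum F = 0 is reached at $x_i = 0$:

```python
import math

def F(x):
    """The test function (3.3.3); x is a list of 20 design variables."""
    return 200.0 + sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi)
                       for xi in x)
```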
The minimum value of the function is equal to zero. The constraints imposed on
the design variables are:
The fitness function for the “master” evolutionary algorithm is expressed by (3.3.1). The number of tests m is equal to 30. The “master” algorithm chromosome contains information about:
– the probability of the uniform mutation (0–1);
– the probability of the mutation with Gaussian distribution (0–1);
– the probability of the simple crossover (0–1);
– the probability of the arithmetic crossover (0–1);
– the number of chromosomes in subpopulations (2–30);
– the frequency of the migration (1–10);
– the number of migrating chromosomes (1–30).
The tests were performed for several numbers of subpopulations. The obtained values of the evolutionary algorithm's parameters suggest the following settings: using only the mutation with Gaussian distribution, a small number of chromosomes, migration every generation and migration of all chromosomes.
After these tests, it can be checked how the number of fitness function evaluations depends on the number of chromosomes in one population (only the mutation with the Gaussian distribution was used). Figure 3.11 presents the average (over 100 tests), maximal and minimal numbers of fitness function evaluations as functions of the number of chromosomes. The lowest average, 7891.9 evaluations, was obtained for six chromosomes.
Next, tests for numbers of subpopulations varying from 2 to 16 were performed. Two chromosomes in each subpopulation and only the mutation with the Gaussian distribution were used. The migration occurs in every generation and all the chromosomes migrate. The topology of migration is “fully connected”. The results are shown in Fig. 3.12. The average is computed over 100 tests.
Fig. 3.11 The average, maximal and minimal numbers of fitness function evaluations as a function of the number of chromosomes for one population
Fig. 3.12 The number of fitness function evaluations as a function of the number of subpopulations
Fig. 3.13 The speedup as a function of the number of subpopulations
The speedup is defined as:

$$k = \frac{t_1}{t_n} \qquad (3.3.5)$$
where $t_1$ is the time needed for optimization when one population is used, and $t_n$ is the time needed when n subpopulations are used. Because these tests were performed on a single computer, we compute the maximal speedup, which can be achieved when the communication time between subpopulations is equal to zero. The number of fitness function evaluations for one population was used as the time $t_1$, and the average number of fitness function evaluations for n subpopulations was used as the time $t_n$. Such assumptions can be made when the time needed for each fitness function evaluation is constant.
3.4.1 Introduction
The topic of information granularity was presented in the early works of Lotfi Zadeh in the context of fuzzy sets [59]. Information granules can be defined as elements grouped together on the basis of their similarity, indistinguishability, proximity or functionality [6]. The concept of information granules is more philosophical than technical, and it can be diversely interpreted. Information granules are abstractions encountered everywhere in the surrounding reality, and they represent the human, intuitive way of thinking. They constitute a connection between the real world and its digital representation. Granularity can be considered spatial as well as temporal. Granules may take the form of classes, objects, subsets, clusters or other elements of reality [57]. The process of preparing information granules is called granulation. Information granules can exhibit different granularity levels according to the required accuracy [48]. Granules of a particular level may be formed of lower-level subgranules, and they can in turn serve as subgranules for granules of a higher level.
Granular computing is knowledge-oriented, in contrast to numeric calculations based on data. It denotes the idea of processing data in the form of information granules. Granular computing as a concept of “a subset of computing with words” was introduced at the end of the twentieth century by Zadeh [60]. Granular computing is also considered as the semantic transformation of data in the granulation process and the verification of information abstractions in a noncomputational way [7]. According to Yao [57], granular computing may be considered as an integration of three elements (the granular computing triangle, Fig. 3.14):
The history of intervals and interval arithmetic goes back to the 1950s. Interval arithmetic was introduced by Ramon Moore in 1959 as a tool for automatic control of the computational errors that arise from input error, rounding errors during computation and truncation errors from using a numerical approximation to a mathematical problem [19]. Nowadays, this idea has led to a very powerful technique with many applications in mathematics, computer science and engineering.
A real interval [a] is a connected portion of the real line, defined as a bounded, closed subset of real numbers [39]:

$$[a] = [\underline{a}, \overline{a}] = \{x \in \mathbb{R} : \underline{a} \le x \le \overline{a}\}, \qquad \underline{a}, \overline{a} \in \mathbb{R}, \quad \underline{a} \le \overline{a} \qquad (3.4.1)$$

where $\underline{a}$ and $\overline{a}$ are the lower and upper bounds of the interval, respectively, and x is any element belonging to the interval [a].
The midpoint $m([a])$, the radius $r([a])$, the magnitude $|[a]|$ and the width $w([a])$ of the interval are defined as:

$$m([a]) = \frac{1}{2}(\underline{a} + \overline{a})$$
$$r([a]) = \frac{1}{2}(\overline{a} - \underline{a}) \qquad (3.4.2)$$
$$|[a]| = \max\{|\underline{a}|, |\overline{a}|\}$$
$$w([a]) = \overline{a} - \underline{a} = 2\,r([a])$$
Interval arithmetic is the arithmetic of quantities that lie within specified ranges (intervals) instead of having exact values [2]. Interval arithmetic is typically limited to real intervals. If two intervals [a] and [b] are bounded and closed, the arithmetical interval operators can be defined as:

$$[a] \circ [b] = \{a \circ b : a \in [a],\ b \in [b]\}, \qquad \circ \in \{+, -, \cdot, /\} \qquad (3.4.3)$$

The endpoints of $[a] \circ [b]$ can be determined as:

$$[a] + [b] = [\underline{a} + \underline{b},\ \overline{a} + \overline{b}]$$
$$[a] - [b] = [\underline{a} - \overline{b},\ \overline{a} - \underline{b}] \qquad (3.4.4)$$
$$[a] \cdot [b] = \left[\min\{\underline{a}\underline{b}, \underline{a}\overline{b}, \overline{a}\underline{b}, \overline{a}\overline{b}\},\ \max\{\underline{a}\underline{b}, \underline{a}\overline{b}, \overline{a}\underline{b}, \overline{a}\overline{b}\}\right]$$
$$[a] / [b] = [a] \cdot [b]^{-1}, \qquad [b]^{-1} = \left[\frac{1}{\overline{b}},\ \frac{1}{\underline{b}}\right] \quad \text{if } 0 \notin [b]$$
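The operators (3.4.4) translate directly into code; in this sketch an interval is a simple (lower, upper) tuple:

```python
# Interval operators of Eq. (3.4.4); each interval is a (lower, upper) tuple.

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    # the product interval is spanned by the four endpoint products
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def idiv(a, b):
    # division is defined only when 0 does not belong to the divisor
    assert not (b[0] <= 0.0 <= b[1]), "0 must not lie in the divisor interval"
    return imul(a, (1.0 / b[1], 1.0 / b[0]))

def midpoint(a): return 0.5 * (a[0] + a[1])   # m([a]) of Eq. (3.4.2)
def radius(a):   return 0.5 * (a[1] - a[0])   # r([a])
def width(a):    return a[1] - a[0]           # w([a]) = 2 r([a])
```

For example, `imul((-1.0, 2.0), (3.0, 5.0))` spans the four products −3, −5, 6 and 10, giving the interval (−5, 10).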
If $I(\mathbb{R})$ denotes the set of real intervals, an interval vector $[v] \in I(\mathbb{R}^n)$ has n components, the components being intervals $[v_i] \in I(\mathbb{R})$, $i = 1, \ldots, n$. The midpoint, the radius and the magnitude of the interval vector are defined, similarly to (3.4.2), as:

$$m([v])_i = m([v_i])$$
$$r([v])_i = r([v_i]) \qquad (3.4.5)$$
$$|[v]|_i = |[v_i]|$$

The width and the norm of the interval vector are scalars represented, respectively, as:
An interval matrix $[A] = ([a_{ij}]) \in I(\mathbb{R}^{m \times n})$ has m rows and n columns; each element of the interval matrix is an interval $[a_{ij}]$, $i = 1, \ldots, m$, $j = 1, \ldots, n$. The width and the norm of the interval matrix are scalars defined as:

$$w([A]) = \max_{i,j} w([a_{ij}])$$
$$\|[A]\| = \max_{i} \sum_{j} |[a_{ij}]| \qquad (3.4.7)$$
Fuzzy sets are an extension of the classical (crisp) notion of a set, introduced by Lotfi A. Zadeh in 1965 [61]. In classical set theory, any element either belongs or does not belong to a given set, so only two logical states, 0 and 1, are admissible. In fuzzy set theory, an arbitrary element can also partially belong to a set.
If X is a nonempty set of elements (objects, points) denoted by x, a fuzzy set A is a set of ordered pairs defined as:

where $\mu_A(x)$ is the membership function, which associates with each element x in X a real non-negative number whose supremum is finite.
There exist many standard membership functions; some of them are presented in Fig. 3.15.
The support of the fuzzy set A is defined as the crisp subset of X whose elements all have nonzero membership function values:

An $\alpha$-level set ($\alpha$-cut) is the crisp set of elements that belong to the fuzzy set A at least to the degree $\alpha$ (Fig. 3.16):

Each fuzzy set can be uniquely represented by the family of all its $\alpha$-cuts. If $\sup(\mu_A(x)) = 1$, the fuzzy set is called normal; otherwise the fuzzy set A is subnormal. Depending on the membership function value, the element x can be not included in the fuzzy set A ($\mu_A(x) = 0$), fully included ($\mu_A(x) = 1$), or a fuzzy member ($0 < \mu_A(x) < 1$).
The following operations on fuzzy sets were proposed by Zadeh (Fig. 3.17) [61]:
(i) intersection (logical and):

The intersection and union definitions were later extended to the T-norm and S-norm (T-conorm) definitions, respectively. Arithmetic operations $\circ \in \{+, -, \cdot, /\}$ on fuzzy numbers can be defined by the extension principle. Representing fuzzy numbers by $\alpha$-cuts very often allows using simple interval arithmetic instead of the extension principle.
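For triangular fuzzy numbers this remark can be illustrated directly: each number is cut into intervals on a few α-levels, and the interval endpoints are processed level by level. The triangular membership shape and the chosen α-levels are assumptions for this sketch:

```python
# Fuzzy addition via alpha-cuts: a triangular fuzzy number (a, m, b) is
# represented by the interval [lower(alpha), upper(alpha)] on each level,
# and interval arithmetic (here: endpoint addition) is applied per level.

def tri_cut(a, m, b, alpha):
    """Interval of the triangular fuzzy number (a, m, b) at level alpha."""
    return (a + alpha * (m - a), b - alpha * (b - m))

def fuzzy_add(x, y, levels=(0.0, 0.5, 1.0)):
    """Add two triangular fuzzy numbers level by level; returns {alpha: interval}."""
    return {al: tuple(p + q for p, q in zip(tri_cut(*x, al), tri_cut(*y, al)))
            for al in levels}

cuts = fuzzy_add((1, 2, 4), (0, 1, 2))
```

At α = 1 the cut degenerates to the sum of the central values, while at α = 0 it is the sum of the supports, which is exactly the result the extension principle would give for addition.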
(i) the lower approximation of a set X with respect to R is the set of all objects which can be classified with certainty as members of X with respect to R:

$$R_*(X) = \bigcup_{x \in U} \{R(x) : R(x) \subseteq X\} \qquad (3.4.17)$$
The boundary region of a set X with respect to R is the set of all objects which can be neither ruled in nor ruled out as members of the set X with respect to R:

$$BN_R(X) = R^*(X) - R_*(X)$$
The rough approximations satisfy the following properties:

$$R_*(\emptyset) = R^*(\emptyset) = \emptyset$$
$$R_*(U) = R^*(U) = U$$
$$R_*(X \cap Y) = R_*(X) \cap R_*(Y)$$
$$R^*(X \cap Y) \subseteq R^*(X) \cap R^*(Y)$$
$$R_*(X \cup Y) \supseteq R_*(X) \cup R_*(Y) \qquad (3.4.20)$$
$$X \subseteq Y \Rightarrow R_*(X) \subseteq R_*(Y) \ \text{and} \ R^*(X) \subseteq R^*(Y)$$
$$R_*(-X) = -R^*(X)$$
$$R^*(-X) = -R_*(X)$$
$$R_*(R_*(X)) = R^*(R_*(X)) = R_*(X)$$
$$R^*(R^*(X)) = R_*(R^*(X)) = R^*(X)$$
Rough sets can also be defined by means of the rough membership function, which expresses the conditional probability that x belongs to X in terms of the information about x expressed by R [47]:

$$\mu_X^R : U \to [0, 1], \qquad \mu_X^R(x) = \frac{|X \cap R(x)|}{|R(x)|} \qquad (3.4.21)$$
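A toy illustration of the lower approximation (3.4.17) and the rough membership function (3.4.21); the universe, the equivalence classes of R and the set X are all assumed for the example:

```python
# Rough approximations on a toy universe; R partitions U into equivalence
# classes, and X is the set to be approximated (all assumed for illustration).
U = {1, 2, 3, 4, 5, 6}
classes = [{1, 2}, {3, 4}, {5, 6}]   # equivalence classes of R
X = {1, 2, 3}

def R_of(x):
    """Equivalence class R(x) of element x."""
    return next(c for c in classes if x in c)

lower = set().union(*[c for c in classes if c <= X])   # Eq. (3.4.17)
upper = set().union(*[c for c in classes if c & X])    # upper approximation
boundary = upper - lower                               # boundary region

def mu(x):
    """Rough membership |X ∩ R(x)| / |R(x)| of Eq. (3.4.21)."""
    return len(X & R_of(x)) / len(R_of(x))
```

Here element 3 sits in the boundary region: its class {3, 4} overlaps X but is not contained in it, so its rough membership is 0.5.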
3.5.1 Introduction
Systems and processes in physical problems are described by parameters such as material properties, geometry or boundary conditions. If it is not possible to describe such parameters precisely, they can be treated as uncertain ones. Different models exist to describe the granular (imprecise) character of data: interval numbers, fuzzy sets, rough sets and random variables. In the present work it is assumed that the granularity of information is represented in the form of fuzzy numbers and random variables.
$$ch_j(x) = \left(x_1^j, x_2^j, \ldots, x_i^j, \ldots, x_N^j\right) \qquad (3.5.1)$$
The standard representation of a fuzzy number can be problematic from the point of view of fuzzy-number arithmetic. To reduce this inconvenience, it is possible to represent the fuzzy number x as a set of interval values $[\underline{x}, \overline{x}]$ lying on adequate $\alpha$-cut levels, as shown in Fig. 3.20.
The number of $\alpha$-cuts can be arbitrary; Fig. 3.20 shows an example of the replacement of the fuzzy value by five interval values. This approach allows the use of interval arithmetic operators instead of fuzzy ones. It is also possible to obtain different (symmetrical and asymmetrical) forms of the fuzzy values, as presented in Fig. 3.21.
To simplify evolutionary operations, a central value cv is introduced, and it is assumed that each gene is represented by a trapezoidal fuzzy number described by means of five real values (Fig. 3.22):

$$x_i^j = \left(a^L(x_i^j),\ a^U(x_i^j),\ cv(x_i^j),\ b^L(x_i^j),\ b^U(x_i^j)\right) \qquad (3.5.2)$$

where $cv(x_i^j)$ is the central value of the fuzzy number, and $a^k(x_i^j)$, $b^k(x_i^j)$ are the distances between the central value and the left and right boundaries of the interval on the lower (L) and upper (U) $\alpha$-cuts, respectively.
Special evolutionary operators have been proposed to work with fuzzy representations of genes: two mutation operators, one crossover operator and one selection operator [14]. In the first type of mutation, the central value $cv(x_i^j)$ of the ith gene of the jth chromosome is modified:

The fuzzy selection is based on the tournament selection method. The fuzzy fitness function values of the chromosomes chosen for the tournament are compared in order to select the best individual in the tournament. The better chromosome wins with a probability depending on the introduced parameter b.
It is assumed that a minimization problem is considered. Consider the
fitness functions for two different fuzzy chromosomes having the same number of
$\alpha$-cuts: $eval_1 = F[ch_1(x)]$ and $eval_2 = F[ch_2(x)]$. In the first step the central values $cv$
are compared. If they have identical values, the width condition ($a_i$ and $b_i$) is
checked for each $\alpha$-cut. If both widths are identical, both $b_1$ and $b_2$ take the value
0.5. Otherwise, the fuzzy value with the bigger width takes a value
smaller than 0.5 (e.g. $b_1 = 0.4$) and the second fuzzy value takes a value greater than
0.5 (e.g. $b_2 = 0.6$). Such an approach promotes more concentrated fuzzy numbers.
Assuming that the central values are different and $cv(eval_1) < cv(eval_2)$,
parameter $b_1$ takes a value close to 1 (e.g. $b_1 = 0.95$) while parameter $b_2$ takes a
value close to 0 (e.g. $b_2 = 0.05$; $b_1 + b_2 = 1$). In the next step the following
conditions are checked:
$\mathrm{ch}^{j}[x(\gamma)] = \left[ x_1^j(\gamma), x_2^j(\gamma), \ldots, x_i^j(\gamma), \ldots, x_N^j(\gamma) \right]$   (3.5.7)
The aim of the optimization is to find a vector $x(\gamma)$ minimizing the objective
function $F(\gamma) = F(x(\gamma))$ with constraints $P[g_k(x) \le 0] \ge p_k$, $k = 1, 2, \ldots, m$. Each
gene $x_i^j(\gamma)$ in the $j$th chromosome is represented by a random variable, which is a real
function $x_i^j(\gamma)$, $\gamma \in \Gamma$, defined on a sample space $\Gamma$ and measurable with respect to $P$.
It is assumed that each $j$th random chromosome has an $N$-dimensional Gaussian
distribution with the probability density function given as follows:
" #
1 1 X N
p x1 ; x2 ; . . .; xij ; . . .; xN ¼ pffiffiffiffiffiffiffi jKil jðxi mi Þðxl ml Þ
ð2pÞ N=2
jKj jKj i;l¼1
ð3:5:8Þ
where $|K| \ne 0$ is the determinant of the covariance matrix $K$, and $|K_{il}|$ is the cofactor of the
element $k_{il}$ of the matrix $K$.
It is assumed that random genes are independent random variables. The joint
probability density function is expressed by the probability density functions of
random genes:
$$p(x_1^j, x_2^j, \ldots, x_i^j, \ldots, x_N^j) = p_1(x_1^j)\, p_2(x_2^j) \cdots p_i(x_i^j) \cdots p_N(x_N^j)$$   (3.5.9)
where:
" 2 #
1 xij mðxij Þ
pi ðxij Þ ¼ N mðxij Þ; rðxij Þ ¼ pffiffiffiffiffiffi exp ð3:5:10Þ
rðxij Þ 2p 2rðxij Þ2
is the probability density function of the random gene $x_i^j(\gamma)$. If the random genes are
independent Gaussian random variables, two parameters, the mean value
$m(x_i^j)$ and the standard deviation $\sigma(x_i^j)$, describe the probability density function
for each gene $x_i^j(\gamma)$ in the chromosome $\mathrm{ch}^{j}[x(\gamma)]$:
$$x_i^j = \left[ m(x_i^j), \sigma(x_i^j) \right]$$   (3.5.11)

$$m(x_i^j)_{\min} \le m(x_i^j) \le m(x_i^j)_{\max}, \qquad \sigma(x_i^j)_{\min} \le \sigma(x_i^j) \le \sigma(x_i^j)_{\max}$$   (3.5.12)
$$m(x_i^j) := m(x_i^j) + G_m, \qquad \sigma(x_i^j) := \sigma(x_i^j) + G_\sigma$$   (3.5.13)
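The mutation of Eq. (3.5.13) can be sketched as follows. This is a hedged illustration, assuming Gaussian perturbations $G_m$, $G_\sigma$ with zero mean and that the result is clipped back into the box constraints of Eq. (3.5.12); the function name and the `spread` parameter are choices made here, not part of the original formulation.

```python
# Sketch of Eq. (3.5.13): a random gene is the pair (m, sigma) of Eq. (3.5.11);
# mutation adds Gaussian perturbations G_m and G_sigma and clips the result
# to the bounds of Eq. (3.5.12).
import random

def mutate_gene(m, sigma, m_bounds, s_bounds, spread=0.1, rng=random):
    m_new = m + rng.gauss(0.0, spread)        # m := m + G_m
    s_new = sigma + rng.gauss(0.0, spread)    # sigma := sigma + G_sigma
    m_new = min(max(m_new, m_bounds[0]), m_bounds[1])
    s_new = min(max(s_new, s_bounds[0]), s_bounds[1])
    return m_new, s_new

random.seed(0)
print(mutate_gene(1.0, 0.5, (-2.0, 2.0), (0.01, 1.0)))
```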
The artificial immune systems (AIS) are developed on the basis of a mechanism
discovered in biological immune systems [49]. An immune system is a complex
system which contains distributed groups of specialized cells and organs. The main
purpose of the immune system is to recognize and destroy pathogens—fungi,
viruses, bacteria and improperly functioning cells. Lymphocytes play a
very important role in the immune system and are divided into
several groups of cells. There are two main groups, B and T cells, both containing
subgroups (such as T-dependent and T-independent B cells). The B cells contain
antibodies, which can neutralize pathogens and are also used to recognize
pathogens (Fig. 3.24).
There is a great diversity among the antibodies of the B cells, allowing the recog-
nition and neutralization of many different pathogens. The B cells are produced in
the bone marrow of long bones and undergo a mutation process to achieve a great
diversity of antibodies. T cells mature in the thymus. Only T cells recognizing
non-self cells are released into the lymphatic and blood systems. There are also other
cells, like macrophages, with antigen-presenting properties. The pathogens are processed by
such a cell and presented by major histocompatibility complex (MHC) proteins. The
recognition of a pathogen is performed in a few steps. First, the B cells or mac-
rophages present the pathogen to a T cell using MHC (Fig. 3.25).
The T cell decides whether the presented antigen is a pathogen; if so, it gives a
chemical signal to the B cells to release antibodies.
A part of the stimulated B cells goes to a lymph node and proliferates (clones)
(Fig. 3.26).
A part of the B cells changes into memory cells; the rest of them secrete anti-
bodies into the blood. The secondary response of the immune system in the
presence of known pathogens is faster because of the memory cells. The memory cells
created during the primary response proliferate and the antibodies are secreted into the blood
(Fig. 3.27).
The antibodies bind to pathogens and neutralize them. Other cells, like macro-
phages, destroy pathogens (Fig. 3.28). The number of lymphocytes in the organism
increases while pathogens are present, but after the attack a part of the
lymphocytes is removed from the organism.
The artificial immune systems [5, 20, 21] take only a few elements from the
biological immune systems. The most frequently used are the mutation of the B
cells, proliferation, memory cells and recognition by means of B and T cells. The
artificial immune systems have been used for optimization problems by de Castro
and Von Zuben [22], and for classification and computer virus recognition by
Balthrop et al. [5]. The cloning algorithm presented by von Zuben and de Castro
[20, 21] uses mechanisms similar to those of biological immune systems for global
optimization problems. The unknown global optimum is the searched pathogen.
The memory cells contain design variables and proliferate during the optimization
process. The B cells created from the memory cells undergo mutation. The B cells
are evaluated and the better ones replace the memory cells. In Wierzchoń's [56] version of
Clonalg, the crowding mechanism is used: the diversity between memory cells is
forced. A new memory cell is randomly created and substitutes the old one if two
memory cells have similar values of the design variables. The crowding mechanism
allows finding not only the global optimum but also local ones. The presented
approach is based on the Wierzchoń [56] algorithm, but the mutation operator is
changed: Gaussian mutation is used instead of nonuniform mutation.
The flowchart of an artificial immune system is presented in Fig. 3.29. The
memory cells are created randomly. They proliferate and mutate, creating
the B cells. The number $n_c$ of clones created by each memory cell is determined
by the memory cell's objective function value. The objective functions for the B
cells are evaluated. The selection process exchanges some memory cells for better
B cells. The selection is performed on the basis of the geometrical distance between
each memory cell and the B cells (measured by using design variables). The
crowding mechanism removes similar memory cells. The similarity is also deter-
mined as the geometrical distance between memory cells. The process is iteratively
repeated until the stop condition is fulfilled. The stop condition can be expressed as
the maximum number of iterations.
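The loop described above can be sketched in a few lines. This is a minimal illustration of the clonal-selection flow (memory cells, cloning, Gaussian mutation, selection, crowding), not the Wierzchoń/Clonalg implementation itself; all parameter values and names are assumptions, and the problem is reduced to one design variable for brevity.

```python
# Minimal clonal-selection loop: memory cells proliferate into mutated B cells,
# better B cells replace memory cells, and crowding re-randomizes memory cells
# that come too close to a better one.
import random

def ais_minimize(f, bounds, n_memory=5, n_clones=4, spread=0.3,
                 crowding_dist=0.05, iters=200, rng=random):
    lo, hi = bounds
    memory = [rng.uniform(lo, hi) for _ in range(n_memory)]
    for _ in range(iters):
        for i, cell in enumerate(memory):
            # proliferation + Gaussian mutation creates the B cells
            clones = [min(max(cell + rng.gauss(0.0, spread), lo), hi)
                      for _ in range(n_clones)]
            best = min(clones, key=f)
            if f(best) < f(cell):          # a better B cell replaces the memory cell
                memory[i] = best
        # crowding: of two similar memory cells, replace the worse one randomly
        for i in range(len(memory)):
            for j in range(len(memory)):
                if i != j and abs(memory[i] - memory[j]) < crowding_dist:
                    worse = i if f(memory[i]) > f(memory[j]) else j
                    memory[worse] = rng.uniform(lo, hi)
    return min(memory, key=f)

random.seed(1)
x = ais_minimize(lambda t: (t - 1.5) ** 2, (-5.0, 5.0))
print(round(x, 2))  # close to 1.5
```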
The particle swarm algorithms [34], like the evolutionary and immune algorithms,
are developed on the basis of mechanisms discovered in nature. The swarm
algorithms are based on models of animals' social behaviours: moving and
living in groups. The animals relocate in three-dimensional space in order to
change their dwelling place or feeding ground, to find a good place for reproduction
or to evade predators. One can distinguish many species of insects living in
swarms, fish swimming in shoals, birds flying in flocks or animals living in
herds (Fig. 3.30).
A simulation of bird flocking was published by Reynolds [50]. It was assumed
that this kind of coordinated motion is possible only if three basic rules are fulfilled:
collision avoidance, velocity matching of neighbours and flock centring. The
computer implementation of these three rules showed very realistic flocking
behaviour: flying in three-dimensional space, splitting before an obstacle and rejoining
after passing it. Similar observations concerned fish shoals. Further
observations and simulations of bird and fish behaviour gave more
accurate and more precisely formulated conclusions [31, 54]. The results of these
biological examinations were used by Kennedy and Eberhart [33], who proposed the
particle swarm optimizer (PSO). This algorithm realizes a directed motion of particles
in n-dimensional space to search for the solution of an n-variable optimization
problem. PSO works in an iterative way. The location of an individual (particle) is
determined on the basis of its earlier experience and the experience of the whole
group (swarm). Moreover, the ability to memorize and, in consequence, to return to
previously known areas with convenient properties enables the adaptation of the
particles to the environment. The optimization process using PSO is based on
finding better locations in the search space (in the natural environment these are, for
example, hatching or feeding grounds).
where:

$$\varphi_1^j(k) = c_1 r_1^j(k), \qquad \varphi_2^j(k) = c_2 r_2^j(k)$$
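The update rule the coefficients $\varphi_1$, $\varphi_2$ belong to is the standard PSO velocity and position update, $v \leftarrow w v + \varphi_1 (p_{best} - x) + \varphi_2 (g_{best} - x)$; since the equation itself is not reproduced here, the sketch below follows that standard formulation, with illustrative parameter values and a one-dimensional problem.

```python
# Standard PSO update: each particle's velocity mixes inertia, attraction to
# its own best location (pbest) and attraction to the swarm's best (gbest),
# with phi_1 = c1*r1 and phi_2 = c2*r2 as in the text.
import random

def pso_minimize(f, bounds, n_particles=10, w=0.5, c1=1.8, c2=1.7,
                 iters=100, rng=random):
    lo, hi = bounds
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = x[:]                      # each particle's best known location
    gbest = min(x, key=f)             # swarm's best known location
    for _ in range(iters):
        for j in range(n_particles):
            phi1 = c1 * rng.random()  # phi_1 = c1 * r1
            phi2 = c2 * rng.random()  # phi_2 = c2 * r2
            v[j] = w * v[j] + phi1 * (pbest[j] - x[j]) + phi2 * (gbest - x[j])
            x[j] = min(max(x[j] + v[j], lo), hi)
            if f(x[j]) < f(pbest[j]):
                pbest[j] = x[j]
            if f(x[j]) < f(gbest):
                gbest = x[j]
    return gbest

random.seed(0)
best = pso_minimize(lambda t: t * t + 2.0, (-10.0, 10.0))
print(round(best, 3))  # near 0.0
```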
3.8.1 Introduction
The artificial neural networks (ANNs) are simplified models of the human nervous system.
They are applied to complex problems, and to problems for which the criteria of a
classical computer program cannot be clearly specified. The main advantages of the ANNs are:
• they are trained, not programmed;
• they have the ability of generalizing;
• they are highly resistant to noise and distortion in the signal;
• they help to detect significant data connections.
Typical applications of the ANNs are:
• prediction (prediction of n + 1 value on the basis of n function values without
defining the relation between input and output data);
• classification and pattern recognition;
• approximation (interpolation, extrapolation);
• control;
• medical diagnosis, financial applications;
• data mining;
• signal filtering;
• optimization problems.
The notion of the artificial neuron was developed by Warren McCulloch and Walter
Pitts in 1943. The artificial neuron is a structure with one or more inputs and one
output. The input values are multiplied by weights (synaptic weights) and then
summed up. The sum is passed through a function known as an activation function
or transfer function. The weights are modified during the learning (training) phase
and represent the memory of the neuron. An exemplary artificial neuron is presented
in Fig. 3.34.
The output signal y of the neuron is calculated as:
$$y = \varphi(e) = \varphi\!\left( \sum_{j=0}^{N} w_j x_j \right) = \varphi\!\left( \sum_{j=1}^{N} w_j x_j + B \right)$$   (3.8.1)
Fig. 3.35 The artificial neural network: a single-layer ANN, b multilayer ANN
where $w_j$ is the weight for the $j$th input; $x_j$ is the $j$th input signal; $e$ is the net value; $\varphi$ is
the activation function; and $B$ is the bias input of constant value equal to 1.
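Eq. (3.8.1) maps directly to code. The sketch below uses the unipolar sigmoid (one of the activation functions discussed later in this section) with $\beta = 1$; the function name is illustrative.

```python
# A single artificial neuron per Eq. (3.8.1): weighted sum of the inputs plus
# the bias gives the net value e, which is passed through an activation
# function (here the unipolar sigmoid).
import math

def neuron(inputs, weights, bias):
    e = sum(w * x for w, x in zip(weights, inputs)) + bias   # net value e
    return 1.0 / (1.0 + math.exp(-e))                        # phi(e)

y = neuron([1.0, -2.0], [0.5, 0.25], bias=0.0)   # e = 0.5 - 0.5 = 0.0
print(y)  # 0.5
```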
An artificial neural network (ANN) consists of an interconnected group of artificial
neurons, typically organized in layers (Fig. 3.35).
The output signals of neurons are sent to the neurons of the next layer. Single-layer
networks consist of an input layer (not counted) and an output layer (Fig. 3.35a).
A multilayer network has an input layer, at least one so-called
hidden layer, and an output layer (Fig. 3.35b).
The four main classes of ANNs are: (i) feedforward networks with signals
flowing in one direction (most often used); (ii) recurrent (feedback) networks (e.g.
Hopfield networks); (iii) Kohonen self-organizing networks; and (iv) radial basis
function (RBF) networks.
There exist different types of activation functions [27]. Their choice depends on the
problem being solved. The activation function typically has one of the following
forms:
• a linear activation function;
• a threshold activation function;
• a nonlinear activation function.
In the linear neuron the net value $e$ becomes the output signal $y$. A typical
modification of the linear activation function is truncation of its values to the
range 〈0, 1〉 (for so-called unipolar functions) or 〈–1, 1〉 (for bipolar functions).
Examples of the truncated linear activation functions are presented in Fig. 3.36.
The threshold functions are presented in Fig. 3.37. The neuron with the
threshold activation function is called a perceptron.
$$\varphi(e) = \frac{1}{1 + \exp(-\beta e)}$$   (3.8.4)
$$\varphi(e) = \frac{\exp(\beta e) - \exp(-\beta e)}{\exp(\beta e) + \exp(-\beta e)} = \tanh(\beta e)$$   (3.8.6)
The artificial neural network usually has its weights initialized randomly, typically
with values from the range 〈–0.1, 0.1〉. The aim of the training process is to modify
the weights so as to obtain the desired reaction (output values) to given inputs. The training
set is presented many times (the number of repetitions typically depends on the type
and topology of the ANN and on the complexity of the problem).
There are three main groups of ANN learning methods [36]: (i) a supervised
(associative) learning; (ii) an unsupervised learning (self-organization); and (iii) a
reinforcement learning. The supervised learning, as the most popular one, is
described in more detail in Sect. 3.8.4.1.
In unsupervised learning, the ANN is trained to respond to clusters of patterns
within the inputs and with no desired outputs. The ANN is supposed to discover
statistically important features of the input signals. There is no a priori set of
categories into which the patterns have to be classified.
In the reinforcement learning, which can be considered as an intermediate form
of the previous learning methods, the ANN is only provided with a grade, or score,
which indicates network performance.
In the supervised learning, the neural network is trained by presenting input and
matching output patterns. These input–output pairs can be provided by an external
teacher or by the system which contains the neural network.
The weight correction for the perceptron is performed according to the following
rule (the delta rule):

$$\Delta w_i^{(j)} = \eta\, \delta^{(j)} x_i^{(j)}$$   (3.8.8)

where $z$ is the required output value; $y$ is the obtained output value; $\delta^{(j)} = z^{(j)} - y^{(j)}$;
$x_i$ is the $i$th input value; $\eta$ is the learning rate.
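The delta rule of Eq. (3.8.8) is enough to train a single perceptron on a linearly separable task. A minimal sketch, training a perceptron on the logical AND function; the learning rate and epoch count are illustrative choices:

```python
# Delta-rule training of a single perceptron (threshold activation) on AND.
def train_perceptron(samples, eta=0.5, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, z in samples:                                  # z: required output
            y = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0     # obtained output
            delta = z - y                                     # delta = z - y
            w[0] += eta * delta * x[0]                        # dw_i = eta*delta*x_i
            w[1] += eta * delta * x[1]
            b += eta * delta
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0 for x, _ in data])  # [0, 0, 0, 1]
```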
The learning rate $\eta$ is a parameter which strongly influences the learning process.
Typically it takes values from the range 〈0.01, 5.0〉. A small value of $\eta$ results in a
slow learning procedure, while a large $\eta$ value causes big weight modifications and, as
a result, the learning process may not be stable (the network is unable to learn).
The momentum learning method adds to Eq. (3.8.9) a component which makes
the weight correction also dependent on the error in the
previous step:
where η2 is a momentum parameter taking values from the range 〈0, 1〉 (usually
η2 = 0.9).
The direct application of the formula (3.8.9) to calculate the weight correction
for neurons in hidden layers is not possible, as there is no information about the desired
outputs of such neurons. The backpropagation method [28] allows the estimation of the $\delta$
value for hidden-layer neurons on the basis of the errors calculated for the next
layer:
$$\delta_m^{(j)} = \sum_{k=1}^{n} w_m^{(k)(j)} \delta_k^{(j)}$$   (3.8.11)
A radial basis function (RBF) neural network consists of the input layer, one hidden
layer and the output layer with one output neuron (Fig. 3.41). The input signals
x are transmitted to all neurons in the hidden layer [11].
The number of neurons in the hidden layer is equal to or lower than the number
of training vectors x. The hidden layer neurons implement the following mapping:
$$\varphi(\|x - c_i\|) = \|x - c_i\| / r^2$$

$$\varphi(\|x - c_i\|) = \left( r^2 + \|x - c_i\|^2 \right)^{\alpha}, \quad 0 < \alpha < 1$$
$$\varphi(\|x - c_i\|) = \left( r^2 + \|x - c_i\|^2 \right)^{-\beta}, \quad \beta > 0$$   (3.8.14)

$$\varphi(\|x - c_i\|) = \left( r \|x - c_i\| \right)^2 \ln\left( r \|x - c_i\| \right)$$

where $r > 0$.
$$y = \sum_{i=1}^{k} w_i\, \varphi(\|x - c_i\|)$$   (3.8.15)
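The RBF output of Eq. (3.8.15) is a weighted sum of radial functions of the distances to the hidden-layer centres. The sketch below uses a Gaussian radial function for illustration, which is a common choice but only one of several admissible forms of $\varphi$; names and parameter values are assumptions.

```python
# RBF network output per Eq. (3.8.15): y = sum_i w_i * phi(||x - c_i||),
# here with a Gaussian radial function phi(d) = exp(-d^2 / r^2).
import math

def rbf_output(x, centres, weights, r=1.0):
    y = 0.0
    for c, w in zip(centres, weights):
        dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        y += w * math.exp(-dist2 / r ** 2)   # phi(||x - c_i||)
    return y

centres = [(0.0, 0.0), (1.0, 1.0)]
weights = [1.0, 2.0]
print(rbf_output((0.0, 0.0), centres, weights))   # 1.0 + 2*exp(-2)
```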
3.9.1 Introduction
Evolutionary algorithms and gradient methods are usually used separately. Gradient
methods are local optimization methods which are fast and precise, but their
application is restricted by their limitations: information about the fitness function
gradient is often hard or even impossible to obtain, and they have a tendency to
find local minima. Gradient methods process one point in each iteration and are
very sensitive to the location of the starting point.
Evolutionary algorithms are global optimization methods and they do not require
information about fitness (objective) function gradient. As they process the popu-
lation of candidate solutions they are relatively slow. Also, the precision of such
algorithms in finding the optimal value is lower, which is caused by the manner in
which they work (new candidate solutions are generated by means of evolutionary
operators).
The alternative is the coupling of both methods into hybrid algorithms, taking the
advantages of global and local algorithms and reducing the drawbacks of both.
As a result, one can obtain algorithms with a lower computational cost than
evolutionary optimization alone and a higher probability of finding the global
optimum than gradient methods alone. Both methods can be applied in a
parallel manner or sequentially. In the first approach a gradient mutation operator is
introduced (Fig. 3.42). The gradient mutation is a single-argument operator which
modifies any (especially the best in the generation) chromosome using infor-
mation about the fitness function gradient [40].
where:
$$\bar{x}_i^j = \begin{cases} x_i^j & \text{if } l = 0 \\ x_i^j + \Delta x_i^j & \text{if } l = 1 \end{cases}$$   (3.9.2)

$$\Delta x = \alpha\, n(x)$$   (3.9.3)
where F j ðxÞ denotes the fitness function value for a chromosome ch j ðxÞ;
• the conjugate gradient mutation:
In order to reduce the problems with the calculation of the objective function gradient,
it is possible to employ an artificial neural network (ANN) to calculate its
approximate value. The same ANN can be applied to approximate the
boundary-value problem, which reduces the computational effort [15]. An important
advantage of artificial neural networks is that they process data simulta-
neously. The approximation problem is one of the typical applications of artificial
neural networks.
An artificial neural network with sigmoid activation functions is considered (see
Sect. 3.8). Close to the optimum, the fitness function is modelled by a parabolic
function of each design variable; therefore one hidden layer in the ANN is suffi-
cient. The number of neurons in the input layer is equal to the number of design
variables of the objective function. In the output layer there is one neuron, whose
output value plays the role of the objective function value (Fig. 3.44).
The number of neurons in the hidden layer depends on the optimization problem
and, as usual, on the number of design variables. The backpropagation training
method has been used to modify the weights of the ANN.
The output value of a neuron $k$ in the layer $i$ (hidden or output) is expressed by:

$$e_{ik} = \frac{1}{1 + e^{-s_{ik}}}$$   (3.9.7)

where:

$$s_{ik} = e_{i-1,1} w_{i-1,1,ik} + e_{i-1,2} w_{i-1,2,ik} + \cdots + e_{i-1,N_{i-1}} w_{i-1,N_{i-1},ik} + ww_{ik} = \sum_{n=1}^{N_{i-1}} e_{i-1,n} w_{i-1,n,ik} + ww_{ik}$$   (3.9.8)
where $e_{i-1,k}$ are the output values of the neurons in the previous layer and $w_{ij}$ is the
weight value for the $j$th input of the $i$th neuron.
The sensitivity of the output signal $e_{21}$ of the network with respect to an input
value $e_{0z}$ is expressed as:

$$\frac{de_{21}}{de_{0z}} = \sum_{n_1=1}^{I_1} \frac{ds_{1n_1}}{de_{0z}} \frac{de_{1n_1}}{ds_{1n_1}} \frac{ds_{21}}{de_{1n_1}} \frac{de_{21}}{ds_{21}}$$   (3.9.9)

where:

$$\frac{ds_{1n_1}}{de_{0z}} = \frac{d}{de_{0z}}\left( \sum_{n_0=1}^{I_0} e_{0n_0} w_{0n_0,1n_1} + ww_{1n_1} \right) = w_{0z,1n_1}$$

$$\frac{de_{1n_1}}{ds_{1n_1}} = \frac{d}{ds_{1n_1}}\left( \frac{1}{1 + e^{-s_{1n_1}}} \right) = \frac{e^{-s_{1n_1}}}{\left( 1 + e^{-s_{1n_1}} \right)^2}$$   (3.9.10)

$$\frac{ds_{21}}{de_{1n_1}} = \frac{d}{de_{1n_1}}\left( \sum_{n_1=1}^{I_1} e_{1n_1} w_{1n_1,21} + ww_{21} \right) = w_{1n_1,21}$$

$$\frac{de_{21}}{ds_{21}} = \frac{d}{ds_{21}}\left( \frac{1}{1 + e^{-s_{21}}} \right) = \frac{e^{-s_{21}}}{\left( 1 + e^{-s_{21}} \right)^2}$$
In the first stage, a global optimization method (evolutionary algorithm, arti-
ficial immune system or particle swarm optimizer) is used. As a result, a set
(cluster) of solution points is obtained. It can be assumed that some of the solutions
are located close to a global optimum. There is also a high probability that the
points are situated in the basins of attraction of more than one optimum. In this case
the second stage (the local method) may work unstably. This problem can be reduced
by introducing a parameter which describes the maximum size of the cluster [16].
The parameter can be described by the radius of a region in the domain. The centre
of the region is the best solution of the global method. All points inside the
region belong to a cloud of points. This approach is characterized by a variable
number of training vectors, so an alternative parameter, defining the maximum
number of points in the cloud, is also introduced.
In the second stage, a local (gradient) method supported by the ANN is used. The
local optimization procedure is the iterative procedure presented in Fig. 3.45. The local
optimization stage starts by forming a cloud of points from the best
solutions of the previous stage. These points are used to construct the training
vectors for the ANN. It is assumed that the initial number of training vectors is
equal to 3m, where m is the number of design variables. In the next step the local
where $x_i$ is the input variable, $y$ is the output variable, $A_i$ is the fuzzy subset of the rule
premise, and $B$ is the fuzzy set of the rule conclusion.
The rules are collected in a set called a rule base or a knowledge base. A typical
fuzzy system consists of four parts, as presented in Fig. 3.46 [44].
The inference process consists of three steps: fuzzification, inference and
defuzzification. The fuzzification block determines the degree of membership of each
(typically crisp) input variable $x$ in each fuzzy set $A'$. The inference block uses the
membership values determined during fuzzification to evaluate the rules according
to the compositional rule of inference. The result is an output fuzzy set $B'$.
The defuzzification block is responsible for the transformation of the fuzzy set into a
crisp value $y$. The centre of gravity (COG) method is usually used [32]. As a result,
the output value can be calculated as:
$$y = \frac{\sum_{l=1}^{M} c_l\, \mu_{A^{(l)}}(x)}{\sum_{l=1}^{M} \mu_{A^{(l)}}(x)}$$   (3.9.13)
where $c_l$ is the centre of the output set for the rule $A^{(l)}$, $\mu_{A^{(l)}}(x)$ is the membership
function calculated in the inference step, and $l = 1, 2, \ldots, M$ is the rule number.
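Eq. (3.9.13) is a membership-weighted mean of the rule output centres, which is a one-liner in code (the example values are illustrative):

```python
# Centre-of-gravity defuzzification per Eq. (3.9.13): weighted mean of the
# rule output centres c_l, with the inference-step memberships as weights.
def cog(centres, memberships):
    num = sum(c * mu for c, mu in zip(centres, memberships))
    den = sum(memberships)
    return num / den

# two rules fire: centres 1.0 and 3.0, with memberships 0.2 and 0.6
print(round(cog([1.0, 3.0], [0.2, 0.6]), 6))  # 2.5
```

The result lies between the firing rules' centres, pulled toward the rule with the larger membership.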
The NFIS realizes a multivariable function using the sum of single-variable
fuzzy functions [53]. The fuzzy functions are characterized by the membership
function $\mu(x)$. A Gaussian membership function is assumed for each input in each
rule:

$$\mu_A(x; c, \sigma) = \exp\left[ -\left( \frac{x - c}{\sigma} \right)^2 \right]$$   (3.9.14)
where $W_l$ corresponds to the $c_l$ value; $c_i^{(l)}$ and $\sigma_i^{(l)}$ are the centres and widths of the "IF" part
of each rule, and $W_l$ is the centre of the "THEN" part of each rule.
The function $f(x)$ can be described by making use of a multilayer fuzzy-neural
network (Fig. 3.47). The parameters $W_l$, $c_i^{(l)}$ and $\sigma_i^{(l)}$ are searched for during the training
process. The aim of the training is the minimization of the mean-square error $E$ by
means of gradient optimization methods. The mean-square error is defined as:
$$E = \frac{1}{2}\left[ f(x) - d \right]^2$$   (3.9.17)
where x is the input vector, f(x) is the function value approximated by the
fuzzy-neural network and d is the desirable answer of the NFIS for the input vector x.
$$\frac{\partial E}{\partial W_l} = \left[ f(x) - d \right] \frac{y_l}{f_2}$$

$$\frac{\partial E}{\partial c_i^{(l)}} = 2\, \frac{f(x) - d}{f_2}\, y_l \left[ W_l - f(x) \right] \frac{x_i - c_i^{(l)}}{\left( \sigma_i^{(l)} \right)^2}$$   (3.9.18)

$$\frac{\partial E}{\partial \sigma_i^{(l)}} = 2\, \frac{f(x) - d}{f_2}\, y_l \left[ W_l - f(x) \right] \frac{\left( x_i - c_i^{(l)} \right)^2}{\left( \sigma_i^{(l)} \right)^3}$$
different criteria. The results are strongly dependent on the declared optimization
parameters. The simulation results depend on the velocity value and the number of
particles in the case of the PSO algorithm, and on the mutation mechanism and other
algorithm parameters in the case of the evolutionary and immune algorithms. Thus, the
optimization parameters have to be chosen by examining a number of their combi-
nations. In this chapter, after the choice of the optimal optimization parameters for
the specified optimization problem, the comparison of effectiveness is performed. In
the literature one can find papers which usually indicate better effectiveness of
PSOs in comparison with AISs and GAs for different global optimization tasks [3,
24, 26, 58, 62].
The comparison has been performed on the basis of the optimization of the known
mathematical functions, that is, the Branin function with two design variables
(Fig. 3.48), the Goldstein-Price function with two design variables (Fig. 3.49), the
Rastrigin function with 20 design variables (Fig. 3.50), and the Griewangk function
with 20 design variables (Fig. 3.51), for the best parameters of the algorithms
(found earlier for these functions).
In order to find the optimal parameters of the particle swarm optimizer, the
algorithm has been tested by changing the number of particles, inertia weight w and
acceleration coefficients c1, c2. The range of the changes of the particular param-
eters of the particle swarm optimizer is presented in Table 3.2. The results of the
stage of the optimal parameters selection for particular mathematical functions are
included in Table 3.3.
The parameters of the artificial immune system are: the number of memory cells,
the number of clones, range of the Gaussian mutation and the crowding factor. The
ranges of the change in the artificial immune system parameters are included in
Table 3.4 and the optimal values of the parameters in Table 3.5.
The sequential and distributed evolutionary algorithms, applied in comparison
with the particle swarm optimizer, use evolutionary operators like a simple cross-
over and a Gaussian mutation. The selection is performed by means of the ranking
method. The optimal probabilities of the evolutionary parameters for particular
mathematical functions are presented in Table 3.6.
The results of the comparison of the particle swarm optimizer to the artificial
immune system and the sequential and distributed evolutionary algorithms are
presented in Figs. 3.52, 3.53, 3.54, 3.55. The criterion of the comparison was the
effectiveness of the tested algorithms measured by the average number of objective
function evaluations. Ten tests have been performed for each change in the
parameters of the algorithm, and the average number of objective function evalu-
ations for this representation has been computed. The numbers of the objective
function evaluations needed to achieve the value near the global optimum for each
of the tested functions were computed (Figs. 3.52, 3.53, 3.54, 3.55). For example,
the number of objective function evaluations for the Branin function, with the global
minimum 0.397887, was computed when the algorithm reached a value below 0.5 as
Table 3.3 The optimal parameters of the PSO for particular functions

Function | Particles number | Inertia weight w | Acceleration coefficient c1 | Acceleration coefficient c2
Branin | 4 | 0.5 | 1.8 | 1.7
Goldstein-Price | 5 | 0.5 | 1.5 | 1.8
Rastrigin | 74 | 1 | 1.9 | 1.9
Griewangk | 10 | 1 | 1.7 | 1.7
Table 3.5 The optimal parameters of the AIS for particular functions

Function | Number of memory cells | Number of clones | Crowding factor | Gaussian mutation
Branin | 2 | 2 | 0.48 | 0.1
Goldstein-Price | 12 | 2 | 0.45 | 0.5
Rastrigin | 2 | 4 | 0.45 | 0.4
Griewangk | 2 | 2 | 0.45 | 0.1
Table 3.6 The optimal parameters of SEA and DEA for particular functions

Function | Number of subpopulations | Number of chromosomes in each subpopulation | Probability of simple crossover (%) | Probability of Gaussian mutation (%)
Branin | 1 | 20 | 100 | 100
Branin | 2 | 10 | 100 | 100
Goldstein-Price | 1 | 20 | 100 | 100
Goldstein-Price | 3 | 7 | 100 | 100
Rastrigin | 1 | 20 | 100 | 100
Rastrigin | 2 | 10 | 100 | 100
Griewangk | 1 | 10 | 100 | 100
Griewangk | 2 | 5 | 100 | 100
the stop condition for the optimization process. Then a new optimization process was
started. Similarly, for the Goldstein-Price function, with the global minimum 3.0,
the stop condition was set to 3.1; for the Rastrigin and Griewangk functions, with the
global minimum 0.0, the stop condition was equal to 0.1.
References
14. Burczyński T, Orantek P (2005) The fuzzy evolutionary algorithm in structural optimization
and identification problems. In: 16th international conference on computer methods in
mechanics CMM-2005, Full Paper, CD-ROM, Czestochowa
15. Burczyński T, Orantek P (2005) Sigmoid and radial neural networks in sensitivity analysis:
comparisons and applications in defect identification. In: Proceedings of 16th international
conference on computer methods in mechanics CMM-2005, CD-ROM, Czestochowa
16. Burczyński T, Orantek P (2007) The identification of stochastic parameters in mechanical
structures. In: Proceedings of 17th international conference on computer methods in
mechanics CMM-2007, CD-ROM, Łódź-Spała
17. Burczyński T, Skrzypczyk J (1999) Theoretical and computational aspects of the stochastic
boundary element method. Comput Methods Appl Mech Eng 168:321–344
18. Cantu-Paz E (1998) A survey of parallel genetic algorithms. Calculateurs Paralleles, Reseaux
et Systems Repartis, 10, 2, Paris, pp 141–171
19. Caprani O, Madsen K, Nielsen HB (2002) Introduction to interval analysis. Lecture Notes,
Department of Informatics and Mathematical Modelling, Technical University of Denmark,
Lyngby, Denmark
20. de Castro LN, Timmis J (2003) Artificial immune systems as a novel soft computing
paradigm. Soft Comput 7(8):526–544
21. de Castro LN, Von Zuben FJ (2001) Immune and neural network models: theoretical and
empirical comparisons. Int J Comput Intell Appl (IJCIA) 1(3):239–257
22. de Castro LN, Von Zuben FJ (2002) Learning and optimization using the clonal selection
principle. IEEE Trans Evol Comput Spec Issue Artif Immune Syst 6(3):239–251
23. Chen L, Rao SS (1997) Fuzzy finite element approach for vibration analysis of imprecisely
defined systems. Finite Elem Anal Des 27:69–83
24. Cheng YM, Li L, Chi SC (2007) Performance studies on six heuristic global optimization
methods in the location of critical slip surface. Comput Geotech 34:462–484
25. Clerc M, Kennedy J (2002) The particle swarm-explosion, stability and convergence in a
multidimensional complex space. IEEE Trans Evol Comput 6
26. Eberhart RC, Shi Y (1998) Comparison between genetic algorithms and particle swarm
optimization. In: Proceedings of the seventh annual conference on evolutionary programming.
Springer: New York, pp 611–616
27. Fausett LV (1993) Fundamentals of neural networks: architectures, algorithms and
applications. Prentice Hall, Upper Saddle River
28. Freeman JA, Skapura DM (1991) Neural networks—Algorithms, applications and program-
ming techniques. Addison-Wesley Pub, Reading
29. Gurney K (1997) An introduction to neural networks. UCL Press, London
30. Hanss M (2005) Applied fuzzy arithmetic. Springer, Berlin
31. Heppner F, Grenander U (1990) A stochastic nonlinear model for coordinated bird flocks. In:
Krasner S (ed) The ubiquity of chaos. AAAS Publications, Washington, DC
32. Jang JSR, Sun CT, Mizutani E (1997) Neuro-fuzzy modeling and soft computing. Prentice
Hall, Upper Saddle River
33. Kennedy J, Eberhart RC (1995) Particle swarm optimisation. In: Proceedings of IEEE
international conference on neural networks. Piscataway, NJ, pp 1942–1948
34. Kennedy J, Eberhart RC (2001) Swarm intelligence. Morgan Kauffman, San Francisco
35. Kleiber M, Hien TD (1992) The stochastic finite element method. Wiley, New York
36. Mehrotra K, Mohan CK, Ranka S (1997) Elements of artificial neural networks. MIT Press,
Cambridge
37. Mendel JM (2001) Uncertain rule-based fuzzy logic systems: introduction and new directions.
Prentice Hall, Upper Saddle River
38. Michalewicz Z (1996) Genetic algorithms + data structures = evolution programs. Springer,
Berlin
39. Moore RE (1966) Interval analysis. Prentice-Hall, Englewood Cliff
4.1.1 Introduction
points in the considered domain. The algorithm is very simple, but effective only for
small finite domains, so checking all possibilities is usually impossible in reason-
able time. The goal of the random methods is to randomly explore the whole
search space (without any additional parameters). The search is very
time-consuming, but less so than with enumerative methods.
Analytical optimization methods are widely applied and have good mathematical
foundations, but unfortunately for multimodal function they usually get stuck in
local optima (Fig. 4.1). Intelligent computing techniques, for example, EAs, AISs,
PSOs all compromise between efforts to obtain in many practical optimization
tasks; they are the only possible choice. In many practical optimization tasks, they
are the only possible choice.
where $x_i^L$ and $x_i^R$ are the left and right admissible values of $x_i$.
For the single-objective optimization problem, the task consists in finding a set of design variables x which minimizes or maximizes the objective function f(x) and simultaneously satisfies a set of constraints. For the multiobjective optimization problem, instead of one objective function, a set of objective functions is considered. A solution x dominates a solution y if:
$$\forall i \in \{1, 2, \dots, k\}: f_i(\mathbf{x}) \le f_i(\mathbf{y}) \quad \text{and} \quad \exists j \in \{1, 2, \dots, k\}: f_j(\mathbf{x}) < f_j(\mathbf{y}) \qquad (4.1.4)$$
The Pareto optimum does not always give a single solution, but a set of solutions called nondominated solutions or efficient solutions.
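The dominance test (4.1.4) and the extraction of the nondominated set can be sketched in a few lines. This is an illustrative Python fragment, not the authors' C++ implementation:

```python
def dominates(fx, fy):
    """True if objective vector fx dominates fy (for minimization):
    fx is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

def nondominated(front):
    """Keep only the nondominated (Pareto-efficient) objective vectors."""
    return [f for f in front
            if not any(dominates(g, f) for g in front if g is not f)]
```

For example, `nondominated([(1, 5), (2, 2), (3, 1), (4, 4)])` drops `(4, 4)`, which is dominated by `(2, 2)`.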
Most of the bio-inspired multiobjective algorithms are based on the Pareto concept. Some earlier implementations of such algorithms, as well as gradient-based techniques, use scalarization methods that involve a priori preferences. Parameters, coefficients, constraint limits, and so on, have to be specified in order to obtain a solution from the Pareto-optimal set [5, 51]. The most popular methods are:
• global criterion method,
• min–max method,
• weighting min–max method,
• weighting sum method,
• ε-constraint method,
• lexicographic method,
• goal programming.
The authors have solved different types of engineering multiobjective optimization problems with the use of:
• weighting sum method,
• ε-constraint method,
• MOEA—multiobjective evolutionary algorithm based on the idea of Fonseca and Fleming,
• MOOPTIM—multiobjective optimization library based on the Pareto concept and EAs,
• NSGA-II—nondominated sorting genetic algorithm.
Except for NSGA-II, all implementations are the authors' own codes written in C++.
The weighting sum method and the ε-constraint method belong to the scalarization methods and do not require modification of the core evolutionary algorithm (a sequential evolutionary algorithm is used). For the former, the problem is transformed into a single-objective one (Fig. 4.3) by applying the following formula:
$$f(\mathbf{x}) = \sum_{i=1}^{k} w_i f_i(\mathbf{x}) \qquad (4.1.5)$$
where k is the number of objective functions and $w_i$ are the weights of the individual criteria.
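As a simple illustration of Eq. (4.1.5), the scalarization can be written as a higher-order function. The Python sketch below is illustrative only (the authors' codes are in C++):

```python
def weighted_sum(objectives, weights):
    """Scalarize a multiobjective problem (Eq. 4.1.5):
    f(x) = sum_i w_i * f_i(x), turning it into a single-objective one."""
    def f(x):
        return sum(w * fi(x) for w, fi in zip(weights, objectives))
    return f
```

For two toy criteria f1(x) = x^2 and f2(x) = (x - 2)^2 with equal weights, the scalarized function attains its minimum at x = 1, a compromise between the two individual optima.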
The algorithm consists of two parts: an initialization and a main loop. Figure 4.5
shows the flowchart of the multiobjective evolutionary algorithm. In the initial-
ization step, all the settings of the algorithm are determined, populations Qi and Pi
are generated and the fitness functions are evaluated for population Qi. In the main
loop, after the evaluation of fitness function values for Pi and checking stop
• ANSYS Multiphysics;
• ABAQUS;
• COMSOL.
Coupling of the computational techniques and the optimization problem requires
the creation of the proper interface (Fig. 4.6). Communication is usually performed
through files. These interfaces should read values of the design variables and
prepare appropriate information for the solution of the boundary-value problem.
The flowchart of the fitness function evaluation is presented in Fig. 4.7. The
optimization algorithm (EA, MOEA, AIS, PSO) sends values of design variables.
On the basis of the design variables, the first geometry of the structure is created.
The next steps are generation of the finite-element mesh, boundary and initial
conditions and definition of all necessary settings of the analysis. After solving the
boundary-value problem, the results are read from the output files generated by the
FEM or BEM packages. On the basis of the results, fitness function (for single
optimization task) or fitness functional (for the multiobjective optimization task) is
calculated.
It should be mentioned that the preparation of the model, mesh generation, and so on can be done by means of one's own codes or the appropriate pre-processor of a commercial system. It is very convenient to use pre-processors (e.g. there is no need to use an external meshing procedure), but this requires the usage of the internal script languages implemented in the pre-processors and may be more time-consuming.
The choice of the geometry modelling method and the design variables has a great
influence on the final solution of the optimization process. There are a lot of
methods for geometry modelling. In the proposed approach, nonuniform rational
B-spline (NURBS) and Bezier curves are used to model the geometry of the
structure [98]. The use of these curves in optimization makes the reduction of the
number of design parameters possible. It provides the flexibility to design a large
variety of shapes by manipulating the control points.
An nth-degree Bezier curve is defined by:
$$C(u) = \sum_{i=0}^{n} B_{i,n}(u) P_i \qquad (4.1.7)$$
where u is a coordinate varying in the range $\langle 0, 1\rangle$ and $P_i$ are control points. The basis function $B_{i,n}$ is given by:
$$B_{i,n}(u) = \frac{n!}{i!\,(n-i)!}\, u^i (1-u)^{n-i} \qquad (4.1.8)$$
An example of a fourth-degree Bezier curve is shown in Fig. 4.8. The flexibility to design a large variety of shapes is provided by manipulating the control points.
Fig. 4.8 An example of modelling the shape of the structure by a fourth-degree Bezier curve
4.1 Formulation of Single- and Multiobjective Optimization Problems 87
Successive points of the curve are obtained by changing the value of u between 0 and 1. For $u = 0$, $C(u) = P_0$, and for $u = 1$, $C(u) = P_4$. The shape of the Bezier curve depends on the position of the control points. In order to obtain more complicated shapes, it is necessary to raise the degree of the Bezier curve and introduce more control points.
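A direct evaluation of Eqs. (4.1.7)-(4.1.8) is straightforward; a minimal Python sketch:

```python
from math import comb

def bezier(u, control_points):
    """Evaluate an nth-degree Bezier curve (Eqs. 4.1.7-4.1.8):
    C(u) = sum_i B_{i,n}(u) P_i, with Bernstein basis functions B_{i,n}."""
    n = len(control_points) - 1
    point = [0.0] * len(control_points[0])
    for i, P in enumerate(control_points):
        b = comb(n, i) * u ** i * (1.0 - u) ** (n - i)  # B_{i,n}(u)
        point = [c + b * p for c, p in zip(point, P)]
    return point
```

With five control points (a fourth-degree curve, as in Fig. 4.8), `bezier(0.0, P)` returns P0 and `bezier(1.0, P)` returns P4, as stated above.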
A NURBS curve is more adjustable and flexible in comparison to the Bezier
curve. The curve is defined by the following formula:
$$C(u) = \frac{\sum_{j=0}^{r} N_{j,n}(u)\, w_j P_j}{\sum_{k=0}^{r} N_{k,n}(u)\, w_k}, \qquad a \le u \le b \qquad (4.1.10)$$
where $P_j$ are control points, $w_j$ are the weights of the control points and $N_{j,n}$ are the nth-degree B-spline basis functions defined on the knot vector
$$U = \{\underbrace{a, \dots, a}_{n+1},\; u_{n+1}, \dots, u_{m-n-1},\; \underbrace{b, \dots, b}_{n+1}\} \qquad (4.1.11)$$
When the position and the weight of the control points are changed, it is possible to manipulate the curve precisely. From the practical point of view, a very important feature of NURBS curves is the local approximation property. It means that if the control point $P_j$ is moved and/or the weight $w_j$ is changed, only the part of the curve on the interval $u \in \langle u_j, u_{j+n+1})$ is modified.
In the case of 3D structures the boundaries of the NURBS surfaces (Fig. 4.9) are
modelled. Due to the use of NURBS curves and surfaces, the number of optimized
parameters can be decreased.
Fig. 4.9 The modelling of the boundary by means of the NURBS surface
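Equation (4.1.10) can be evaluated with the Cox-de Boor recursion for the B-spline basis. The following Python sketch is illustrative (it restricts u to the half-open interval [a, b)):

```python
def bspline_basis(j, n, u, U):
    """B-spline basis N_{j,n}(u) on knot vector U (Cox-de Boor recursion)."""
    if n == 0:
        return 1.0 if U[j] <= u < U[j + 1] else 0.0
    left = right = 0.0
    if U[j + n] != U[j]:
        left = ((u - U[j]) / (U[j + n] - U[j])
                * bspline_basis(j, n - 1, u, U))
    if U[j + n + 1] != U[j + 1]:
        right = ((U[j + n + 1] - u) / (U[j + n + 1] - U[j + 1])
                 * bspline_basis(j + 1, n - 1, u, U))
    return left + right

def nurbs(u, P, w, U, n):
    """Evaluate a NURBS curve (Eq. 4.1.10): a rational combination of the
    control points P with weights w and degree-n basis functions on U."""
    num = [0.0] * len(P[0])
    den = 0.0
    for j, (Pj, wj) in enumerate(zip(P, w)):
        Nw = bspline_basis(j, n, u, U) * wj
        num = [c + Nw * p for c, p in zip(num, Pj)]
        den += Nw
    return [c / den for c in num]
```

Increasing a single weight $w_j$ pulls the curve locally towards $P_j$, which illustrates the local approximation property mentioned above.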
88 4 Structural Intelligent Optimization
Shape and topology optimization have been an active research area for some time.
Recently, several innovative approaches for topology optimization have been
developed. One of the simplest optimization approaches is the method based on
removing inefficient material from a structure. This method is named evolutionary
structural optimization [120]. However, this method is not based on the application
of the evolutionary algorithm but on different rejection criteria for removing
material which depends on the types of design constraints.
One of the most famous topology optimization approaches is based on the
material homogenization method [16, 18]. It has been applied to various opti-
mization problems. The homogenization design method assumes the introduction of
the periodic microstructures of a particular shape into the finite elements of the
discretized domain. The size and orientation of microstructures in the elements
determine the density and structural characteristics of the material they are made of.
An optimization process consisting of application of the mathematical program-
ming techniques leads to the minimization of the structure compliance by changing
the orientation and size of the microstructures. As a result of the optimization
process, composite structures emerge. As a variation and simplification of the
homogenization method, the solid isotropic microstructure with penalization
(SIMP) method [17, 18] has been introduced. In this approach, the densities of the basic elements play the role of the design variables. The convergence of this method is strongly dependent on the value of the penalization term. Another interesting
approach assumes the discretization of the domain into binary material/void ele-
ments introduced by Anagnostou et al. [4]. This approach was developed by
Kirkpatrick et al. [78], who proposed finding the optimal material configuration
within the design domain by using simulated annealing. Jensen and Sandgren [103]
proposed the application of the genetic algorithm in order to solve similar opti-
mization problems. This approach has been developed by Chapman et al. [49].
Another interesting approach to the structural optimization problem is the multi-GA system introduced by Woon et al. [119], which assumes the application of two genetic algorithms operating simultaneously and in parallel. The first (external) genetic algorithm is used to define the optimum shape of the structure by operating on the external boundary, while the second (internal) one is used to optimize the internal topology. This method does not require the application of post-processing or additional algorithms to generate smooth boundaries. Another approach to structural optimization is based on generating a new void (a so-called bubble) inside a domain on the basis of special criteria, and then on performing simultaneous shape and topology optimization. This approach was originated by Eschenauer and Schumacher [67]. The coupling of this approach with the boundary elements and the genetic algorithms was considered by Burczyński and Kokot [31]. From the mathematical point of view, this approach is based on replacing a simply connected domain with a multiply connected domain. The topology optimization
4.2 Shape, Topology, Material and Size Optimization and Their Parameterization 89
$$J = \frac{1}{2} \int_{\Gamma} u\, p\, \mathrm{d}\Gamma \qquad (4.2.6)$$
Fig. 4.10 The illustration of the idea of evolutionary generation for a 2D structure
where $d_j^{min}$, $d_j^{max}$ are the minimum and maximum values of the gene, respectively. Genes are the values of the function $W_\alpha(X)$, $\alpha = \rho, g$, at the control points $(X)_j$ of the surface (hypersurface), that is, $d_j = W_\alpha\big((X)_j\big)$, $j = 0, 1, 2, \dots, G$.
The finite-element method [126] is applied in the analysis of the structure. The domain $\Omega$ of the structure is discretized by means of finite elements, $\Omega = \bigcup_{e=1}^{E} \Omega_e$. The assigning of the mass density and the thickness to each finite element $\Omega_e$, $e = 1, 2, \dots, E$, is performed by the mappings:
$$\rho_e = W_\rho\big((X)_e\big), \qquad (X)_e \in \Omega_e, \qquad e = 1, 2, \dots, E \qquad (4.2.9)$$
$$g_e = W_g\big((X)_e\big), \qquad (X)_e \in \Omega_e, \qquad e = 1, 2, \dots, E \qquad (4.2.10)$$
It means that each finite element can have a different mass density or thickness.
When the value of the mass density or the thickness for the eth finite element falls in the interval $0 \le \rho_e < \rho_{min}$ (or $0 \le g_e < g_{min}$), the finite element is eliminated and a void is created; in the interval $\rho_{min} \le \rho_e \le \rho_{max}$ (or $g_{min} \le g_e \le g_{max}$), the finite element remains.
In the next step, the Young's modulus for the eth finite element is evaluated by means of the following equation:
$$E_e = E_{max} \left( \frac{\rho_e}{\rho_{max}} \right)^{r} \qquad (4.2.11)$$
where $E_{max}$, $\rho_{max}$ are the Young's modulus and the mass density of the given material, respectively, and r is a parameter which can change from 1 to 9.
The dependence between the Young's modulus and the mass density in topology optimization was proposed for the first time by Bendsøe [15]. For the topology optimization of 2D structures, expression (4.2.11) was applied by Kutyłowski [87]. The material properties or the thickness of the finite elements change evolutionally, and some of them are eliminated by means of the proposed method. As a result, the optimal shape, topology and material or thickness of the structure are obtained.
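The element-wise mapping of Eqs. (4.2.9)-(4.2.11), together with the elimination threshold, can be sketched as follows (the threshold values in the usage are illustrative, not taken from the book's examples):

```python
def element_material(rho_e, rho_min, rho_max, E_max, r):
    """Map a gene-controlled element mass density to the element status:
    densities below rho_min eliminate the element (a void is created);
    otherwise Eq. (4.2.11) gives E_e = E_max * (rho_e / rho_max) ** r."""
    if 0.0 <= rho_e < rho_min:
        return None  # element eliminated
    return E_max * (rho_e / rho_max) ** r
```

For example, with rho_min = 2.0, rho_max = 10.0, E_max = 200 GPa and r = 3, a density of 5.0 keeps the element with a reduced modulus of 25 GPa, while a density of 1.0 removes it.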
4.2.3 Parameterization
Parameterization is the key stage in structural optimization. A large number of design variables makes the optimization process ineffective, and a one-to-one connection between the design variables (genes) and the finite elements leads to poor results. Better results can be obtained when the surface (or hypersurface) of the mass density distribution is interpolated from a suitable number of values given at control points $(X)_j$. On the one hand, this number should provide a good interpolation; on the other hand, the number of design variables should be small.
Two different types of interpolation procedures were applied. First, the multinomial interpolation described below for a 3D structure was introduced (the procedure for a 2D structure is a particular case of it). The hypersurface $W_\alpha$ is interpolated as follows:
$$W_\alpha(X) = U\, D\, E^{-1} F^{-1} \begin{bmatrix} d_1 \\ \vdots \\ d_{27} \end{bmatrix}, \qquad \alpha = \rho, g \qquad (4.2.12)$$
where the matrices U, D, E and F are determined by the arrangement of the control points.
The structure which undergoes the optimization process is inserted into a cube $H^3$ whose edges have lengths A = 2, B = 2, C = 2, and 27 control points are arranged regularly (Fig. 4.11). In this case, the number of control points is fixed. When the body has a complex geometry whose overall dimensions differ considerably from the space $H^3$, this approach can lead to lower accuracy of the interpolation process, since the domain $\Omega$ does not cover the working space (Fig. 4.12).
In order to overcome these difficulties, a second interpolation procedure, based on control points coinciding with selected FEM nodes, has been introduced. This procedure (Table 4.1) is based on the analysis of the neighbourhoods of the individual nodes and enables the introduction of an optional number of control points at any nodes of the finite-element mesh.
This interpolation procedure works in an iterative way:
$$I^{k+1} = f\big(I^k\big), \qquad k = 0, 1, 2, \dots, K \qquad (4.2.15)$$
where the approximations of the interpolation vector in the following steps k are given by the expression
$$I^k = \big[p_1^k, p_2^k, \dots, p_i^k, \dots, p_N^k\big], \qquad i = 1, 2, \dots, N, \qquad k = 0, 1, 2, \dots, K \qquad (4.2.16)$$
and the interpolation parameters $p_i^k$ are the values of the function $W_\alpha^k$, $\alpha = \rho, g$, at the interpolation nodes $(X)_i$ (nodes of the finite-element mesh):
$$p_i^k = W_\alpha^k\big((X)_i\big), \qquad i = 1, 2, \dots, N, \qquad k = 0, 1, 2, \dots, K, \qquad \alpha = \rho, g \qquad (4.2.17)$$
The number and the arrangement of the control points of the interpolation function $W_\alpha^k$, $\alpha = \rho, g$, are the input data to the optimization programme. The control points are located in selected nodes of the finite-element mesh, and the following inequality is satisfied:
$$G \le N \qquad (4.2.18)$$
The number of control points equals the number of design variables. The number and the locations of the control points are declared arbitrarily by the user of the optimization programme. In order to distinguish the nodes which play the role of control points, an additional vector $T_i$, $i = 1, 2, \dots, N$, is introduced, with the value 1 at the positions which correspond to the numbers of the chosen nodes. If $T_i = 1$, the node takes the gene value, $p_i^k = d_j$, $j = 1, 2, \dots, G$, and plays the role of a control point. Otherwise $T_i = 0$ and the interpolation parameters are calculated from the equation
$$p_i^{k+1} = \frac{1}{2}\left( \max_l p_l^k + \min_l p_l^k \right), \qquad l = 1, 2, \dots, M \qquad (4.2.19)$$
where the index l runs over the M neighbouring nodes of node i.
(The pseudocode of the iterative procedure, which searches for the minimum and maximum parameter values among the neighbouring nodes of the mesh, with additional searches along the third direction for 3D structures, is given in Table 4.1; Eqs. 4.2.20-4.2.22.)
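One step of the neighbour-based interpolation (Eq. 4.2.19) can be sketched as follows. The mesh connectivity is given here as an adjacency list, which is an assumption about the data layout, not the book's implementation:

```python
def interpolate_step(p, neighbours, is_control):
    """One iteration of the node-based interpolation (Eq. 4.2.19):
    control nodes (T_i = 1) keep their gene values; every other node is set
    to the mean of the extreme values found among its mesh neighbours."""
    p_next = list(p)
    for i, nbrs in enumerate(neighbours):
        if not is_control[i]:
            vals = [p[l] for l in nbrs]
            p_next[i] = 0.5 * (max(vals) + min(vals))
    return p_next

def interpolate(p0, neighbours, is_control, iterations=20):
    """Iterate Eq. (4.2.15) a fixed number of times (a simple sketch; the
    original procedure iterates until the field settles)."""
    p = list(p0)
    for _ in range(iterations):
        p = interpolate_step(p, neighbours, is_control)
    return p
```

On a chain of five nodes with control values 0 and 4 fixed at the ends, the iteration settles towards the linear field 0, 1, 2, 3, 4 between the control points.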
Two different types of the procedure have been introduced. The first one (Fig. 4.13a) is applied to the task of minimization of the stress functional, and the second one (Fig. 4.13b) to the task of minimization of the mass functional. In the first case, the procedure is performed until the volume constraint is fulfilled. In the second one, the material is eliminated until the admissible limit of the stresses is exceeded. Then the procedure of adding material (finite elements) around the regions with high stresses is performed. The structure is analysed by the FEM and the stress constraint is checked. If the constraint is satisfied, the procedure is finished and the fitness function is computed. If not, the last structure which fulfilled the stress constraint is analysed by the FEM, and the fitness function is evaluated and transferred to the evolutionary algorithm.
The final structure obtained after the optimization process has rough external and internal segments of the boundary. In order to obtain a smooth shape of the boundary, a smoothing procedure has to be used. The procedure can be applied during or after the optimization process. If it is used during the optimization process, smooth structures which fulfil all the imposed constraints are obtained. If it is used after the optimization process, smooth structures which do not have to fulfil the imposed constraints are obtained, so they must be analysed by the finite-element method once again and it must be checked whether they fulfil the constraints. The procedure smoothes the boundaries of the structures by changing the coordinates of the nodes in an iterative way (Table 4.3).
Fig. 4.13 The additional procedure aiding evolutionary optimization: a for the minimization of the stress functional; b for the minimization of the mass functional
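A minimal analogue of such iterative node-coordinate smoothing, written for an open polyline of boundary nodes, can be sketched as follows (an illustrative fragment under that simplification, not the book's Table 4.3 procedure on the FE mesh):

```python
def smooth_boundary(nodes, iterations=5, weight=0.5):
    """Iteratively smooth an open polyline: each interior node is pulled
    toward the midpoint of its two neighbours; the end nodes stay fixed."""
    pts = [list(p) for p in nodes]
    for _ in range(iterations):
        new = [list(p) for p in pts]
        for i in range(1, len(pts) - 1):
            for d in range(len(pts[i])):
                mid = 0.5 * (pts[i - 1][d] + pts[i + 1][d])
                new[i][d] = (1 - weight) * pts[i][d] + weight * mid
        pts = new
    return pts
```

Applied to a jagged boundary segment, a few iterations already reduce the zig-zag amplitude while keeping the end points in place.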
4.3 Optimization of Elastic Structures Under Static Loads
The task of the optimization of the shape, topology and thickness of a car wheel by the minimization of the stress functional and with the volume constraint is considered. The car wheel geometry, with characteristic dimensions included in Table 4.3, is built of three surfaces of revolution (Fig. 4.15): the central surface with the holes for the fastening bolts, the surface of the ring of the wheel and the surface connecting the
two mentioned earlier. The last one is subjected to the optimization process. The
shell structure is loaded with the tangent force s0 (torsion of the wheel) and with a
pressure c0 (pressure in the tyre). The loadings are applied to the ring of the wheel
(Fig. 4.16b). The structure is rigidly supported around the holes destined for the
fastening bolts and is also supported on the central surface in the direction of the
rotation axis of the wheel (Fig. 4.16b). In the considered task, the symmetry of the car wheel (rotational repetition of a 1/5 part of the structure) has been exploited during the distribution of the control points of the interpolation hypersurface (Fig. 4.16a). In this way, the number of design variables (genes) could be decreased and symmetrical results could be obtained. This is purposeful because of the necessity of balancing the car wheel. The input data for the optimization task and the parameters of the evolutionary algorithm are included in Tables 4.3 and 4.4, respectively. The results of the optimization are presented as the maps of thickness and the maps of stresses for the best solutions obtained in the 100th generation (Fig. 4.17).
Fig. 4.16 A car wheel: a the distribution of the control points of the interpolation hypersurface;
b boundary conditions
Fig. 4.17 The results of the car wheel optimization: a, b the best solution from the first
population; c, d the best obtained solution; a, c maps of thicknesses; b, d maps of stresses
The task of the optimization of the shape, topology and thickness of a tank-supporting structure by the minimization of the stress functional and with the volume constraint is considered. The considered construction is rigidly supported on the lower boundary. The tank is loaded with pressure c0 and the construction is loaded with its deadweight. The geometry and dimensions of the construction are presented in Fig. 4.18.
Fig. 4.18 The geometry and the dimensions of a tank-supporting structure
The tank-supporting structure presented in Fig. 4.19a is
subjected to the optimization process. In order to reduce the number of design
variables and to get the symmetrical results, a quarter of the construction has been
analysed. The distribution of the control points of the interpolation hypersurface is shown in Fig. 4.19b. The input data for the optimization task and the parameters of the distributed evolutionary algorithm are included in Tables 4.5 and 4.4, respectively. The results of the optimization process, with the application of three metal plates of different thicknesses, are presented as the map of thickness (Fig. 4.19c) and the map of stresses (Fig. 4.19d) for the best obtained solution.
In the next example, an "L" structure (Fig. 4.20a) is optimized. The criterion of optimization is the minimization of the mass. Computational results obtained after 73 generations are presented in the form of a map of the mass density distribution (Fig. 4.20b, c). The structure after smoothing is presented in Fig. 4.21. Table 4.6 contains the input data. The dimensions, the loading of the 3D structure and the constraint are included in Tables 4.7 and 4.8.
Fig. 4.19 Tank supporting structure: a the finite element mesh; b the distribution of the control
points of the interpolation hypersurface; c, d the results of the evolutionary optimization of the
tank supporting structure. The best individual in the t = 100th generation; c the map of
thicknesses; d the map of stresses
Table 4.5 The input data to the optimization task of a tank-supporting structure
• Number of design variables: 29
• Number of control points: 29
• r_min; p (MPa): 1.0; 1.0
• Pressure c0 (MPa): 5.0
• V_max (cm³): 17 000
• Range of g_e (mm): 2.5 ≤ g_e < 7.5: elimination; 7.5 ≤ g_e ≤ 22.5: existence
• Thickness of the metal plates (mm): g = 10 for 7.5 ≤ g_e < 12.5; g = 15 for 12.5 ≤ g_e < 17.5; g = 20 for 17.5 ≤ g_e ≤ 22.5
Fig. 4.20 L-like structure: a the scheme of loading, b the distribution of mass density after first
generation, c the distribution of mass density after optimization
Table 4.9 The input data to the optimization task of a plate in plane stress
• σ_ad (MPa): 80.0
• Thickness (mm): 4.0
• r_min; p (MPa): 1.0; 1.0
• P (kN): 2.0
• Range of ρ_e (g/cm³): 7.3 ≤ ρ_e < 7.5: elimination; 7.5 ≤ ρ_e ≤ 7.86: existence
Fig. 4.22 The plate (Example 1); a the geometry; b the distribution of the control points of the
interpolation surface
Fig. 4.23 The results of the immune optimization of the plate: a the solution of the optimization
task; b the map of mass densities; c the map of stresses; d the map of displacements
A 3D structure with dimensions and loading is presented in Fig. 4.24a and b. The
input data for the optimization procedure are included in Table 4.10. The geometry
and the distribution of the control points of the interpolation hypersurface are
shown in Fig. 4.24c. The results of the optimization process are presented in
Figs. 4.25 and 4.26.
Fig. 4.24 Two cases of loading with the hypersurface: a first case (compression), b second case
(tension), c the distribution of the control points of the interpolation hypersurface
Fig. 4.25 a The distribution of mass density for the first case (compression), b the structure after 50 iterations (the best solution) and c the structure after smoothing
The structure is rigidly supported at the bottom boundary of the solid body. The upper surface is loaded with pressure. The geometry, the boundary conditions and the
Fig. 4.26 a The distribution of mass density for the second case (tension), b the structure after 50 iterations (the best solution) and c the structure after smoothing
Fig. 4.27 The shell-solid structure (Example 3); a the geometry; b the boundary condition; c the
distribution of the control points of the interpolation hypersurface
Table 4.11 The input data to the optimization task of the shell-solid structure
• σ_ad (MPa): 150.0
• Thickness (mm): 15.0
• r_min; p (MPa): 2.0; 2.0
• p (MPa): 3.0
• Range of ρ_e (g/cm³): 7.3 ≤ ρ_e < 7.5: elimination; 7.5 ≤ ρ_e ≤ 7.86: existence
Fig. 4.28 The results of the immune optimization of the shell-solid structure: a the solution of the optimization task (the map of mass densities); b the map of stresses; c the map of displacements
A square plate, loaded with the concentrated force Q applied at the centre of the structure and fixed at the boundary, is considered. In order to obtain symmetrical results, a quarter of the structure has been analysed. The input data for the optimization programme are included in Table 4.12. The results of the optimization process with different values of the stress constraint are presented in Table 4.13.
Table 4.12 The input data to the optimization task of a bending plate
• a × b (mm): 200 × 200
• Thickness (mm): 4.0
• r_min; p (MPa): 5.0; 1.0
• Q (N): 200.0
• Range of ρ_e (g/cm³): 7.3 ≤ ρ_e < 7.5: elimination; 7.5 ≤ ρ_e ≤ 7.86: existence
(Table 4.13 presents the results for the stress constraints of 150 MPa and 200 MPa.)
Table 4.14 The input data to the optimization task of a shell bracket
• σ_ad (MPa): 110.0
• Thickness (mm): 5.0
• r_min; p (MPa): 2.0; 2.0
• Q1; Q2 (kN): 1.0; 1.0
• Range of ρ_e (g/cm³): 7.3 ≤ ρ_e < 7.5: elimination; 7.5 ≤ ρ_e ≤ 7.86: existence
Fig. 4.30 The results of the swarm optimization of the shell bracket structure: a the map of mass
densities; b the map of stresses; c the map of the displacement, for the best obtained solution
(a) the maximization of the first eigenfrequency
$$\max \omega_1 \qquad (4.4.1)$$
subject to the volume constraint
$$V \le V_{max}, \qquad V = |\Omega| \qquad (4.4.2)$$
(b) the maximization of the difference between the first, second and third eigenfrequencies
4.4 Optimization of Elastic Structures Under Dynamical Loads 113
where $d_j^{min}$ is the minimum value of the gene and $d_j^{max}$ is the maximum value of the gene.
The assigning of the Young's modulus to each finite element $\Omega_e$, $e = 1, 2, \dots, R$, is performed by the mapping:
$$E_e = W\big((x, y, z)_e\big), \qquad (x, y, z)_e \in \Omega_e, \qquad e = 1, 2, \dots, R \qquad (4.4.7)$$
It means that each finite element can have different material properties.
If the value of the Young's modulus for the eth finite element falls in the interval $0 \le E_e < E_{min}$, the finite element is eliminated and a void is created; in the interval $E_{min} \le E_e \le E_{max}$, the finite element remains, with the corresponding value of the Young's modulus. As a result, the shape, the topology and the material properties of the structure change simultaneously, and this procedure is called evolutionary generalized optimization.
Example 1: The maximization of the first eigenfrequency of a 3D bracket
A structure in the form of a 3D bracket (Fig. 4.31a) is optimized. The criterion of
optimization is the maximization of the first eigenfrequency. The best solution
obtained after 88 generations is presented in Fig. 4.31b. Table 4.15 contains input
data.
Example 2: The maximization of the difference between the first, second and
third eigenfrequencies of a rectangular prism
A 3D structure in the form of a rectangular prism (Fig. 4.32a) is optimized. The criterion of optimization is the maximization of the difference between the first, second and third eigenfrequencies. The best solution, in the form of the distribution of Young's moduli obtained after 169 generations, is presented in Fig. 4.32b. Input data are included in Table 4.16.
Example 3: The maximization of the difference between the first, second and
third eigenfrequencies and the forced vibration frequency of a rectangular prism
The last example concerns the optimization of a 3D structure from the previous
example (Fig. 4.32a). The criterion of optimization is the maximization of the
difference between the first, second, and third eigenfrequencies and forced vibration
frequency. The best solution obtained after 134 generations is presented in
Fig. 4.33. Input data are included in Table 4.17.
The plate is modelled by the boundary element method (BEM) [65] and the stiffener by the finite-element method (FEM) by means of beam finite elements attached along the Γ12 boundary (the interface). Perfect bonding between the plate and the stiffener is assumed. The whole structure is analysed by the coupled BEM/FEM and the subregion method [68]. The method allows the modelling of bodies with many plate subdomains and stiffeners of different properties. The numerical equations, which are written for each plate and beam subdomain separately, are coupled by using the displacement compatibility conditions and the traction equilibrium conditions at all nodes along the common boundaries.
A set of algebraic equations for the plate in Fig. 4.34 has the following form:
$$\begin{bmatrix} \mathbf{M}^1 & \mathbf{M}^{12} \end{bmatrix} \begin{Bmatrix} \ddot{\mathbf{u}}^1 \\ \ddot{\mathbf{u}}^{12} \end{Bmatrix} + \begin{bmatrix} \mathbf{H}^1 & \mathbf{H}^{12} \end{bmatrix} \begin{Bmatrix} \mathbf{u}^1 \\ \mathbf{u}^{12} \end{Bmatrix} = \begin{bmatrix} \mathbf{G}^1 & \mathbf{G}^{12} \end{bmatrix} \begin{Bmatrix} \mathbf{t}^1 \\ \mathbf{t}^{12} \end{Bmatrix} \qquad (4.4.8)$$
where M is the mass matrix, H and G are the BEM coefficient matrices, u and ü are the displacement and acceleration vectors, respectively, and t is a vector of tractions applied at the outer boundary or the interface. The superscripts denote the matrices which correspond to the outer boundary or the interface.
The equation of motion for the stiffener in Fig. 4.34, in matrix form, is:
$$\mathbf{M}^{21} \ddot{\mathbf{u}}^{12} + \mathbf{K}^{21} \mathbf{u}^{12} = \mathbf{T}^{21} \mathbf{t}^{12} \qquad (4.4.9)$$
where K is the FEM stiffness matrix and T is the matrix which expresses the relationship between the FE nodal forces and the BE tractions. The latter matrix allows the finite-element region to be treated as an equivalent boundary element region.
If the structure is subjected to time-dependent boundary conditions, dynamic interaction forces between the plate and the stiffener act along the interface. These tractions are treated as body forces distributed along the attachment line, and they are unknowns of the problem. The displacement compatibility conditions and the traction equilibrium conditions at the nodes along the interface are:
$$\mathbf{u}^{12}_{plate} = \mathbf{u}^{12}_{stiffener}, \qquad \mathbf{t}^{12}_{plate} = -\,\mathbf{t}^{12}_{stiffener} \qquad (4.4.10)$$
If the above conditions are taken into account in the equations for the plate (4.4.8) and the stiffener (4.4.9), the following system of equations for the whole structure is obtained:
$$\begin{bmatrix} \mathbf{M}^1 & \mathbf{M}^{12} \\ \mathbf{0} & \mathbf{M}^{21} \end{bmatrix} \begin{Bmatrix} \ddot{\mathbf{u}}^1 \\ \ddot{\mathbf{u}}^{12} \end{Bmatrix} + \begin{bmatrix} \mathbf{H}^1 & \mathbf{H}^{12} & -\mathbf{G}^{12} \\ \mathbf{0} & \mathbf{K}^{21} & -\mathbf{T}^{21} \end{bmatrix} \begin{Bmatrix} \mathbf{u}^1 \\ \mathbf{u}^{12} \\ \mathbf{t}^{12} \end{Bmatrix} = \begin{Bmatrix} \mathbf{G}^1 \mathbf{t}^1 \\ \mathbf{0} \end{Bmatrix} \qquad (4.4.11)$$
The unknowns are displacements and tractions on the external boundary and at
the interface in each time step.
Example 4: Reinforced rectangular plate
The optimization of a reinforced rectangular plate (Fig. 4.35) is performed by
means of AIS, PSO and EA. The plate is dynamically loaded and it is reinforced by
the frame-like structure composed of straight beams. The plate and the stiffeners are
modelled by the boundary elements and frame finite elements, respectively.
Different kinds of load and support are considered. The structure before opti-
mization (the reference plate) is shown in Fig. 4.35.
The length and the height of the plate are L = 10 cm and H = 5 cm, respectively. The thickness of the plate is g = 0.25 cm; the dimensions of the beam cross-section are 2a = 0.5 cm and b = 0.5 cm.
The material of the plate and the frame is aluminium, with the mechanical properties: Young's modulus E = 70 GPa, Poisson's ratio ν = 0.34 and density ρ = 2700 kg/m³. The material is homogeneous, isotropic and linear elastic, and plane stress is assumed.
A uniformly distributed load is applied at the upper edge of the plate. Two kinds of time-dependent loads are considered (see Fig. 4.36): (a) the sinusoidal load p(τ) = p₀ sin(2πτ/T) with the period T = 20π μs, and (b) the Heaviside load p(τ) = p₀H(τ). The value of the load in both cases is p₀ = 10 MPa. The time of the analysis is 600 μs and the time step is Δt = 2 μs.
Three different supports are considered (see Fig. 4.37):
(a) support A—the plate is fixed on the left and right edges,
(b) support B—the plate is supported at two segments, each of 0.5 cm long,
(c) support C—the plate is fixed at the bottom edge.
The optimal positions of stiffeners are searched in order to maximize the stiffness
of the plate. The maximal dynamic vertical displacement on the loaded edge is
minimized. Because of the symmetry of the structure and boundary conditions, only
half of the structure is considered. The number of design variables defining the
position of the frame is 4: X1, X2, Y1 and Y2 (see Fig. 4.38). The longer beams are
parallel to x-axis. The end points of beams can move along the edges of the plate
within the constraints, as shown in Fig. 4.38. The constraints imposed on the design
variables are: X1 and X2 within the range from 0.5 to 4.75 cm, Y1 from 0.5 to
2.25 cm, and Y2 from 2.75 to 4.5 cm. The parameters of the AIS are: the number of
memory cells and of clones is 6; the crowding factor and the Gaussian mutation
parameter are 0.5. The parameters of the EA are: the number of chromosomes is 20,
the probability of the Gaussian mutation is 0.5, and the probability of the simple and
of the arithmetic crossover is 0.05. The parameters of the PSO are: the number of
particles is 20, the inertia weight is 0.73 and the two acceleration coefficients are 1.47.
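The PSO settings listed above (20 particles, inertia weight 0.73, both acceleration coefficients 1.47) are the widely used constriction-type values. A minimal sketch of such an update loop on a toy quadratic objective (the BEM/FEM plate analysis itself is not reproduced; the objective and bounds here are placeholders):

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100,
                 w=0.73, c1=1.47, c2=1.47, seed=1):
    """Minimal particle swarm optimizer; f maps a list of floats to a scalar."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + attraction to own best + attraction to swarm best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clip to the box constraints on the design variables
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy quadratic standing in for the BEM/FEM objective; bounds mimic the
# ranges of X1, X2, Y1, Y2 quoted above (cm).
best, val = pso_minimize(lambda x: sum((xi - 1.0) ** 2 for xi in x),
                         bounds=[(0.5, 4.75), (0.5, 4.75),
                                 (0.5, 2.25), (2.75, 4.5)])
```

Each velocity blends the previous velocity with attractions to the particle's own best and to the swarm's best position; the clipping step plays the role of the box constraints on the design variables.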
The total number of boundary and finite elements in the BEM/FEM analysis is
120 and 120, respectively (each horizontal and vertical beam is discretized into 40
and 20 finite elements, respectively). The number of boundary and finite elements
during the optimization is constant.
The values of the design variables obtained by AIS, PSO and EA for the plate
subjected to the sinusoidal load, the Heaviside load and for three kinds of supports
are presented in Table 4.18. The results obtained by three different methods are
almost the same. The values of Jo and J (where Jo and J are the objective function
values for the reference and the optimal plate, respectively) and the reduction
R = (Jo − J)/Jo · 100% are also presented.
A significant reduction R, reflecting the improvement of the dynamic response of
the optimal plates in comparison with the initial designs, can be observed. The
optimal structures for different kinds of supports and for the sinusoidal and the
Heaviside loads are shown in Fig. 4.39a and b, respectively. It can be
seen that in the present example most of the constraints are active.
The efficiency of bio-inspired methods EA, AIS and PSO measured by number
of fitness function evaluations is presented in Table 4.19.
where σ_x^A(τ) is the x-component of stress at the point A (see Fig. 4.40), σ_o is
the nominal stress at the weakened cross-section, defined as the ratio of the applied
load to the area of this cross-section, and T is the time of analysis.
Plane stress is assumed. The cantilever is made of steel, considered as
a homogeneous and isotropic material in the framework of the linear theory of
elasticity. The values of the mechanical properties are: Young's modulus E = 210 GPa,
Poisson's ratio ν = 0.3 and density ρ = 7860 kg/m³.
The optimal shape of the cantilever is searched and the following objective
function J is minimized:
$$J = \int_0^T \left( \frac{u_y^A(t)}{u_o} \right)^2 dt \qquad (4.4.13)$$
where u_y^A(t) is the vertical displacement at the point A (see Fig. 4.42), u_o is an
admissible displacement and T is the time of analysis.
The objective function (4.4.13) is minimized with respect to the design variables
(L_i, H_j, i, j = 1, 2) defining the dimensions of the structure.
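With the displacement history sampled at the time steps of the BEM/FEM analysis, the functional (4.4.13) can be approximated by a quadrature rule. A sketch, in which the history array and the step size stand in for the analysis output:

```python
def objective_J(u_history, u_admissible, dt):
    """Trapezoidal approximation of J = integral of (u_y(t)/u_o)^2 dt
    over the analysis time, from displacements sampled every dt."""
    ratios = [(u / u_admissible) ** 2 for u in u_history]
    return dt * (sum(ratios) - 0.5 * (ratios[0] + ratios[-1]))

# Constant displacement equal to the admissible value over 10 steps of 2 µs:
# the integrand is 1 everywhere, so J equals the 20 µs analysis time.
J = objective_J([1.0] * 11, 1.0, 2.0)
```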
The total number of boundary and finite elements in the BEM/FEM analysis is
84 and 72, respectively. The quadratic elements (with two degrees of freedom per
node) are used for the BEM mesh. The frame elements (with three degrees of
freedom per node) are used for the FEM mesh. During the optimization the number
of boundary and finite elements is constant.
For this example five tests were performed and similar results were obtained.
The values of design variables for the optimal solutions are (rounded off to two
decimal places): L1 = 30.62 cm, L2 = 35.00 cm, H1 = 25.00 cm and
H2 = 25.00 cm. The optimal structure is shown in Fig. 4.43.
4.5 Optimization of Structures with Stiffeners

Reinforced structures are often used in practice because they are resistant, stiff and
stable. A typical area of application of such structures is the aircraft industry, where
light, stiff and highly resistant structures are required. Many aircraft elements are
made as thin panels reinforced by stiffeners. The choice of the optimal shape of the
structure, or of the proper stiffener arrangement in its domain, determines the
effectiveness of the construction or of the reinforcement. Optimal properties of
structures can be sought by means of computer-aided optimization tools. The
stiffener layout is usually obtained by modifying the thickness of each element of
the finite element mesh or by using the homogenization method. However, the
results obtained by means of these approaches do not give a clear stiffener layout.
Bendsoe and Kikuchi [16] analysed composites with perforated microstructures
using the homogenization method. As a result of topology
optimization, the grey-scaled structures emerged. Cheng and Olhoff [50] considered
the problem of stiffener layout using the method based on thickness distribution to
maximize the stiffness of rectangular and axisymmetric plates. Ding and Yamazaki
[56] generated stiffener layout patterns introducing a growing and branching tree
model and topology optimization method. Diaz and Kikuchi [55] searched for the
optimal reinforcement layout for the plates by adding a declared amount of rein-
forcing material to increase the fundamental frequency. Bojczuk and Szteleblak
[21] proposed a heuristic algorithm to find the optimal reinforcement layout. This
algorithm consists of two stages: first, the initial location of a new fibre or rib is
determined from sensitivity analysis (analogous to the topological derivative
approach of Sokołowski and Zochowski [108]); next, a gradient optimization
method is applied to correct their positions. Another
method is based on the optimization of the layout of isogrid stiffeners applied as
special triangular patterns. Due to their efficiency, these isogrid members have been
applied for example in launch vehicles and spacecraft components [107]. In the
present chapter, the coupling of FEM with bio-inspired methods, such as the
distributed evolutionary algorithm [115] and the particle swarm optimizer [76], in
the optimization of statically loaded reinforced structures is presented. The structures
are optimized by means of criteria dependent on displacements or stresses. Numerical
examples demonstrate that this soft-computing-based approach is an effective
technique for solving computer-aided optimal design problems.
introduced. It is assumed that the ends of a stiffener lie on the boundary of the 2D
structure; therefore, the location of a straight stiffener in the 2D structure domain is
determined by two points Pi: the beginning and the end of the stiffener (Fig. 4.44a).
In order to minimize the number of design parameters, a curved stiffener is
defined by means of a nonuniform rational B-spline (NURBS) curve [98]. The shape
of this curve is defined by the control points C_k, k = 1, 2, …, L, C_k ∈ Ω_2D (L is
the number of control points).
The location of the stiffeners in the domain of the 2D structure is controlled by
genes h_i, i = 1, …, N, and their shape by genes g_j, j = 1, …, M (Fig. 4.44b). The
set of genes creates a chromosome

$$ch = (h_1, h_2, \ldots, h_i, \ldots, h_N; \; g_1, g_2, \ldots, g_j, \ldots, g_M) \qquad (4.5.3)$$
where h_min is the minimum value of the gene h, h_max is the maximum value of the
gene h, g_min is the minimum value of the gene g, and g_max is the maximum value
of the gene g.
In order to solve the formulated problems, finite-element models of the
structures are considered [126]. The 2D structure domain Ω_2D is divided into
triangular finite elements Ω_s, s = 1, 2, …, R (for plane stress, a bending plate or a
shell), according to the geometry mapped on the basis of the chromosome. The
edges of the triangular finite elements which belong to the curves mapped on the
basis of the chromosome, playing the role of the stiffeners, create the bar
elements Ω_b, b = R + 1, R + 2, …, C (Fig. 4.45).
After the geometry discretization, finite-element analysis is performed and node
displacements are calculated by solving a system of linear algebraic equations
$$\mathbf{K}\mathbf{U} = \mathbf{F} \qquad (4.5.5)$$

with the element stiffness matrices

$$\mathbf{k}_s = \int_A \mathbf{B}_s^T \mathbf{D}_s \mathbf{B}_s \, dA \qquad (4.5.6)$$

for the 2D elements and

$$\mathbf{k}_b = \int_l \mathbf{B}_b^T \mathbf{D}_b \mathbf{B}_b \, dl \qquad (4.5.7)$$

for the bar elements, where D_s, B_s and D_b, B_b are the known elasticity and
geometrical matrices for the 2D structure and bar elements, respectively; l represents
the length of the bar element and A represents the area of the finite element.
After the finite-element analysis, the value of the fitness function is computed,
given for example by:

$$J = \int_{\Omega_{2D}} \sigma_{eq} \, d\Omega_{2D} \qquad (4.5.8)$$
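In the discrete setting the functional (4.5.8) becomes a sum over the finite elements; a sketch, assuming the equivalent stress and the area are available per element from the FEM solution:

```python
def stress_fitness(element_stresses, element_areas):
    """Discrete form of J = integral of sigma_eq over the 2D domain:
    sum of the element equivalent stress times the element area."""
    return sum(s * a for s, a in zip(element_stresses, element_areas))

# Three illustrative elements: stresses in MPa, areas in mm^2 -> 530.0
J = stress_fitness([100.0, 150.0, 120.0], [2.0, 1.0, 1.5])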
4.5.2.1 Example 1
4.5.2.2 Example 2
The task of optimizing the location of four stiffeners by minimizing the stress
functional in a bending plate, loaded with the pressure p and fixed at the boundary
(Fig. 4.48), is considered. The input data for the optimization programme and the
parameters of the evolutionary algorithm are given in Tables 4.23 and 4.20,
respectively. The results of the optimization process are presented in Fig. 4.49.
Fig. 4.46 Geometry and boundary conditions for the plate in plane stress (Example 1)
Fig. 4.47 The location of three stiffeners in the plate in plane stress and the map of stresses; a 1st
iteration, b 54th iteration
Fig. 4.48 Geometry and boundary conditions for the bending plate (Example 2)
4.5.2.3 Example 3
The task of optimizing the location of five stiffeners by minimizing the stress
functional in a cylindrical shell is considered. The structure is stretched by the
continuous load q and is fixed as presented in Fig. 4.50. The input data for the
optimization programme and the parameters of the swarm algorithm are given in
Tables 4.24 and 4.21, respectively. The results of the optimization process are
presented in Fig. 4.51.
4.5.2.4 Example 4
The task of optimizing the location and shape of two stiffeners in a plate in plane
stress, with the boundary conditions shown in Fig. 4.52, is considered. The optimal
positions of the stiffeners are searched for in order to maximize the stiffness of the
plate: the maximal nodal displacement in the structure is minimized. The stiffeners
are modelled using three-point NURBS curves. The weight of each control point
is 1 (no influence on the distance between the control point and the NURBS curve).
The input data for the optimization programme and the parameters of the
evolutionary algorithm are given in Tables 4.25 and 4.26, respectively. The results
of the optimization process are presented in Fig. 4.53.
Fig. 4.49 The location of four stiffeners in the bending plate and the map of stresses; a 1st
iteration, b 527th iteration
4.6 Optimization of Structures Under Thermo-Mechanical Loading

4.6.1 Introduction
Fig. 4.50 Geometry and boundary conditions for the cylindrical shell (Example 3)
Fig. 4.51 The location of five stiffeners in the plate in plane stress and the map of stresses; a 1st
iteration, b 186th iteration
Fig. 4.52 Geometry and boundary conditions for the plate in plane stress (Example 4)
Fig. 4.53 The location of two stiffeners in the plate in plane stress and the map of stresses; a 1st
iteration, b 339th iteration
$$\min_X \sigma_{eq}^{max}(X) \qquad (4.6.3)$$

with the imposed constraint on the maximal value of the volume of the structure
(V − V_ad ≤ 0),
• the maximization of the total dissipated heat flux:
with the imposed constraints on the cost (c − c_ad ≤ 0) and on the maximal value
of the equivalent stress (σ_eq − σ_eq^ad ≤ 0).
Example 1: Shape optimization of the cooling gap in the square plate with circular
void
A square plate with a circular void is considered (Fig. 4.54). Owing to
symmetry, only a quarter of the structure is taken into consideration. The considered
quarter of the structure contains the internal boundary shown in Fig. 4.55.
The values of the boundary conditions are contained in Table 4.26.
To solve the boundary-value problem, the BEM is used. The model consists of 90
boundary elements. The objective is the minimization of the radial displacements,
given by the functional (4.6.1), on the boundary where the tractions p0 are
prescribed. The optimization problem consists in searching for an optimal:
• shape of the internal boundary;
• width of the gap;
• distribution of the temperature T0 on the internal boundary.
The shape of the internal boundary is modelled by means of a Bézier curve with
seven control points, whereas the width of the gap and the temperature T0 are each
modelled by means of a Bézier curve with six control points (Fig. 4.56).
Owing to the symmetry along line AB (Fig. 4.55), the total number of design
parameters is equal to 13. The range of the variability of each control point for the
width of the gap is between 0.2 and 0.8, whereas for the temperature it is between 5
and 80 °C. Table 4.27 and Fig. 4.57 contain results of the optimization [29].
Example 2: Shape optimization of the three types of heat exchangers
The aim of the optimization is to find the optimal shape of the heat exchangers
used to dissipate heat from the electrical devices shown in Fig. 4.58a, b.
The optimal distribution of the material in the radiator (Fig. 4.58c) is also
considered. The fixed dimensions and the values of boundary conditions along Z
Fig. 4.59 Heat radiator: a the geometry and design variables, b the boundary conditions
responsible for the shape of the radiator, whereas the values of the control points
N0–N5 are responsible for the width of the fins. The height of the bottom part of the
structure can also vary. Due to symmetry (P0 ↔ P5, P1 ↔ P4, P2 ↔ P3, N0 ↔ N5,
N1 ↔ N4, N2 ↔ N3), the total number of the design parameters is equal to 7.
The admissible values of the design parameters are shown in Table 4.33. Several
numerical tests have been performed for each case. The best results of the opti-
mization are presented in Table 4.34 and Fig. 4.63 [60].
(Fragment of Table 4.34, criterion min_X σ_eq^max(X): 80.5, 51.7, 71.3, 11.4, 5.6,
10.3, 8.85, 0.97 MPa.)
Fig. 4.63 The optimal shape of the radiator: a the minimization of the maximal value of the
temperature; b the minimization of the volume of the structure; c the minimization of the maximal
value of equivalent stresses
Table 4.35 The material properties for aluminium, copper and silver

Parameter                        Aluminium   Copper      Silver
Young's modulus (MPa)            68,000      110,000     76,000
Poisson's ratio                  0.34        0.35        0.39
Thermal expansion coef. (1/K)    24×10⁻⁶     16.5×10⁻⁶   19.5×10⁻⁶
Heat conductivity (W/mK)         210         380         420
Fig. 4.65 The boundary conditions for the third type heat radiator
Fig. 4.66 The optimal distribution of the material in the radiator for different maximum cost
constraints: a c = 2.5; b c = 4; c c = 9
4.7 Optimization of Structures with Cracks
4.7.1 Introduction
Many spectacular accidents and catastrophes have been caused by fracture. Cracks
can occur in structural elements because of imperfections in the material or the
manufacturing process, or can arise from cyclic loading. To some extent cracks are
present in all structures, but they become dangerous if they extend to a critical
length. The ability to identify cracks during the service of the structure is therefore
essential. There are different methods of nondestructive crack identification, based
mainly on measurements of the responses of the structure. Crack and other defect
identification problems are presented in Sect. 5.3.
The negative influence of a crack on the structure can be reduced by means of
shape optimization methods. Publications devoted to the shape optimization of
structures with cracks divide the problems into two general groups:
• the minimization of the stress intensity factors (e.g. Vrbka and Knésl [117]),
• the maximization of the fatigue life-time of the structure (e.g. Gani and Rajan
[71]).
The optimization problem is formulated as:

$$\min_x J_0(x) \qquad (4.7.1)$$

$$J_\alpha(x) = 0, \quad \alpha = 1, 2, \ldots, m$$
$$J_\beta(x) \le 0, \quad \beta = 1, 2, \ldots, n \qquad (4.7.2)$$
$$x_i^{\min} \le x_i \le x_i^{\max}, \quad i = 1, 2, \ldots, k$$
where MCO = max(u⁺ − u⁻); u⁺, u⁻ are the displacement values of the
coincident nodes lying on the opposite sides of the crack; w_i = MCO_i / Σ MCO_i
are weight factors (Σ w_i = 1); n is the number of cracks.
2. The minimization of the reduced J-integral:

$$\min_x J_0 = \min_x \left( J = \sum_{i=1}^{2n} w_i J_i \right) \qquad (4.7.4)$$
Arising cracks may significantly reduce the life-time of real structures. The most
common fracture case is caused by fatigue crack growth. It is extremely dangerous
for structures, as a crack grows from a very small size to a critical one with no
visible effect. As a result, failure of the structure occurs.
The possibility of predicting the element life-time is a crucial problem. The
life-time of structure can be described in a general form by the velocity of the crack
growth [6]:
$$\frac{dl}{dN} = f(\sigma, l, C, Y, R, \chi) \qquad (4.7.7)$$

where N is the number of loading cycles, l is the current crack length, σ is the stress
expressed by the stress amplitude, C are material constants, Y are geometrical
parameters of the element or crack, R = σ_min/σ_max is the cycle ratio, and χ is a
functional representing the loading history.
There exist many formulas for the function f(·) describing the velocity of the crack
growth. One of the most frequently used is the Paris equation in the form:

$$\frac{dl}{dN} = c \, (\Delta K)^m \qquad (4.7.8)$$

from which the number of cycles for the crack to grow from the length l1 to l2
follows as:

$$N = \int_{l_1}^{l_2} \frac{dl}{c \, (\Delta K)^m} \qquad (4.7.9)$$
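Equation (4.7.9) can be integrated numerically once ΔK(l) is known along the crack path. A sketch for the textbook case ΔK = Δσ√(πl) with a unit geometry factor (an assumption for illustration only; in the examples below the stress intensity factors come from the dual BEM analysis):

```python
import math

def fatigue_life(l1, l2, dsigma, c, m, steps=10000):
    """Number of cycles N = integral from l1 to l2 of dl / (c (dK)^m),
    midpoint rule, with dK = dsigma * sqrt(pi * l) (unit geometry factor
    assumed; units must be consistent with the Paris constants c, m)."""
    dl = (l2 - l1) / steps
    total = 0.0
    for i in range(steps):
        l = l1 + (i + 0.5) * dl          # midpoint of the i-th subinterval
        dk = dsigma * math.sqrt(math.pi * l)
        total += dl / (c * dk ** m)
    return total

# Crack growing from 1 mm to 10 mm under a 100 MPa stress range,
# with the Paris constants quoted later in this section.
N = fatigue_life(l1=1.0, l2=10.0, dsigma=100.0, c=4.62e-12, m=3.3)
```

For this simple ΔK the integral also has a closed form, which makes the quadrature easy to verify.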
$$\Delta K_{eff}^2 = \Delta K_I^2 + 2 \, \Delta K_{II}^2 \qquad (4.7.10)$$
The stress intensity factor range for the particular fracture mode i is given by:
where θ_t is the angular coordinate of the tangent to the crack path, and K_I, K_II are
the mode I and mode II stress intensity factors.
The angular coordinate θ_t indicates the direction perpendicular to the maximum
principal stress direction.
(Flowchart: the DEA optimization loop — initial geometry → shape modification →
BEM analysis → introduction of a crack at the maximum-stress position →
calculation of N and of the maximum stress by BEM.)
To solve the boundary-value problem for cracked structures, one of the numerical
methods has to be used. The most popular and widely applied one is the
finite-element method (FEM), but in the presented case the boundary element
method (BEM) is more convenient. The main reason is that cracks constitute parts
of the boundary, so, assuming the absence of body forces, it is not necessary to
discretize the interior of the body. As a result, the dimension of the boundary-value
problem is reduced by one. The BEM is also capable of accurately modelling the
high stress gradients near the crack tip [25].
An elastic body occupying a domain Ω and having a boundary Γ = ∂Ω is
considered (Fig. 4.68). Two fields are prescribed on the boundary Γ: a field of
displacements u0(x), x ∈ Γ_u, and a field of tractions p0(x), x ∈ Γ_p, where
Γ_u ∪ Γ_p = Γ and Γ_u ∩ Γ_p = ∅. The body contains internal traction-free
cracks Γ_i. Displacements are allowed to jump across the cracks:

$$[[u]] \equiv u^+ - u^- \ne 0 \qquad (4.7.13)$$
Assuming the absence of body forces, the displacement of an arbitrary point x can
be represented by the boundary displacement integral equation:

$$c(x) \, u(x) = \int_\Gamma U(x, y) \, p(y) \, d\Gamma(y) - \int_\Gamma P(x, y) \, u(y) \, d\Gamma(y), \quad x \in \Gamma \qquad (4.7.14)$$

where U(x, y), P(x, y) are the fundamental solutions of elastostatics; c(x) is a
constant depending on the position of the collocation point; x, y are boundary points.
If the foregoing equation is applied on both surfaces of the same crack, two
identical equations are formed. As a result, the set of algebraic equations obtained
after the discretization of the body becomes singular. There are a few techniques to
overcome this problem. The most versatile seems to be the dual boundary element
method (dual BEM). In this technique an additional equation, the traction integral
equation, is introduced [99]:
$$\frac{1}{2} \, p(x) = n \left[ \int_\Gamma D(x, y) \, p(y) \, d\Gamma(y) - \int_\Gamma S(x, y) \, u(y) \, d\Gamma(y) \right], \quad x \in \Gamma \qquad (4.7.15)$$
where D(x, y), S(x, y) are the third-order fundamental solution tensors, n is the unit
outward normal vector at the collocation point x.
The traction integral equation is applied on one surface of each crack, while the
displacement integral equation is applied on the opposite surface of each crack and
on the remaining boundary.
Nonuniform rational B-spline (NURBS) curves are used to model the modified
part of the boundary. This approach reduces the number of design variables of the
optimization procedure. NURBS can be treated as a generalization of nonrational
B-splines and of nonrational and rational Bézier curves. They are an industrial
standard tool for geometry representation and design. The main advantages of
NURBS curves are:
• one mathematical form for standard analytical shapes and for free-form shapes;
• flexibility to design a large variety of shapes;
• fast evaluation by numerically stable and accurate algorithms;
• invariance under transformations (affine and perspective).
The NURBS curve is defined as [98]:

$$C(t) = \frac{\sum_{j=0}^{r} N_{j,n}(t) \, w_j \, P_j}{\sum_{k=0}^{r} N_{k,n}(t) \, w_k}, \quad a \le t \le b \qquad (4.7.16)$$
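A sketch of evaluating Eq. (4.7.16), with the basis functions N_{j,n}(t) computed by the Cox–de Boor recursion (a 2D quadratic curve with a clamped knot vector is assumed; the half-open convention used here excludes the very last parameter value, which production code handles specially):

```python
def bspline_basis(j, n, t, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{j,n}(t)."""
    if n == 0:
        return 1.0 if knots[j] <= t < knots[j + 1] else 0.0
    left = 0.0
    if knots[j + n] != knots[j]:
        left = ((t - knots[j]) / (knots[j + n] - knots[j])
                * bspline_basis(j, n - 1, t, knots))
    right = 0.0
    if knots[j + n + 1] != knots[j + 1]:
        right = ((knots[j + n + 1] - t) / (knots[j + n + 1] - knots[j + 1])
                 * bspline_basis(j + 1, n - 1, t, knots))
    return left + right

def nurbs_point(t, points, weights, degree, knots):
    """Evaluates C(t) = sum(N_{j,n} w_j P_j) / sum(N_{k,n} w_k), Eq. (4.7.16)."""
    num = [0.0, 0.0]
    den = 0.0
    for j, (p, w) in enumerate(zip(points, weights)):
        b = bspline_basis(j, degree, t, knots) * w
        num[0] += b * p[0]
        num[1] += b * p[1]
        den += b
    return (num[0] / den, num[1] / den)

# Quadratic curve with three control points and a clamped knot vector;
# with unit weights it coincides with a Bezier curve, so C(0.5) = (1.0, 1.0).
pts = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
knots = [0, 0, 0, 1, 1, 1]
mid = nurbs_point(0.5, pts, [1.0, 1.0, 1.0], 2, knots)
```

Raising the weight of a control point pulls the curve toward it, which is what makes the rational form more flexible than a plain Bézier curve.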
Fig. 4.70 A structure with two cracks—initial shape, cracks localization and boundary conditions
Fig. 4.71 The structure after optimization a constant area; b increased area (max. 20%)
MCO. The material constants of the structure are: E = 2 × 10⁵ MPa, ν = 0.25. The
structure is fixed and loaded as presented in Fig. 4.72 (p = 10 MN/m²). Free parts
of the boundary are modified during optimization. Constraints on the equivalent
von Mises stresses are imposed on the boundary.
Fig. 4.72 A structure with one crack—initial shape, crack localization and boundary conditions
Two variants are considered: (i) the maximum element area is equal to the initial
area; (ii) the maximum element area can be increased by 20%.
Final shapes for constant and increased areas of optimized structures are pre-
sented in Fig. 4.73. Initial and final values of Kred, J and MCO are collected in
Table 4.38.
A 2D structural element loaded and fixed as shown in Fig. 4.74 is optimized. Two
cases are considered: nonsymmetrical and symmetrical. In the nonsymmetrical case
each chromosome consists of 24 design variables representing coordinates of 12
control points (three control points for each of four NURBS curves). In the sym-
metrical case each chromosome consists of 12 design variables representing
coordinates of six control points (three control points for each of three NURBS
curves). The vertical axis is the symmetry axis.
Fig. 4.73 The structure after optimization of a constant area; b increased area (max. 20%)
Fig. 4.74 The optimized structure a loaded and fixed b modified parts of the boundary and ranges
of the control points
Fig. 4.75 A structural element—optimal shapes for nonsymmetrical case: a fixed area, b increased
area
The parameters of the Paris equation are assumed as: c = 4.62 × 10⁻¹², m = 3.3,
and the amplitude ratio of the cyclic load is R = 2/3. To obtain the number of cycles
N for the initial shape, the position of the maximum von Mises stress is located and
the reference crack is introduced in the proper direction. Then, the boundary-value
problem is solved and N value is calculated.
Two cases are considered:
– the final area of the element is not bigger than the area of the initial element;
– the final area of the element can be increased by 10%.
The maximum von Mises reduced stress value is limited to σ_p = 120 MPa.
Shapes obtained after optimization are presented in Fig. 4.75 for the nonsym-
metrical case and Fig. 4.76 for the symmetrical case. The initial and final values of
cycle numbers, maximum stresses and areas of the element are collected in
Table 4.39.
Fig. 4.76 A structural element—optimal shapes for symmetrical case: a fixed area, b increased
area
solve the boundary-value problem for cracked structures. To reduce the number of
design variables, parametric NURBS curves have been used. The enclosed numerical
examples illustrate the efficiency of the presented approach.
where σ_eq denotes the Huber–von Mises equivalent stress, σ_p is the yield stress
and σ_0 is the reference stress.
The shape optimization of structures with geometrical nonlinearities is performed
by minimizing structure displacements. The fitness function can be formulated in
the following form:
$$F = \int_\Omega \left( \frac{q}{q_0} \right)^2 d\Omega \qquad (4.8.2)$$
The forging process is highly nonlinear. Three different fitness functions were used
during the optimization. The first one is a measure of the distance between the
axisymmetric shape of the forged detail and the desired one:

$$F = \int_y \Delta r(y) \, dy \qquad (4.8.3)$$
The meaning of Δr(y) is shown in Fig. 4.77. The optimal fitness function value
is known and is equal to zero.
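A discrete counterpart of (4.8.3), with the radial deviation Δr sampled along the height of the forged detail (the sampled values below are placeholders for the FEM output):

```python
def forging_fitness(delta_r, dy):
    """Discrete form of F = integral of dr(y) dy: deviation of the forged
    profile from the desired cylindrical one, sampled every dy along y."""
    return dy * sum(abs(d) for d in delta_r)

# A perfectly cylindrical result gives the known optimum F = 0:
F = forging_fitness([0.0, 0.0, 0.0, 0.0], 0.5)
```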
MSC.Marc was used to solve the forging problem. Axisymmetric bodies were
considered. The forging process was modelled with the use of two bodies: a rigid
one for the anvil and an elastoplastic one for the preform. Contact with Coulomb
friction was used. Isothermal conditions were considered. The material was
modelled as viscoplastic using the following equation:
The second and third fitness functions depend on the plastic strain values. The idea
of using these functions is to equalize the plastic strain distribution in the body. The
fitness function can be expressed as a double integral, over time and over the area of
the structure, of the difference between the plastic strain ε_p and the mean plastic
strain ε_sr:

$$F = \int_0^T \int_\Omega \left| \varepsilon_p - \varepsilon_{sr} \right| d\Omega \, dt \qquad (4.8.5)$$
The third fitness function is a double integral, over time and over the area of the
structure, of the plastic strains:

$$F = \int_0^T \int_\Omega \varepsilon_p \, d\Omega \, dt \qquad (4.8.6)$$
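A discrete counterpart of (4.8.5), with plastic strains taken per element and per time step from the FEM solution (the area-weighted average stands in for the mean plastic strain ε_sr; a sketch):

```python
def strain_equalization_fitness(eps_p, areas, dt):
    """Discrete form of F = double integral of |eps_p - eps_mean| over the
    domain and time; eps_p[t][e] is the plastic strain of element e at time
    step t, areas[e] its area, dt the time step."""
    total_area = sum(areas)
    F = 0.0
    for step in eps_p:
        mean = sum(e * a for e, a in zip(step, areas)) / total_area
        F += dt * sum(abs(e - mean) * a for e, a in zip(step, areas))
    return F

# Uniform plastic strain at every step is the ideal case: F = 0.
F = strain_equalization_fitness([[0.1, 0.1], [0.2, 0.2]], [1.0, 1.0], 0.002)
```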
A material with the characteristic presented in Fig. 4.78 is used in the test problems
of Sects. 4.8.2.1–4.8.2.3. E1 and E2 are the Young's moduli, ε_p is the yield strain
and σ_p is the yield stress.
A 2D structural element is considered (Fig. 4.79a). The material data and the
parameters of the distributed evolutionary algorithm are: E1 = 20 GPa,
E2 = 0.5 GPa, σ_p = 250 MPa, ν = 0.3, thickness 5 mm, load value 110 N/mm,
maximum body area 8000 mm², number of chromosomes 500, number of
generations 250, and number of populations 4.
Fig. 4.79 Optimized plate: a geometry, b the best after 1st generation, c the best after 196th
generation
The external boundary and the hole boundary undergo shape optimization. The
external boundary was modelled by using the NURBS curve with three control
points (one of them can be moved—two design variables) and the internal hole was
modelled by using NURBS curve with four control points (each can be moved—
eight design variables). The fitness function was computed by using FEM. The
shape of the boundary after the first and 196th generations is shown in Fig. 4.79b,
c. The plastic areas are coloured in grey.
In order to examine the DEA for various numbers of computers, the computing
time was measured for 15,000 fitness function evaluations. The computers had
AMD Duron 750 processors. The computing time versus the number of computers
is given in Table 4.40. The number of computed fitness functions as a function of
the number of computers is shown in Fig. 4.80. The starting population was the
same for each test. The problem was simpler than the one shown above; the
finite-element mesh had a smaller number of elements.
A shell is considered (Fig. 4.82a). The shell has 10 holes with constant radii. The
holes can be moved. The optimization criterion is to minimize the integral of the
shell displacements over its domain. The fitness function was evaluated using
MSC.Nastran.
Fig. 4.81 A half of K-structure: a geometry, b the best after 1st generation, c the best after 476th
generation
Fig. 4.82 A shell: a geometry, b the best after 1st generation, c the best after 500th generation
The shape of the shell after the first and 500th generations is shown in
Fig. 4.82b, c.
The shape optimization of the preform was considered. The open die forging was
simulated. The flat anvil was used. The goal of the optimization is to find the shape
of the preform which leads to the cylindrical shape after forging. The geometrical
parameters are shown in Fig. 4.83. The material parameters used for aluminium at
350 °C were: A = 26.478, B = 24.943, m = 0.1629, n = 3.4898. The friction
coefficient was equal to 0.5. The time step was 0.002 s, the number of steps was 200,
and speed of the anvil 75 mm/s. The fitness function (4.8.4) was used during
optimization.
The geometry of the preform (Fig. 4.84) was modelled by a NURBS curve with
four control points. The coordinates of the control points were defined by six gene
values (g1–g6).
The constraints on the genes values are shown in Table 4.41.
The number of chromosomes was 25, the probability of uniform mutation 25%,
the probability of Gaussian mutation 62.5%, the probability of simple crossover
6.25%, and the probability of arithmetic crossover 6.25%.
The best result was achieved after 638 generations (15,362 fitness function
computations). The best found shape of the preform is presented in Fig. 4.85a, and
the shape after forging in Fig. 4.85b.
Fig. 4.85 a The best found shape of the preform, b the shape of the preform after forging
The results obtained for both criteria are very close to each other. The shape of the
anvil, described using a NURBS curve, is shown in Fig. 4.86. Eight parameters of
the NURBS curve were searched for.
The preform had a cylindrical shape. The material parameters were the same as
in the example of Sect. 4.8.2.4. The friction coefficient was equal to 0.3. The model was
discretized by quadrilateral elements. The evolutionary algorithm with 10 chro-
mosomes was used. The Gaussian mutation and simple crossover operators were
applied.
Figure 4.87a shows the results obtained after flat anvil forging in the first stage
and Fig. 4.87b after closed die forging in the second stage. The best found result is
presented in Fig. 4.88.
The speedup of computation was measured for the presented example and was
expressed as:

$$s = \frac{t_1}{t_n} \qquad (4.8.7)$$
where t1 is the computing time using one processor, and tn is the computing time for
n processors. The speedup of computations is presented in Fig. 4.89.
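A small worked example of (4.8.7), together with the derived parallel efficiency (the timings below are hypothetical, not those of Fig. 4.89):

```python
def speedup(t1, tn):
    """Speedup s = t1 / tn for a run on n processors (Eq. 4.8.7)."""
    return t1 / tn

def efficiency(t1, tn, n):
    """Parallel efficiency: the speedup divided by the processor count."""
    return speedup(t1, tn) / n

# Hypothetical timings: 100 min serially vs 30 min on 4 processors.
s = speedup(100.0, 30.0)
e = efficiency(100.0, 30.0, 4)
```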
Fig. 4.87 The shape of the preform after a first stage, b second stage of forging
Fig. 4.88 The shape of the preform obtained by means of the best anvils after a first stage,
b second stage of forging
4.9 Optimization of Composites

4.9.1 Introduction
$$\theta_i = \theta_{K+1-i}, \quad i = 1, 2, \ldots, \frac{K}{2} \qquad (4.9.1)$$
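Condition (4.9.1) means that only half of the ply angles are independent design variables; the full stacking sequence is recovered by mirroring, e.g.:

```python
def symmetric_stacking(half):
    """Builds a symmetric laminate satisfying theta_i = theta_{K+1-i}:
    the full sequence is the half-stack followed by its mirror image."""
    return half + half[::-1]

# Three independent ply angles expand to a six-ply symmetric layup.
layup = symmetric_stacking([0, 45, 90])
assert all(layup[i] == layup[len(layup) - 1 - i] for i in range(len(layup)))
```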
where σ_ij is the stress vector; ε_ij is the strain vector; E1, E2, E3 are the Young's
moduli in the main material axes 1, 2 and 3; G23, G13 and G12 are the shear moduli
in the planes (2, 3), (1, 3) and (1, 2); ν_ij are the Poisson's ratios corresponding to
the strains in direction "j" when the loading acts in direction "i"; and

$$D = \frac{1 - \nu_{12}\nu_{21} - \nu_{23}\nu_{32} - \nu_{31}\nu_{13} - 2\nu_{21}\nu_{32}\nu_{13}}{E_1 E_2 E_3} \qquad (4.9.3)$$
There is the following relation between the fifth elastic constant in the foregoing
equation and the other elastic constants:

$$\nu_{21} = \nu_{12} \frac{E_2}{E_1} \qquad (4.9.5)$$
where A = [A_ij], B = [B_ij], D = [D_ij] are the in-plane, coupling and out-of-plane
(bending) stiffness matrices, respectively; ε⁰ are the strains at the mid-plane; κ⁰ are
the curvatures at the mid-plane.
In symmetrical laminates the coupling matrix B is a null matrix (B_ij = 0). As
a result, there is no coupling of the membrane and bending states; the membrane
state is fully described by the A matrix, while the D matrix fully describes the
bending state. The foregoing equation takes the uncoupled form:

$$\{N\} = [A]\{\varepsilon^0\}, \qquad \{M\} = [D]\{\kappa^0\} \qquad (4.9.7)$$
ρhω²w = D11 w,xxxx + 4D16 w,xxxy + 2(D12 + 2D66) w,xxyy + 4D26 w,xyyy + D22 w,yyyy    (4.9.8)
where z(k) is the distance from the middle plane to the top of layer k; Qij(k) is the plane-stress reduced stiffness component of layer k.
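The extensional and bending stiffness matrices are assembled from the interface coordinates z(k) and the reduced stiffnesses Qij(k) by the standard lamination-theory sums. A minimal sketch (the function name and the sample stiffness values are illustrative, not taken from the book):

```python
import numpy as np

def laminate_AD(Qbars, z):
    """Extensional [A] and bending [D] stiffness matrices of a laminate.

    Qbars : list of 3x3 reduced stiffness matrices, one per layer
    z     : interface coordinates z[0]..z[K] measured from the mid-plane,
            z[k+1] being the top of layer k (as in the text)
    """
    A = np.zeros((3, 3))
    D = np.zeros((3, 3))
    for k, Q in enumerate(Qbars):
        A += Q * (z[k + 1] - z[k])                 # first moment of stiffness
        D += Q * (z[k + 1] ** 3 - z[k] ** 3) / 3.0  # third moment of stiffness
    return A, D

# Two identical layers, 0.2 mm each, symmetric about z = 0 (toy values, Pa).
Q = np.array([[40.0, 10.0, 0.0],
              [10.0, 40.0, 0.0],
              [0.0,  0.0, 15.0]]) * 1e9
A, D = laminate_AD([Q, Q], [-0.2e-3, 0.0, 0.2e-3])
```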
If different materials are used for distinct layers, the laminate is called a hybrid one. There exist different groups of hybrid laminates [96]: (i) interply hybrids with
layers made of different materials; (ii) intraply hybrids with at least two types of
reinforcement in the same layer; (iii) intermingled hybrids with constituent fibres
mixed as randomly as possible to avoid their concentrations; (iv) selective place-
ment hybrids with extra reinforcement placed in the critical regions; (v) superhy-
brids composed of metal foils or metal composite plies stacked in a specified
sequence and orientation.
The cost of laminates grows rapidly with their properties (e.g. strength). Consequently, it is sometimes advantageous to combine a very stiff and expensive material for the surface layers with a low-stiffness but cheaper material for the core layers (interply hybrids). Such an approach has been applied in the present chapter to reduce the structure cost while ensuring a high performance of the laminate.
min_x (J0)    (4.9.10)

with constraints:

Jα(x) = 0,  α = 1, 2, …, m
Jβ(x) ≤ 0,  β = 1, 2, …, n    (4.9.11)
xi_min ≤ xi ≤ xi_max,  i = 1, 2, …, k
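In evolutionary algorithms, problem (4.9.10)-(4.9.11) is commonly handled by converting constraint violations into penalty terms added to the objective. A minimal exterior-penalty sketch, with hypothetical placeholder objective and constraint functions:

```python
def penalized(J0, eq_constraints, ineq_constraints, x, rho=1e3):
    """Exterior-penalty value of problem (4.9.10)-(4.9.11):
    equality terms Ja(x) = 0 and inequality terms Jb(x) <= 0."""
    p = sum(Ja(x) ** 2 for Ja in eq_constraints)
    p += sum(max(0.0, Jb(x)) ** 2 for Jb in ineq_constraints)
    return J0(x) + rho * p

# Hypothetical example: minimize x^2 subject to x >= 1 (i.e. 1 - x <= 0).
f = penalized(lambda x: x * x, [], [lambda x: 1.0 - x], 0.5)
# 0.25 + 1000 * 0.25 = 250.25
```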
3. the maximization of the distance between the external excitation frequency ωex and the closest eigenfrequency ωi_cl:
Fig. 4.90 The hybrid laminate plate: a dimensions and bearing; b location of materials (for
10-plies case)
Table 4.43 Numerical Example 1: optimization results for the |ωex − ωcl| maximization

Variant    | Plies | Stacking sequence                                      | |ωex − ωcl| (Hz) | ωcl (Hz)
Initial    | 10    | (0/15/-15/45/-45)s                                     | 20.268 | 99.732
           | 20    | (0/0/15/15/-15/-15/45/45/-45/-45)s                     | 20.268 | 99.732
Continuous | 10    | (76.9/88.9/61.1/6.1/61.3)s                             | 86.874 | 33.126
           | 20    | (80.4/-76.3/61.9/87.5/-48.1/71.6/12.1/53.8/85.9/45.6)s | 86.873 | 33.127
5°         | 10    | (90/60/-45/50/90)s                                     | 86.845 | 33.155
           | 20    | (-80/90/65/55/65/25/-65/-85/85/15)s                    | 86.880 | 206.880
15°        | 10    | (-75/90/-60/15/-15)s                                   | 86.739 | 33.261
           | 20    | (90/75/45/90/60/-60/-45/90/-30/90)s                    | 86.813 | 33.639
45°        | 10    | (90/-45/90/45/90)s                                     | 86.362 | 33.638
           | 20    | (90/90/90/45/45/45/90/90/90/90)s                       | 86.802 | 33.198
The optimization results for the maximization of the distance between the external excitation frequency and the closest eigenfrequency are gathered in Table 4.43. It is assumed that the external excitation frequency ωex is equal to 120 Hz. The first five eigenfrequencies of the laminate plate are considered.
Similar optimization results have been obtained for all cases and variants, but it can be observed that the results for the 20-ply case are slightly better. This can be explained by the larger number of design variables, which allows more distinct stacking sequences. Results obtained for the continuous and 5° variants are typically better than those achieved for the remaining variants.
A box-beam with varying cross-section is considered (Fig. 4.91). The wider end of
the structure is fixed. All four walls of the structure are made of the same hybrid,
symmetric laminate with the same stacking sequence. External laminates are made
of graphite-epoxy material Me, while internal layers are built of glass-epoxy
material Mi [14].
The thickness of each ply hi is constant and is equal to 0.2e−3 m.
The properties of materials are:
• material Me (graphite-epoxy): E1 = 141.5 GPa, E2 = 9.80 GPa, G12 = 5.90 GPa, ν12 = 0.42, ρ = 1445.5 kg/m³;
• material Mi (glass-epoxy): E1 = 38.6 GPa, E2 = 8.27 GPa, G12 = 4.14 GPa, ν12 = 0.26, ρ = 1800 kg/m³.
It is assumed that the number of plies on each wall is equal to 14, but the external ply angle is preset to 0. As a result, the number of design variables is equal to 6 (due to the symmetry). The stacking sequence for each wall can be presented as (0/θ1/θ2/θ3/θ4/θ5/θ6)s, where the subscripts denote the design variable number, whilst superscripts (e—external, i—internal) indicate the material of each ply. It is assumed that:
• each ply angle can take real values from the range 〈−90°, 90°〉 (continuous variant);
• each ply angle can take discrete values from the range 〈−90°, 90°〉 with a step of 5°, 15° or 45° (discrete variants).
The aim of the optimization is to find an optimal stacking sequence to maximize the fundamental eigenfrequency ω1 of the structure. The PAIS is employed to solve the optimization problem.
The parameters of the PAIS are:
• the number of memory cells nmc = 5;
• the number of clones ncl = 20;
• termination condition: number of iterations (in = 30).
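The book does not list the PAIS implementation itself; the following is a minimal serial clonal-selection sketch over discrete ply angles, with a placeholder fitness standing in for the FEM eigenfrequency analysis (the function names and the toy fitness are ours):

```python
import random

def clonal_selection(fitness, n_vars, angles, n_mc=5, n_cl=20, iters=30):
    """Minimal clonal-selection sketch: n_mc memory cells, n_cl clones
    per cell and iteration, ply angles drawn from a discrete set."""
    cells = [[random.choice(angles) for _ in range(n_vars)]
             for _ in range(n_mc)]
    for _ in range(iters):
        for i, cell in enumerate(cells):
            clones = []
            for _ in range(n_cl):
                c = cell[:]
                # hypermutation: perturb one randomly chosen ply angle
                c[random.randrange(n_vars)] = random.choice(angles)
                clones.append(c)
            # each memory cell is replaced by its best clone (elitist)
            cells[i] = max(clones + [cell], key=fitness)
    return max(cells, key=fitness)

# Placeholder fitness: in the book this would be an FEM eigenfrequency
# analysis; here we simply prefer stacking sequences close to 90 degrees.
angles = list(range(-90, 91, 5))
best = clonal_selection(lambda ch: -sum((a - 90) ** 2 for a in ch),
                        n_vars=6, angles=angles)
```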
The laminate plates’ optimization problem has been presented. Simple and hybrid
(with plies made of different materials) laminates have been taken into account.
Different optimization criteria related to the modal properties of optimized struc-
tures have been considered. To solve this task computational intelligence methods
(evolutionary algorithm, artificial immune system) coupled with the commercial
FEM software have been employed. The continuous as well as discrete optimiza-
tion has been performed. The proposed optimization method gave positive results in
all presented cases.
4.10.1 Introduction
should be minimized, while the total dissipated heat flux or the maximal value of the equivalent stress should be maximized (or minimized). The common approach to this sort of problem is to choose one objective (e.g. the volume of the structure) and incorporate the other objectives as constraints, or to use the weighting method. Such an approach does not require modification of the core of the algorithm, but it has some disadvantages (see Sect. 4.1).
Evolutionary algorithms using the Pareto approach are more convenient for solving such problems. One run of the algorithm gives the designer a set of Pareto-optimal solutions. In this chapter different algorithms are used to solve multiobjective problems (MOEA, MOOPTIM, NSGA-II). Details of the algorithms are described in Sect. 4.1.
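The core of any Pareto approach is the dominance relation used to keep nondominated solutions. A minimal sketch (assuming all objectives are minimized; the sample points are hypothetical):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Nondominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(pareto_front(pts))  # [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
```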
Coupled field problems occur when two or more physical systems interact with
each other, with the independent solution of any one system being impossible
without simultaneous solutions of the others. Definition of the coupled systems can
be formulated as [126]:
Coupled systems and formulations are those applicable to multiple domains and dependent
variables which usually (but not always) describe different physical phenomena and in
which
• neither domain can be solved while separated from the other;
• neither set of dependent variables can be explicitly eliminated at the differential
equation level.
The following couplings are considered in this chapter:
• thermo-elasticity (M-T),
• thermo-electro-mechanical coupling (M-T-E),
• piezoelectricity (M-E).
To solve the considered coupled-field problems, the following commercial FEM codes are used: MSC.Mentat/Marc and Ansys Multiphysics. The thermo-elasticity and thermo-electro-mechanical problems are solved as a weakly coupled analysis, whereas piezoelectricity is solved as a strongly coupled one.
Generally for the considered problems, the definition of the objective functions
(functionals) may use results from each of the physical problems and additional
functionals may be defined as a volume or costs, and so on (see Sect. 4.6). The
multiobjective optimization tasks are solved for the functionals defined as:
• the minimum volume of the structure (4.10.1);
• the minimum of the maximal equivalent stress:

min_X σ_eq^max(X)    (4.10.2)
Fig. 4.95 a The design variables, b the geometry and the boundary conditions
Table 4.48 The admissible values of the design parameters

Design variable | Min value (m) | Max value (m)
Z1, Z2, Z3, Z4  | 0.01   | 0.05
Z5              | 0.0025 | 0.006
Z6              | 0.0025 | 0.008
Fig. 4.100 The set of Pareto-optimal solutions for minimization of the volume and von Mises stress
Fig. 4.101 The set of Pareto-optimal solutions for minimization of the volume and maximization of the deflection of the actuator
Fig. 4.102 The set of Pareto-optimal solutions for minimization of the maximal temperature and maximization of the deflection of the actuator
Fig. 4.103 The set of Pareto-optimal solutions for three functionals (4.10.1), (4.10.3) and (4.10.4)
Fig. 4.106 Geometry and stress distribution for the three indicated regions
Multiobjective shape optimization for different coupled problems has been presented. The proposed method gives the designer a set of optimal solutions based on more than one criterion. The application of the FEM software requires evaluation in several steps for each single solution (modification of the geometry, creation of the finite element mesh, etc.). It can be a very time-consuming task, especially for more complicated geometries. The solution of coupled problems such as thermo-elasticity, electro-thermo-mechanical or piezoelectric analysis is more
References
18. Bendsøe MP, Sigmund O (2003) Topology optimisation—theory, methods and applications.
Springer-Verlag, Berlin
19. Bendsøe MP, Sokołowski J (1988) Design sensitivity analysis of elastic-plastic problems.
Mech Struct Mach 16(1):81–102
20. Białecki RA, Burczyński T, Długosz A, Kuś W, Ostrowski Z (2005) Evolutionary shape optimization of thermoelastic bodies exchanging heat by convection and radiation. Comput Methods Appl Mech Eng (Elsevier) 194:1839–1859
21. Bojczuk D, Szteleblak W (2008) Optimization of layout and shape of stiffeners in 2D
structures. Comput Struct 86:1436–1446
22. Bonnans J-F, Gilbert JC, Lemarechal C, Sagastizábal CA (2006) Numerical optimization,
2nd ed. Springer-Verlag
23. Botelho EC, Silva RA, Pardini LC, Rezende MC (2006) A review on the development and
properties of continuous fiber/epoxy/aluminum hybrid composites for aircraft structures.
Mater Res 9(3):247–256
24. Badrinarayanan S (1997) Preform and die design problems in metalforming. PhD thesis,
Cornell University
25. Brebbia CA, Dominiguez J (1989) Boundary elements. An introductory course. Comput
Mech
26. Brebbia CA, Telles JCF, Wrobel LC (1984) Boundary element techniques. Springer-Verlag,
Berlin
27. Burczyński T (1995) The method of boundary elements in mechanics. WNT, Warsaw (in
Polish)
28. Burczyński T, Beluch W, Długosz A, Nowakowski M, Orantek P (2000) Coupling of the boundary element method and evolutionary algorithms in optimization and identification problems. In: Proceedings of the ECCOMAS 2000 European congress on computational methods in applied sciences and engineering, Barcelona
29. Burczyński T, Długosz A (2002) Evolutionary optimization in thermoelastic problems using
the boundary element method. Comput Mech (Springer) 28(3–4):317–324
30. Burczyński T, Długosz A, Kuś W (2005) Evolutionary computation in shape optimization of
heat radiators. Numer Heat Trans NHT-2005, EUROTHERM 82
31. Burczyński T, Kokot G (1998) Topology optimisation using boundary elements and genetic
algorithms. In: Idelsohn SR, Ońate E, Dvorkin EN (eds) Proceedings of fourth congress on
computational mechanics, new trends and applications. Barcelona, CD-ROM
32. Burczyński T, Kuś W (2001) Shape optimization of elasto-plastic structures using
distributed evolutionary algorithms. In: Proceedings of the 2nd European conference on
computational mechanics, ECCM 2001, Kraków
33. Burczyński T, Kuś W (2001) Distributed evolutionary algorithms in shape optimization of
nonlinear structures. In: Proceedings of the parallel processing and applied mathematics,
PPAM 2001, Nałęczów
34. Burczyński T, Kuś W (2002) Shape optimization of elasto-plastic structures using coupled
BEM-FEM approach and distributed evolutionary algorithm. In: Proceedings of the world
conference on computational mechanics, WCCM 2002, Wien
35. Burczyński T, Kuś W (2002) Distributed evolutionary algorithm—tests and applications.
AI-METH 2002, Gliwice
36. Burczyński T, Kuś W (2003) Distributed and parallel evolutionary algorithms in
optimization of nonlinear structures. In: Proceedings of the 15th international conference
on computer methods in mechanics CMM-2003, Wisła
37. Burczyński T, Kuś W (2003) Distributed evolutionary algorithms with
master-co-evolutionary algorithm and slaves—fitness function evaluators. In: Proceedings
of the VI Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna KAEiOG
2003, 26–28 maja, Łagów
38. Burczyński T, Kuś W (2003) Optimization of structures using distributed and parallel
evolutionary algorithms. In: Proceedings of the PPAM 2003, parallel processing and applied
mathematics, Częstochowa
81. Kuś (2002) Coupled boundary and finite element methods in optimization of mechanical
structures. PhD thesis, Gliwice (in Polish)
82. Kuś W, Burczyński T (2001) Distributed evolutionary algorithms in optimization of
elasto-plastic structures with use of coupled FEM-BEM method. In: Burczyński T,
Cholewa W (eds) Proceedings of the AI-MECH 2001 symposium on methods of artificial
intelligence in mechanics and mechanical engineering, Gliwice
83. Kuś W, Burczyński T (2002) Evolutionary optimization of structures modeled using coupled
FEM-BEM method. In: Burczyński T (ed) Zeszyty Naukowe KWMiMKM, vol 1.
Computational sensitivity analysis and evolutionary optimization of systems with geomet-
rical singularities, Gliwice
84. Kuś W, Burczyński T (2002) Distributed evolutionary algorithms in optimization of
nonlinear solids. In: Proceedings of the IUTAM symposium on evolutionary methods in
mechanics, Cracow, pp 51–52
85. Kuś W, Burczyński T (2004) Distributed evolutionary algorithms in optimization of
nonlinear solids In: Burczyński T, Osyczka A (eds) IUTAM symposium on evolutionary
methods in mechanics. Kluwer, Dordrecht, pp 229–240
86. Kuś W, Długosz A, Burczyński T (2011) OPTIM—library of bioinspired optimization
algorithms in engineering applications. Comput Methods Mater Sci 11(1):9–15
87. Kutyłowski R (2004) Topology optimization of material continuum (in Polish), monograph.
Oficyna Wydawnicza Politechniki Wrocławskiej, Wrocław
88. Leiva JP, Ghosh DK, Rastogi N (2002) A new approach in stacking sequence optimization
of composite laminates using genesis structural analysis and optimization software. In: 9th
symposium on multidisciplinary analysis and optimization, Atlanta
89. Leu LJ, Mukherjee S (1993) Sensitivity analysis and shape optimization in nonlinear solid
mechanics. Eng Anal Bound Elem 12
90. Michalewicz Z (1996) Genetic algorithms + data structures = evolutionary programs.
Springer-Verlag, Berlin and New York
91. Michalewicz Z, Fogel DB (2004) How to solve it: modern heuristics. Springer-Verlag
92. Min S, Kikuchi N, Park YC, Kim S, Chang S (1999) Optimal topology design of structures
under dynamic loads. Struct Optim 17:208–218
93. Min S, Nishiwaki S, Kikuchi N (2000) Unified topology design of static and vibrating
structures using multiobjective optimization. Comput Struct 75:93–116
94. MSC.MARC (2010) Theory and user information, vol A–D. MSC Software Corporation
95. Novotny AA, Feijoo RA, Taroco E, Padra C (2003) Topological sensitivity analysis.
Comput Methods Appl Mech Eng 192:803–829
96. Pegoretti A, Fabbri E, Migliaresi C, Pilati F (2004) Intraply and interply hybrid composites
based on E-glass and poly(vinyl alcohol) woven fabrics: tensile and impact properties. Poly
Int 53(9):1290–1297
97. Perez R, Behdinan K (2007) Particle swarm approach for structural design optimization.
Comput Struct 85(19–20):1579–1588
98. Piegl L, Tiller W (1995) The NURBS book. Springer-Verlag, Berlin
99. Portela A (1993) Dual boundary element analysis of crack growth. Computational
Mechanics Publications
100. Poteralski A, Szczepanik M, Dziatkiewicz G, Górski R, Kuś W, Burczyński T (2009)
Immune optimization and identification of solids modelled by the boundary element method.
In: Proceedings of the 8th world congress on structural and multidisciplinary optimization
(WCSMO-8), Lisbon, CD-ROM
101. Rajasekaran S, Lavanya S (2007) Hybridization of genetic algorithm with immune system
for optimization problems in structural engineering. Struct Multidisc Optim 34:415–429
102. Rizzo FJ, Shippy DJ (1977) An advanced boundary integral equation method for three-dimensional thermoelasticity. Int J Numer Meth Eng 11:1753–1768
103. Sandgren E, Jensen E, Welton J (1990) Topological design of structural components using
genetic optimization methods. In: Proceedings of winter annual meeting of the American
society of mechanical engineers, Dallas, Texas, pp 31–43
104. Shewchuk JR (1996) Triangle: engineering a 2D quality mesh generator and delaunay
triangulator. In: First workshop on applied computational geometry, association for
computing machinery, Philadelphia, Pennsylvania, USA, pp 124–133
105. Sethian JA, Wiegmann A (2000) Structural boundary design via level set and immersed
interface methods. J Comput Phys 163:489–528
106. Seyedpoor SM, Gholizadeh S, Talebian SR (2010) An efficient structural optimization
algorithm using a hybrid version of particle swarm optimization with simultaneous
perturbation stochastic approximation. Civil Eng Environ Syst 27(4):295–313
107. Silverman E, Rhodes M, Dyer M (1999) Composite isogrid structures for spacecraft
components. SAMPE J 35:51–59
108. Sokołowski J, Żochowski A (1999) On topological derivative in shape optimisation. SIAM J
Control Optim 37(4):1251–1272
109. Szczepanik M, Burczyński T (2003) Evolutionary computation in optimisation of 2-D
structures. In: Proceedings of the WCSMO 2003 5th world congress on structural and
multidisciplinary optimization, Italy, Venice
110. Szczepanik M, Kuś W, Burczyński T (2011) Swarm optimization of stiffeners locations in
2-D structures. In: Proceedings of the computer methods in mechanics CMM-2011,
Warszawa
111. Szczepanik M, Poteralski A, Kuś W, Burczyński T (2009) Shape and topology optimization
of shell, solid and shell-solid structures using artificial immune systems. In: Proceedings of
the 8th world congress on structural and multidisciplinary optimization (WCSMO-8),
Lisbon, CD-ROM
112. Szczepanik M, Poteralski A, Kuś W, Burczyński T (2010) Optimal design of shell, solid and shell-solid structures using particle swarm optimizer. In: 9th world congress on computational mechanics and 4th Asian Pacific congress on computational mechanics, Sydney
113. Szczepanik M, Poteralski A, Górski R, Kuś W, Burczyński T (2011) Shape swarm
optimization of reinforced 2-D structures, full paper, CD-Rom, In: 9th world congress on
structural and multidisciplinary optimization, June 13–17, Shizuoka, Japan
114. Tanaka K (1974) Fatigue crack propagation from a crack inclined to the cyclic tensile axis.
Eng Fract Mech 6:493–507
115. Tanese R (1989) Distributed genetic algorithms. In: Schaffer JD (ed) Proceedings of the 3rd
ICGA, San Mateo, USA, pp 434–439
116. Tcherniak D (2002) Topology optimization of resonating structures using SIMP method.
Int J Numer Methods Eng 54:1605–1622
117. Vrbka J, Knésl Z (1986) Optimized design of a high pressure compound vessel by FEM.
Comput Struct 24(5):809–812
118. Wierzchoń ST (2001) Artificial immune systems, theory and applications. EXIT (in Polish)
119. Woon SY, Tong L, Querin OM, Steven GP (2003) Optimising topologies through a
Multi-GA system. In: Proceedings of 5th world congress on structural and multidisciplinary
optimisation, WCSMO 2003, Italy, Venice
120. Xie YM, Steven GP (1997) Evolutionary structural optimisation. Springer, London; Zitzler E, Laumanns M, Thiele L (2001) SPEA2: improving the strength Pareto evolutionary algorithm. TIK-Report 103
121. Xie YM, Huang X (2010) Recent developments in evolutionary structural optimization
(ESO) for continuum structures. IOP Conf Ser Mater Sci Eng 10
122. Yamakawa H (1984) Optimum structural designs for dynamic response. In: Atrek A,
Gallagher RG, Ragsdell KM, Zienkiewicz OC (eds) New directions in optimum structural
design. Wiley, New York, pp 249–266
123. Zabaras N, Bao Y, Srikanth A, Frazier WG (2000) A continuum Lagrangian sensitivity
analysis for metal forming processes with applications to die design problems. Int J Numer
Meth Eng 48:679–720
124. Zhao X, Zhao G, Wang G, Wang T (2002) Preform die shape design for uniformity of
deformation in forging based on preform sensitivity analysis. J Mater Process Technol
128:25–32
125. Zhu JH, Zhang WH, Qiu KP (2005) Investigation of localized modes in topology
optimization of dynamic structures. Acta Aeronaut ET Astronaut Sin 26:619–623
126. Zienkiewicz OC, Taylor RL (2000) The finite element method. Butterworth Heinemann,
Oxford
Chapter 5
Intelligent Computing in Inverse
Problems
5.2.1 Introduction
From the mathematical point of view, the identification problem is expressed as the
minimization of the following functional:
J = Σj Σi wj (xi − x̂i)²    (5.2.1)
ch = [a1, a2, …, aR]    (5.2.2)
Fig. 5.1 The block diagram of EA coupled with thermo-elasticity and cracked analysis
with constraints:

ar_min ≤ ar ≤ ar_max,  r = 1, …, R    (5.2.3)
Depending on the task, the gene in the chromosome can represent either the
value or the position of the boundary condition.
Measurements are practically never ideal, so it must be assumed that a measurement error occurs. The problem has therefore been solved both for ideal (noise-free) and randomly disturbed (noisy) values of the measured displacements and temperatures. For the disturbed measured values, the Gaussian distribution is applied. The density function N(μ, σ) is shown in Fig. 5.2, where q = ûk or q = T̂l.
The expected value μ is equal to the ideal deterministic value of the measured displacements or temperatures. The standard deviation σ is equal to 1/3 of the maximal error. The maximal error of measurement is assumed at 10%.
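The disturbance described above can be sketched as follows. The displacement value is hypothetical; the 10% maximal error with σ equal to one third of it follows the text.

```python
import random

def disturb(value, max_error=0.10):
    """Add Gaussian noise to an ideal measured value: the mean equals the
    ideal value and sigma = max_error/3, so ~99.7% of samples stay within
    the assumed 10% maximal measurement error."""
    sigma = abs(value) * max_error / 3.0
    return random.gauss(value, sigma)

u_ideal = 1.25e-3          # hypothetical measured displacement, m
u_noisy = disturb(u_ideal)
```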
displacements are measured at the sensor points 1 and 2. The temperatures are
measured at the sensor points 3 and 4.
The position and radius of the hole and the temperature on the boundary of the hole are searched for. The boundary-value problem is solved using an in-house implementation of the BEM for linear steady-state thermo-elasticity. The boundary of the structure is discretized with 48 linear boundary elements. Table 5.3 contains the evolutionary parameters which were applied.
Four numerical tests were performed. Table 5.4 contains the results while
Table 5.5 contains relative errors of X and Y coordinates, radius R and temperature
on internal boundary.
Fig. 5.6 Geometry, boundary conditions and location of the sensor points
Fig. 5.7 The distribution of temperature and the deformation of the plate
for the numerical tests, whereas Table 5.7 contains relative errors for these tests.
Figure 5.8 presents the change in objective function during the identification
process.
Table 5.7 Relative errors for the tests

       | T1 (%) | T2 (%) | T3 (%) | T4 (%) | T5 (%)
Test 1 | 1.39   | 1.14   | 0.63   | 0.73   | 0.33
Test 2 | 1.32   | 0.66   | 2.87   | 0.12   | 1.21
Test 3 | 0.98   | 0.39   | 0.63   | 0.06   | 1.19
Test 4 | 0.56   | 0.65   | 1.01   | 0.51   | 0.56
Test 5 | 9.96   | 1.02   | 0.11   | 0.43   | 0.11
Fig. 5.9 Geometry, boundary conditions and location of the sensor points
Fig. 5.10 The distribution of the temperature and deformation of the structure
the results are unsatisfactory. This shows that inverse problems are not easy to solve. For some ill-posed problems, where the responses of the model are nonunique, the identification cannot be performed correctly.
The presented approach can be very effective, especially when it is impossible or hard to use classical optimization methods.
Evolutionary calculations are time-consuming, but nowadays this problem becomes less burdensome owing to the rapidly increasing performance of computers.
5.3.1 Introduction
Many real structures contain internal defects in the form of voids, cracks or addi-
tional masses (inclusions), which can reduce the life-time of the structure. The
identification of the defect seems to be a practically important problem.
Nondestructive identification methods have to be employed to identify the internal
defects.
There exist many methods that allow the identification of internal defects on the
basis of knowledge about boundary state fields like displacements, stresses, tem-
perature or natural frequencies. One group of methods is based on the sensitivity
analysis [7]. This approach is very fast and precise but can lead to local optima. In
the present chapter the global optimization methods in the form of evolutionary
algorithms (EAs) are used to solve the identification problems.
The finite-element method (FEM) or the boundary element method (BEM) is
used to solve the boundary-value direct problem. Artificial neural networks (ANNs)
and neuro-fuzzy inference systems (NFISs) are employed to approximate the
boundary-value problem in order to reduce the computational time.
Identification tasks belong to inverse problems, which are mathematically ill-posed [8]. In such problems the kind of measured values and the number of measurements are significant. The number of useful measurement data in many practical cases is small, which can lead to an indeterminate set of equations. The set
of equations may also be ill-conditioned. On the other hand, a large number of
measurements (and sensors) can be expensive and also not easy to apply in practice.
The chapter is devoted to application of intelligent techniques for nondestructive
identification of multiple internal defects (crack and voids) in mechanical systems
being under static loads, dynamical loads and in the free vibration state.
min_x (J0)    (5.3.1)

where J0 is defined:

• in a static case:

J0 = ∫Γ [q(y) − q̂(y)]² dΓ    (5.3.2)

• in a dynamical case:

J0 = ∫0^tF ∫Γ [q(y, t) − q̂(y, t)]² dΓ dt    (5.3.3)

• for multi-criteria identification:

J0 = Σ(i=1..m) ηi J0i    (5.3.4)
where J0i are the objective functions for the ith state field data and ηi are non-negative weights indicating the relative importance of each J0i.
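Equation (5.3.4) is a plain weighted sum; a minimal sketch with hypothetical objective values (the equal weights mirror the examples later in the chapter):

```python
def combined_objective(J_values, weights):
    """Weighted sum of single-field objectives, Eq. (5.3.4):
    J0 = sum_i eta_i * J0i, with non-negative weights eta_i."""
    if any(w < 0 for w in weights):
        raise ValueError("weights must be non-negative")
    return sum(w * J for w, J in zip(weights, J_values))

# Hypothetical values: a temperature-based and a displacement-based
# objective weighted equally.
J0 = combined_objective([0.8, 0.4], [0.5, 0.5])  # 0.6
```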
One of the important issues connected with the identification of defects is the selection of design variables which enable the description of the shape, position, kind and number of defects. The defects in 2D structures are modelled as: (i) circular voids, (ii) elliptical voids, (iii) voids of arbitrary shape described by closed NURBS curves, or (iv) cracks (Fig. 5.12). In the case of 3D structures, the defects are modelled as: (i) spherical, (ii) ellipsoidal, or (iii) of arbitrary shape by means of closed NURBS surfaces (Fig. 5.13).
If the number of defects is unknown, a few types of chromosomes describing the identified defects are proposed. The maximal number of defects n_max is assumed. In the 2D case the chromosomes have one of the types presented below.
Fig. 5.11 The block diagram of the identification procedure (EA and BEM or FEM)
Fig. 5.12 The modelled forms of the defects (2D): a, b, c voids; d, e, f cracks
Fig. 5.13 The modelled forms of the defects (3D): a spherical, b ellipsoidal, c arbitrary shape
where the actual number of defects is controlled by the condition rl < rmin. If this condition is fulfilled, the lth defect does not exist.
The third type of the chromosome is constructed as follows:

ch = [w1, w2, …, wl, …, wn_max, x1, y1, r1, x2, y2, r2, …, xl, yl, rl, …, xn_max, yn_max, rn_max]    (5.3.7)
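Decoding a chromosome of this family with the rl < rmin rule can be sketched as below (the helper name and the sample numbers are ours; the w genes of (5.3.7) are omitted for brevity):

```python
def decode_defects(ch, n_max, r_min):
    """Decode a chromosome ch = [x1, y1, r1, ..., xn_max, yn_max, rn_max]
    into the list of existing circular defects: a defect whose radius is
    below r_min is treated as non-existent (its genes are inactive)."""
    defects = []
    for l in range(n_max):
        x, y, r = ch[3 * l:3 * l + 3]
        if r >= r_min:
            defects.append((x, y, r))
    return defects

# Hypothetical chromosome with n_max = 3; the second defect is inactive.
voids = decode_defects([10, 20, 2.0, 5, 5, 0.01, 40, 15, 3.5],
                       n_max=3, r_min=0.1)
# -> [(10, 20, 2.0), (40, 15, 3.5)]
```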
Calculation of the fitness function value is usually the most time-consuming element of the evolutionary computations. It is possible to speed up the calculations by replacing the BEM or FEM solutions with their approximations with the help of artificial neural networks (ANNs) [11] or a neuro-fuzzy inference system (NFIS)
[10, 15]. The block diagram of the proposed intelligent identification system is
presented in Fig. 5.14. The artificial neural network or the fuzzy inference system
works as the approximator of a boundary-value problem for the different number,
shapes and positions of defects. The EA is searching for the number, shapes and
positions of internal defects on the basis of the results obtained by means of the
approximators.
The approximators (the ANN and the NFIS) are trained with the help of gradient
methods, the evolutionary method and the evolutionary method coupled with the
gradient method. The neural network with Gaussian radial basis functions (see
Sect. 3.8.4) and the fuzzy inference system with Gaussian membership functions
(see Sect. 3.9.5) are considered. The input–output pairs for ANN or NFIS are
obtained by means of the BEM or the FEM calculations of the boundary-value
problem.
Then in both cases the networks are trained by means of a gradient method. The parameters are modified in each step according to the formula:

w(s + 1) = w(s) − η(s) ∂E/∂w + αΔw(s − 1)    (5.3.8)

where s is the iteration step, η0 is the initial value of the learning rate and Q is a constant.
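Update rule (5.3.8) can be sketched as follows. The learning-rate schedule η(s) = η0/(1 + s/Q) is an assumption (the text only states that η depends on η0 and the constant Q), and the quadratic test problem is ours.

```python
def train(w, grad_E, eta0, alpha, Q, steps):
    """Gradient update with momentum, Eq. (5.3.8):
    w(s+1) = w(s) - eta(s) * dE/dw + alpha * dw(s-1).
    Assumed schedule: eta(s) = eta0 / (1 + s/Q)."""
    dw_prev = 0.0
    for s in range(steps):
        eta = eta0 / (1.0 + s / Q)
        dw = -eta * grad_E(w) + alpha * dw_prev  # momentum term alpha*dw(s-1)
        w, dw_prev = w + dw, dw
    return w

# Quadratic test problem E(w) = (w - 3)^2 with gradient 2(w - 3);
# the iterates converge towards the minimizer w = 3.
w_opt = train(0.0, lambda w: 2.0 * (w - 3.0),
              eta0=0.1, alpha=0.5, Q=50, steps=200)
```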
The chromosome has the form ch = [x, y, r1, …, r6], where x and y are the
coordinates of the centre and rl, l = 1, …, 6, are the positions of control points on
rays. The angle between two neighbouring rays is equal to 60°. The actual position
and shape of the void is described by chr = [15, 50, 1.5, 1.5, 1.5, 4.5, 1.5, 2.5].
The parameters of the evolutionary algorithm are:
– the population size: pop_size = 600;
– the maximum number of generations: max_life = 100;
– the probability of uniform mutation: pum = 0.25;
– the probability of nonuniform mutation: pnm = 0.35;
– the probability of boundary mutation: pbm = 0.05;
– the probability of simple crossover: psc = 0.25;
– the probability of heuristic crossover: phc = 0.25;
– the probability of arithmetical crossover: pac = 0.25;
– the cloning probability: pcl = 0.05.
The identification problem has been solved for two kinds of measurements:
• the ideal data of displacements obtained from numerical simulation for the
actual void by the BEM;
• the disturbed measurements obtained by the additional introduction of the
Gaussian noise.
The identification results are presented in graphical form in Fig. 5.16.
Fig. 5.16 The identification results: a for ideal measurements; b for disturbed measurements
by the elliptical description. It is assumed that the number of voids is unknown and
n_max = 3. The chromosome has the form ch = [x1, y1, r1, x2, y2, r2, x3, y3, r3],
rl = [rlx, rly, al]. The actual position and shape of the voids are described by
chr = [70, 0, 3, 3, 0; 20, 70, 2, 2, 0; 20, 20, 6, 3, 1].
The aim of the identification is to find the number of defects, their size and coordinates, having measured: (i) the eigenvalues ωi, i = 1, 2, 3; (ii) the displacements u(x, t) in 21 boundary sensor points. The objective function is given as: J0 = ηω Jω + ηu Ju, with ηω = ηu = 0.5.
The parameters of the evolutionary algorithm are:
– the population size: pop_size = 3000;
– the maximum number of generations: max_life = 100;
– the probability of uniform mutation: pum = 0.25;
– the probability of nonuniform mutation: pnm = 0.35;
– the probability of boundary mutation: pbm = 0.05;
– the probability of simple crossover: psc = 0.25;
– the probability of heuristic crossover: phc = 0.25;
– the probability of arithmetical crossover: pac = 0.25;
– the cloning probability: pcl = 0.05.
The best solutions in chosen generations of EA are shown in Fig. 5.17.
Fig. 5.17 The identification results for EA generation: a 1st, b 10th, c 50th, d 100th
Fig. 5.18 A structure with three voids: a shape; b boundary conditions and sensor points
Fig. 5.19 Identification results for ideal data having measured: a temperatures; b displacements;
c temperatures and displacements
Fig. 5.20 Identification results for disturbed data having measured: a temperatures; b displace-
ments; c temperatures and displacements
p = 100 MPa. The displacements are measured in 30 sensors located on the free
part of the boundary.
The EA is employed to identify the number of defects n and their parameters on
the basis of the knowledge about F natural frequencies of the structures with defect
and displacements in S sensor points on the boundary of the structure. The
unknown parameters of the defect are the coordinates of the hole centres (xi, yi) and
their radii Ri (i = 1, 2, …, n_max). Defects are described by a chromosome:
ch = [x1, y1, R1, …, xi, yi, Ri, …, xn_max, yn_max, Rn_max]. It is also assumed that the
number of circular voids is not known and n_max = 2 in both cases. As a result,
each chromosome has the form: ch = [x1, y1, R1, x2, y2, R2]. If Ri < Rmin, the ith
defect does not exist and the genes xi, yi, Ri are inactive.
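The inactive-gene convention can be sketched as a decoding step applied before each fitness evaluation (the r_min value and the function name are illustrative assumptions):

```python
def decode_defects(ch, r_min=0.5):
    """Split ch = [x1, y1, R1, x2, y2, R2, ...] into the list of active
    defects; triples with R_i < r_min are treated as inactive genes."""
    defects = []
    for k in range(0, len(ch), 3):
        x, y, r = ch[k:k + 3]
        if r >= r_min:
            defects.append((x, y, r))
    return defects
```

The number of identified defects is then simply the length of the decoded list, which is how a fixed-length chromosome can represent a variable number of voids.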
In the case of the plate with one defect, the training set consists of 1008 input–output
pairs and the testing set of 300 pairs. In the case of the plate with two defects,
the training set consists of 12,800 pairs and the testing set of 1600
input–output pairs.
An elastic structural element of the shape and dimensions presented in Fig. 5.22
containing a single crack consisting of two linear segments is considered. The
material constants of the structure are: E = 2.0e9 MPa, ν = 0.25. The structure is
loaded by a traction field p = 10 MN/m2.
The aim of the identification is to find a size and a position of the crack having
measured displacements at 37 boundary sensor points. It is assumed that mea-
surements are disturbed by Gaussian measurement error. The fitness function values
for each individual in the population are obtained from the analysis of the structure
by means of the dual BEM [18].
The chromosome has the form: ch = [x1, y1, ll1, al1, ll2, al2], where x1, y1 are the
coordinates of the first crack tip, lli is the length of the ith segment, and ali is
its slope angle.
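A possible decoding of this chromosome into crack-tip coordinates, assuming the second segment starts at the end of the first and that slope angles are given in degrees measured from the x axis (both assumptions of this sketch):

```python
import math

def crack_points(ch):
    """Decode ch = [x1, y1, ll1, al1, ll2, al2] into the three points
    defining a two-segment crack."""
    x1, y1, ll1, al1, ll2, al2 = ch
    x2 = x1 + ll1 * math.cos(math.radians(al1))
    y2 = y1 + ll1 * math.sin(math.radians(al1))
    x3 = x2 + ll2 * math.cos(math.radians(al2))
    y3 = y2 + ll2 * math.sin(math.radians(al2))
    return (x1, y1), (x2, y2), (x3, y3)
```

These three points are what the dual BEM model would need in order to mesh the crack for each candidate solution.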
The parameters of the EA are:
– the population size: pop_size = 100;
– the maximum number of generations: max_life = 1000;
– the probability of uniform mutation: pum = 0.01;
– the probability of boundary mutation: pbm = 0.05;
– the probability of simple crossover: psc = 0.1;
– the probability of heuristic crossover: phc = 0.1;
– the probability of arithmetical crossover: pac = 0.1.
The actual and final positions of the crack are shown in Fig. 5.23. The actual and
final values of design variables are presented in Table 5.11. It can be observed that
the evolutionary algorithm properly identified the size and position of the segmental
crack.
5.3 Identification of Defects 225
Table 5.11 The identification results for the plate with one crack

Variable no.   Dimension   Actual value   Range         Final value   Error (%)
1              m           0.00           –0.20; 0.90   0.000         –
2              m           –0.04          –0.20; 0.90   –0.0401       0.25
3              m           0.04           0.0; 0.1      0.0413        3.25
4              °           0.0            –90; 90       0.0           –
5              m           0.058          0.0; 0.1      0.061         5.17
6              °           62.0           –90; 90       61.1          1.46
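The Error column follows from the usual relative-error formula; the values below reproduce the table entries up to rounding:

```python
def percent_error(actual, final):
    """Relative identification error in percent, as reported in the
    Error column of the results tables."""
    return abs(final - actual) / abs(actual) * 100.0
```

Entries marked "–" correspond to an actual value of zero, for which the relative error is undefined.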
An elastic structural element of the shape and dimensions presented in Fig. 5.24
containing linear cracks is considered. The material constants of the structure are:
E = 2.0e9 MPa, ν = 0.25. The structure is loaded by a traction field p = 10 MN/m2.
The aim of the identification is to find the number, the size and the position of
cracks having measured displacements at 81 boundary sensor points. It is assumed
that measurements are disturbed by Gaussian measurement error. The fitness
function values for each chromosome are calculated by means of the dual BEM.
It is assumed that the maximum number of cracks n_max = 5, while the actual
number of cracks is 2. A total of 21 design variables represent the number of cracks
n and the coordinates of the 10 possible crack tips, respectively. The chromosome has
the form: ch = [n, x1^tip1, y1^tip1, x1^tip2, y1^tip2, …, xn_max^tip1, yn_max^tip1,
xn_max^tip2, yn_max^tip2].
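A possible decoding of this variable-length representation, assuming the first gene n switches the trailing groups of four tip coordinates on and off (the function name is illustrative):

```python
def decode_cracks(ch):
    """Decode ch = [n, x1_tip1, y1_tip1, x1_tip2, y1_tip2, ...]; only the
    first round(n) groups of four tip coordinates are treated as active."""
    n = int(round(ch[0]))
    cracks = []
    for k in range(n):
        base = 1 + 4 * k
        x1, y1, x2, y2 = ch[base:base + 4]
        cracks.append(((x1, y1), (x2, y2)))
    return cracks
```

With n_max = 5 the chromosome always carries 1 + 4 × 5 = 21 genes, matching the 21 design variables quoted above, even when only two cracks are active.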
Table 5.12 The identification results for the plate with 1–5 cracks

Variable no.   Dimension   Actual value   Range         Final value   Error (%)
1              –           2              1, …, 5       2             0
2              m           0.00           –0.20; 0.90   0.01          –
3              m           –0.04          –0.20; 0.90   –0.041        2.5
4              m           0.04           –0.20; 0.90   0.04          0
5              m           –0.02          –0.20; 0.90   –0.019        5.0
6              m           0.04           –0.20; 0.90   0.042         5.0
7              m           –0.07          –0.20; 0.90   –0.073        4.28
8              m           0.07           –0.20; 0.90   0.072         2.85
9              m           –0.04          –0.20; 0.90   –0.043        7.5
5.4.1 Introduction
The aim of the identification procedure is to find the values of the elastic constants
for multilayered, symmetrical laminates composed of many layers with different
fibre orientations. Simple and hybrid laminates (interply hybrids with laminas made
of different materials) are considered.
The identification is performed having measured eigenfrequencies, accelerations
(frequency response) and displacements, strains or stresses at the sensor points.
To calculate the objective function value for each candidate
solution, the boundary-value problem for laminates is solved by means of the
commercial finite-element method (FEM) software package MSC.Patran/Nastran.
All measurements are simulated numerically (numerical experiment) assuming
ideal and disturbed responses of the structure.
To solve the laminates’ identification task, the following computational intelli-
gence methods have been used:
• distributed evolutionary algorithm (DEA) described in Sect. 3.3;
• parallel artificial immune systems (PAIS) presented in Sect. 3.6;
• two-step optimization strategy depicted in Sect. 3.9.4.
The identification problem can be treated as the minimization of the objective
functional J0 with respect to a design variable vector x:

\min_{x} J_0(x)    (5.4.1)

where the objective functional is defined either as

J_0(x) = \sum_{i=1}^{N} \frac{|\hat{q}_i - q_i|}{\hat{q}_i}    (5.4.2)

or

J_0(x) = \sum_{i=1}^{N} (\hat{q}_i - q_i)^2    (5.4.3)
5.4 Identification of Material Properties 229
where x = (xi) are the parameters representing the identified elastic constants,
q̂i are the measured values of the state fields, and qi are the values of the same
state fields calculated from the numerical model.
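Both variants of the objective functional can be sketched directly; the absolute value in the normalized variant is an assumption of this reconstruction:

```python
def J0_relative(q_meas, q_sim):
    """Eq. (5.4.2): sum of measurement-normalized absolute residuals."""
    return sum(abs(qm - qs) / abs(qm) for qm, qs in zip(q_meas, q_sim))

def J0_squared(q_meas, q_sim):
    """Eq. (5.4.3): sum of squared residuals."""
    return sum((qm - qs) ** 2 for qm, qs in zip(q_meas, q_sim))
```

The normalized form is scale-free, which matters when the residual mixes state fields of very different magnitudes; the squared form penalizes large outliers more strongly.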
Laminate layers are orthotropic materials with four independent elastic constants
(see Sect. 4.9.2): two Young's moduli E1 and E2, the shear modulus G12 and Poisson's
ratio ν12. The design variable vector x (a chromosome in the evolutionary algorithm,
a B-cell in the artificial immune system) has the form [4]:
• for hybrid laminates:
x = [E1^e, E2^e, G12^e, ν12^e, ρ^e, E1^i, E2^i, G12^i, ν12^i, ρ^i]    (5.4.5)
5.4.3 Measurements
Different measurement data are considered to solve the identification problem for
simple and hybrid laminates. Static measurements in the form of displacements,
strains or stresses require many sensor points, which can be inconvenient in
practice. The number of sensor points depends on the complexity of the problem
and influences the identification results. Too small a set of sensor points can
make the identification procedure ambiguous.
To avoid these drawbacks, dynamic properties of the laminate structures could
be considered and modal analysis techniques may be applied. The modal model of
the dynamic structure is the ordered set of: eigenfrequencies, damping coefficients
and vibration forms. The modal analysis can be performed theoretically, by using
the numerical models and finite or boundary element method, or experimentally.
The modal analysis is often used for the optimization of the dynamic properties of
the structure in order to minimize the vibration propagation in it and for the
machine diagnostics [16].
A typical approach is to measure the natural frequencies of the structure. The
results obtained from the eigenfrequency measurements may not be satisfactory due
to the insufficient number of data used in the identification. In order to increase the
number of measurement data, one can use the frequency response information. This
Fig. 5.26 Identified simple laminate: a boundary conditions, b sensor points location
Fig. 5.27 Identified hybrid laminate: a geometry, excitation and sensor points, b material location
References
1. Aliabadi MH, Rooke DP (1991) Numerical fracture mechanics. Solid mechanics and its
applications, Computational Mechanics Publications, Southampton/Boston
2. Banerjee PK (1994) The boundary element method in engineering. McGraw-Hill Book
Company, London
3. Beluch W (2000) Crack identification using evolutionary algorithms. In: Proceedings of the
symposium on methods of artificial intelligence in mechanical engineering, AI-MECH 2000,
Gliwice, Poland
4. Beluch W, Burczyński T, Kuś W (2004) Distributed evolutionary algorithms in identification
of material constants in composites. In: Proceedings of KAEIOG 2004 conference, Kazimierz,
pp 1–8