
Differential Evolution in Constrained Numerical Optimization: An Empirical Study

Efrén Mezura-Montes (a), Mariana Edith Miranda-Varela (b), Rubí del Carmen Gómez-Ramón (c)

(a) Laboratorio Nacional de Informática Avanzada (LANIA A.C.), Rébsamen 80, Centro, Xalapa, Veracruz, 91000, MEXICO.
(b) Universidad del Istmo, Campus Ixtepec, Ciudad Universitaria s/n, Cd. Ixtepec, Oaxaca, 70110, MEXICO.
(c) Universidad del Carmen, C. 56 #4, Ciudad del Carmen, Campeche, 24180, MEXICO.

Abstract
Motivated by the recent success of diverse approaches based on Differential Evolution (DE) to solve constrained numerical optimization problems,
in this paper, the performance of this novel evolutionary algorithm is evaluated. Three experiments are designed to study the behavior of different DE
variants on a set of benchmark problems by using different performance measures proposed in the specialized literature. The first experiment analyzes the
behavior of four DE variants in 24 test functions considering dimensionality
and the type of constraints of the problem. The second experiment presents
a more in-depth analysis on two DE variants by varying two parameters (the
scale factor F and the population size NP ), which control the convergence
of the algorithm. From the results obtained, a simple but competitive combination of two DE variants is proposed and compared against state-of-the-art
DE-based algorithms for constrained optimization in the third experiment.
The study in this paper provides (1) important information about the behavior of DE in constrained search spaces and (2) evidence of the role of this knowledge in the correct combination of variants, based on their capabilities, to generate simple but competitive approaches.
Keywords: Evolutionary Algorithms, Differential Evolution, Constrained Numerical Optimization, Performance Measures

Email addresses: emezura@lania.mx (Efrén Mezura-Montes), emiranda@bianni.unistmo.edu.mx (Mariana Edith Miranda-Varela), gyr_17@hotmail.com (Rubí del Carmen Gómez-Ramón)



1. Introduction
Nowadays, the use of Evolutionary Algorithms [12] (EAs) to solve optimization problems is a common practice due to their competitive performance on complex search spaces [33]. On the other hand, optimization problems usually include constraints in their models and EAs, in their original
versions, do not consider a mechanism to incorporate feasibility information
in the search process. Therefore, several constraint-handling mechanisms
have been proposed in the specialized literature [6, 46].
The most popular approach to deal with the constraints of an optimization problem is the use of (mainly exterior) penalty functions [53], where
the aim is to decrease the fitness of infeasible solutions in order to favor
the selection of feasible solutions. Despite its simplicity, a penalty function
requires the definition of penalty factors to determine the severity of the penalization, and these values depend on the problem being solved [52]. Because of this important disadvantage, several alternative constraint-handling techniques have been proposed [34].
In recent years, research on constraint-handling for numerical optimization problems has focused mainly on the following aspects:
1. Multiobjective optimization concepts: A comprehensive survey of constraint-handling techniques based on Pareto ranking, Pareto dominance,
and other multiobjective concepts has been recently published [37].
These ideas have been recently coupled with steady-state EAs [64],
selection criteria based on the feasibility of solutions found in the current population [62, 63], real-world problems [48], pre-selection schemes
[15, 31], other meta-heuristics [23], and with swarm intelligence approaches [27].
2. Highly competitive penalty functions: In order to tackle the fine-tuning
required by traditional penalty functions, some works have been dedicated to balance the influence of the value of the objective function
and the sum of constraint violation by using rankings [18, 52]. Other
proposals have been focused on adaptive [1, 57, 58] and co-evolutionary
[17] penalty approaches, as well as alternative penalty functions such
as those based on special functions [60, 67].

3. Novel bio-inspired approaches: Other nature-inspired algorithms have


been used to solve numerical constrained problems, such as artificial
immune systems (AIS) [8], particle swarm optimization (PSO) [4, 39],
and differential evolution (DE) [22, 24, 38, 45, 54, 55, 56, 57, 73, 74, 75].
4. Combination of global and local search: Different approaches couple the
use of an EA, as a global search algorithm with different local search
algorithms. There are combinations such as agent-memetic-based [59],
co-evolution-memetic-based [30], and also crossover-memetic-based [61]
algorithms. Other approaches combine mathematical-programming-based local search operators [55, 56, 65].
5. Hybrid approaches: Unlike the combination of global and local search,
these approaches look to combine the advantages of different EAs, such
as PSO and DE [47] or AIS and genetic algorithms (GAs) [2].
6. Special operators: Besides designing operators to preserve the feasibility of solutions [6], there are proposals dedicated to explore either the
boundaries of the feasible and infeasible regions [20, 26] or convenient
regions close to the parents in the crossover process [69, 71].
7. Self-adaptive mechanisms: There are studies regarding the parameter control in constrained search spaces, such as a proposal to control
the parameters of the algorithm (DE in this case) [3]. There is another approach where a self-adaptive parameter control was proposed
for the DE parameters and also for the parameters introduced by the
constraint-handling mechanism [42]. The selection of the most adequate DE variant was also controlled by an adaptive approach in [21].
Finally, fuzzy logic has also been applied to control the DE parameters [32].
8. Theoretical studies: Although still scarce, there are interesting studies on runtime in constrained search spaces with EAs [72] and also on the usefulness of infeasible solutions in the search process [68].
Based on this overview of the recent research related to constrained numerical optimization problems (CNOPs), some observations are summarized:

- The research efforts have been mainly focused on generating competitive constraint-handling techniques (1 and 2 in the previous list).
- The combination of different search algorithms has become very popular (4 and 5 in the list).
- Topics related to special operators and parameter control are important to design more robust algorithms to solve CNOPs (6 and 7 in the aforementioned list).
- Besides traditional EAs such as GAs, Evolution Strategies (ES), and Evolutionary Programming (EP), novel nature-inspired algorithms such as PSO, AIS, and DE have been explored (3 in the list).
- DE has especially attracted the interest of researchers due to its excellent performance in constrained continuous search spaces (last set of references in 3 in the list).
Despite the highly competitive performance shown by DE when solving CNOPs, the research efforts, as will be pointed out by a careful review of the state-of-the-art later in the paper, have been focused on providing modifications to DE variants instead of analyzing the behavior of the algorithm itself. The current work is precisely focused on providing empirical evidence about the behavior of original DE variants (without additional mechanisms or modifications) in constrained numerical search spaces. Furthermore, this knowledge is used to propose a simple combination of DE variants in a competitive approach to solve CNOPs.
Different experiments are designed to test original DE variants by using, in all cases, an effective but parameter-free constraint-handling technique. Four performance measures found in the specialized literature are used to analyze the behavior of four DE variants. These measures are related to the capacity to reach the feasible region, the closeness to the feasible global optimum (or best known solution), and the computational cost. 24 well-known test problems [28], recently used to compare state-of-the-art nature-inspired techniques to solve CNOPs, are used in the experiments. Nonparametric statistical tests are used to provide more statistical support to the obtained results. It is known from the No Free Lunch Theorems for search [66] that using such a limited set of functions does not guarantee, in any way, that a variant which performs well on them will necessarily be competitive in a different set of problems. However, the main objective of this work is to provide some insights about the behavior of DE variants depending on the features of the problem. A further goal is to analyze the effect of two DE parameter values related to its convergence (the scale factor and the population size) on different types of constrained numerical search spaces.

The last goal of this work is to use the knowledge obtained to build a simple approach which combines the strengths of two DE variants without complex additional mechanisms.
The paper is organized as follows: In Section 2 the problem of interest is
stated. Section 3 introduces the DE algorithm, while in Section 4 a review of
the state-of-the-art on DE to solve CNOPs is included. Section 5 presents the
analysis proposed in this work. After that, in Section 6 the first experiment
on four DE variants in 24 test problems is explained and the obtained results
are discussed. An analysis of two DE parameters on two competitive (but
with different behaviors) DE variants is presented in Section 7. Section 8
comprises the combination of two DE variants into a single approach and
its performance is compared with respect to some DE-based state-of-the-art approaches. Finally, in Section 9, the findings of the current work are summarized and future paths of research are outlined.
2. Statement of the problem
The CNOP, also known as the general nonlinear programming problem [10], can be defined, without loss of generality, as:

Find $\vec{x}$ which minimizes

$f(\vec{x})$    (1)

subject to

$g_i(\vec{x}) \leq 0, \quad i = 1, \ldots, m$    (2)

$h_j(\vec{x}) = 0, \quad j = 1, \ldots, p$    (3)

where $\vec{x} \in \mathbb{R}^n$ is the vector of solutions $\vec{x} = [x_1, x_2, \ldots, x_n]^T$, $m$ is the number of inequality constraints, and $p$ is the number of equality constraints. Each $x_i$, $i = 1, \ldots, n$, is bounded by lower and upper limits $L_i \leq x_i \leq U_i$, which define the search space $S$; $\mathcal{F}$ comprises the set of all solutions which satisfy the constraints of the problem and is called the feasible region. Both the objective function and the constraints can be linear or nonlinear. To handle equality constraints in EAs, they are transformed into inequality constraints as follows [52]: $|h_j(\vec{x})| - \varepsilon \leq 0$, where $\varepsilon$ is the tolerance allowed (a very small value).
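As a complement to this formulation, the following Python sketch (an illustration under our own naming, not code from the paper) shows how the relaxed equality constraints and the sum of constraint violation used later by the feasibility rules can be computed:

EPS = 1e-4  # tolerance for equality constraints, as used later in the experiments

def sum_constraint_violation(x, ineq_constraints, eq_constraints, eps=EPS):
    # g_i(x) <= 0: only positive values count as violation
    viol = sum(max(0.0, g(x)) for g in ineq_constraints)
    # h_j(x) = 0 is relaxed to |h_j(x)| - eps <= 0
    viol += sum(max(0.0, abs(h(x)) - eps) for h in eq_constraints)
    return viol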

3. Differential Evolution
DE is a simple, but powerful algorithm that simulates natural evolution
combined with a mechanism to generate multiple search directions based on
the distribution of solutions (vectors) in the current population. Each vector i, i = 1, \ldots, NP, in the population at generation g, $\vec{x}_{i,g} = [x_{1,i,g}, \ldots, x_{n,i,g}]^T$, called the target vector at the moment of reproduction, will be able to generate one offspring, called the trial vector $\vec{u}_{i,g}$. This trial vector is generated as follows: first, a search direction is defined by calculating a difference vector between a pair of vectors $\vec{x}_{r_1,g}$ and $\vec{x}_{r_2,g}$, both chosen at random from the population. This difference vector is scaled by a user-defined parameter called the scale factor, F > 0 [50]. The scaled difference vector is then added to a third vector $\vec{x}_{r_0,g}$, called the base vector. As a result, a new vector is obtained, known as the mutant vector. After that, this mutant vector is recombined with the target vector (also called the parent vector) by using discrete recombination, usually uniform, i.e., binomial crossover, controlled by a user-defined parameter called the crossover probability, $0 \leq CR \leq 1$, to generate the trial (child) vector. The CR value determines how similar the trial vector will be to the mutant vector.
Regarding DE variants, in [50] Price et al. present a notation to identify
different ways to generate new vectors on DE. The most popular of them
(and explained in the previous paragraph) is called DE/rand/1/bin, where
the first term means Differential Evolution, the second term indicates how
the base vector is chosen (at random in this case), and the number in the third term indicates how many vector differences (i.e., vector pairs) contribute to the differential mutation (one pair in this case). Finally, the fourth term shows the type of crossover utilized (bin, from binomial, in this variant). The
detailed pseudocode of DE/rand/1/bin is presented in Figure 1 and a graphical example is explained in Figure 2.
[FIGURE 1 AROUND HERE]
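As a complement to the pseudocode in Figure 1, a minimal Python sketch of the trial vector generation in DE/rand/1/bin is given below (names are illustrative and boundary handling is omitted):

import numpy as np

def de_rand_1_bin_trial(pop, i, F, CR, rng):
    # pop: NP x n array of vectors; i: index of the target (parent) vector
    NP, n = pop.shape
    candidates = [r for r in range(NP) if r != i]
    r0, r1, r2 = rng.choice(candidates, size=3, replace=False)
    mutant = pop[r0] + F * (pop[r1] - pop[r2])   # base vector plus scaled difference
    trial = pop[i].copy()
    j_rand = rng.integers(n)                     # at least one coordinate comes from the mutant
    for j in range(n):
        if rng.random() < CR or j == j_rand:
            trial[j] = mutant[j]
    return trial

# Example usage: rng = np.random.default_rng(0); trial = de_rand_1_bin_trial(pop, 0, 0.9, 1.0, rng)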
This study is focused on four DE variants. Two of them are DE/rand/1/bin, explained before, and DE/best/1/bin, where the only difference with
respect to DE/rand/1/bin is that the base vector is not chosen at random;
instead, it is the best vector in the current population. Unlike the first two variants considered in this study, the next two use an arithmetic recombination.
They are DE/target-to-rand/1 and DE/target-to-best/1, which only vary in the way the base vector is chosen (at random and the best vector in the population, respectively). The details of each variant are presented in Table 1, and graphical examples for the remaining three variants, besides DE/rand/1/bin, are shown in Figure 3 for DE/best/1/bin, Figure 4 for DE/target-to-rand/1, and Figure 5 for DE/target-to-best/1.
[TABLE 1 AROUND HERE]
[FIGURE 2 AROUND HERE]
[FIGURE 3 AROUND HERE]
[FIGURE 4 AROUND HERE]
[FIGURE 5 AROUND HERE]
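For reference, the expressions commonly reported in the DE literature for these four variants are summarized below (the precise definitions used in this study are those of Table 1; K denotes the coefficient of the arithmetic recombination, which some works set equal to F):

DE/rand/1/bin:        $\vec{v}_{i,g} = \vec{x}_{r_0,g} + F(\vec{x}_{r_1,g} - \vec{x}_{r_2,g})$
DE/best/1/bin:        $\vec{v}_{i,g} = \vec{x}_{best,g} + F(\vec{x}_{r_1,g} - \vec{x}_{r_2,g})$
DE/target-to-rand/1:  $\vec{u}_{i,g} = \vec{x}_{i,g} + K(\vec{x}_{r_0,g} - \vec{x}_{i,g}) + F(\vec{x}_{r_1,g} - \vec{x}_{r_2,g})$
DE/target-to-best/1:  $\vec{u}_{i,g} = \vec{x}_{i,g} + K(\vec{x}_{best,g} - \vec{x}_{i,g}) + F(\vec{x}_{r_1,g} - \vec{x}_{r_2,g})$

For the two bin variants, $\vec{v}_{i,g}$ is recombined with the target vector through binomial crossover to produce the trial vector, while the two target-to variants generate the trial vector $\vec{u}_{i,g}$ directly through their arithmetic recombination.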

4. Related Work
As pointed out in Section 1, DE has provided highly competitive
results in constrained numerical search spaces. Therefore, it is a very popular
algorithm among researchers and practitioners.
One of the first attempts reported was made by Lampinen [24] with
DE/rand/1/bin, where superiority of feasible points and dominance in the
constraints space were used to bias the search to the feasible global optimum. The approach is known as Extended DE (EXDE). An extension of
Lampinen's work was presented by Kukkonen and Lampinen [22], where
DE/rand/1/bin was then used to solve constrained multiobjective optimization problems with the same constraint-handling mechanism.
Lin et al. [29] used DE/rand/2/bin with local selection (the target and
the base vector are the same) and Lagrange functions to handle constraints,
besides special mechanisms for diversity control and convergence speed.
Mezura-Montes et al. [38] used DE/rand/1/bin with three feasibility rules
originally proposed by Deb to be used with other EAs [11, 49] in an approach
called RDE. This algorithm was improved by allowing each target vector to
generate more than one trial vector in [43], called DDE. In a later work,
Mezura-Montes et al. [45] proposed a new DE variant where the combination
of the best vector and the target vector is incorporated into the differential
mutation operator, coupled with a binomial recombination plus a diversity control, and (again) the chance for each target vector to generate more than one trial vector. A more recent work by Mezura-Montes and Palomeque-Ortiz [42] included DE/rand/1/bin to explore deterministic and self-adaptive parameter control mechanisms in DE for constrained optimization, in an approach called ADDE.
Zielinsky and Laur [73] used DE/rand/1/bin with Deb's rules [11] coupled with a novel mechanism to deal with boundary constraints for decision variable values. They also conducted a study on termination conditions for DE/rand/1/bin in constrained optimization [74]. These two authors
also analyzed the effect of the dynamic tolerance for equality constraints on
DE/rand/1/bin [75].
Takahama and Sakai [54] used DE/rand/1/exp with a novel constraint-handling mechanism called the ε constrained method. They also added a gradient-based mutation to their approach. The authors presented an improved version
based on a new control for the tolerance in [55]. In a recent proposal, they
proposed two novel mechanisms to control boundary constraints to further
improve their approach [56].
Tasgetiren and Suganthan [57] proposed a subpopulation mechanism with
the combination of DE/rand/1/bin and DE/best/1/bin. Each variant was
used with a similar proportion. They opted for an adaptive penalty function
to deal with the constraints.
Huang et al. [21] combined four variants: DE/rand/1/bin, DE/rand/2/bin, DE/target-to-best/2, and DE/target-to-rand/1 with a local search mechanism based on Sequential Quadratic Programming. A mechanism to generate random values for two DE parameters, CR and F, was included in this proposal and Deb's rules were used to handle constraints.
Brest [3] proposed jDE-2, which is based on the combined use of DE/rand/1/bin, DE/target-to-best/1/bin, and DE/rand/2/bin. Brest also used
Deb's rules for constraint-handling, besides a restart technique for those k worst solutions. In Brest's approach, each vector had its own parameter
values, which were generated and updated with a random-based mechanism.
Landa and Coello [25] used DE/rand/1/bin and Deb's rules combined
with cultural algorithms to incorporate knowledge of the problem in the
search process.
Huang et al. [19] used DE/rand/1/bin with a co-evolutionary penalty
function. One population evolved the penalty factors, while the other evolved
the solutions to the optimization problem, similar to the approach proposed
with GAs by Coello [5]. Huang et al. [20], in their new approach, used
DE/rand/1/bin with two sub-populations again, but now with a different
goal. The first subpopulation evolved with the aforementioned DE variant,
while the second subpopulation stored feasible solutions to help other vectors
to become feasible. Local search with the Nelder-Mead simplex method was utilized. Instead of using penalty functions, Deb's rules were considered for constraint-handling.
Liu et al. [30] used DE/best/1/bin with a co-evolutionary approach where
two sub-populations are considered. One of them aimed to minimize the objective function while the other tried to satisfy the constraints of the problem. Gaussian mutation was used as a local search operator and individuals
in both sub-populations could migrate from one to another.
Zhang et al. [70] used the stochastic ranking method [52] with DE/rand/1/bin in an approach called Dynamic Stochastic Selection DE (DSS-DE) to
solve constrained problems.
Gong and Cai [15] used DE/rand/1/bin and Pareto dominance for constraint-handling. They utilized an external file coupled with ε-dominance
to store promising solutions. The initial population was generated with an
orthogonal method. A special operator, orthogonal crossover, was used to
improve the local search ability of the algorithm.
Regarding empirical comparisons with DE variants in constrained optimization, Mezura-Montes and López-Ramírez [40] compared DE/rand/1/bin with a global-best PSO, a real-coded GA, and a (μ + λ)-ES in the solution of 13 benchmark problems. DE provided the best results in this study. Zielinsky et al. [76] compared different adaptive approaches based on DE in constrained optimization. Other comparisons of DE variants, but in unconstrained optimization, were made by Gämperle et al. [14], where convenient parameter values were found for each test problem, and by Mezura-Montes et al. [44], where the good performance of each DE variant was linked to a specific type of unconstrained problem.
5. Proposed analysis
From the summary of the state-of-the-art presented in Section 4 it is clear that DE/rand/1/bin is used in more than half of the proposed approaches [15, 19, 20, 22, 24, 25, 38, 42, 70, 73, 74, 75], while similar variants, such as DE/best/1/bin, are barely preferred [30]. The most popular constraint-handling mechanism used with DE is the set of feasibility rules proposed by Deb [3, 20, 21, 22, 24, 25, 38, 42, 45, 73, 74, 75], while penalty functions [19, 57] and multiobjective concepts [15, 30] are sparingly utilized. There are several approaches which use local search (gradient-based mutation, Sequential Quadratic Programming, and the Nelder-Mead simplex method, among others) [15, 20, 21, 30, 54, 55, 56]. On the other hand, there is a tendency to combine different variants in one single approach by adding self-adaptive mechanisms [3], sub-populations [57], or mathematical programming methods [21]. Finally, the most popular combination is DE/rand/1/bin with Deb's feasibility rules [20, 25, 38, 42, 73, 74, 75] or DE/rand/1/bin with a slight variant of Deb's rules [22, 24].
From the review of the current research in constrained optimization in
Section 1, it is clear that DE is a convenient algorithm to be modified or
combined to solve CNOPs. Furthermore, based on the previous paragraph
in this section, it is also evident that one variant and one constraint-handling
mechanism have been extensively used. However, little knowledge about the behavior of DE's original variants (without additional mechanisms and/or parameters) has been presented, to the best of the authors' knowledge, in the specialized literature.
Based on the aforementioned, this work aims precisely to provide more
knowledge of the capabilities of DE (by itself) to reach the feasible region of
the search space and, even more, the vicinity of the feasible global optimum
(or best known solution), the number of evaluations required to do that (i.e.,
computational cost), and the best combination between computational cost
and consistency on generating solutions close to the optimum value.
Furthermore, two DE parameters related to the convergence of the algorithm (the scale factor F and the population size NP) are studied in two DE variants with competitive performances, but with different behaviors, in order to (1) detect convenient values for them, based on the features of the optimization problem, and (2) provide some insights on the differences in the behavior of DE with respect to unconstrained numerical search spaces, reported by Price & Rönkkönen [51].
From the information obtained in the analysis of DE when solving CNOPs, a convenient combination of two DE variants is proposed and its results are compared with respect to those provided by some DE-based algorithms. This proposed approach does not add complex mechanisms. Instead, it conveniently combines two variants and their strengths into a simple approach.
The experimental design utilized in this paper is partially based on a previous study on DE mutations for global unconstrained optimization proposed in [51]. However, some adaptations were made based on the type of problem considered in this work. In fact, this study only considers the mutation operator in DE variants. Crossover analysis is out of the scope of the present research and it is considered as part of the future work.
Three experiments are presented. In the first one, four DE variants
are compared. One of them is the most popular in evolutionary constrained numerical optimization: DE/rand/1/bin. The second one is barely
used: DE/best/1/bin. The third and fourth variants have been used just
in combination with other variants to solve CNOPs: DE/target-to-rand/1
and DE/target-to-best/1. The selection of variants was made with the goal of comparing popular variants used to solve CNOPs against those whose use has not been explored. In this way, the findings may help to understand the utility of each variant when solving CNOPs. Nonparametric statistical tests are
used to add more confidence to the observed behaviors.
The second experiment analyzes two competitive DE variants, with different behaviors, in order to establish suitable values for two DE parameters
related to the convergence of the approach (F and NP).
The third experiment tests the combination of two DE variants in different problems and the final results are compared against state-of-the-art
approaches.
Different aspects of DE are not considered in this study, such as the
number of pairs of difference vectors (one) and, as mentioned before, the
crossover effect (i.e., CR = 1). These values remain fixed in both experiments and their study is considered as part of the future work detailed at the
end of the paper.
In order to keep the DE variants free from extra parameters related to the constraint-handling mechanism, and also to be consistent with the most popular technique reported in the specialized literature, the feasibility criteria proposed by Deb [11] are added as the comparison method between the target and trial vectors (instead of using just the objective function value, as indicated in Figure 1). The three criteria are the following [11] (a code sketch of this comparison is given after the list):
1. If the two vectors are feasible, the one with the best value of the objective function is preferred.
2. If one vector is feasible and the other one is infeasible, the feasible one
is preferred.
3. If the two vectors are infeasible, the one with the lowest normalized
sum of constraint violation is preferred.
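A minimal Python sketch of this comparison is shown below (an illustration, not code from the paper; each solution is represented by its objective value and its normalized sum of constraint violation, with a violation of zero meaning the solution is feasible):

def deb_is_better(trial, target):
    # trial, target: tuples (objective_value, constraint_violation); minimization assumed
    f_u, viol_u = trial
    f_x, viol_x = target
    if viol_u == 0.0 and viol_x == 0.0:
        return f_u <= f_x          # rule 1: both feasible, better objective wins
    if viol_u == 0.0 or viol_x == 0.0:
        return viol_u == 0.0       # rule 2: a feasible solution beats an infeasible one
    return viol_u <= viol_x        # rule 3: both infeasible, lower violation wins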

Four performance measures are utilized during the first two experiments of this work: the first one has been used to measure the percentage of runs where feasible solutions are found [28], and the other three were used by Price and Rönkkönen [51] to analyze convergence and computational cost. Some terms are defined to facilitate the definition of the performance measures. A successful trial is an independent run where the best solution found $f(\vec{x})$ is close to the best known value or optimum solution $f(\vec{x}^*)$. This closeness is measured by a small tolerance δ on the difference between these two solutions: $|f(\vec{x}^*) - f(\vec{x})| \leq \delta$. A feasible trial is an independent run where at least one feasible solution was generated.
The four measures are detailed as follows:

- The feasibility probability FP is the number of feasible trials (f) divided by the total number of tests or independent runs performed (t), as indicated in Equation 4:

  $FP = \frac{f}{t}$    (4)

  The range of values for FP goes from 0 to 1, where 1 means that all independent runs were feasible trials, i.e., all of them reached the feasible region of the search space. In this way, a higher value is preferred.

- The probability of convergence P is calculated as the ratio of the number of successful trials (s) to the total number of tests or independent runs performed (t), as indicated in Equation 5:

  $P = \frac{s}{t}$    (5)

  Similar to FP, the range of values for P goes from 0 to 1, where 1 means that all independent runs were successful trials, i.e., all of them converged to the vicinity of the best known solution or the feasible global optimum. Therefore, a higher value is preferred.

- The average number of function evaluations AFES is calculated by averaging the number of evaluations required on each successful trial to reach the vicinity of the best known value or optimum solution, as indicated in Equation 6:

  $AFES = \frac{1}{s} \sum_{i=1}^{s} EVAL_i$    (6)

  where $EVAL_i$ is the number of evaluations required to reach the vicinity of the best known value or optimum solution in successful trial i. For AFES, a lower value is preferred because it means that the average cost (measured by the number of evaluations) for an algorithm to reach the vicinity of the feasible optimum solution is lower.

- The two previous performance measures (P and AFES) are combined to measure the speed and reliability of a variant through the successful performance SP, calculated in Equation 7:

  $SP = \frac{AFES}{P}$    (7)

  For this measure, a lower value is preferred because it means a better combination between speed and consistency of the algorithm.
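These four measures can be computed directly from the outcome of the independent runs, as in the following Python sketch (field names are illustrative):

def performance_measures(runs):
    # runs: list of dicts with keys 'feasible' (bool), 'success' (bool),
    # and 'evals' (evaluations needed to succeed, meaningful only when 'success' is True)
    t = len(runs)
    f = sum(1 for r in runs if r['feasible'])
    s = sum(1 for r in runs if r['success'])
    FP = f / t
    P = s / t
    AFES = sum(r['evals'] for r in runs if r['success']) / s if s > 0 else float('inf')
    SP = AFES / P if P > 0 else float('inf')
    return FP, P, AFES, SP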
In the next three sections of the paper, each experiment is presented. The
parameter settings and the test problems used are detailed, followed by the
obtained results and their corresponding discussions.
6. Comparison of DE variants
In this experiment, the four DE variants mentioned in Section 5 and detailed in Section 3 (DE/rand/1/bin, DE/best/1/bin, DE/target-to-rand/1,
and DE/target-to-best/1) are compared. A set of 24 benchmark problems
used to test nature-inspired optimization algorithms in constrained search
spaces was used in this experiment. The complete details of all problems can
be found in the Appendix, at the end of the paper, while a summary of their
features is presented in Table 2.
[TABLE 2 AROUND HERE]
The parameters for the four DE variants were the following: CR = 1.0
(this parameter is fixed with this value to discard the crossover influence
from our study), NP = 90, F = 0.9, and MAX GEN = 5556. The population size value NP for this experiment was chosen based on two criteria: (1) enough initial points lead to the generation of more diverse search directions based on vector differences, i.e., a better exploration of the search space, and (2) most DE-based approaches to solve CNOPs use population sizes close to this value. The F value was selected based on the suggestions made by Price [50] regarding the convenience of larger F values to avoid premature convergence, and also on the corresponding values used in DE-based approaches for CNOPs. The MAX GEN value was chosen to adjust the maximum number of evaluations of solutions to 500,000, in order to give each DE variant enough time to develop a competitive search, coupled with a high F value and enough search points. The tolerance value for equality constraints was defined as ε = 1E-4. The tolerance value for considering a successful trial was fixed to δ = 1E-4. Each variant performed 30 independent runs for each test problem and the four performance measures were calculated.
Considering the number of test problems used in this experiment and for
a better analysis, they were classified according to the dimensionality (number of variables) as indicated in Table 3. Also, they were divided by the
type of constraints (inequalities, equalities or both) as shown in Table 4. In
this way, the discussion of each performance measure was divided into three
phases: (1) based on the dimensionality of the problem, (2) based on its type
of constraints, and finally, (3) some partial conclusions about the measure
results.
[TABLE 3 AROUND HERE]
[TABLE 4 AROUND HERE]
6.1. Discussion of results of the first experiment
Problems g20 and g22 were discarded in the discussion because no feasible solutions were found by the four variants compared (i.e., FP = 0, P = 0, and AFES and SP cannot be calculated). These problems share common features: high dimensionality (24 and 22 variables, respectively) and combined equality and inequality constraints.
In order to have more confidence in the significant differences observed in the samples, and based on Kolmogorov-Smirnov tests [9] which indicated that the samples do not follow a Gaussian distribution, nonparametric statistical tests were applied to the samples of the AFES measure (Table 7). The Kruskal-Wallis test [7] was applied in test problems where the four DE variants had samples of equal size, i.e., the same number of successful trials (g04, g06, g08, g12, g16, and g24). The Mann-Whitney test [7] was applied to pairs of variants with samples of different size (i.e., different numbers of successful trials) in the remaining test problems (except g02, g03, g13, and g17, where the results of the four variants compared were very poor). Both tests were applied with a significance level of 0.05. The results of the statistical tests indicated that the differences observed in the samples are significant, with some exceptions which are commented on in Section 6.1.3.
6.1.1. FP measure
The results are presented in Table 5.
[TABLE 5 AROUND HERE]
Dimensionality-based analysis: For high-dimensionality problems, the four DE variants were very competitive. Only DE/best/1/bin and DE/target-to-rand/1 failed to consistently reach the feasible region in problems g03 and g14. Regarding the nine medium-dimensionality problems, all four DE variants obtained high FP values. However, they had difficulties generating feasible solutions in problems g13 and g23, with DE/target-to-rand/1 being the variant with the worst performance. Finally, for low-dimensionality problems, the four variants consistently reached the feasible region, except in problem g05, where DE/target-to-best/1 was the only variant to generate feasible solutions in all independent runs.
Constraint-based analysis: For all the problems with inequality constraints, the four DE variants obtained a good performance in the FP measure. However, for all the problems with equality constraints, and also in problems g05 and g21 (problems with both types of constraints), only DE/target-to-best/1 consistently reached the feasible region, i.e., DE/rand/1/bin, DE/best/1/bin, and DE/target-to-rand/1 failed in some trials. Furthermore, in problems g13 and g23 this variant provided the most competitive FP values.
The overall results for the FP performance measure suggest that the four DE variants, without the addition of special mechanisms or additional parameters, provided a consistent approach to the feasible region, even in the presence of a combination of inequality and equality constraints. In contrast, as reported in the specialized literature, other EAs usually require special handling of the tolerance for equality constraints in order to find feasible solutions [16, 36, 63]. This is, in fact, a well-documented source of difficulty [35, 75]. The most competitive variant in this performance measure was DE/target-to-best/1.
6.1.2. P measure
The results are presented in Table 6.
[TABLE 6 AROUND HERE]
The general behavior, as expected, was different from that observed in the FP measure. It is clear that feasible solutions found by DE are not necessarily close to the feasible global optimum or best known solution, highlighting the difficulty (more evident for some variants) of moving inside the feasible region.
Dimensionality-based analysis: Regarding high-dimensionality problems, the four DE variants presented a very irregular performance. However, the best average P value for these test problems was provided by DE/target-to-best/1 (0.55), followed by DE/rand/1/bin (0.49), DE/best/1/bin (0.47), and DE/target-to-rand/1 (0.37). The four variants obtained a P = 1 value in medium-dimensionality problems g04 and g16. The average values for the P measure in this type of problems were as follows: DE/rand/1/bin (0.71), followed by DE/target-to-rand/1 (0.63), DE/target-to-best/1 (0.61), and DE/best/1/bin (0.58). Finally, for low-dimensionality problems DE/rand/1/bin reached a P = 1 value in the seven test problems (average 1.0). DE/best/1/bin almost obtained the same value in all problems, except in problem g11 with P = 0.97 (average 0.99). DE/target-to-rand/1 obtained an average value of 0.91 while DE/target-to-best/1 provided an average value of 0.80.
Constraint-based analysis: A broadly similar performance (with respect to that observed in the dimensionality-based analysis) was exhibited by the four variants in those problems with only inequality constraints: DE/target-to-best/1 reached an average value of 0.88, DE/target-to-rand/1 (0.86), DE/rand/1/bin (0.85), and DE/best/1/bin (0.77). In problems with only equality constraints DE/rand/1/bin and DE/best/1/bin obtained an average P value of 0.49, followed by DE/target-to-rand/1 with 0.32, and DE/target-to-best/1 with 0.29. Finally, in problems with both types of constraints DE/rand/1/bin was clearly superior with an average P value of 0.80. DE/best/1/bin obtained a value of 0.67, DE/target-to-rand/1 0.41, and DE/target-to-best/1 0.38.
Despite the fact that the four DE variants were very capable of reaching the feasible region of the search space (based on the results of the FP measure explained before), the results for the P measure indicate that they had difficulties reaching the vicinity of the feasible global optimum or best known solution. DE/rand/1/bin provided the most consistent approach to the best feasible solution (average P value of 0.74), followed by DE/best/1/bin (average P value of 0.68). Finally, DE/target-to-best/1 was competitive in high-dimensionality problems and in the presence of only inequality constraints.
6.1.3. AFES measure
The results are presented in Table 7.
[TABLE 7 AROUND HERE]
Dimensionality-based analysis: DE/best/1/bin was the most competitive variant in high-dimensionality problems with an average AFES value of 7.89E+04 in five test problems (out of six), followed by DE/target-to-best/1 with 6.87E+04 but only in four test problems. From the statistical test results, the performance of these two best-based variants was not significantly different in problems g07 and g19. The two rand-based variants were less competitive: DE/target-to-rand/1 with 3.14E+05 and DE/rand/1/bin with 3.23E+05, both in four test problems. A similar behavior was found in medium-dimensionality problems. DE/best/1/bin was the best variant with an average value of 7.26E+04 in eight problems (out of nine), followed by DE/target-to-best/1 with 8.82E+04 in eight problems (no significant differences were found between these two best-based variants in problems g09, g10, g21, and g23). DE/target-to-rand/1 obtained an average value of 1.35E+05 and DE/rand/1/bin presented a value of 1.58E+05, both in seven test problems. In low-dimensionality problems the results were quite similar as well: DE/best/1/bin was the best variant with an average value of 2.71E+04 in the seven test problems, DE/target-to-best/1 obtained a value of 4.47E+04 in the seven problems (no significant differences were found in problems g05 and g15), and DE/rand/1/bin and DE/target-to-rand/1 achieved average values of 5.60E+04 and 9.64E+04, respectively, in the seven test problems.
Constraint-based analysis: In problems with only inequality constraints the four variants succeeded in twelve out of thirteen test problems. However, DE/target-to-best/1 was the most competitive with an average AFES value of 3.09E+04; DE/best/1/bin was the second best with a value of 3.65E+04 (from the statistical tests, no significant differences were found in problems g07, g09, g10, and g19). The two rand-based variants were less competitive: DE/target-to-rand/1 with 1.33E+05 and DE/rand/1/bin with 1.40E+05. The presence of only equality constraints did not prevent DE/best/1/bin from being the most competitive, with an average AFES value of 8.48E+04 on five (out of six) test problems. The second best performance was obtained by DE/target-to-best/1 with 1.57E+05 in four test problems (no significant differences were observed in problem g15). DE/target-to-rand/1 obtained an average value of 1.99E+05 in four problems. DE/rand/1/bin reached a value of 1.18E+05 but in only three test problems. Finally, in problems with both types of constraints, DE/target-to-best/1 was the most competitive with an average value of 9.82E+04 in the three test problems, followed by DE/best/1/bin with a value of 1.01E+05 also in the three test problems (no significant differences were exhibited in problems g05, g21, and g23). DE/rand/1/bin obtained a value of 2.52E+05 in three test problems while DE/target-to-rand/1 reached an average value of 2.42E+05 but in only two problems.
The overall results regarding AFES suggest that the best-based variants found the vicinity of the best known or optimal solution faster than the rand-based variants. DE/best/1/bin was the most competitive variant. Figure 6 shows a radial graph where the AFES values are plotted and each axis is associated with one DE variant. For a better visualization, only those test problems with a value below 80,000 for the AFES measure are presented, but the overall behavior is represented. A point near the origin is better, because it represents a lower AFES value. It can be noted in this figure that both best-based variants required fewer evaluations than the two rand-based variants.
[FIGURE 6 AROUND HERE]
6.1.4. SP measure
The results are presented in Table 8.
[TABLE 8 AROUND HERE]
Dimensionality-based analysis: In a similar way to the AFES measure, the best-based variants performed better in the SP measure in this classification of problems. For high-dimensionality problems DE/best/1/bin obtained an average SP value of 7.18E+05 on five (out of six) test problems, DE/target-to-best/1 reached a value of 1.01E+05 on four problems, followed by DE/target-to-rand/1 with 1.61E+06 on four problems, and DE/rand/1/bin with 3.24E+06 also in four problems. DE/best/1/bin obtained the best average SP value in medium-dimensionality problems with 1.97E+05 on eight (out of nine) test problems, DE/target-to-best/1 was second with 1.28E+06 also on eight problems, DE/rand/1/bin was third with 2.19E+05 in only seven problems, and DE/target-to-rand/1 was fourth with 1.25E+06 in seven test problems. In low-dimensionality problems DE/best/1/bin presented the lowest average SP value on the seven test problems (2.72E+04). DE/rand/1/bin was second with 5.60E+04 also in the seven problems. DE/target-to-best/1 was third with 1.32E+05 in the seven problems. DE/target-to-rand/1 was last with an average SP of 1.37E+05 also in the seven test problems.
Constraint-based analysis: DE/target-to-best/1 showed the lowest average SP (3.36E+04) for the test problems with only inequality constraints (twelve out of thirteen), followed by DE/best/1/bin with 7.31E+04 in twelve problems, and DE/target-to-rand/1 with 3.89E+05 and DE/rand/1/bin with 1.11E+06, both computed in twelve problems. The best average SP value in five (out of six) problems with only equality constraints was obtained by DE/best/1/bin with 7.93E+05, followed by DE/target-to-best/1 with 2.59E+06 in four test problems and DE/target-to-rand/1 with 2.67E+06 in four test problems. The worst SP values were obtained by DE/rand/1/bin with 1.23E+05, but in only three test problems. Finally, in problems with equality and inequality constraints DE/best/1/bin dominated the remaining DE variants with an average SP value (in the three test problems) of 1.72E+05, followed by DE/target-to-best/1 with 2.58E+05 also in the three problems. DE/rand/1/bin was third with 3.95E+05 in the three problems, while DE/target-to-rand/1 was fourth with 3.97E+05, but computed in only two test problems.
The overall performance presented in the SP measure resembles that found in the AFES measure. The best-based DE variants outperformed the rand-based ones. This is also noted in Figure 7, where the lowest SP values, i.e., the best combination between computational cost (evaluations) and successful trials (reaching the vicinity of the feasible best known or optimum solution), were found by the two best-based DE variants. In fact, DE/best/1/bin was, again, the most competitive variant on this measure. As in Figure 6, for a better visualization, Figure 7 only includes values below 80,000 for the SP measure, but the overall behavior is represented.
[FIGURE 7 AROUND HERE]
6.2. Conclusions of the first experiment
Interesting findings were obtained from this first set of experiments:
- Regardless of the DE variant used, DE itself showed strong capabilities to reach the feasible region of the search space in the benchmark problems utilized in this work. The dimensionality of the problems and the type of constraints did not affect this convenient behavior. DE/target-to-best/1 was the most consistent DE variant in this regard.
- DE/rand/1/bin showed the best performance regarding the percentage of successful trials (runs where the vicinity of the feasible global optimum or best known solution was reached), followed by DE/target-to-rand/1.
- The best-based variants, mostly DE/best/1/bin, required significantly fewer evaluations to reach the vicinity of the feasible global optimum or best known solution.
- DE/best/1/bin obtained the best combination between computational cost (measured by the number of evaluations to reach the best solution in the search space) and the percentage of successful trials.
- DE/target-to-best/1 and DE/target-to-rand/1 did not provide a significantly better performance with respect to DE/rand/1/bin and DE/best/1/bin. However, DE/target-to-best/1 was very competitive in high-dimensionality problems and in those test functions where only inequality constraints were considered. Finally, based on the statistical tests performed, this variant was equally competitive, regarding the AFES measure values, with respect to DE/best/1/bin in eight test problems.


Additional results were obtained with the same structure of this experiment but varying the NP value and keeping the same limit of evaluations
(500,000). The values for the population size were NP = 30 and NP = 150.
The results are included in [41] and confirm the findings previously discussed.
Those findings motivated a more in-depth analysis of the two most competitive variants: DE/rand/1/bin, the most consistent in reaching the vicinity of the feasible optimum solution, and DE/best/1/bin, with the best
combination between the number of evaluations required and the number of
successful trials. The corresponding experiments and results are described in
the next section.
7. DE parameter study
The second set of experiments aims to determine the convenient values
and the relationship between two DE parameters in the performance of two
DE variants which exhibited different competitive behaviors in numerical
constrained search spaces, based on the findings of the first set of experiments
(DE/rand/1/bin and DE/best/1/bin).
The parameters considered are F and NP. As the step size in differential mutation is controlled by F, the convergence speed depends on its value. Regarding global unconstrained optimization, for low F values (to speed up convergence) an increase in the NP value may be required to avoid premature convergence [51]. The question remains open for numerical search spaces in the presence of constraints. It is known in advance, from the set of experiments in Section 6, that DE/rand/1/bin is more consistent in reaching the vicinity of the feasible best solution and that DE/best/1/bin is also capable of doing so, but less frequently and with a lower computational cost. The experimental design now focuses on determining the best values for those two parameters for these variants and evaluating whether the behavior is similar to that found in unconstrained search spaces.
Six representative test problems are used in this second part of the research: g02, g06, g10, g13, g21, and g23. They were selected based on their
different characteristics and were organized as follows:
- Test problems with different dimensionality. They are shown in Table 9 a).
- Test problems with different types of constraints. They are presented in Table 9 b).
[TABLE 9 AROUND HERE]
The parameter values for the two DE variants were the following: CR = 1.0; Gmax was not considered because the termination condition was, in all cases, 500,000 evaluations computed; the tolerance value for equality constraints was ε = 1E-4; and the tolerance value to consider an independent run as successful was δ = 1E-4. The F values were the following: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], and those of NP were: [30, 60, 90, 120, 150]. Each DE variant was executed 100 times per test problem for each combination of F and NP values. Three performance measures (P, AFES, and SP) were used in the following experiments. The FP measure was not considered because both DE variants are able to reach the feasible region consistently (based on the results obtained in the first set of experiments). The results are presented as follows: one graph is presented for each performance measure. In each graph, the horizontal axis includes the different F values, while the corresponding value for the measure is indicated in the vertical axis. The inclined axis includes the five NP values. There is one line or set of bars for each NP value previously defined. Some graphs for the AFES and SP measures do not consider some F or NP values because the measure value was not defined for such parameter values. For a better visualization, the SP values are plotted with a nonlinear scale.
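The full factorial design of this experiment corresponds to a simple double loop over the parameter grid, as in the Python sketch below (run_de is an illustrative routine name standing for one independent DE run; it is assumed to return the per-run record used by the performance_measures sketch given in Section 5):

F_VALUES = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
NP_VALUES = [30, 60, 90, 120, 150]
RUNS = 100
MAX_EVALS = 500000

results = {}
for NP in NP_VALUES:
    for F in F_VALUES:
        # 100 independent runs per (F, NP) combination, CR fixed to 1.0
        runs = [run_de(variant="rand/1/bin", problem="g21", F=F, NP=NP,
                       CR=1.0, max_evals=MAX_EVALS, seed=r) for r in range(RUNS)]
        results[(NP, F)] = performance_measures(runs)  # FP, P, AFES, SP; P, AFES, SP are reported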
7.1. High-dimensionality test problem
Figures 8 and 9 exhibit the graphs with the results obtained by DE/rand/1/bin and DE/best/1/bin, respectively, for problem g02.
[FIGURE 8 AROUND HERE]
[FIGURE 9 AROUND HERE]
A quick look at Figures 8 and 9 clearly reveals that both variants were very sensitive to the two parameter values analyzed and that the results obtained were poor.


In Figure 8 (a) DE/rand/1/bin provided some successful trials (P ≥ 0.6) only with 0.5 ≤ F ≤ 0.9 combined with medium to large populations (90 ≤ NP ≤ 150). The highest P value was obtained with F = 0.7 and NP = 120.
The results in Figure 8 (b) suggest that for DE/rand/1/bin, as the scale factor value increased, the average number of evaluations also increased for the three NP values where successful trials were found (90, 120, and 150). Larger populations (NP = 150) performed fewer evaluations with lower F values (F = 0.5), while smaller populations required higher F values (F = 0.7).
Figure 8 (c) presents the SP values obtained by DE/rand/1/bin. It is clear that the best combination of convergence speed (AFES) and probability of convergence (P) was obtained by this variant with medium to large populations (90 ≤ NP ≤ 150) and 0.6 ≤ F ≤ 0.7.
DE/best/1/bin, as shown in Figure 9 (a), could not reach the vicinity of the best known solution except in one single run with F = 0.8 and NP = 150 (see also Figures 9 (b) and 9 (c)).
DE/rand/1/bin was clearly the most competitive in this large search space, where the challenge is to find the best known solution because almost all solutions are feasible. This behavior suggests that larger populations (120 ≤ NP ≤ 150), combined with 0.6 ≤ F ≤ 0.7, increase the probability of this variant reaching the vicinity of the best known solution with a moderate computational cost measured by the number of evaluations utilized.
7.2. Medium-dimensionality test problem
The values obtained by each DE variant in the three performance measures when solving problem g21 are summarized in Figures 10 and 11.
[FIGURE 10 AROUND HERE]
[FIGURE 11 AROUND HERE]
Regarding the P measure, DE/rand/1/bin presented a similar behavior to that shown by DE in unconstrained optimization [51], because convergence to the feasible global optimum was obtained when high F and NP values were utilized. It was also obtained by the combination of low scale factor values (0.5 ≤ F ≤ 0.6) with larger population sizes (60 ≤ NP ≤ 150) (see Figure 10 (a)). Regarding high F values (except F = 1.0), this variant also displayed high P values for all population sizes.
Regarding the average number of evaluations required by DE/rand/1/bin, Figure 10 (b) exhibits an increment of the AFES value as the scale factor and population size values increased, i.e., this variant did not require larger populations nor high F values to provide a competitive performance.
Figure 10 (c) confirms the behavior observed in the results of the AFES measure, because the best mix of computational cost and convergence was obtained by small populations (30 ≤ NP ≤ 60) combined with scale factor values 0.6 ≤ F ≤ 0.8.
DE/best/1/bin presented a similar behavior with respect to DE/rand/1/bin in the P measure. However, high scale factor values 0.7 ≤ F ≤ 1.0 provided high P values combined with medium to large population sizes (90 ≤ NP ≤ 150), see Figure 11 (a).
The results for the AFES measure in Figure 11 (b) present the lowest values with NP = 30. However, the P values for this population size (Figure 11 (a)) were very poor. Therefore, the better AFES values were obtained by DE/best/1/bin with medium-size populations (60 ≤ NP ≤ 90) combined with F = 0.7. It is worth noticing that medium-size populations presented the highest AFES values in combination with the highest F value used (1.0).
The SP values in Figure 11 (c) reveal that the best compound effect of speed and convergence was provided by small to medium-size populations (60 ≤ NP ≤ 90) with F = 0.8.
Both DE variants were competitive in this problem with medium dimensionality. However, DE/rand/1/bin was less sensitive to the two parameters under study, working well with small and medium population sizes combined with three different scale factor values. Similarly, DE/best/1/bin was more competitive with small and medium-size populations but with only one scale factor value.
7.3. Low-dimensionality test problem
Figures 12 and 13 present the results for the three measures obtained by
both compared variants in test problem g06.
[FIGURE 12 AROUND HERE]
[FIGURE 13 AROUND HERE]


DE/rand/1/bin obtained some successful trials with low scale factor values (0.4 ≤ F ≤ 0.5) combined with larger population sizes (120 ≤ NP ≤ 150) in Figure 12 (a). Values of P = 1.0 were consistently obtained with 0.6 ≤ F ≤ 1.0 combined with the four remaining NP values (i.e., except NP = 30).
Figure 12 (b) includes the results for the AFES measure. The observed behavior indicates an increment in the value of the measure as the population size and the scale factor values are increased.
Most of the combinations of NP and F values provided low SP values, as indicated in Figure 12 (c) (only low F values with larger populations obtained poor SP values). However, a small population (NP = 30) combined with 0.6 ≤ F ≤ 1.0 gives the most convenient values to solve this problem.
DE/best/1/bin required slightly higher scale factor values (0.7 ≤ F ≤ 1.0) to consistently reach the vicinity of the global optimum (P = 1.0), except with NP = 30 (see Figure 13 (a)).
A similar effect to that of DE/rand/1/bin was obtained by DE/best/1/bin in the AFES measure (higher F and larger NP values caused higher AFES values). However, with NP = 30 and F = 1.0 the highest AFES value was obtained (see Figure 13 (b)).
The summary of results for the SP measure in Figure 13 (c) indicates that a small population (NP = 30) combined with 0.7 ≤ F ≤ 0.8 provided the best blend between number of evaluations and convergence probability.
Both DE variants presented an almost identical behavior in this problem with only two decision variables, i.e., they required a small population to provide a consistent approach to the vicinity of the feasible global optimum. However, DE/best/1/bin was slightly more sensitive to the F parameter.
7.4. Test problem with only inequality constraints
Figures 14 and 15 include the results provided by both variants when
solving problem g10.
[FIGURE 14 AROUND HERE]
[FIGURE 15 AROUND HERE]
DE/rand/1/bin provided high P values with more consistency by using medium-size populations (60 ≤ NP ≤ 90) combined with high scale factor values (0.6 ≤ F ≤ 1.0) in Figure 14 (a). Increasing the population size (NP = 150) with lower scale factor values (F = 0.5) allowed DE/rand/1/bin to maintain high P values.
The results for the AFES measure in Figure 14 (b) confirm the convenience of using medium-size populations, mostly with 0.6 ≤ F ≤ 0.7. Other combinations of values for these two parameters increased the AFES value (with the only exception of NP = 30 and F = 0.8).
Figure 14 (c) also confirms the findings previously discussed for DE/rand/1/bin in this test problem. The lowest SP values were found with NP = 60 and F = 0.6.
In Figure 15 (a) DE/best/1/bin required larger populations (120 ≤ NP ≤ 150) combined with high scale factor values (0.8 ≤ F ≤ 0.9) to get successful trials more consistently. DE/best/1/bin was clearly affected by small populations, regardless of the scale factor value used.
The overall results for the AFES measure in Figure 15 (b) suggest that DE/best/1/bin increased the number of evaluations as F and NP values also increased. Furthermore, the lowest AFES values were obtained with medium-size populations (60 ≤ NP ≤ 90). However, the P values for this population size were poor.
This last finding was confirmed in Figure 15 (c), where the lowest SP values were obtained with medium to large populations (90 ≤ NP ≤ 150) combined with 0.7 ≤ F ≤ 0.9.
DE/rand/1/bin was less sensitive to both parameters analyzed and performed better with small to medium-size populations. However, DE/best/1/bin required fewer evaluations to provide competitive results by using larger populations.
7.5. Test problem with only equality constraints
Figures 16 and 17 present the summary of results in problem g13 by
DE/rand/1/bin and DE/best/1/bin, respectively.
[FIGURE 16 AROUND HERE]
[FIGURE 17 AROUND HERE]
Similar to the high-dimensionality problem, the performance of both variants was clearly affected in this test function.
DE/rand/1/bin was able to provide its best P values (below 0.5) only with high scale factor values (0.6 ≤ F ≤ 1.0) combined with small and medium population sizes (30 ≤ NP ≤ 90). However, the most consistent results for P were obtained with NP = 60 (see Figure 16 (a)).
The lowest AFES values were found with NP = 30 and 0.7 ≤ F ≤ 1.0 (see Figure 16 (b)). It is interesting to note that larger populations (NP = 120) combined with 0.6 ≤ F ≤ 0.7 presented the highest AFES values.
The best compromise between convergence speed and convergence probability (the SP value) was obtained with NP = 30 combined with F = 0.7 and F = 1.0, and also with NP = 60 combined with F = 0.7 (see Figure 16 (c)).
Based on Figure 17 (a), DE/best/1/bin provided more successful trials with high scale factor values (0.7 ≤ F ≤ 1.0) combined with larger populations (120 ≤ NP ≤ 150). Moreover, medium size populations (60 ≤ NP ≤ 90) required higher scale factor values (0.8 ≤ F ≤ 1.0). Unlike DE/rand/1/bin, this variant had significant difficulties reaching the vicinity of the best known solution with a small population.
Figure 17 (b) indicates that the lowest AFES values were obtained with NP = 30 combined with 0.9 ≤ F ≤ 1.0, followed by NP = 60 combined with 0.8 ≤ F ≤ 1.0. The highest AFES values were obtained with the highest NP and F values as well.
The results for the SP measure in Figure 17 (c) pointed out that the best combination of AFES and P values was obtained with NP = 60 and 0.8 ≤ F ≤ 1.0.
Both DE variants had difficulties reaching the vicinity of the feasible global optimum. Interestingly, both variants performed better with medium size populations and very similar scale factor values. However, DE/rand/1/bin required slightly lower F values and obtained better results with a small population than DE/best/1/bin.
7.6. Test problem with both types of constraints
The results obtained by both variants in problem g23 are presented in
Figures 18 and 19 for DE/rand/1/bin and DE/best/1/bin, respectively.
[FIGURE 18 AROUND HERE]
[FIGURE 19 AROUND HERE]
Based on Figure 18 (a), competitive P values were consistently achieved by DE/rand/1/bin with NP = 60 combined with 0.6 ≤ F ≤ 1.0. A very irregular behavior was observed with large and small populations. The results for the P measure were very poor with F = 1.0 (except with NP = 60).
The lowest average number of evaluations on successful trials was attained by DE/rand/1/bin with small to medium population sizes (30 ≤ NP ≤ 60) combined with 0.8 ≤ F ≤ 0.9 and 0.6 ≤ F ≤ 0.7, respectively (see Figure 18 (b)). Larger populations (NP = 150) caused an increment in the AFES value.
The best SP values were obtained with the same combinations of parameter values observed for the AFES measure (see Figure 18 (c)).
In contrast to DE/rand/1/bin, DE/best/1/bin, in Figure 19 (a), obtained more successful trials with larger populations (NP = 150) combined with high scale factor values (0.7 ≤ F ≤ 1.0). The vicinity of the feasible best known solution was not reached with a small population (NP = 30).
The lowest AFES values were obtained by DE/best/1/bin with NP = 90 for all the F values where P > 0 was obtained (0.7 ≤ F ≤ 1.0); see Figure 19 (b).
Regarding the SP values (Figure 19 (c)), the best values were found with NP = 90 combined with 0.8 ≤ F ≤ 0.9.
In this last test problem, DE/rand/1/bin performed better with small to medium size populations combined with different scale factor values. In contrast, DE/best/1/bin provided its best performance with medium to large populations coupled only with high scale factor values.
7.7. Conclusions of the second experiment
The findings of this second experiment are summarized in the following
list:
- DE/rand/1/bin was clearly the most competitive variant in the high-dimensionality test problem.
- DE/rand/1/bin was the variant with the least sensitivity to NP and F in all the six test problems.
- DE/best/1/bin required fewer evaluations to reach the feasible global optimum in all the test problems, but it was less reliable than DE/rand/1/bin.
- The most useful F values for both variants were 0.6 ≤ F ≤ 0.9, regardless of the type of test problem.
- DE/rand/1/bin performed better with small to medium size populations (30 ≤ NP ≤ 90), while DE/best/1/bin required more vectors in its population (90 ≤ NP ≤ 150) to provide competitive results.
- Regarding the convergence behavior reported in unconstrained numerical optimization with DE, a different behavior was observed in the constrained case, because low scale factor values (F ≤ 0.5) prevented both DE variants from converging, even with larger populations. The exception was the test problem with a low dimensionality.
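The measures used throughout this section are easy to reproduce from the raw run data. The Python sketch below is a minimal illustration assuming the usual definitions (P is the fraction of successful trials, AFES the average number of evaluations over successful trials, and SP = AFES/P); the function and variable names are purely illustrative.

def performance_measures(trials):
    """trials: list of (success: bool, evaluations: int) pairs, one per independent run.
    Returns (P, AFES, SP); AFES and SP are None when no trial was successful."""
    successful = [evals for ok, evals in trials if ok]
    P = len(successful) / len(trials)
    if not successful:
        return P, None, None                  # measures reported as "-" in the tables
    AFES = sum(successful) / len(successful)  # average evaluations on successful trials
    SP = AFES / P                             # success performance: cost/probability trade-off
    return P, AFES, SP

# Example: 30 runs, 27 successful, about 36,000 evaluations each on average.
runs = [(True, 36000)] * 27 + [(False, 240000)] * 3
print(performance_measures(runs))  # (0.9, 36000.0, 40000.0)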
Additional experiments were performed in test problems with similar features to those presented in the paper: g19 for high dimensionality, g09 for medium dimensionality, g08 for low dimensionality, g07 with only inequality constraints, g17 with only equality constraints, and g05 with both types of constraints. Those results can be found in [41] and they confirmed the findings mentioned above.
This summary of findings suggests that the ability of DE/rand/1/bin to generate search directions from different base vectors allows it to use smaller populations. On the other hand, larger populations were required by DE/best/1/bin, where the search directions are always based on the best solution found so far. Regarding the scale factor, the convenient values found in the experiment showed that these two DE variants required a slow-convergence behavior to approach the vicinity of the feasible global optimum or best known solution. Speeding up convergence by decreasing the scale factor value does not seem to be an option, even with larger populations. The combination of larger populations and DE/rand/1/bin seems to be more suitable for high-dimensionality problems. Finally, DE/rand/1/bin presented less sensitivity to the two parameters analyzed, while DE/best/1/bin, which may require a more careful fine-tuning, can provide competitive results with a lower number of evaluations. Based on this last observation, the drawback found in DE/best/1/bin may be addressed with parameter control techniques [13].
8. A combination of two DE variants
Considering the results of Experiment 1, which pointed out that DE/rand/1/bin and DE/best/1/bin had better performances but different behaviors with respect to the other DE variants, these two variants were chosen to be combined into a single approach.

The capacity observed in the four DE variants to efficiently reach the feasible region of the search space in Experiment 1, coupled with the feature of DE/rand/1/bin of generating a more diverse set of search directions, suggested the use of this variant as the first search algorithm. As the feasible region will be reached quickly, the criterion to switch to the other DE variant was to reach 10% of feasible vectors in the population. In this way, DE/best/1/bin can focus the search in the vicinity of the current best feasible vector, expecting a low number of evaluations to reach competitive solutions. This percentage value was chosen after some tests reported in [41]. The approach was called Differential Evolution Combined Variants (DECV).
Based on the fact that Experiment 2 revealed that DE/rand/1/bin performed better with small to medium size populations and that DE/best/1/bin required medium to large size populations, the number of vectors in DECV was fixed to 90. The convenience of using larger F values, also observed in Experiment 2, suggested a value of 0.9. The CR parameter was kept fixed at 1.0 and the number of generations was set to 2666 in order to perform 240,000 evaluations.
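A minimal sketch of the DECV loop described above is given next. It is an outline for illustration only, not the authors' implementation; the bound handling by clipping, the feasibility-rule comparison used both for selection and for picking the best vector, and all helper names are assumptions made for this sketch.

import random

def decv(fobj, viol, bounds, NP=90, F=0.9, CR=1.0, max_gens=2666,
         switch=0.10, first="rand"):
    """Illustrative DECV sketch: run DE/rand/1/bin until `switch` (10%) of the
    population is feasible, then continue with DE/best/1/bin; first="best"
    gives the reversed (A-DECV) order. `fobj` returns the objective value and
    `viol` the total constraint violation (0 means feasible)."""
    n = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    fit = [(fobj(x), viol(x)) for x in pop]

    def better(a, b):
        # Feasibility rules: feasible beats infeasible; ties broken by f or by violation.
        (fa, va), (fb, vb) = a, b
        if va == 0 and vb == 0:
            return fa <= fb
        if (va == 0) != (vb == 0):
            return va == 0
        return va <= vb

    for _ in range(max_gens):
        feas_ratio = sum(1 for _, v in fit if v == 0) / NP
        switched = feas_ratio >= switch
        use_best = switched if first == "rand" else not switched
        best = min(range(NP), key=lambda k: (fit[k][1], fit[k][0]))
        for i in range(NP):
            r0, r1, r2 = random.sample([k for k in range(NP) if k != i], 3)
            base = pop[best] if use_best else pop[r0]
            jrand = random.randrange(n)
            trial = pop[i][:]
            for j in range(n):
                if random.random() < CR or j == jrand:
                    lo, hi = bounds[j]
                    trial[j] = min(max(base[j] + F * (pop[r1][j] - pop[r2][j]), lo), hi)
            t = (fobj(trial), viol(trial))
            if better(t, fit[i]):
                pop[i], fit[i] = trial, t
    return min(zip(pop, fit), key=lambda z: (z[1][1], z[1][0]))

With first="rand" (the default) the sketch follows the DECV order described above; the reversed order is used later for A-DECV.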
Thirty independent runs were computed for each of the 24 test problems used in Experiment 1. Statistics on the final results for DECV are summarized in Tables 10 and 11.
[TABLE 10 AROUND HERE]
[TABLE 11 AROUND HERE]
These final results for the first 13 test problems, and the corresponding computational cost measured by the number of evaluations required, were compared with those reported by some state-of-the-art DE-based algorithms: the superiority of feasible points in DE (EXDE) [24], the feasibility rules [11] in DE (RDE) [38], the DE with the ability to generate more than one trial vector per target vector (DDE) [43], the adaptive DE (A-DDE) [42], and the DE with dynamic stochastic selection (DSS-MDE) [70]. The comparison on the last 11 test problems was made with respect to A-DDE in Table 11, which has provided a highly competitive performance in such problems. Both EXDE [24] and RDE [38] were chosen because they keep the DE mutation operator intact with respect to its original version. DDE [43] was chosen as a DE with modifications to the original mutation operator and very competitive results and, finally, DSS-MDE [70] was chosen as a recent representative approach. The results of the approaches used for comparison were taken from [70].
DECV reached the feasible region of the search space in every single run for 22 test problems (the exceptions were problems g20 and g22). Regarding the first thirteen problems, DECV was able to consistently reach the best known solution in seven test problems (g04, g05, g06, g08, g09, g11, and g12). Moreover, DECV found the best known solution in some runs in three test problems (g01, g07, and g10), but was unable to provide good results in three test problems (g02, g03, and g13). The overall behavior showed a very competitive performance with respect to the compared algorithms. The number of evaluations required by DECV was 240,000, higher than the 180,000 required by A-DDE [42] and the 225,000 required by DSS-MDE [70] and DDE [43], and lower than the 350,000 computed by RDE [38]. It is important to remark that A-DDE utilizes a self-adaptive mechanism similar to that used in Evolution Strategies [42], which adds extra computational time to the algorithm; DDE requires the careful fine-tuning of extra parameters related to the number of trial vectors generated per target vector and another parameter to control diversity in the population; and DSS-MDE utilizes a dynamic mechanism to deal with the stochastic ranking technique [71]. On the other hand, DECV only joins two DE variants using the same set of parameters and a parameter-free constraint-handling technique, which clearly simplifies the implementation for interested practitioners and researchers. Furthermore, its operation and parameter values are based on empirical evidence supported by statistical tests. In fact, this empirical evidence helps to provide different combinations of variants, such as the opposite combination (called A-DECV): DE/best/1/bin first and DE/rand/1/bin at the end, with the aim of using DE/rand/1/bin for a longer portion of the search (assuming the fast approach to the feasible region by all DE variants) in order to deal better with high-dimensionality problems (as concluded in Experiment 2) such as g01, g02, g07, and g10. The same switch criterion and parameter values were utilized, except for the number of generations, which was set to 5556 in order to perform 500,000 evaluations, with the aim of giving DE/rand/1/bin even more time to work (as suggested in Section 7.7).
The results obtained in 30 independent runs for the six test problems where the first DECV version was not very competitive (g01, g02, g03, g07, g10, and g13) are presented in Table 12 for A-DECV. The improvement in performance is evident. It is worth remarking that the comparison between DECV and A-DECV does not try to find a winner between them. It just aims to show how the variants can be combined to obtain a different behavior, which may result in a better performance in some types of problems.
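Reusing the illustrative decv sketch given earlier in this section, A-DECV amounts to calling it with the variants swapped and the enlarged budget. Problem g06 from Appendix A is used here only as a compact, self-contained example; the lambdas below encode its objective and the sum of positive constraint values.

# A-DECV (sketch): DE/best/1/bin first, then DE/rand/1/bin once 10% of the
# population is feasible; 5556 generations with NP = 90 give roughly 500,000 evaluations.
f_g06 = lambda x: (x[0] - 10) ** 3 + (x[1] - 20) ** 3
viol_g06 = lambda x: (max(0.0, -(x[0] - 5) ** 2 - (x[1] - 5) ** 2 + 100)
                      + max(0.0, (x[0] - 6) ** 2 + (x[1] - 5) ** 2 - 82.81))
best_x, (best_f, best_v) = decv(f_g06, viol_g06, [(13, 100), (0, 100)],
                                NP=90, F=0.9, CR=1.0, max_gens=5556, first="best")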
[TABLE 12 AROUND HERE]

9. Conclusions and Future Work


An empirical analysis of Differential Evolution in constrained numerical search spaces was presented. Four performance measures were used in the experimental design to estimate (1) the probability of generating feasible solutions, (2) the probability of reaching the vicinity of the feasible global optimum or best known solution, (3) the computational cost measured by the average number of evaluations required to find the optimum solution, and (4) the best combination of convergence speed and convergence probability. Three experiments were designed. The first analyzed the performance of four DE variants (DE/rand/1/bin, DE/best/1/bin, DE/target-to-rand/1, and DE/target-to-best/1) in 24 well-known benchmark problems based on their dimensionality and the type of constraints. From the obtained results, it was found that (1) DE, regardless of the variant used, is very capable of generating feasible solutions without complex additional mechanisms to bias the search to the feasible region of the search space; however, DE/target-to-best/1 was the most competitive variant in this regard, (2) DE/rand/1/bin was the most consistent variant in reaching the vicinity of the feasible global optimum or best known solution, and (3) DE/best/1/bin was the variant with the best combination of successful trials and the lowest average number of evaluations used in them.
A second experiment was performed to analyze two parameters, the scale factor and the population size (F and NP), in two competitive variants with different behaviors (DE/rand/1/bin and DE/best/1/bin). Six test problems (plus another six problems reported in [41]) with different features were used. The obtained results suggested that (1) DE/rand/1/bin was less sensitive to the NP and F values, performed better with small-to-medium population sizes, and was the most competitive in high-dimensionality test problems, (2) DE/best/1/bin reported the lowest average number of evaluations in successful trials but required larger populations to provide competitive results and, finally, (3) decreasing the scale factor, even while increasing the population size, did not allow these two DE variants to converge to the global optimum; instead, premature convergence was observed.

From the knowledge provided by the results of the first two experiments, the simple combination of DE/rand/1/bin and DE/best/1/bin into a single approach (DECV) was proposed, and the results obtained in the 24 test problems were compared with those obtained by some DE-based approaches to solve CNOPs. The performance obtained by DECV was similar to that of the algorithms used for comparison. DECV did not add extra complex mechanisms. Instead, it first used DE/rand/1/bin's ability to generate a diverse set of search directions in the whole search space and then switched to the ability of DE/best/1/bin to generate search directions from the best solution once 10% of the population was feasible. Furthermore, an alternative combination of variants improved the performance of DECV in problems where the first combination was not very competitive, showing the flexibility of use of the empirical information provided in this work.
The conclusions obtained in this work highlighted the positive (or negative) influence of the parameter values of DE when solving CNOPs. Therefore, future paths of research include the empirical study of the CR parameter and of the number of difference-vector pairs in constrained numerical optimization. Furthermore, the performance of other DE variants (e.g., DE/target-to-rand/1/bin, DE/target-to-best/1/bin, and DE/rand/1/exp) and of other constraint-handling mechanisms (e.g., penalty functions) will be analyzed. Finally, a parameter control mechanism will be added to deal with the percentage of feasible solutions utilized in DECV and A-DECV, and more test problems will be considered.
Acknowledgments
The first author acknowledges support from CONACyT through project
Number 79809.
Appendix A.
The details of the 24 test problems utilized in this work are the following:
g01
Minimize:

f(\vec{x}) = 5 \sum_{i=1}^{4} x_i - 5 \sum_{i=1}^{4} x_i^2 - \sum_{i=5}^{13} x_i        (A.1)

Subject to:

g_1(\vec{x}) = 2x_1 + 2x_2 + x_{10} + x_{11} - 10 \le 0
g_2(\vec{x}) = 2x_1 + 2x_3 + x_{10} + x_{12} - 10 \le 0
g_3(\vec{x}) = 2x_2 + 2x_3 + x_{11} + x_{12} - 10 \le 0
g_4(\vec{x}) = -8x_1 + x_{10} \le 0
g_5(\vec{x}) = -8x_2 + x_{11} \le 0
g_6(\vec{x}) = -8x_3 + x_{12} \le 0
g_7(\vec{x}) = -2x_4 - x_5 + x_{10} \le 0
g_8(\vec{x}) = -2x_6 - x_7 + x_{11} \le 0
g_9(\vec{x}) = -2x_8 - x_9 + x_{12} \le 0

where 0 \le x_i \le 1 (i = 1, ..., 9), 0 \le x_i \le 100 (i = 10, 11, 12), and 0 \le x_{13} \le 1. The feasible global optimum is located at x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1), where f(x*) = -15. Constraints g_1, g_2, g_3, g_7, g_8, and g_9 are active.
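As an illustration of how such a definition translates into the objective/violation pair that a constraint-handling algorithm actually evaluates, the following sketch encodes g01. Summing the positive parts of the inequality constraints into a single violation value is a common convention and an assumption of this sketch, not something prescribed by the benchmark.

def g01(x):
    """Objective value and total constraint violation for test problem g01 (13 variables)."""
    f = 5 * sum(x[:4]) - 5 * sum(xi ** 2 for xi in x[:4]) - sum(x[4:13])
    g = [
        2 * x[0] + 2 * x[1] + x[9] + x[10] - 10,
        2 * x[0] + 2 * x[2] + x[9] + x[11] - 10,
        2 * x[1] + 2 * x[2] + x[10] + x[11] - 10,
        -8 * x[0] + x[9],
        -8 * x[1] + x[10],
        -8 * x[2] + x[11],
        -2 * x[3] - x[4] + x[9],
        -2 * x[5] - x[6] + x[10],
        -2 * x[7] - x[8] + x[11],
    ]
    violation = sum(max(0.0, gi) for gi in g)  # 0 means the point is feasible
    return f, violation

# The reported optimum: f = -15 with zero violation.
print(g01([1] * 9 + [3, 3, 3, 1]))  # (-15, 0.0)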
g02
Minimize:

f(\vec{x}) = - \left| \frac{\sum_{i=1}^{n} \cos^4(x_i) - 2 \prod_{i=1}^{n} \cos^2(x_i)}{\sqrt{\sum_{i=1}^{n} i x_i^2}} \right|        (A.2)

Subject to:

g_1(\vec{x}) = 0.75 - \prod_{i=1}^{n} x_i \le 0
g_2(\vec{x}) = \sum_{i=1}^{n} x_i - 7.5n \le 0

where n = 20 and 0 \le x_i \le 10 (i = 1, ..., n). The best known solution is located at x* = (3.16246061572185, 3.12833142812967, 3.09479212988791, 3.06145059523469, 3.02792915885555, 2.99382606701730, 2.95866871765285, 2.92184227312450, 0.49482511456933, 0.48835711005490, 0.48231642711865, 0.47664475092742, 0.47129550835493, 0.46623099264167, 0.46142004984199, 0.45683664767217, 0.45245876903267, 0.44826762241853, 0.44424700958760, 0.44038285956317), with f(x*) = -0.80361910412559. Constraint g_1 is close to being active.
g03
Minimize:

f(\vec{x}) = -(\sqrt{n})^n \prod_{i=1}^{n} x_i        (A.3)

Subject to:

h(\vec{x}) = \sum_{i=1}^{n} x_i^2 - 1 = 0

where n = 10 and 0 \le x_i \le 1 (i = 1, ..., n). The feasible global minimum is located at x_i* = 1/\sqrt{n} (i = 1, ..., n), where f(x*) = -1.00050010001000.
g04
Minimize:

f(\vec{x}) = 5.3578547x_3^2 + 0.8356891x_1 x_5 + 37.293239x_1 - 40792.141        (A.4)

Subject to:

g_1(\vec{x}) = 85.334407 + 0.0056858x_2 x_5 + 0.0006262x_1 x_4 - 0.0022053x_3 x_5 - 92 \le 0
g_2(\vec{x}) = -85.334407 - 0.0056858x_2 x_5 - 0.0006262x_1 x_4 + 0.0022053x_3 x_5 \le 0
g_3(\vec{x}) = 80.51249 + 0.0071317x_2 x_5 + 0.0029955x_1 x_2 + 0.0021813x_3^2 - 110 \le 0
g_4(\vec{x}) = -80.51249 - 0.0071317x_2 x_5 - 0.0029955x_1 x_2 - 0.0021813x_3^2 + 90 \le 0
g_5(\vec{x}) = 9.300961 + 0.0047026x_3 x_5 + 0.0012547x_1 x_3 + 0.0019085x_3 x_4 - 25 \le 0
g_6(\vec{x}) = -9.300961 - 0.0047026x_3 x_5 - 0.0012547x_1 x_3 - 0.0019085x_3 x_4 + 20 \le 0

where 78 \le x_1 \le 102, 33 \le x_2 \le 45, and 27 \le x_i \le 45 (i = 3, 4, 5). The feasible global optimum is located at x* = (78, 33, 29.9952560256815985, 45, 36.7758129057882073), where f(x*) = -30665.539. Constraints g_1 and g_6 are active.
g05
Minimize:

f(\vec{x}) = 3x_1 + 0.000001x_1^3 + 2x_2 + (0.000002/3)x_2^3        (A.5)

Subject to:

g_1(\vec{x}) = -x_4 + x_3 - 0.55 \le 0
g_2(\vec{x}) = -x_3 + x_4 - 0.55 \le 0
h_3(\vec{x}) = 1000\sin(-x_3 - 0.25) + 1000\sin(-x_4 - 0.25) + 894.8 - x_1 = 0
h_4(\vec{x}) = 1000\sin(x_3 - 0.25) + 1000\sin(x_3 - x_4 - 0.25) + 894.8 - x_2 = 0
h_5(\vec{x}) = 1000\sin(x_4 - 0.25) + 1000\sin(x_4 - x_3 - 0.25) + 1294.8 = 0

where 0 \le x_1 \le 1200, 0 \le x_2 \le 1200, -0.55 \le x_3 \le 0.55, and -0.55 \le x_4 \le 0.55. The best known solution is located at x* = (679.945148297028709, 1026.06697600004691, 0.118876369094410433, -0.396233485215178266), where f(x*) = 5126.4967140071.
g06
Minimize:

f(\vec{x}) = (x_1 - 10)^3 + (x_2 - 20)^3        (A.6)

Subject to:

g_1(\vec{x}) = -(x_1 - 5)^2 - (x_2 - 5)^2 + 100 \le 0
g_2(\vec{x}) = (x_1 - 6)^2 + (x_2 - 5)^2 - 82.81 \le 0

where 13 \le x_1 \le 100 and 0 \le x_2 \le 100. The feasible global optimum is located at x* = (14.09500000000000064, 0.8429607892154795668), where f(x*) = -6961.81387558015. Both constraints are active.
g07
Minimize:

f(\vec{x}) = x_1^2 + x_2^2 + x_1 x_2 - 14x_1 - 16x_2 + (x_3 - 10)^2 + 4(x_4 - 5)^2 + (x_5 - 3)^2 + 2(x_6 - 1)^2 + 5x_7^2 + 7(x_8 - 11)^2 + 2(x_9 - 10)^2 + (x_{10} - 7)^2 + 45        (A.7)

Subject to:

g_1(\vec{x}) = -105 + 4x_1 + 5x_2 - 3x_7 + 9x_8 \le 0
g_2(\vec{x}) = 10x_1 - 8x_2 - 17x_7 + 2x_8 \le 0
g_3(\vec{x}) = -8x_1 + 2x_2 + 5x_9 - 2x_{10} - 12 \le 0
g_4(\vec{x}) = 3(x_1 - 2)^2 + 4(x_2 - 3)^2 + 2x_3^2 - 7x_4 - 120 \le 0
g_5(\vec{x}) = 5x_1^2 + 8x_2 + (x_3 - 6)^2 - 2x_4 - 40 \le 0
g_6(\vec{x}) = x_1^2 + 2(x_2 - 2)^2 - 2x_1 x_2 + 14x_5 - 6x_6 \le 0
g_7(\vec{x}) = 0.5(x_1 - 8)^2 + 2(x_2 - 4)^2 + 3x_5^2 - x_6 - 30 \le 0
g_8(\vec{x}) = -3x_1 + 6x_2 + 12(x_9 - 8)^2 - 7x_{10} \le 0

where -10 \le x_i \le 10 (i = 1, ..., 10). The feasible global optimum is located at x* = (2.17199634142692, 2.3636830416034, 8.77392573913157, 5.09598443745173, 0.990654756560493, 1.43057392853463, 1.32164415364306, 9.82872576524495, 8.2800915887356, 8.3759266477347), where f(x*) = 24.30620906818. Constraints g_1, g_2, g_3, g_4, g_5, and g_6 are active.
g08
Minimize:

f(\vec{x}) = - \frac{\sin^3(2\pi x_1)\sin(2\pi x_2)}{x_1^3 (x_1 + x_2)}        (A.8)

Subject to:

g_1(\vec{x}) = x_1^2 - x_2 + 1 \le 0
g_2(\vec{x}) = 1 - x_1 + (x_2 - 4)^2 \le 0

where 0 \le x_1 \le 10 and 0 \le x_2 \le 10. The feasible global optimum is located at x* = (1.22797135260752599, 4.24537336612274885), with f(x*) = -0.0958250414180359.
g09
Minimize:

f(\vec{x}) = (x_1 - 10)^2 + 5(x_2 - 12)^2 + x_3^4 + 3(x_4 - 11)^2 + 10x_5^6 + 7x_6^2 + x_7^4 - 4x_6 x_7 - 10x_6 - 8x_7        (A.9)

Subject to:

g_1(\vec{x}) = -127 + 2x_1^2 + 3x_2^4 + x_3 + 4x_4^2 + 5x_5 \le 0
g_2(\vec{x}) = -282 + 7x_1 + 3x_2 + 10x_3^2 + x_4 - x_5 \le 0
g_3(\vec{x}) = -196 + 23x_1 + x_2^2 + 6x_6^2 - 8x_7 \le 0
g_4(\vec{x}) = 4x_1^2 + x_2^2 - 3x_1 x_2 + 2x_3^2 + 5x_6 - 11x_7 \le 0

where -10 \le x_i \le 10 (i = 1, ..., 7). The feasible global optimum is located at x* = (2.33049935147405174, 1.95137236847114592, -0.477541399510615805, 4.36572624923625874, -0.624486959100388983, 1.03813099410962173, 1.5942266780671519), with f(x*) = 680.630057374402. Constraints g_1 and g_4 are active.


g10
Minimize:

f(\vec{x}) = x_1 + x_2 + x_3        (A.10)

Subject to:

g_1(\vec{x}) = -1 + 0.0025(x_4 + x_6) \le 0
g_2(\vec{x}) = -1 + 0.0025(x_5 + x_7 - x_4) \le 0
g_3(\vec{x}) = -1 + 0.01(x_8 - x_5) \le 0
g_4(\vec{x}) = -x_1 x_6 + 833.33252x_4 + 100x_1 - 83333.333 \le 0
g_5(\vec{x}) = -x_2 x_7 + 1250x_5 + x_2 x_4 - 1250x_4 \le 0
g_6(\vec{x}) = -x_3 x_8 + 1250000 + x_3 x_5 - 2500x_5 \le 0

where 100 \le x_1 \le 10000, 1000 \le x_i \le 10000 (i = 2, 3), and 10 \le x_i \le 1000 (i = 4, ..., 8). The feasible global optimum is located at x* = (579.306685017979589, 1359.97067807935605, 5109.97065743133317, 182.01769963061534, 295.601173702746792, 217.982300369384632, 286.41652592786852, 395.601173702746735), with f(x*) = 7049.24802052867. Constraints g_1, g_2, and g_3 are active.
g11
Minimize:

f(\vec{x}) = x_1^2 + (x_2 - 1)^2        (A.11)

Subject to:

h(\vec{x}) = x_2 - x_1^2 = 0

where -1 \le x_1 \le 1 and -1 \le x_2 \le 1. The feasible global optimum is located at x* = (1/\sqrt{2}, 1/2), with f(x*) = 0.7499.
g12
Minimize:

f(\vec{x}) = -\frac{100 - (x_1 - 5)^2 - (x_2 - 5)^2 - (x_3 - 5)^2}{100}        (A.12)

Subject to:

g_1(\vec{x}) = (x_1 - p)^2 + (x_2 - q)^2 + (x_3 - r)^2 - 0.0625 \le 0

where 0 \le x_i \le 10 (i = 1, 2, 3) and p, q, r = 1, 2, ..., 9. The feasible region consists of 9^3 disjoint spheres. A point (x_1, x_2, x_3) is feasible if and only if there exist p, q, r such that the above inequality holds. The feasible global optimum is located at x* = (5, 5, 5), with f(x*) = -1.
g13
Minimize:

f(\vec{x}) = e^{x_1 x_2 x_3 x_4 x_5}        (A.13)

Subject to:

h_1(\vec{x}) = x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2 - 10 = 0
h_2(\vec{x}) = x_2 x_3 - 5x_4 x_5 = 0
h_3(\vec{x}) = x_1^3 + x_2^3 + 1 = 0

where -2.3 \le x_i \le 2.3 (i = 1, 2) and -3.2 \le x_i \le 3.2 (i = 3, 4, 5). The feasible global optimum is located at x* = (-1.71714224003, 1.59572124049468, 1.8272502406271, -0.763659881912867, -0.76365986736498), with f(x*) = 0.053941514041898.
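In practice, equality constraints like those of g13 are usually relaxed into inequalities |h(x)| - ε ≤ 0 with a small tolerance (ε = 1e-4 in the CEC 2006 definitions [28]). The following sketch computes the total violation of g13 under that assumption; the function name is only illustrative.

EPS = 1e-4  # tolerance used to relax equality constraints (assumed, following [28])

def g13_violation(x):
    """Total constraint violation of g13, treating each h_i(x) = 0 as |h_i(x)| <= EPS."""
    h = [
        x[0] ** 2 + x[1] ** 2 + x[2] ** 2 + x[3] ** 2 + x[4] ** 2 - 10,
        x[1] * x[2] - 5 * x[3] * x[4],
        x[0] ** 3 + x[1] ** 3 + 1,
    ]
    return sum(max(0.0, abs(hi) - EPS) for hi in h)

# Prints a value very close to zero for the best known solution listed above.
print(g13_violation([-1.71714224003, 1.59572124049468, 1.8272502406271,
                     -0.763659881912867, -0.76365986736498]))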
g14
Minimize:

f(\vec{x}) = \sum_{i=1}^{10} x_i \left( c_i + \ln \frac{x_i}{\sum_{j=1}^{10} x_j} \right)        (A.14)

Subject to:

h_1(\vec{x}) = x_1 + 2x_2 + 2x_3 + x_6 + x_{10} - 2 = 0
h_2(\vec{x}) = x_4 + 2x_5 + x_6 + x_7 - 1 = 0
h_3(\vec{x}) = x_3 + x_7 + x_8 + 2x_9 + x_{10} - 1 = 0

where 0 < x_i \le 10 (i = 1, ..., 10), c_1 = -6.089, c_2 = -17.164, c_3 = -34.054, c_4 = -5.914, c_5 = -24.721, c_6 = -14.986, c_7 = -24.100, c_8 = -10.708, c_9 = -26.662, and c_{10} = -22.179. The best known solution is located at x* = (0.0406684113216282, 0.147721240492452, 0.783205732104114, 0.00141433931889084, 0.485293636780388, 0.000693183051556082, 0.0274052040687766, 0.0179509660214818, 0.0373268186859717, 0.0968844604336845), with f(x*) = -47.7648884594915.
g15
Minimize:

f(\vec{x}) = 1000 - x_1^2 - 2x_2^2 - x_3^2 - x_1 x_2 - x_1 x_3        (A.15)

Subject to:

h_1(\vec{x}) = x_1^2 + x_2^2 + x_3^2 - 25 = 0
h_2(\vec{x}) = 8x_1 + 14x_2 + 7x_3 - 56 = 0

where 0 \le x_i \le 10 (i = 1, 2, 3). The best known solution is located at x* = (3.51212812611795133, 0.216987510429556135, 3.55217854929179921), with f(x*) = 961.715022289961.
g16
Minimize:

f(\vec{x}) = 0.000117y_{14} + 0.1365 + 0.00002358y_{13} + 0.000001502y_{16} + 0.0321y_{12} + 0.004324y_5 + 0.0001\frac{c_{15}}{c_{16}} + 37.48\frac{y_2}{c_{12}} - 0.0000005843y_{17}        (A.16)

Subject to:

g_1(\vec{x}) = \frac{0.28}{0.72}y_5 - y_4 \le 0
g_2(\vec{x}) = x_3 - 1.5x_2 \le 0
g_3(\vec{x}) = 3496\frac{y_2}{c_{12}} - 21 \le 0
g_4(\vec{x}) = 110.6 + y_1 - \frac{62212}{c_{17}} \le 0
g_5 = 213.1 - y_1 \le 0            g_6 = y_1 - 405.23 \le 0
g_7 = 17.505 - y_2 \le 0           g_8 = y_2 - 1053.6667 \le 0
g_9 = 11.275 - y_3 \le 0           g_{10} = y_3 - 35.03 \le 0
g_{11} = 214.228 - y_4 \le 0       g_{12} = y_4 - 665.585 \le 0
g_{13} = 7.458 - y_5 \le 0         g_{14} = y_5 - 584.463 \le 0
g_{15} = 0.961 - y_6 \le 0         g_{16} = y_6 - 265.916 \le 0
g_{17} = 1.612 - y_7 \le 0         g_{18} = y_7 - 7.046 \le 0
g_{19} = 0.146 - y_8 \le 0         g_{20} = y_8 - 0.222 \le 0
g_{21} = 107.99 - y_9 \le 0        g_{22} = y_9 - 273.366 \le 0
g_{23} = 922.693 - y_{10} \le 0    g_{24} = y_{10} - 1286.105 \le 0
g_{25} = 926.832 - y_{11} \le 0    g_{26} = y_{11} - 1444.046 \le 0
g_{27} = 18.766 - y_{12} \le 0     g_{28} = y_{12} - 537.141 \le 0
g_{29} = 1072.163 - y_{13} \le 0   g_{30} = y_{13} - 3247.039 \le 0
g_{31} = 8961.448 - y_{14} \le 0   g_{32} = y_{14} - 26844.086 \le 0
g_{33} = 0.063 - y_{15} \le 0      g_{34} = y_{15} - 0.386 \le 0
g_{35} = 71084.33 - y_{16} \le 0   g_{36} = -140000 + y_{16} \le 0
g_{37} = 2802713 - y_{17} \le 0    g_{38} = y_{17} - 12146108 \le 0

where

y_1 = x_2 + x_3 + 41.6
c_1 = 0.024x_4 - 4.62
y_2 = \frac{12.5}{c_1} + 12
c_2 = 0.0003535x_1^2 + 0.5311x_1 + 0.08705y_2 x_1
c_3 = 0.052x_1 + 78 + 0.002377y_2 x_1
y_3 = c_2 / c_3
y_4 = 19y_3
c_4 = 0.04782(x_1 - y_3) + \frac{0.1956(x_1 - y_3)^2}{x_2} + 0.6376y_4 + 1.594y_3
c_5 = 100x_2
c_6 = x_1 - y_3 - y_4
c_7 = 0.950 - c_4/c_5
y_5 = c_6 c_7
y_6 = x_1 - y_5 - y_4 - y_3
c_8 = 0.995(y_5 + y_4)
y_7 = c_8 / y_1
y_8 = c_8 / 3798
c_9 = y_7 - \frac{0.0663y_7}{y_8} - 0.3153
y_9 = \frac{96.82}{c_9} + 0.321y_1
y_{10} = 1.29y_5 + 1.258y_4 + 2.29y_3 + 1.71y_6
y_{11} = 1.71x_1 - 0.452y_4 + 0.580y_3
c_{10} = 12.3/752.3
c_{11} = (1.75y_2)(0.995x_1)
c_{12} = 0.995y_{10} + 1998
y_{12} = c_{10}x_1 + c_{11}/c_{12}
y_{13} = c_{12} + 1.75y_2
y_{14} = 3623 + 64.4x_2 + 58.4x_3 + \frac{146312}{y_9 + x_5}
c_{13} = 0.995y_{10} + 60.8x_2 + 48x_4 - 0.1121y_{14} - 5095
y_{15} = y_{13}/c_{13}
y_{16} = 148000 - 331000y_{15} + 40y_{13} - 61y_{15}y_{13}
c_{14} = 2324y_{10} - 28740000y_2
y_{17} = 14130000 - 1328y_{10} - 531y_{11} + c_{14}/c_{12}
c_{15} = \frac{y_{13}}{y_{15}} - \frac{y_{13}}{0.52}
c_{16} = 1.104 - 0.72y_{15}
c_{17} = y_9 + x_5

and where 704.4148 \le x_1 \le 906.3855, 68.6 \le x_2 \le 288.88, 0 \le x_3 \le 134.75, 193 \le x_4 \le 287.0966, and 25 \le x_5 \le 84.1988. The best known solution is located at x* = (705.174537070090537, 68.5999999999999943, 102.899999999999991, 282.324931593660324, 37.5841164258054832), with f(x*) = -1.90515525853479.
g17
Minimize:

f(\vec{x}) = f_1(x_1) + f_2(x_2)        (A.17)

where

f_1(x_1) = 30x_1   if 0 \le x_1 < 300
f_1(x_1) = 31x_1   if 300 \le x_1 < 400

f_2(x_2) = 28x_2   if 0 \le x_2 < 100
f_2(x_2) = 29x_2   if 100 \le x_2 < 200
f_2(x_2) = 30x_2   if 200 \le x_2 < 1000

Subject to:

h_1(\vec{x}) = -x_1 + 300 - \frac{x_3 x_4}{131.078}\cos(1.48477 - x_6) + \frac{0.90798x_3^2}{131.078}\cos(1.47588) = 0
h_2(\vec{x}) = -x_2 - \frac{x_3 x_4}{131.078}\cos(1.48477 + x_6) + \frac{0.90798x_4^2}{131.078}\cos(1.47588) = 0
h_3(\vec{x}) = -x_5 - \frac{x_3 x_4}{131.078}\sin(1.48477 + x_6) + \frac{0.90798x_4^2}{131.078}\sin(1.47588) = 0
h_4(\vec{x}) = 200 - \frac{x_3 x_4}{131.078}\sin(1.48477 - x_6) + \frac{0.90798x_3^2}{131.078}\sin(1.47588) = 0

and where 0 \le x_1 \le 400, 0 \le x_2 \le 1000, 340 \le x_3 \le 420, 340 \le x_4 \le 420, -1000 \le x_5 \le 1000, and 0 \le x_6 \le 0.5236. The best known solution is located at x* = (201.784467214523659, 99.9999999999999005, 383.071034852773266, 420, -10.9076584514292652, 0.0731482312084287128), with f(x*) = 8853.53967480648.
g18
Minimize:

f(\vec{x}) = -0.5(x_1 x_4 - x_2 x_3 + x_3 x_9 - x_5 x_9 + x_5 x_8 - x_6 x_7)        (A.18)

Subject to:

g_1(\vec{x}) = x_3^2 + x_4^2 - 1 \le 0
g_2(\vec{x}) = x_9^2 - 1 \le 0
g_3(\vec{x}) = x_5^2 + x_6^2 - 1 \le 0
g_4(\vec{x}) = x_1^2 + (x_2 - x_9)^2 - 1 \le 0
g_5(\vec{x}) = (x_1 - x_5)^2 + (x_2 - x_6)^2 - 1 \le 0
g_6(\vec{x}) = (x_1 - x_7)^2 + (x_2 - x_8)^2 - 1 \le 0
g_7(\vec{x}) = (x_3 - x_5)^2 + (x_4 - x_6)^2 - 1 \le 0
g_8(\vec{x}) = (x_3 - x_7)^2 + (x_4 - x_8)^2 - 1 \le 0
g_9(\vec{x}) = x_7^2 + (x_8 - x_9)^2 - 1 \le 0
g_{10}(\vec{x}) = x_2 x_3 - x_1 x_4 \le 0
g_{11}(\vec{x}) = -x_3 x_9 \le 0
g_{12}(\vec{x}) = x_5 x_9 \le 0
g_{13}(\vec{x}) = x_6 x_7 - x_5 x_8 \le 0

where -10 \le x_i \le 10 (i = 1, ..., 8) and 0 \le x_9 \le 20. The best known solution is located at x* = (-0.657776192427943163, -0.153418773482438542, 0.323413871675240938, -0.946257611651304398, -0.657776194376798906, -0.753213434632691414, 0.323413874123576972, -0.346462947962331735, 0.59979466285217542), with f(x*) = -0.866025403784439.


g19
Minimize:

f(\vec{x}) = \sum_{j=1}^{5}\sum_{i=1}^{5} c_{ij} x_{(10+i)} x_{(10+j)} + 2\sum_{j=1}^{5} d_j x_{(10+j)}^3 - \sum_{i=1}^{10} b_i x_i        (A.19)

Subject to:

g_j(\vec{x}) = -2\sum_{i=1}^{5} c_{ij} x_{(10+i)} - 3d_j x_{(10+j)}^2 - e_j + \sum_{i=1}^{10} a_{ij} x_i \le 0,    j = 1, ..., 5

where \vec{b} = [-40, -2, -0.25, -4, -4, -1, -40, -60, 5, 1], the remaining values are taken from Table A.1, and 0 \le x_i \le 10 (i = 1, ..., 15). The best known solution is located at x* = (1.66991341326291344e-17, 3.95378229282456509e-16, 3.94599045143233784, 1.06036597479721211e-16, 3.2831773458454161, 9.99999999999999822, 1.12829414671605333e-17, 1.2026194599794709e-17, 2.50706276000769697e-15, 2.24624122987970677e-15, 0.370764847417013987, 0.278456024942955571, 0.523838487672241171, 0.388620152510322781, 0.298156764974678579), with f(x*) = 32.6555929502463.

         j = 1   j = 2   j = 3   j = 4   j = 5
e_j       -15     -27     -36     -18     -12
c_1j       30     -20     -10      32     -10
c_2j      -20      39      -6     -31      32
c_3j      -10      -6      10      -6     -10
c_4j       32     -31      -6      39     -20
c_5j      -10      32     -10     -20      30
d_j         4       8      10       6       2
a_1j      -16       2       0       1       0
a_2j        0      -2       0     0.4       2
a_3j     -3.5       0       2       0       0
a_4j        0      -2       0      -4      -1
a_5j        0      -9      -2       1    -2.8
a_6j        2       0      -4       0       0
a_7j       -1      -1      -1      -1      -1
a_9j        1       2       3       4       5
a_10j       1       1       1       1       1

Table A.1: Data set for test problem g19

g20
Minimize:

f(\vec{x}) = \sum_{i=1}^{24} a_i x_i        (A.20)

Subject to:

g_i(\vec{x}) = \frac{x_i + x_{(i+12)}}{\sum_{j=1}^{24} x_j + e_i} \le 0,    i = 1, 2, 3

g_i(\vec{x}) = \frac{x_{(i+3)} + x_{(i+15)}}{\sum_{j=1}^{24} x_j + e_i} \le 0,    i = 4, 5, 6

h_i(\vec{x}) = \frac{x_{(i+12)}}{b_{(i+12)} \sum_{j=13}^{24} x_j / b_j} - \frac{c_i x_i}{40 b_i \sum_{j=1}^{12} x_j / b_j} = 0,    i = 1, ..., 12

h_{13}(\vec{x}) = \sum_{i=1}^{24} x_i - 1 = 0

h_{14}(\vec{x}) = \sum_{i=1}^{12} \frac{x_i}{d_i} + k \sum_{i=13}^{24} \frac{x_i}{b_i} - 1.671 = 0

where k = (0.7302)(530)(14.7/40), the data set is detailed in Table A.2, and 0 \le x_i \le 10 (i = 1, ..., 24). The best known solution is located at x* = (1.28582343498528086e-18, 4.83460302526130664e-34, 0, 0, 6.30459929660781851e-18, 7.57192526201145068e-34, 5.03350698372840437e-34, 9.28268079616618064e-34, 0, 1.76723384525547359e-17, 3.55686101822965701e-34, 2.99413850083471346e-34, 0.158143376337580827, 2.29601774161699833e-19, 1.06106938611042947e-18, 1.31968344319506391e-18, 0.530902525044209539, 0, 2.89148310257773535e-18, 3.34892126180666159e-18, 0, 0.310999974151577319, 5.41244666317833561e-05, 4.84993165246959553e-16). This solution is slightly infeasible and no feasible solution has been reported so far.

 i    a_i      b_i       c_i      d_i      e_i
 1    0.0693   44.094    123.7    31.244   0.1
 2    0.0577   58.12      31.7    36.12    0.3
 3    0.05     58.12      45.7    34.784   0.4
 4    0.2     137.4       14.7    92.7     0.3
 5    0.26    120.9       84.7    82.7     0.6
 6    0.55    170.9       27.7    91.6     0.3
 7    0.06     62.501     49.7    56.708
 8    0.1      84.94       7.1    82.7
 9    0.12    133.425      2.1    80.8
10    0.18     82.507     17.7    64.517
11    0.1      46.07      0.85    49.4
12    0.09     60.097     0.64    49.1
13    0.0693   44.094
14    0.5777   58.12
15    0.05     58.12
16    0.2     137.4
17    0.26    120.9
18    0.55    170.9
19    0.06     62.501
20    0.1      84.94
21    0.12    133.425
22    0.18     82.507
23    0.1      46.07
24    0.09     60.097

Table A.2: Data set for test problem g20
g21
Minimize:

f(\vec{x}) = x_1        (A.21)

Subject to:

g_1(\vec{x}) = -x_1 + 35x_2^{0.6} + 35x_3^{0.6} \le 0
h_1(\vec{x}) = -300x_3 + 7500x_5 - 7500x_6 - 25x_4 x_5 + 25x_4 x_6 + x_3 x_4 = 0
h_2(\vec{x}) = 100x_2 + 155.365x_4 + 2500x_7 - x_2 x_4 - 25x_4 x_7 - 15536.5 = 0
h_3(\vec{x}) = -x_5 + \ln(-x_4 + 900) = 0
h_4(\vec{x}) = -x_6 + \ln(x_4 + 300) = 0
h_5(\vec{x}) = -x_7 + \ln(-2x_4 + 700) = 0

where 0 \le x_1 \le 1000, 0 \le x_2, x_3 \le 40, 100 \le x_4 \le 300, 6.3 \le x_5 \le 6.7, 5.9 \le x_6 \le 6.4, and 4.5 \le x_7 \le 6.25. The best known solution is located at x* = (193.724510070034967, 5.56944131553368433e-27, 17.3191887294084914, 100.047897801386839, 6.68445185362377892, 5.99168428444264833, 6.21451648886070451), with f(x*) = 193.724510070035.
g22
Minimize:

f(\vec{x}) = x_1        (A.22)

Subject to:

g_1(\vec{x}) = -x_1 + x_2^{0.6} + x_3^{0.6} + x_4^{0.6} \le 0
h_1(\vec{x}) = x_5 - 100000x_8 + 1 \times 10^7 = 0
h_2(\vec{x}) = x_6 + 100000x_8 - 100000x_9 = 0
h_3(\vec{x}) = x_7 + 100000x_9 - 5 \times 10^7 = 0
h_4(\vec{x}) = x_5 + 100000x_{10} - 3.3 \times 10^7 = 0
h_5(\vec{x}) = x_6 + 100000x_{11} - 4.4 \times 10^7 = 0
h_6(\vec{x}) = x_7 + 100000x_{12} - 6.6 \times 10^7 = 0
h_7(\vec{x}) = x_5 - 120x_2 x_{13} = 0
h_8(\vec{x}) = x_6 - 80x_3 x_{14} = 0
h_9(\vec{x}) = x_7 - 40x_4 x_{15} = 0
h_{10}(\vec{x}) = x_8 - x_{11} + x_{16} = 0
h_{11}(\vec{x}) = x_9 - x_{12} + x_{17} = 0
h_{12}(\vec{x}) = -x_{18} + \ln(x_{10} - 100) = 0
h_{13}(\vec{x}) = -x_{19} + \ln(-x_8 + 300) = 0
h_{14}(\vec{x}) = -x_{20} + \ln(x_{16}) = 0
h_{15}(\vec{x}) = -x_{21} + \ln(-x_9 + 400) = 0
h_{16}(\vec{x}) = -x_{22} + \ln(x_{17}) = 0
h_{17}(\vec{x}) = -x_8 - x_{10} + x_{13} x_{18} - x_{13} x_{19} + 400 = 0
h_{18}(\vec{x}) = x_8 - x_9 - x_{11} + x_{14} x_{20} - x_{14} x_{21} + 400 = 0
h_{19}(\vec{x}) = x_9 - x_{12} - 4.60517x_{15} + x_{15} x_{22} + 100 = 0

where 0 \le x_1 \le 20000, 0 \le x_2, x_3, x_4 \le 1 \times 10^6, 0 \le x_5, x_6, x_7 \le 4 \times 10^7, 100 \le x_8 \le 299.99, 100 \le x_9 \le 399.99, 100.01 \le x_{10} \le 300, 100 \le x_{11} \le 400, 100 \le x_{12} \le 600, 0 \le x_{13}, x_{14}, x_{15} \le 500, 0.01 \le x_{16} \le 300, 0.01 \le x_{17} \le 400, and 4.7 \le x_{18}, x_{19}, x_{20}, x_{21}, x_{22} \le 6.25. The best known solution is located at x* = (236.430975504001054, 135.82847151732463, 204.818152544824585, 6446.54654059436416, 3007540.83940215595, 4074188.65771341929, 32918270.5028952882, 130.075408394314167, 170.817294970528621, 299.924591605478554, 399.258113423595205, 330.817294971142758, 184.51831230897065, 248.64670239647424, 127.658546694545862, 269.182627528746707, 160.000016724090955, 5.29788288102680571, 5.13529735903945728, 5.59531526444068827, 5.43444479314453499, 5.07517453535834395), with f(x*) = 236.430975504001.
g23
Minimize:

f(\vec{x}) = -9x_5 - 15x_8 + 6x_1 + 16x_2 + 10(x_6 + x_7)        (A.23)

Subject to:

g_1(\vec{x}) = x_9 x_3 + 0.02x_6 - 0.025x_5 \le 0
g_2(\vec{x}) = x_9 x_4 + 0.02x_7 - 0.015x_8 \le 0
h_1(\vec{x}) = x_1 + x_2 - x_3 - x_4 = 0
h_2(\vec{x}) = 0.03x_1 + 0.01x_2 - x_9(x_3 + x_4) = 0
h_3(\vec{x}) = x_3 + x_6 - x_5 = 0
h_4(\vec{x}) = x_4 + x_7 - x_8 = 0

where 0 \le x_1, x_2, x_6 \le 300, 0 \le x_3, x_5, x_7 \le 100, 0 \le x_4, x_8 \le 200, and 0.01 \le x_9 \le 0.03. The best known solution is located at x* = (0.00510000000000259465, 99.9947000000000514, 9.01920162996045897e-18, 99.9999000000000535, 0.000100000000027086086, 2.75700683389584542e-14, 99.9999999999999574, 200, 0.0100000100000100008), with f(x*) = -400.055099999999584.
g24
Minimize:

f(\vec{x}) = -x_1 - x_2        (A.24)

Subject to:

g_1(\vec{x}) = -2x_1^4 + 8x_1^3 - 8x_1^2 + x_2 - 2 \le 0
g_2(\vec{x}) = -4x_1^4 + 32x_1^3 - 88x_1^2 + 96x_1 + x_2 - 36 \le 0

where 0 \le x_1 \le 3 and 0 \le x_2 \le 4. The feasible global minimum is located at x* = (2.329520197477623, 3.17849307411774), with f(x*) = -5.50801327159536. This problem has a feasible region consisting of two disconnected sub-regions.

References
[1] H.J. Barbosa, A.C. Lemonge, A new adaptive penalty scheme for genetic
algorithms, Information Sciences 156 (34) (2003) 215251.


[2] H.S. Bernardino, H.J.C. Barbosa, A.C.C. Lemonge, L.G. Fonseca, A new
hybrid AIS-GA for constrained optimization problems in mechanical engineering, in: 2008 Congress on Evolutionary Computation (CEC2008),
IEEE Service Center, Hong Kong, 2008, pp. 14551462.
[3] J. Brest, V. Zumer, M.S. Maucec, Self-adaptive differential evolution
algorithm in constrained real-parameter optimization, in: 2006 IEEE
Congress on Evolutionary Computation (CEC2006), IEEE, Vancouver,
BC, Canada, 2006, pp. 919926.
[4] L. Cagnina, S. Esquivel, C.A. Coello-Coello, A bi-population PSO with
a shake-mechanism for solving constrained numerical optimization, in:
2007 IEEE Congress on Evolutionary Computation (CEC2007), IEEE
Press, Singapore, 2007, pp. 670676.
[5] C.A. Coello Coello, Use of a self-adaptive penalty approach for engineering optimization problems, Computers in Industry 41 (2) (2000)
113127.
[6] C.A. Coello Coello, Theoretical and numerical constraint handling techniques used with evolutionary algorithms: A survey of the state of the
art, Computer Methods in Applied Mechanics and Engineering 191 (1112) (2002) 12451287.
[7] W. Conover, Practical Nonparametric Statistics, John Wiley and Sons,
3rd edition, 1999.
[8] N. Cruz-Cortes, Handling constraints in global optimization using artificial immune systems, in: E. Mezura-Montes (Ed.), Constraint-Handling
in Evolutionary Optimization, volume 198, Springer-Verlag, Studies in
Computational Intelligence Series, ISBN:978-3-642-00618-0, 2009, pp.
237262.
[9] W. Daniel, Biostatistics: Basic Concepts and Methodology for the
Health Sciences, John Wiley and Sons, 2002.
[10] K. Deb, Optimization for Engineering Design, Prentice-Hall, India,
1995.


[11] K. Deb, An efficient constraint handling method for genetic algorithms,


Computer Methods in Applied Mechanics and Engineering 186 (2-4)
(2000) 311338.
[12] A. Eiben, J.E. Smith, Introduction to Evolutionary Computing, Natural
Computing Series, Springer Verlag, 2003.
[13] G. Eiben, M. Schut, New ways to calibrate evolutionary algorithms,
in: P. Siarry, Z. Michalewicz (Eds.), Advances in Metaheuristics for
Hard Optimization, Natural Computing, Springer, Heidelberg, Germany, 2008, pp. 153177.
[14] R. Gamperle, S. Muller, P. Koumoutsakos, A parameter study for differential evolution, in: Proceedings of the 3rd WSEAS Int.Conf. on Evolutionary Computation, WSEAS Press, Interlaken, Switzerland, 2002, pp.
293298.
[15] W. Gong, Z. Cai, A multiobjective differential evolution algorithm for
constrained optimization, in: 2008 Congress on Evolutionary Computation (CEC2008), IEEE Service Center, Hong Kong, 2008, pp. 181188.
[16] S.B. Hamida, M. Schoenauer, ASCHEA: New results using adaptive
segregational constraint handling, in: Proceedings of the Congress on
Evolutionary Computation 2002 (CEC2002), volume 1, IEEE Service
Center, Piscataway, New Jersey, 2002, pp. 884889.
[17] Q. He, L. Wang, F.Z. Huang, Nonlinear constrained optimization by enhanced co-evolutionary PSO, in: 2008 Congress on Evolutionary Computation (CEC2008), IEEE Service Center, Hong Kong, 2008, pp. 83
89.
[18] P.Y. Ho, K. Shimizu, Evolutionary constrained optimization using an
addition of ranking method and a percentage-based tolerance value adjustment scheme, Information Sciences 177 (1415) (2007) 29853004.
[19] F. Huang, L. Wang, Q. He, An effective co-evolutionary differential evolution for constrained optimization, Applied Mathematics and Computation 186 (1) (2007) 340356.


[20] F.Z. Huang, L. Wang, Q. He, A hybrid differential evolution with double
populations for constrained optimization, in: 2008 Congress on Evolutionary Computation (CEC2008), IEEE Service Center, Hong Kong,
2008, pp. 1825.
[21] V.L. Huang, A.K. Qin, P.N. Suganthan, Self-adaptive differential evolution algorithm for constrained real-parameter optimization, in: 2006
IEEE Congress on Evolutionary Computation (CEC2006), IEEE, Vancouver, BC, Canada, 2006, pp. 324331.
[22] S. Kukkonen, J. Lampinen, Constrained real-parameter optimization
with generalized differential evolution, in: 2006 IEEE Congress on Evolutionary Computation (CEC2006), IEEE, Vancouver, BC, Canada,
2006, pp. 911918.
[23] H. Kumar-Singh, T. Ray, W. Smith, C-PSA: Constrained Pareto simulated annealing for constrained multi-objective optimization, Information Sciences 180 (13) (2010) 24992513.
[24] J. Lampinen, A constraint handling approach for the differential evolution algorithm, in: Proceedings of the Congress on Evolutionary Computation 2002 (CEC2002), volume 2, IEEE Service Center, Piscataway,
New Jersey, 2002, pp. 14681473.
[25] R. Landa Becerra, C.A. Coello Coello, Cultured differential evolution
for constrained optimization, Computer Methods in Applied Mechanics
and Engineering 195 (3336) (2006) 43034322.
[26] G. Leguizamon, C.A. Coello Coello, A boundary search based ACO
algorithm coupled with stochastic ranking, in: 2007 IEEE Congress on
Evolutionary Computation (CEC2007), IEEE Press, Singapore, 2007,
pp. 165172.
[27] L.D. Li, X. Li, X. Yu, A multi-objective constraint-handling method
with pso algorithm for constrained engineering optimization problems,
in: 2008 Congress on Evolutionary Computation (CEC2008), IEEE
Service Center, Hong Kong, 2008, pp. 15281535.
[28] J.J. Liang, T. Runarsson, E. Mezura-Montes, M. Clerc, P. Suganthan,
C.A. Coello Coello, K. Deb, Problem Definitions and Evaluation Criteria for the CEC 2006 Special Session on Constrained Real-Parameter

Optimization, Technical Report, Nanyang Technological University, Singapore, December, 2005.


[29] Y.C. Lin, K.S. Hwang, F.S. Wang, Hybrid differential evolution with
multiplier updating method for nonlinear constrained optimization problems, in: Proceedings of the Congress on Evolutionary Computation
2002 (CEC2002), volume 1, IEEE Service Center, Piscataway, New Jersey, 2002, pp. 872877.
[30] B. Liu, H. Ma, X. Zhang, B. Liu, Y. Zhou, A memetic co-evolutionary
differential evolution algorithm for constrained optimization, in: 2007
IEEE Congress on Evolutionary Computation (CEC2007), IEEE Press,
Singapore, 2007, pp. 29963002.
[31] H. Liu, Z. Cai, Y. Wang, A new constrained optimization evolutionary
algorithm by using good point set, in: 2007 IEEE Congress on Evolutionary Computation (CEC2007), IEEE Press, Singapore, 2007, pp.
12471254.
[32] J. Liu, J. Lampinen, A fuzzy adaptive differential evolution algorithm,
Soft Computing 9 (6) (2005) 448462.
[33] R. Mallipeddi, S. Mallipeddi, P. Suganthan, Ensemble strategies with
adaptive evolutionary programming, Information Sciences 180 (9) (2010)
15711581.
[34] E. Mezura-Montes (Ed.), Constraint-Handling in Evolutionary Optimization, volume 198 of Studies in Computational Intelligence, SpringerVerlag, 2009.
[35] E. Mezura-Montes, C.A. Coello Coello, Identifying on-line behavior and
some sources of difficulty in two competitive approaches for constrained
optimization, in: 2005 IEEE Congress on Evolutionary Computation
(CEC2005), volume 2, IEEE Press, Edinburgh, Scotland, 2005, pp.
14771484.
[36] E. Mezura-Montes, C.A. Coello Coello, A simple multimembered evolution strategy to solve constrained optimization problems, IEEE Transactions on Evolutionary Computation 9 (1) (2005) 117.


[37] E. Mezura-Montes, C.A. Coello Coello, Constrained optimization via


multiobjective evolutionary algorithms, in: J. Knowles, D. Corne,
K. Deb (Eds.), Multiobjective Problems Solving from Nature: From
Concepts to Applications, Springer-Verlag, Natural Computing Series,
2008, ISBN: 978-3-540-72963-1., 2008, pp. 5376.
[38] E. Mezura-Montes, C.A. Coello Coello, E.I. Tun-Morales, Simple feasibility rules and differential evolution for constrained optimization, in:
R. Monroy, G. Arroyo-Figueroa, L.E. Sucar, H. Sossa (Eds.), Proceedings of the 3rd Mexican International Conference on Artificial Intelligence (MICAI2004), Springer Verlag, Heidelberg, Germany, 2004, pp.
707716. Lecture Notes in Artificial Intelligence No. 2972.
[39] E. Mezura-Montes, J.I. Flores-Mendoza, Improved particle swarm optimization in constrained numerical search spaces, in: R. Chiong (Ed.),
Nature-Inspired Algorithms for Optimization, volume 193, SpringerVerlag, Studies in Computational Intelligence Series, 2009, ISBN: 9783-540-72963-1., 2009, pp. 299332.
[40] E. Mezura-Montes, B.C. Lopez-Ramrez, Comparing bio-inspired algorithms in constrained optimization problems, in: 2007 IEEE Congress on
Evolutionary Computation (CEC2007), IEEE Press, Singapore, 2007,
pp. 662669.
[41] E. Mezura-Montes, M.E. Miranda-Varela, R. del Carmen Gomez-Ramon, Additional Results of the Empirical Study on Differential Evolution in Constrained Numerical Optimization, Technical Report TR-EMM-01-2010, LANIA Educational Center, April, 2010. Available at:
http://www.lania.mx/emezura/tr-EMM-01-2010.pdf.
[42] E. Mezura-Montes, A.G. Palomeque-Ortiz, Parameter control in differential evolution for constrained optimization, in: 2009 Congress on Evolutionary Computation (CEC2009), IEEE Service Center, Trondheim,
Norway, 2009, pp. 13751382.
[43] E. Mezura-Montes, J. Velazquez-Reyes, C.A.C. Coello, Promising infeasibility and multiple offspring incorporated to differential evolution for
constrained optimization, in: H.G. Beyer, et al. (Eds.), Proceedings of
the Genetic and Evolutionary Computation Conference (GECCO2005),


volume 1, Washington DC, USA, ACM Press, New York, 2005, pp. 225
232. ISBN 1-59593-010-8.
[44] E. Mezura-Montes, J. Velazquez-Reyes, C.A. Coello Coello, Comparing
differential evolution models for global optimization, in: 2006 Genetic
and Evolutionary Computation Conference (GECCO2006), volume 1,
pp. 485492.
[45] E. Mezura-Montes, J. Velazquez-Reyes, C.A. Coello Coello, Modified differential evolution for constrained optimization, in: 2006 IEEE
Congress on Evolutionary Computation (CEC2006), IEEE, Vancouver,
BC, Canada, 2006, pp. 332339.
[46] Z. Michalewicz, M. Schoenauer, Evolutionary algorithms for constrained parameter optimization problems, Evolutionary Computation 4 (1)
(1996) 132.
[47] A.E. Muñoz-Zavala, A. Hernandez-Aguirre, E.R. Villa-Diharce,
S. Botello-Rionda, PESO+ for constrained optimization, in: 2006 IEEE
Congress on Evolutionary Computation (CEC2006), IEEE, Vancouver,
BC, Canada, 2006, pp. 935942.
[48] A. Oyama, K. Shimoyama, K. Fujii, New constraint-handling method for
multi-objective and multi-constraint evolutionary optimization, Transactions of the Japan Society for Aeronautical and Space Sciences 50
(167) (2007) 5662.
[49] A.I. Oyman, K. Deb, H.G. Beyer, An alternative constraint handling
method for evolution strategies, in: Proceedings of the Congress on Evolutionary Computation 1999 (CEC99), volume 1, IEEE Service Center,
Piscataway, New Jersey, 1999, pp. 612619.
[50] K. Price, R. Storn, J. Lampinen, Differential Evolution: A Practical
Approach to Global Optimization, Natural Computing Series, SpringerVerlag, 2005.
[51] K.V. Price, J.I. Ronkkonen, Comparing the uni-modal scaling performance of global and local selection in a mutation-only differential evolution algorithm, in: 2006 IEEE Congress on Evolutionary Computation
(CEC2006), IEEE Press, Vancouver, Canada, 2006, pp. 73877394.

[52] T.P. Runarsson, X. Yao, Stochastic ranking for constrained evolutionary


optimization, IEEE Transactions on Evolutionary Computation 4 (3)
(2000) 284294.
[53] A.E. Smith, D.W. Coit, Constraint handling techniques: penalty functions, in: T. Bäck, D.B. Fogel, Z. Michalewicz (Eds.), Handbook of
Evolutionary Computation, Oxford University Press and Institute of
Physics Publishing, 1997, pp. C 5.2:1C 5.2:6.
[54] T. Takahama, S. Sakai, Constrained optimization by the constrained
differential evolution with gradient-based mutation and feasible elites,
in: 2006 IEEE Congress on Evolutionary Computation (CEC2006),
IEEE, Vancouver, BC, Canada, 2006, pp. 308315.
[55] T. Takahama, S. Sakai, Constrained optimization by constrained differential evolution with dynamic ε-level control, in: U.K. Chakraborty
(Ed.), Advances in Differential Evolution, Springer, Berlin, 2008, pp.
139154. ISBN 978-3-540-68827-3.
[56] T. Takahama, T. Sakai, Solving difficult constrained optimization problems by the constrained differential evolution with gradient-based
mutation, in: E. Mezura-Montes (Ed.), Constraint-Handling in Evolutionary Optimization, volume 198, Springer-Verlag, Studies in Computational Intelligence Series, ISBN:978-3-642-00618-0, 2009, pp. 5172.
[57] M.F. Tasgetiren, P.N. Suganthan, A multi-populated differential evolution algorithm for solving constrained optimization problems, in: 2006
IEEE Congress on Evolutionary Computation (CEC2006), IEEE, Vancouver, BC, Canada, 2006, pp. 340354.
[58] B. Tessema, G.G. Yen, A self-adaptive penalty function based algorithm for constrained optimization, in: 2006 IEEE Congress on Evolutionary Computation (CEC2006), IEEE, Vancouver, BC, Canada, 2006,
pp. 950957.
[59] A.S.S.M.B. Ullah, R. Sarker, D. Cornforth, C. Lokan, An agent-based memetic algorithm (AMA) for solving constrained optimization problems, in: 2007 IEEE Congress on Evolutionary Computation
(CEC2007), IEEE Press, Singapore, 2007, pp. 9991006.


[60] B.W. Wah, T. Wang, Y. Shang, Z. Wu, Improving the performance of


weighted Lagrange-multiplier methods for nonlinear constrained optimization, Information Sciences 124 (14) (2000) 241272.
[61] Y. Wang, Z. Cai, G. Guo, Y. Zhou, Multiobjective optimization and
hybrid evolutionary algorithm to solve constrained optimization problems, IEEE Transactions on Systems, Man and Cybernetics Part B
Cybernetics 37 (3) (2007) 560575.
[62] Y. Wang, Z. Cai, Y. Zhou, Z. Fan, Constrained optimization based on
hybrid evolutionary algorithm and adaptive constraint-handling technique, Structural and Multidisciplinary Optimization 37 (4) (2009) 395
413.
[63] Y. Wang, Z. Cai, Y. Zhou, W. Zeng, An adaptive tradeoff model for constrained evolutionary optimization, IEEE Transactions on Evolutionary
Computation 12 (1) (2008) 8092.
[64] Y. Wang, H. Liu, Z. Cai, Y. Zhou, An orthogonal design based constrained evolutionary optimization algorithm, Engineering Optimization 39
(6) (2007) 715736.
[65] E.F. Wanner, F.G. Guimaraes, R.H.C. Takahashi, P.J. Flemming, Local
search with quadratic approximation in genetic algorithms for expensive
optimization problems, in: 2007 IEEE Congress on Evolutionary Computation (CEC2007), IEEE Press, Singapore, 2007, pp. 677683.
[66] D.H. Wolpert, W.G. Macready, No free lunch theorems for optimization,
IEEE Transactions on Evolutionary Computation 1 (1) (1997) 6782.
[67] J. Xiao, J. Xu, Z. Shao, C. Jiang, L. Pan, A genetic algorithm for solving
multi-constrained function optimization problems based on ks function,
in: 2007 IEEE Congress on Evolutionary Computation (CEC2007),
IEEE Press, Singapore, 2007, pp. 44974501.
[68] Y. Yu, Z.H. Zhou, On the usefulness of infeasible solutions in evolutionary search: A theoretical study, in: 2008 Congress on Evolutionary
Computation (CEC2008), IEEE Service Center, Hong Kong, 2008, pp.
835840.


[69] S. Zeng, H. Shi, H. Li, G. Chen, L. Ding, L. Kang, A lower-dimensionalsearch evolutionary algorithm and its application in constrained optimization problem, in: 2007 IEEE Congress on Evolutionary Computation (CEC2007), IEEE Press, Singapore, 2007, pp. 12551260.
[70] M. Zhang, W. Luo, X. Wang, Differential evolution with dynamic
stochastic selection for constrained optimization, Information Sciences
178 (15) (2008) 30433074.
[71] Q. Zhang, S. Zeng, R. Wang, H. Shi, G. Chen, L. Ding, L. Kang, Constrained optimization by the evolutionary algorithm with lower dimensional crossover and gradient-based mutation, in: 2008 Congress on Evolutionary Computation (CEC2008), IEEE Service Center, Hong Kong,
2008, pp. 273279.
[72] Y. Zhou, J. He, A runtime analysis of evolutionary algorithms for
constrained optimization problems, IEEE Transactions on Evolutionary
Computation 11 (5) (2007) 608619.
[73] K. Zielinski, R. Laur, Constrained single-objective optimization using
differential evolution, in: 2006 IEEE Congress on Evolutionary Computation (CEC2006), IEEE, Vancouver, BC, Canada, 2006, pp. 927934.
[74] K. Zielinski, R. Laur, Stopping criteria for differential evolution in constrained single-objective optimization, in: U.K. Chakraborty (Ed.), Advances in Differential Evolution, Springer, Berlin, 2008, pp. 111138.
ISBN 978-3-540-68827-3.
[75] K. Zielinski, S.P. Vudathu, R. Laur, Influence of different deviations allowed for equality constraints on particle swarm optimization
and differential evolution, in: N. Krasnogor, G. Nicosia, M. Pavone,
D. Pelta (Eds.), Nature Inspired Cooperative Strategies for Optimization, Springer, Berlin, 2008, pp. 249259. ISBN 978-3-540-78986-4.
[76] K. Zielinski, X. Wang, R. Laur, Comparison of adaptive approaches for
differential evolution, in: G. Rudolph, T. Jansen, S. Lucas, C. Poloni,
N. Beume (Eds.), Parallel Problem Solving from NaturePPSN X,
Springer. Lecture Notes in Computer Science Vol. 5199, Dortmund, Germany, 2008, pp. 641650.


Table Captions
Table 1: DE variants used in this study. j_rand is a random integer generated between [1, n], where n is the number of variables of the problem. rand_j[0, 1] is a real number generated at random between 0 and 1. Both numbers are generated using a uniform distribution. u_{i,g+1} is the trial vector (child vector), x_{r0,g} is the base vector chosen at random from the current population, x_{best,g} is the best vector in the population, x_{i,g} is the target vector (parent vector), and x_{r1,g} and x_{r2,g} are used to generate the difference vector.
Table 2: Details of the 24 test problems [28]. n is the number of decision variables, ρ = |F|/|S| is the estimated ratio between the feasible region and the search space, LI is the number of linear inequality constraints, NI the number of nonlinear inequality constraints, LE the number of linear equality constraints, and NE the number of nonlinear equality constraints. a is the number of active constraints at the optimum.
Table 3: Classification of problems for the first experiment based on the
number of decision variables.
Table 4: Classification of problems for the first experiment based on the
type of constraints.
Table 5: FP values obtained by each DE variant on each test problem.
Best results are remarked with boldface.
Table 6: P values obtained by each DE variant on each test problem.
Best results are remarked with boldface.
Table 7: AFES values obtained by each DE variant on each test problem.
Best results are remarked with boldface. "-" means that the performance
measure was not defined for this problem/variant.
Table 8: SP values obtained by each DE variant on each test problem.
Best results are remarked with boldface. "-" means that the performance
measure was not defined for this problem/variant.


Table 9: a) Test problems with a different dimensionality. b) Test problems with different types of constraints.
Table 10: Statistical results (B: Best, M: Mean, W: Worst, SD: Standard Deviation) obtained by DECV with respect to those provided by state-of-the-art approaches on 13 benchmark problems. Values in boldface mean that the global optimum or best known solution was reached; values in italics mean that the obtained result is better (but not the optimal or best known) with respect to the approaches compared.
Table 11: Statistical results (B: Best, M: Mean, W: Worst, SD: Standard Deviation) obtained by DECV and A-DDE in the last 11 benchmark problems. Values in boldface mean that the global optimum or best known solution was reached.
Table 12: Statistical results (B: Best, M: Mean, W: Worst, SD: Standard Deviation) obtained by the alternative DECV (called A-DECV) in those problems where the original DECV provided good but not competitive results. Values in boldface mean that the global optimum or best known solution was reached.


Tables on individual pages


Variant and trial vector generation

DE/rand/1/bin:
  u_{j,i,g+1} = x_{j,r0,g} + F(x_{j,r1,g} - x_{j,r2,g})   if rand_j[0,1] < CR or j = j_rand
  u_{j,i,g+1} = x_{j,i,g}                                 otherwise

DE/best/1/bin:
  u_{j,i,g+1} = x_{j,best,g} + F(x_{j,r1,g} - x_{j,r2,g}) if rand_j[0,1] < CR or j = j_rand
  u_{j,i,g+1} = x_{j,i,g}                                 otherwise

DE/target-to-rand/1:
  u_{j,i,g+1} = x_{j,i,g} + F(x_{j,r0,g} - x_{j,i,g}) + F(x_{j,r1,g} - x_{j,r2,g})

DE/target-to-best/1:
  u_{j,i,g+1} = x_{j,i,g} + F(x_{j,best,g} - x_{j,i,g}) + F(x_{j,r1,g} - x_{j,r2,g})
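The four variants of Table 1 differ only in how each coordinate of the trial vector is built. The following minimal Python sketch mirrors those expressions (the population is a list of vectors, each a list of floats; indices r0, r1, r2 are mutually distinct and different from the target index i; all names are illustrative only).

import random

def trial_vector(variant, pop, i, best, F, CR):
    """Builds one trial vector u for target pop[i] following Table 1.
    `variant` is one of "rand/1/bin", "best/1/bin", "target-to-rand/1", "target-to-best/1"."""
    n = len(pop[i])
    r0, r1, r2 = random.sample([k for k in range(len(pop)) if k != i], 3)
    jrand = random.randrange(n)
    u = pop[i][:]                       # start from the target (parent) vector
    for j in range(n):
        diff = F * (pop[r1][j] - pop[r2][j])
        if variant == "rand/1/bin":
            if random.random() < CR or j == jrand:
                u[j] = pop[r0][j] + diff
        elif variant == "best/1/bin":
            if random.random() < CR or j == jrand:
                u[j] = pop[best][j] + diff
        elif variant == "target-to-rand/1":   # no binomial crossover in this variant
            u[j] = pop[i][j] + F * (pop[r0][j] - pop[i][j]) + diff
        else:                                 # "target-to-best/1"
            u[j] = pop[i][j] + F * (pop[best][j] - pop[i][j]) + diff
    return u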


Prob.  n   Type of function   ρ          LI  NI  LE  NE   a
g01    13  quadratic           0.0111%    9   0   0   0   6
g02    20  nonlinear          99.9971%    0   2   0   0   1
g03    10  polynomial          0.0000%    0   0   0   1   1
g04     5  quadratic          52.1230%    0   6   0   0   2
g05     4  cubic               0.0000%    2   0   0   3   3
g06     2  cubic               0.0066%    0   2   0   0   2
g07    10  quadratic           0.0003%    3   5   0   0   6
g08     2  nonlinear           0.8560%    0   2   0   0   0
g09     7  polynomial          0.5121%    0   4   0   0   2
g10     8  linear              0.0010%    3   3   0   0   6
g11     2  quadratic           0.0000%    0   0   0   1   1
g12     3  quadratic           4.7713%    0   1   0   0   0
g13     5  nonlinear           0.0000%    0   0   0   3   3
g14    10  nonlinear           0.0000%    0   0   3   0   3
g15     3  quadratic           0.0000%    0   0   1   1   2
g16     5  nonlinear           0.0204%    4  34   0   0   4
g17     6  nonlinear           0.0000%    0   0   0   4   4
g18     9  quadratic           0.0000%    0  12   0   0   6
g19    15  nonlinear          33.4761%    0   5   0   0   0
g20    24  linear              0.0000%    0   6   2  12  16
g21     7  linear              0.0000%    0   1   0   5   6
g22    22  linear              0.0000%    0   1   8  11  19
g23     9  linear              0.0000%    0   2   3   1   6
g24     2  linear             79.6556%    0   2   0   0   2
Class    Number of variables   Problems
High     10 - 20               g01, g02, g03, g07, g14, g19, g20, g22
Medium   5 - 9                 g04, g09, g10, g13, g16, g17, g18, g21, g23
Low      2 - 4                 g05, g06, g08, g11, g12, g15, g24
Type of constraints           Problems
Only inequalities             g01, g02, g04, g06, g07, g08, g09, g10, g12, g16, g18, g19, g24
Only equalities               g03, g11, g13, g14, g15, g17
Inequalities and equalities   g05, g20, g21, g22, g23
Problem  DE/rand/1/bin  DE/best/1/bin  DE/target-to-rand/1  DE/target-to-best/1
g01      1              1              1                    1
g02      1              1              1                    1
g03      1              0.9            0.83                 1
g04      1              1              1                    1
g05      0.97           0.93           0.77                 1
g06      1              1              1                    1
g07      1              1              1                    1
g08      1              1              1                    1
g09      1              1              1                    1
g10      1              1              1                    1
g11      1              1              1                    1
g12      1              1              1                    1
g13      0.87           0.87           0.3                  0.97
g14      1              0.93           0.43                 1
g15      1              1              1                    1
g16      1              1              1                    1
g17      1              0.93           0.87                 1
g18      1              1              1                    1
g19      1              1              1                    1
g21      1              0.97           0.97                 1
g23      0.90           0.90           0.17                 0.97
g24      1              1              1                    1
Problem  DE/rand/1/bin  DE/best/1/bin  DE/target-to-rand/1  DE/target-to-best/1
g01      1              0.8            1                    0.87
g02      0.03           0              0.13                 0
g03      0              0.03           0                    0
g04      1              1              1                    1
g05      1              1              0.6                  0.27
g06      1              1              1                    1
g07      1              0.37           1                    1
g08      1              1              1                    1
g09      1              0.93           1                    1
g10      1              0.2            1                    0.67
g11      1              0.97           1                    1
g12      1              1              1                    1
g13      0              0.27           0                    0.03
g14      0.93           0.67           0.1                  0.43
g15      1              1              0.8                  0.3
g16      1              1              1                    1
g17      0              0              0.03                 0
g18      1              0.8            1                    0.97
g19      0              0.93           0                    1
g21      0.9            0.5            0.63                 0.43
g23      0.5            0.5            0                    0.43
g24      1              1              1                    1
Problem  DE/rand/1/bin  DE/best/1/bin  DE/target-to-rand/1  DE/target-to-best/1
g01      361679.33      37135.17       311840.07            33770.04
g02      401419.00      -              472004.25            -
g03      -              104859.00      -                    -
g04      41756.13       22949.40       40342.37             21687.03
g05      233141.67      88544.47       340144.89            64530.75
g06      16902.00       11886.30       16677.87             18429.37
g07      298298.50      59669.55       237150.83            59828.50
g08      2597.90        1732.83        2553.07              1832.67
g09      77154.70       28459.21       62958.03             27866.17
g10      205181.10      75590.50       170220.43            51686.05
g11      22051.90       9649.76        76508.90             31771.77
g12      7976.63        3903.03        11003.67             7330.00
g13      -              169700.63      -                    316734.00
g14      229520.11      70108.05       233873.33            95154.08
g15      101919.40      69464.63       221034.54            184372.00
g16      36011.43       16112.70       32605.73             15506.43
g17      -              -              266434.00            -
g18      221071.67      53699.04       229835.00            42043.41
g19      -              122569.11      -                    86005.70
g21      105550.63      44098.13       143494.00            47643.38
g23      416715.00      170003.73      -                    182492.77
g24      7165.30        4780.37        6994.37              4669.67
Problem | DE/rand/1/bin | DE/best/1/bin | DE/target-to-rand/1 | DE/target-to-best/1
g01 | 3.62E+05 | 4.64E+04 | 3.12E+05 | 3.90E+04
g02 | 1.20E+07 | -        | 3.54E+06 | -
g03 | -        | 3.15E+06 | -        | -
g04 | 4.18E+04 | 2.29E+04 | 4.03E+04 | 2.17E+04
g05 | 2.33E+05 | 8.85E+04 | 5.67E+05 | 2.42E+05
g06 | 1.69E+04 | 1.19E+04 | 1.67E+04 | 1.84E+04
g07 | 2.98E+05 | 1.63E+05 | 2.37E+05 | 5.98E+04
g08 | 2.60E+03 | 1.73E+03 | 2.55E+03 | 1.83E+03
g09 | 7.72E+04 | 3.05E+04 | 6.30E+04 | 2.79E+04
g10 | 2.05E+05 | 3.78E+05 | 1.70E+05 | 7.75E+04
g11 | 2.21E+04 | 9.98E+03 | 7.65E+04 | 3.18E+04
g12 | 7.98E+03 | 3.90E+03 | 1.10E+04 | 7.33E+03
g13 | -        | 6.36E+05 | -        | 9.50E+06
g14 | 2.46E+05 | 1.05E+05 | 2.34E+06 | 2.20E+05
g15 | 1.02E+05 | 6.95E+04 | 2.76E+05 | 6.15E+05
g16 | 3.60E+04 | 1.61E+04 | 3.26E+04 | 1.55E+04
g17 | -        | -        | 7.99E+06 | -
g18 | 2.21E+05 | 6.71E+04 | 2.30E+05 | 4.35E+04
g19 | -        | 1.31E+05 | -        | 8.60E+04
g21 | 1.17E+05 | 8.82E+04 | 2.27E+05 | 1.10E+05
g23 | 8.33E+05 | 3.40E+05 | -        | 4.21E+05
g24 | 7.17E+03 | 4.78E+03 | 6.99E+03 | 4.67E+03
(- : no value reported for that variant in that problem)
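The scientific-notation values in this last table are consistent with the success performance (SP) measure, i.e., the average number of evaluations of successful runs (AFES) divided by the success rate. This relationship is an inference from the values themselves rather than a caption taken from the tables, and the helper below is only a hypothetical sketch of it.

def success_performance(afes, success_rate):
    # SP = AFES / success rate; infinite when no run succeeded
    return float("inf") if success_rate == 0 else afes / success_rate

# For instance, an AFES of 401419.00 with a success rate of 1/30 (about 0.03)
# gives SP ~= 1.20E+07, matching the corresponding entries above.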


a)
Problem | Dimensionality
g02     | High
g21     | Medium
g06     | Low

b)
Problem | Type of constraints
g10     | Inequalities
g13     | Equalities
g23     | Both of them


Problem (BKS) | Stat | RDE [38] | EXDE [24] | DDE [43] | A-DDE [42] | DSS-MDE [70] | DECV
g01 (-15.000) | B | -15.000 | -15.000 | -15.000 | -15.000 | -15.000 | -15.000
 | M | -14.792 | -15.000 | -15.000 | -15.000 | -15.000 | -14.855
 | W | -12.743 | -15.000 | -15.000 | -15.000 | -15.000 | -13.000
 | SD | NA | NA | 1.00E-09 | 7.00E-06 | 1.30E-10 | 4.59E-01
g02 (-0.803619) | B | -0.803619 | NA | -0.803619 | -0.803605 | -0.803619 | -0.704009
 | M | -0.746236 | NA | -0.798079 | -0.771090 | -0.786970 | -0.569458
 | W | -0.302179 | NA | -0.751742 | -0.609853 | -0.728531 | -0.238203
 | SD | NA | NA | 1.01E-02 | 3.66E-02 | 1.50E-02 | 9.51E-02
g03 (-1.000) | B | -1.000 | -1.025 | -1.000 | -1.000 | -1.005 | -0.461
 | M | -0.640 | -1.025 | -1.000 | -1.000 | -1.005 | -0.134
 | W | -0.029 | -1.025 | -1.000 | -1.000 | -1.005 | -0.002
 | SD | NA | NA | 0 | 9.30E-12 | 1.90E-08 | 1.17E-01
g04 (-30665.539) | B | -30665.539 | -31025.600 | -30665.539 | -30665.539 | -30665.539 | -30665.539
 | M | -30592.154 | -31025.600 | -30665.539 | -30665.539 | -30665.539 | -30665.539
 | W | -29986.214 | -31025.600 | -30665.539 | -30665.539 | -30665.539 | -30665.539
 | SD | NA | NA | 0 | 3.20E-13 | 2.70E-11 | 1.56E-06
g05 (5126.497) | B | 5126.497 | 5126.484 | 5126.497 | 5126.497 | 5126.497 | 5126.497
 | M | 5218.729 | 5126.484 | 5126.497 | 5126.497 | 5126.497 | 5126.497
 | W | 5502.410 | 5126.484 | 5126.497 | 5126.497 | 5126.497 | 5126.497
 | SD | NA | NA | 0 | 2.10E-11 | 0 | 0
g06 (-6961.814) | B | -6961.814 | -6961.814 | -6961.814 | -6961.814 | -6961.814 | -6961.814
 | M | -6367.575 | -6961.814 | -6961.814 | -6961.814 | -6961.814 | -6961.814
 | W | -2236.950 | -6961.814 | -6961.814 | -6961.814 | -6961.814 | -6961.814
 | SD | NA | NA | 0 | 2.11E-12 | 0 | 0
g07 (24.306) | B | 24.306 | 24.306 | 24.306 | 24.306 | 24.306 | 24.306
 | M | 104.599 | 24.306 | 24.306 | 24.306 | 24.306 | 24.794
 | W | 1120.541 | 24.307 | 24.306 | 24.306 | 24.306 | 29.511
 | SD | NA | NA | 8.22E-09 | 4.20E-05 | 7.50E-07 | 1.37E+00
g08 (-0.095825) | B | -0.095825 | -0.095825 | -0.095825 | -0.095825 | -0.095825 | -0.095825
 | M | -0.091292 | -0.095825 | -0.095825 | -0.095825 | -0.095825 | -0.095825
 | W | -0.027188 | -0.095825 | -0.095825 | -0.095825 | -0.095825 | -0.095825
 | SD | NA | NA | 0 | 9.10E-10 | 4.00E-17 | 4.23E-17
g09 (680.630) | B | 680.630 | 680.630 | 680.630 | 680.630 | 680.630 | 680.630
 | M | 692.472 | 680.630 | 680.630 | 680.630 | 680.630 | 680.630
 | W | 839.780 | 680.630 | 680.630 | 680.630 | 680.630 | 680.630
 | SD | NA | NA | 0 | 1.15E-10 | 2.90E-13 | 3.45E-07
g10 (7049.248) | B | 7049.248 | 7049.248 | 7049.248 | 7049.248 | 7049.248 | 7049.248
 | M | 8842.660 | 7049.248 | 7049.266 | 7049.248 | 7049.249 | 7103.548
 | W | 15580.370 | 7049.248 | 7049.617 | 7049.248 | 7049.255 | 7808.980
 | SD | NA | NA | 4.45E-02 | 3.23E-04 | 1.40E-03 | 1.48E+02
g11 (0.75) | B | 0.75 | 0.75 | 0.75 | 0.75 | 0.749 | 0.75
 | M | 0.76 | 0.75 | 0.75 | 0.75 | 0.749 | 0.75
 | W | 0.87 | 0.75 | 0.75 | 0.75 | 0.749 | 0.75
 | SD | NA | NA | 0 | 5.35E-15 | 0 | 1.12E-16
g12 (-1.000) | B | -1.000 | NA | -1.000 | -1.000 | -1.000 | -1.000
 | M | -1.000 | NA | -1.000 | -1.000 | -1.000 | -1.000
 | W | -1.000 | NA | -1.000 | -1.000 | -1.000 | -1.000
 | SD | NA | NA | 0 | 4.10E-09 | 0 | 0
g13 (0.053942) | B | 0.053866 | NA | 0.053941 | 0.053942 | 0.053942 | 0.059798
 | M | 0.747227 | NA | 0.069336 | 0.079627 | 0.053942 | 0.382401
 | W | 2.259875 | NA | 0.438803 | 0.438803 | 0.053942 | 0.999094
 | SD | NA | NA | 7.58E-02 | 1.00E-13 | 1.00E-13 | 2.68E-01


Problem (BKS) | Statistic | A-DDE | DECV
g14 (-47.764888) | B | -47.764888 | -47.764888
 | M | -47.764131 | -47.722542
 | W | -47.764064 | -47.036510
 | SD | 9.0E-06 | 1.62E-01
g15 (961.715022) | B | 961.715022 | 961.715022
 | M | 961.715022 | 961.715022
 | W | 961.715022 | 961.715022
 | SD | 0 | 2.31E-13
g16 (-1.905155) | B | -1.905155 | -1.905155
 | M | -1.905155 | -1.905155
 | W | -1.905155 | -1.905149
 | SD | 0 | 1.10E-06
g17 (8853.539675) | B | 8853.540000 | 8853.541289
 | M | 8854.664 | 8919.936362
 | W | 8858.874 | 8938.571060
 | SD | 1.43E+00 | 2.59E+01
g18 (-0.866025) | B | -0.866025 | -0.866025
 | M | -0.866025 | -0.859657
 | W | -0.866025 | -0.674981
 | SD | 0 | 3.48E-02
g19 (32.655593) | B | 32.655593 | 32.655593
 | M | 32.658000 | 32.660587
 | W | 32.665000 | 32.785360
 | SD | 1.72E-03 | 2.37E-02
g20 (NA) | B | NA | NA
 | M | NA | NA
 | W | NA | NA
 | SD | NA | NA
g21 (193.724510) | B | 193.724510 | 193.724510
 | M | 193.724510 | 198.090578
 | W | 193.726000 | 324.702842
 | SD | 2.60E-04 | 2.39E+01
g22 (236.430976) | B | NA | NA
 | M | NA | NA
 | W | NA | NA
 | SD | NA | NA
g23 (-400.0551) | B | -400.055052 | -400.055093
 | M | -391.415000 | -392.029610
 | W | -367.452000 | -342.524522
 | SD | 9.13E+00 | 1.24E+01
g24 (-5.508013) | B | -5.508013 | -5.508013
 | M | -5.508013 | -5.508013
 | W | -5.508013 | -5.508013
 | SD | 3.12E-14 | 2.71E-15

Problem (BKS) | Statistic | A-DECV
g01 (-15.000) | B | -15.000
 | M | -14.999
 | W | -14.999
 | SD | 1.00E-06
g02 (-0.803619) | B | -0.803592
 | M | -0.785055
 | W | -0.748354
 | SD | 0.012178
g03 (-1.000) | B | -1.000
 | M | -0.331
 | W | -0.000
 | SD | 3.45E-01
g07 (24.306) | B | 24.306
 | M | 24.306
 | W | 24.306
 | SD | 0
g10 (7049.248) | B | 7049.248
 | M | 7049.248
 | W | 7049.248
 | SD | 4.63E-12
g13 (0.053942) | B | 0.053942
 | M | 0.336336
 | W | 0.443497
 | SD | 1.73E-01

Figure captions
Figure 1: DE/rand/1/bin pseudocode. randj[0, 1] is a function that returns a real number between 0 and 1. randint[min, max] is a function that returns an integer number between min and max. NP, MAX GEN, CR, and F are user-defined parameters. n is the dimensionality of the problem. The marked steps may be changed from variant to variant as indicated in Table 1.
Figure 2: DE/rand/1/bin graphical example. ~xi is the target vector, ~xr0 is the base vector chosen at random, and ~xr1 and ~xr2 (also chosen at random) are used to generate the difference vector so as to define a search direction. The black square represents the mutant vector, which can be the location of the trial vector generated after performing recombination. The two filled squares represent the other two possible locations for the trial vector after recombination.
Figure 3: DE/best/1/bin graphical example. ~xi is the target vector, ~xbest is the base vector (the best vector so far in the population), and ~xr1 and ~xr2 (chosen at random) are used to generate the difference vector so as to define a search direction. The black square represents the mutant vector, which can be the location of the trial vector generated after performing recombination. The two filled squares represent the other two possible locations for the trial vector after recombination.
Figure 4: DE/target-to-rand/1 graphical example. ~xi is the target vector, ~xr0 is the base vector chosen at random, and the difference between them defines a first search direction. ~xr1 and ~xr2 (also chosen at random) are used to generate the difference vector so as to define a second search direction. The trial vector will be located in the black square.
Figure 5: DE/target-to-best/1 graphical example. ~xi is the target vector, ~xbest is the base vector (the best vector so far in the population), and the difference between them defines a first search direction. ~xr1 and ~xr2 (chosen at random) are used to generate the difference vector so as to define a second search direction. The trial vector will be located in the black square. (The four mutation expressions illustrated in Figures 2-5 are collected in a short summary after this list of captions.)


Figure 6: Radial graphic for those test problems where the AFES values were less than 80,000 for all variants.
Figure 7: Radial graphic for those test problems where the SP values were less than 80,000 for all variants.
Figure 8: Results obtained in the three performance measures by DE/rand/1/bin
in problem g02.
Figure 9: Results obtained in the three performance measures by DE/best/1/bin
in problem g02.
Figure 10: Results obtained in the three performance measures by DE/rand/1/bin
in problem g21.
Figure 11: Results obtained in the three performance measures by DE/best/1/bin
in problem g21.
Figure 12: Results obtained in the three performance measures by DE/rand/1/bin
in problem g06.
Figure 13: Results obtained in the three performance measures by DE/best/1/bin
in problem g06.
Figure 14: Results obtained in the three performance measures by DE/rand/1/bin
in problem g10.
Figure 15: Results obtained in the three performance measures by DE/best/1/bin
in problem g10.
Figure 16: Results obtained in the three performance measures by DE/rand/1/bin
in problem g13.
Figure 17: Results obtained in the three performance measures by DE/best/1/bin
in problem g13.
Figure 18: Results obtained in the three performance measures by DE/rand/1/bin
in problem g23.

Figure 19: Results obtained in the three performance measures by DE/best/1/bin in problem g23.
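For quick reference, the mutation expressions depicted in Figures 2-5 can be written compactly as below. This is a LaTeX sketch assembled from the captions above; the symbols \vec{v}_i for the mutant vector and \vec{u}_i for the trial vector are editorial shorthand, not labels taken from the figures.

\begin{align*}
\text{DE/rand/1 (Fig.~2):}           &\quad \vec{v}_i = \vec{x}_{r0} + F\,(\vec{x}_{r1} - \vec{x}_{r2})\\
\text{DE/best/1 (Fig.~3):}           &\quad \vec{v}_i = \vec{x}_{best} + F\,(\vec{x}_{r1} - \vec{x}_{r2})\\
\text{DE/target-to-rand/1 (Fig.~4):} &\quad \vec{u}_i = \vec{x}_{i} + F\,(\vec{x}_{r0} - \vec{x}_{i}) + F\,(\vec{x}_{r1} - \vec{x}_{r2})\\
\text{DE/target-to-best/1 (Fig.~5):} &\quad \vec{u}_i = \vec{x}_{i} + F\,(\vec{x}_{best} - \vec{x}_{i}) + F\,(\vec{x}_{r1} - \vec{x}_{r2})
\end{align*}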


Figures on individual pages


Begin
  g = 0
  Create a random initial population ~x_{i,g} ∀i, i = 1, ..., NP
  Evaluate f(~x_{i,g}) ∀i, i = 1, ..., NP
  For g = 1 to MAX GEN Do
    For i = 1 to NP Do
      Select randomly r0 ≠ r1 ≠ r2 ≠ i
      jrand = randint[1, n]
      For j = 1 to n Do
        If (randj[0, 1] < CR or j = jrand) Then
          u_{j,i,g+1} = x_{j,r0,g} + F (x_{j,r1,g} - x_{j,r2,g})
        Else
          u_{j,i,g+1} = x_{j,i,g}
        End If
      End For
      If (f(~u_{i,g+1}) ≤ f(~x_{i,g})) Then
        ~x_{i,g+1} = ~u_{i,g+1}
      Else
        ~x_{i,g+1} = ~x_{i,g}
      End If
    End For
    g = g + 1
  End For
End
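As a complement to the pseudocode above, the following is a minimal, unconstrained Python sketch of DE/rand/1/bin for minimization. The objective function, bounds handling, and parameter values are illustrative assumptions; the constraint-handling mechanisms studied in the paper are not included here.

import random

def de_rand_1_bin(f, bounds, NP=50, F=0.7, CR=0.9, max_gen=200):
    # Minimal DE/rand/1/bin minimizing f over box constraints `bounds`
    # (a list of (lower, upper) pairs). Returns the best vector found.
    n = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    fit = [f(x) for x in pop]
    for _ in range(max_gen):
        for i in range(NP):
            # Choose r0 != r1 != r2 != i at random
            r0, r1, r2 = random.sample([j for j in range(NP) if j != i], 3)
            jrand = random.randrange(n)
            trial = pop[i][:]
            for j in range(n):
                if random.random() < CR or j == jrand:
                    trial[j] = pop[r0][j] + F * (pop[r1][j] - pop[r2][j])
            ft = f(trial)
            if ft <= fit[i]:  # greedy replacement, as in the pseudocode
                pop[i], fit[i] = trial, ft
    best = min(range(NP), key=lambda i: fit[i])
    return pop[best]

# Example: minimize the sphere function in 5 dimensions
# best = de_rand_1_bin(lambda x: sum(v * v for v in x), [(-5, 5)] * 5)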


[Figure 2: DE/rand/1/bin graphical example; the mutant vector is located at ~xr0 + F(~xr1 - ~xr2).]

[Figure 3: DE/best/1/bin graphical example; the mutant vector is located at ~xbest + F(~xr1 - ~xr2).]

[Figure 4: DE/target-to-rand/1 graphical example; the trial vector is located at ~xi + F(~xr0 - ~xi) + F(~xr1 - ~xr2).]

[Figure 5: DE/target-to-best/1 graphical example; the trial vector is located at ~xi + F(~xbest - ~xi) + F(~xr1 - ~xr2).]

[Figures 6-19 appear here on individual pages; Figures 8-19 each consist of three panels (a), (b), and (c).]
