Abstract
Motivated by the recent success of diverse approaches based on Differential Evolution (DE) to solve constrained numerical optimization problems,
in this paper, the performance of this novel evolutionary algorithm is evaluated. Three experiments are designed to study the behavior of different DE
variants on a set of benchmark problems by using different performance measures proposed in the specialized literature. The first experiment analyzes the
behavior of four DE variants in 24 test functions considering dimensionality
and the type of constraints of the problem. The second experiment presents
a more in-depth analysis on two DE variants by varying two parameters (the
scale factor F and the population size NP ), which control the convergence
of the algorithm. From the results obtained, a simple but competitive combination of two DE variants is proposed and compared against state-of-the-art
DE-based algorithms for constrained optimization in the third experiment.
The study in this paper shows (1) important information about the behavior of DE in constrained search spaces and (2) the role of this knowledge in
the correct combination of variants, based on their capabilities, to generate
simple but competitive approaches.
Keywords: Evolutionary Algorithms, Differential Evolution, Constrained Optimization
Email addresses: emezura@lania.mx (Efrén Mezura-Montes),
emiranda@bianni.unistmo.edu.mx (Mariana Edith Miranda-Varela),
gyr_17@hotmail.com (Rubí del Carmen Gómez-Ramón)
May 9, 2011
The final goal of this work is to use the knowledge obtained in a simple approach that combines the strengths of two DE variants without resorting to complex additional mechanisms.
The paper is organized as follows: In Section 2 the problem of interest is
stated. Section 3 introduces the DE algorithm, while in Section 4 a review of
the state-of-the-art on DE to solve CNOPs is included. Section 5 presents the
analysis proposed in this work. After that, in Section 6 the first experiment
on four DE variants in 24 test problems is explained and the obtained results
are discussed. An analysis of two DE parameters on two competitive (but
with different behaviors) DE variants is presented in Section 7. Section 8
comprises the combination of two DE variants into a single approach and
its performance is compared with respect to some DE-based state-of-the-art approaches. Finally, in Section 9, the findings of the current work are
summarized and the future paths of research are shown.
2. Statement of the problem
The CNOP, known also as the general nonlinear programming problem
[10], can be defined, without loss of generality, as:
Find ~x which minimizes

    f(~x)    (1)

subject to

    g_i(~x) ≤ 0,  i = 1, . . . , m    (2)

    h_j(~x) = 0,  j = 1, . . . , p    (3)
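For illustration, a CNOP of this form can be encoded as an objective function plus lists of constraint functions. The two-variable problem below is hypothetical, not one of the benchmark functions used later in the paper:

```python
# A hypothetical CNOP used only for illustration: minimize f(x) subject to
# one inequality constraint g1(x) <= 0 and one equality constraint h1(x) = 0.
def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def g1(x):  # inequality constraint: feasible when g1(x) <= 0
    return x[0] ** 2 - x[1]

def h1(x):  # equality constraint: feasible when h1(x) = 0
    return x[0] + x[1] - 2.0

inequality_constraints = [g1]
equality_constraints = [h1]

def is_feasible(x, eps=1e-4):
    """A point is feasible if every g_i(x) <= 0 and every |h_j(x)| <= eps,
    where equality constraints are relaxed by a small tolerance eps."""
    return (all(gi(x) <= 0.0 for gi in inequality_constraints)
            and all(abs(hj(x)) <= eps for hj in equality_constraints))

print(is_feasible([1.0, 1.0]))  # g1 = 0 and h1 = 0 at (1, 1) -> True
```

Relaxing the equality constraints with a tolerance eps is the standard practice adopted later in the experimental setup.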
3. Differential Evolution
DE is a simple, but powerful algorithm that simulates natural evolution
combined with a mechanism to generate multiple search directions based on
the distribution of solutions (vectors) in the current population. Each vector
i, i = 1, . . . , NP, in the population at generation g, ~x_{i,g} = [x_{1,i,g}, . . . , x_{n,i,g}]^T, called the target vector at the moment of reproduction, will be able to generate one offspring, called the trial vector ~u_{i,g}. This trial vector is generated
as follows: First of all, a search direction is defined by calculating a difference
vector between a pair of vectors ~xr1 ,g and ~xr2 ,g , both of them chosen at
random from the population. This difference vector is also scaled by using a
user-defined parameter called scale factor F > 0 [50]. This scaled difference
vector is then added to a third vector ~xr0 ,g , called base vector. As a result, a
new vector is obtained, known as the mutant vector. After that, this mutant
vector is recombined, based on a user-defined parameter called the crossover probability 0 ≤ CR ≤ 1, with the target vector (also called the parent vector) by using discrete recombination, usually uniform, i.e., binomial crossover, to
generate a trial (child) vector. The CR value determines how similar the
trial vector will be with respect to the mutant vector.
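The mutation and crossover steps just described can be sketched as follows (a minimal illustration of DE/rand/1/bin; boundary handling and the selection step are omitted):

```python
import random

def trial_vector(population, i, F=0.9, CR=0.9):
    """Generate the trial vector u_i for target vector i with DE/rand/1/bin.

    r0, r1, r2 are mutually distinct indices, all different from i:
    x_r0 is the base vector and F * (x_r1 - x_r2) is the scaled
    difference vector.
    """
    n = len(population[i])
    r0, r1, r2 = random.sample(
        [k for k in range(len(population)) if k != i], 3)
    x_r0, x_r1, x_r2 = population[r0], population[r1], population[r2]
    # Differential mutation: mutant v = x_r0 + F * (x_r1 - x_r2)
    mutant = [x_r0[j] + F * (x_r1[j] - x_r2[j]) for j in range(n)]
    # Binomial crossover: take the mutant component with probability CR;
    # index j_rand guarantees at least one component comes from the mutant.
    j_rand = random.randrange(n)
    target = population[i]
    return [mutant[j] if (random.random() < CR or j == j_rand) else target[j]
            for j in range(n)]

pop = [[random.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(10)]
u = trial_vector(pop, 0)
```

A larger CR therefore makes the trial vector inherit more components from the mutant, matching the description above.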
Regarding DE variants, in [50] Price et al. present a notation to identify
different ways to generate new vectors on DE. The most popular of them
(and explained in the previous paragraph) is called DE/rand/1/bin, where
the first term means Differential Evolution, the second term indicates how
the base vector is chosen (at random in this case), the number in the third
term means how many vector differences (i.e. vector pairs) will contribute
in the differential mutation (one pair in this case). Finally, the fourth term
shows the type of crossover utilized (bin from binomial in this variant). The
detailed pseudocode of DE/rand/1/bin is presented in Figure 1 and a graphical example is explained in Figure 2.
[FIGURE 1 AROUND HERE]
This study is focused on four DE variants. Two of them are DE/rand/1/bin, explained before, and DE/best/1/bin, where the only difference with
respect to DE/rand/1/bin is that the base vector is not chosen at random;
instead, it is the best vector in the current population. Unlike the first two variants considered in this study, the next two use an arithmetic recombination.
They are DE/target-to-rand/1 and DE/target-to-best/1 which only vary in
the way the base vector is chosen (at random and the best vector in the
population, respectively). The details of each variant is presented in Table 1
and graphical examples for the remaining three, besides DE/rand/1/bin, are
shown in Figures 3 for the DE/best/1/bin, Figure 4 for DE/target-to-rand/1
and, finally, Figure 5 for DE/target-to-best/1.
[TABLE 1 AROUND HERE]
[FIGURE 2 AROUND HERE]
[FIGURE 3 AROUND HERE]
[FIGURE 4 AROUND HERE]
[FIGURE 5 AROUND HERE]
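Based on the notation of Price et al., the mutation rules of the four variants studied here are commonly written as follows. This sketch assumes the standard formulations, with a combination coefficient K for the arithmetic-recombination variants (often set equal to F); the paper's exact definitions are those of Table 1:

```python
# Each function returns the new vector for target i. x_i is the target,
# x_base a randomly chosen base vector, x_r1 and x_r2 the difference pair,
# and x_best the best vector of the current population.

def rand_1(x_i, x_base, x_r1, x_r2, x_best, F, K):
    # DE/rand/1: base vector chosen at random
    return [x_base[j] + F * (x_r1[j] - x_r2[j]) for j in range(len(x_i))]

def best_1(x_i, x_base, x_r1, x_r2, x_best, F, K):
    # DE/best/1: base vector is the best vector of the current population
    return [x_best[j] + F * (x_r1[j] - x_r2[j]) for j in range(len(x_i))]

def target_to_rand_1(x_i, x_base, x_r1, x_r2, x_best, F, K):
    # DE/target-to-rand/1: arithmetic recombination between the target and
    # a random vector, plus one scaled difference
    return [x_i[j] + K * (x_base[j] - x_i[j]) + F * (x_r1[j] - x_r2[j])
            for j in range(len(x_i))]

def target_to_best_1(x_i, x_base, x_r1, x_r2, x_best, F, K):
    # DE/target-to-best/1: as above, but moving toward the best vector
    return [x_i[j] + K * (x_best[j] - x_i[j]) + F * (x_r1[j] - x_r2[j])
            for j in range(len(x_i))]
```

The first two variants are followed by binomial crossover, while the target-to variants already embed an arithmetic recombination and use no additional crossover, as noted above.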
4. Related Work
As it was pointed out in Section 1, DE has provided highly competitive
results in constrained numerical search spaces. Therefore, it is a very popular
algorithm among researchers and practitioners.
One of the first attempts reported was made by Lampinen [24] with
DE/rand/1/bin, where superiority of feasible points and dominance in the
constraints space were used to bias the search to the feasible global optimum. The approach is known as Extended DE (EXDE). An extension of
Lampinen's work was presented by Kukkonen and Lampinen [22], where
DE/rand/1/bin was then used to solve constrained multiobjective optimization problems with the same constraint-handling mechanism.
Lin et al. [29] used DE/rand/2/bin with local selection (the target and
the base vector are the same) and Lagrange functions to handle constraints,
besides special mechanisms for diversity control and convergence speed.
Mezura-Montes et al. [38] used DE/rand/1/bin with three feasibility rules
originally proposed by Deb to be used with other EAs [11, 49] in an approach
called RDE. This algorithm was improved by allowing each target vector to
generate more than one trial vector in [43], called DDE. In a later work,
Mezura-Montes et al. [45] proposed a new DE variant where the combination
of the best vector and the target vector is incorporated into the differential
with GAs by Coello [5]. Huang et al. [20], in their new approach, used
DE/rand/1/bin with two sub-populations again, but now with a different
goal. The first subpopulation evolved with the aforementioned DE variant,
while the second subpopulation stored feasible solutions to help other vectors
to become feasible. Local search with the Nelder-Mead Simplex method was utilized. Instead of using penalty functions, Deb's rules were considered for constraint-handling.
Liu et al. [30] used DE/best/1/bin with a co-evolutionary approach where
two sub-populations are considered. One of them aimed to minimize the objective function while the other tried to satisfy the constraints of the problem. Gaussian mutation was used as a local search operator and individuals
in both sub-populations could migrate from one to another.
Zhang et al. [70] used the stochastic ranking method [52] with DE/rand/1/bin in an approach called Dynamic Stochastic Selection DE (DSS-DE) to
solve constrained problems.
Gong and Cai [15] used DE/rand/1/bin and Pareto dominance for constraint-handling. They utilized an external file coupled with ε-dominance
to store promising solutions. The initial population was generated with an
orthogonal method. A special operator, orthogonal crossover, was used to
improve the local search ability of the algorithm.
Regarding empirical comparisons with DE variants in constrained optimization, Mezura-Montes and López-Ramírez [40] compared DE/rand/1/bin with a global-best PSO, a real-coded GA, and a (μ + λ)-ES in the solution of
13 benchmark problems. DE provided the best results in this study. Zielinsky
et al. [76] compared different adaptive approaches based on DE in constrained optimization. Other comparisons of DE variants, but in unconstrained
optimization, were made by Gämperle et al. [14], where convenient parameter values were found for each test problem, and by Mezura-Montes et al. [44], where the good performance of each DE variant was linked to a specific type of unconstrained problem.
5. Proposed analysis
From the summary of the state-of-the-art presented in Section 4, it is clear that DE/rand/1/bin is used in more than half of the proposed approaches [15, 19, 20, 22, 24, 25, 38, 42, 70, 73, 74, 75], while similar variants, such as DE/best/1/bin, are barely preferred [30]. The most popular constraint-handling mechanism used with DE is the set of feasibility
rules proposed by Deb [3, 20, 21, 22, 24, 25, 38, 42, 45, 73, 74, 75], while
penalty functions [19, 57] and multiobjective concepts [15, 30] are sparingly
utilized. There are several approaches which use local search (gradient-based mutation, Sequential Quadratic Programming, and the Nelder-Mead Simplex, among
others) [15, 20, 21, 30, 54, 55, 56]. On the other hand, there is a tendency
to combine different variants in one single approach by adding self-adaptive mechanisms [3], sub-populations [57], or mathematical programming methods [21]. Finally, the most popular combination is DE/rand/1/bin with Deb's feasibility rules [20, 25, 38, 42, 73, 74, 75] or DE/rand/1/bin with a slight variant of Deb's rules [22, 24].
From the review of the current research in constrained optimization in
Section 1, it is clear that DE is a convenient algorithm to be modified or
combined to solve CNOPs. Furthermore, based on the previous paragraph
in this section, it is also evident that one variant and one constraint-handling
mechanism have been extensively used. However, to the best of the authors' knowledge, little knowledge about the behavior of DE's original variants (without additional mechanisms and/or parameters) has been presented in the specialized literature.
Based on the aforementioned, this work aims precisely to provide more
knowledge of the capabilities of DE (by itself) to reach the feasible region of
the search space and, even more, the vicinity of the feasible global optimum
(or best known solution), the number of evaluations required to do that (i.e.,
computational cost), and the best combination between computational cost
and consistency on generating solutions close to the optimum value.
Furthermore, two DE parameters related to the convergence of the
algorithm (the scale factor F and the population size NP ) are studied in two
DE variants with competitive performances, but with different behaviors, in
order to (1) detect convenient values for them, based on the features of the
optimization problem and (2) provide some insights on the differences in
the behavior of DE with respect to unconstrained numerical search spaces, reported by Price & Rönkkönen [51].
From the information obtained in the analysis of DE when solving CNOPs, a convenient combination of two DE variants is proposed, and the results obtained are compared with respect to those provided by some DE-based algorithms. This proposed approach does not add complex mechanisms. Instead, it conveniently uses two variants and their strengths in a simple approach.
The experimental design utilized in this paper is partially based on a previous study on DE mutations for global unconstrained optimization proposed
in [51]. However, some adaptations were made based on the type of problem considered in this work. In fact, this study only considers the mutation operator in DE variants. Crossover analysis is out of the scope of the present research and is considered as part of the future work.
Three experiments are presented. In the first one, four DE variants
are compared. One of them is the most popular in evolutionary constrained numerical optimization: DE/rand/1/bin. The second one is barely
used: DE/best/1/bin. The third and fourth variants have been used just
in combination with other variants to solve CNOPs: DE/target-to-rand/1
and DE/target-to-best/1. The selection of variants was made with the goal of comparing popular variants used to solve CNOPs against those whose use has not been explored. In this way, the findings may help to assess the utility of each variant when solving CNOPs. Nonparametric statistical tests are
used to add more confidence to the observed behaviors.
The second experiment analyzes two competitive DE variants, with different behaviors, in order to establish suitable values for two DE parameters related to the convergence of the approach (F and NP).
The third experiment tests the combination of two DE variants in different problems and the final results are compared against state-of-the-art
approaches.
Different aspects of DE are not considered in this study, such as the number of pairs of difference vectors (fixed to one) and, as mentioned before, the crossover effect (i.e., CR = 1). These values remain fixed in both experiments
and their studies are considered as part of the future work detailed at the
end of the paper.
In order to keep the DE variants free from extra parameters related to the constraint-handling mechanism, and also to be consistent with the most popular technique reported in the specialized literature, the feasibility criteria proposed by Deb [11] are added as a comparison method (instead of using just the objective function value, as indicated in Figure 1) between the target and trial vectors. The three criteria are the following [11]:
1. If the two vectors are feasible, the one with the best value of the objective function is preferred.
2. If one vector is feasible and the other one is infeasible, the feasible one
is preferred.
3. If the two vectors are infeasible, the one with the lowest normalized
sum of constraint violation is preferred.
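A minimal sketch of this comparison, assuming each solution carries its objective value and a sum of constraint violation (left unnormalized here for simplicity):

```python
def violation(g_values, h_values, eps=1e-4):
    """Sum of constraint violation: positive parts of the inequality
    constraints plus the amount by which each relaxed equality constraint
    |h_j(x)| <= eps is exceeded. Normalization is omitted for simplicity."""
    v = sum(max(0.0, g) for g in g_values)
    v += sum(max(0.0, abs(h) - eps) for h in h_values)
    return v

def better(a, b):
    """Return True if solution a is preferred over b under Deb's rules.
    Each solution is a (objective_value, total_violation) tuple."""
    fa, va = a
    fb, vb = b
    if va == 0.0 and vb == 0.0:   # rule 1: both feasible -> better objective
        return fa < fb
    if va == 0.0 or vb == 0.0:    # rule 2: feasible beats infeasible
        return va == 0.0
    return va < vb                # rule 3: both infeasible -> lower violation
```

In the DE variants studied here, this comparison replaces the plain objective-value comparison between the target and trial vectors.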
Four performance measures are utilized during the first two experiments of this work: the first one has been used to measure the percentage of runs where feasible solutions are found [28], and the other three were used by Price and Rönkkönen [51] to analyze convergence and computational cost. Some
terms are defined to facilitate the definition of the performance measures. A successful trial is an independent run where the best solution found f(~x) is close to the best known value or optimum solution f(~x*). This closeness is measured by a small tolerance on the difference between these two solutions, |f(~x*) − f(~x)|. A feasible trial is an independent run where at least one feasible solution was generated.
The four measures are detailed as follows:
The feasibility probability FP is the number of feasible trials (f) divided by the total number of tests or independent runs performed (t), as indicated in Equation 4.

    FP = f / t    (4)

The range of values for FP goes from 0 to 1, where 1 means that all independent runs were feasible trials, i.e., all of them reached the feasible region of the search space. In this way, a higher value is preferred.
The probability of convergence P is calculated by the ratio of the number of successful trials (s) to the total number of tests or independent runs performed (t), as indicated in Equation 5.

    P = s / t    (5)

Similar to FP, the range of values for P goes from 0 to 1, where 1 means that all independent runs were successful trials, i.e., all of them converged to the vicinity of the best known solution or the feasible global optimum. Therefore, a higher value is preferred.
The average number of function evaluations AFES is calculated by averaging the number of evaluations required in each successful trial to reach the vicinity of the best known value or optimum solution, as indicated in Equation 6.

    AFES = (1/s) Σ_{i=1}^{s} EVAL_i    (6)

where EVAL_i is the number of evaluations required to reach the vicinity of the best known value or optimum solution in the successful trial i. For AFES, a lower value is preferred because it means that the average cost (measured by the number of evaluations) is lower for an algorithm to reach the vicinity of the feasible optimum solution.
The two previous performance measures (P and AFES) are combined to measure the speed and reliability of a variant through the successful performance SP, calculated in Equation 7.

    SP = AFES / P    (7)
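The four measures above can be collected into a small helper. This sketch assumes each independent run is summarized by whether it was feasible, whether it was successful, and how many evaluations a successful run needed; the dictionary layout is hypothetical:

```python
def performance_measures(runs):
    """Compute FP, P, AFES, and SP from a list of independent runs.

    Each run is a dict with keys:
      'feasible' - at least one feasible solution was generated
      'success'  - the best solution reached the vicinity of the best
                   known value (within the tolerance)
      'evals'    - evaluations needed to reach that vicinity
                   (meaningful for successful runs only)
    """
    t = len(runs)
    f = sum(1 for r in runs if r['feasible'])
    successful = [r for r in runs if r['success']]
    s = len(successful)
    FP = f / t                                        # Equation 4
    P = s / t                                         # Equation 5
    AFES = (sum(r['evals'] for r in successful) / s   # Equation 6
            if s > 0 else float('inf'))
    SP = AFES / P if P > 0 else float('inf')          # Equation 7
    return FP, P, AFES, SP

runs = [{'feasible': True, 'success': True, 'evals': 40000},
        {'feasible': True, 'success': False, 'evals': None},
        {'feasible': False, 'success': False, 'evals': None}]
FP, P, AFES, SP = performance_measures(runs)
# FP ≈ 0.67, P ≈ 0.33, AFES = 40000.0, SP ≈ 1.2E+05
```

Note that SP penalizes a low convergence probability: two variants with equal AFES differ in SP exactly by the ratio of their P values.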
The F value was selected based on the suggestions made by Price
[50] regarding the convenience of larger F values to avoid premature convergence and also by the corresponding values used in DE-based approaches for
CNOPs. The MAX GEN value was chosen to adjust the maximum number of evaluations of solutions to 500, 000, in order to give each DE variant
enough time to develop a competitive search, coupled with a high F value
and enough search points. The tolerance value for equality constraints was
defined as ε = 1E-4. The tolerance value for considering a successful trial was also fixed to 1E-4. Each variant performed 30 independent runs for each
test problem and the four performance measures were calculated.
Considering the number of test problems used in this experiment and for
a better analysis, they were classified according to the dimensionality (number of variables) as indicated in Table 3. Also, they were divided by the
type of constraints (inequalities, equalities or both) as shown in Table 4. In
this way, the discussion of each performance measure was divided into three phases: (1) based on the dimensionality of the problem, (2) based on its type
of constraints, and finally, (3) some partial conclusions about the measure
results.
[TABLE 3 AROUND HERE]
[TABLE 4 AROUND HERE]
6.1. Discussion of results of the first experiment
Problems g20 and g22 were discarded in the discussion because no feasible solutions were found by the four variants compared (i.e., FP = 0, P = 0, and AFES and SP cannot be calculated). These problems share common features: high dimensionality (24 and 22 variables, respectively) and combined equality and inequality constraints.
In order to have more confidence in the significant differences observed in the samples, and based on Kolmogorov-Smirnov tests [9], which indicated that the samples do not follow a Gaussian distribution, nonparametric statistical tests were applied to the samples of the AFES measure (Table 7). The Kruskal-Wallis test [7] was applied in test problems where the four DE variants had samples of equal size, i.e., the same number of successful trials (g04, g06, g08, g12, g16, and g24). The Mann-Whitney test [7] was applied to pairs of variants with different sample sizes (i.e., different numbers
of successful trials) in the remaining test problems (except g02, g03, g13,
and g17, where the results were very poor by the four variants compared).
Both tests were applied with a significance level of 0.05. The results of the
statistical tests indicated that the differences observed in the samples are
significant, with some exceptions which are commented in Section 6.1.3.
6.1.1. FP measure
The results are presented in Table 5.
[TABLE 5 AROUND HERE]
Dimensionality-based analysis: For high-dimensionality problems, the four DE variants were very competitive. Only DE/best/1/bin and DE/target-to-rand/1 failed to consistently reach the feasible region in problems g03 and g14. Regarding the nine medium-dimensionality problems, all four DE variants obtained high FP values. However, they had difficulties generating feasible solutions in problems g13 and g23, with DE/target-to-rand/1 being the variant with the worst performance. Finally, for low-dimensionality problems, the four variants consistently reached the feasible region, except in problem g05, where DE/target-to-best/1 was the only variant to generate feasible solutions in all independent runs.
Constraint-based analysis: For all the problems with inequality constraints, the four DE variants obtained a good performance in the FP measure. However, for all the problems with equality constraints, and also in problems g05 and g21 (problems with both types of constraints), only DE/target-to-best/1 consistently reached the feasible region, i.e., DE/rand/1/bin, DE/best/1/bin, and DE/target-to-rand/1 failed in some trials. Furthermore, in problems g13 and g23 this variant provided the most competitive FP values.
The overall results for the FP performance measure suggest that the four DE variants, without the addition of special mechanisms or additional parameters, provided a consistent approach to the feasible region, even in the presence of a combination of inequality and equality constraints. In contrast, as reported in the specialized literature, other EAs usually require special handling of the tolerance for equality constraints in order to find feasible solutions [16, 36, 63]. This is, in fact, a well-documented source of difficulty [35, 75]. The most competitive variant in this performance measure was
DE/target-to-best/1.
6.1.2. P measure
The results are presented in Table 6.
[TABLE 6 AROUND HERE]
The general behavior, as expected, was different from that observed in the FP measure. It is clear that feasible solutions found by DE are not necessarily close to the feasible global optimum or best known solution, highlighting the difficulty (more evident for some variants) of moving inside the feasible region.
Dimensionality-based analysis: Regarding high-dimensionality problems, the four DE variants presented a very irregular performance. However, the best average P value for these test problems was provided by DE/target-to-best/1 (0.55), followed by DE/rand/1/bin (0.49), DE/best/1/bin (0.47), and DE/target-to-rand/1 (0.37). The four variants obtained a P = 1 value in medium-dimensionality problems g04 and g16. The average values for the P measure in this type of problems were as follows: DE/rand/1/bin (0.71), followed by DE/target-to-rand/1 (0.63), DE/target-to-best/1 (0.61), and DE/best/1/bin (0.58). Finally, for low-dimensionality problems, DE/rand/1/bin reached a P = 1 value in the seven test problems (average 1.0). DE/best/1/bin almost obtained the same value in all problems, except in problem g11 with P = 0.97 (average 0.99). DE/target-to-rand/1 obtained an average value of 0.91, while DE/target-to-best/1 provided an average value of 0.80.
Constraint-based analysis: An almost similar performance (with respect to that observed in the dimensionality-based analysis) was exhibited by the four variants in those problems with only inequality constraints: DE/target-to-best/1 reached an average value of 0.88, followed by DE/target-to-rand/1 (0.86), DE/rand/1/bin (0.85), and DE/best/1/bin (0.77). In problems with only equality constraints, DE/rand/1/bin and DE/best/1/bin obtained an average P value of 0.49, followed by DE/target-to-rand/1 with 0.32 and DE/target-to-best/1 with 0.29. Finally, in problems with both types of constraints, DE/rand/1/bin was clearly superior with a P average value of 0.80. DE/best/1/bin obtained a value of 0.67, DE/target-to-rand/1 0.41, and DE/target-to-best/1 0.38.
Despite the fact that the four DE variants were very capable of reaching the feasible region of the search space (based on the results of the FP measure explained before), the results for the P measure indicate that they presented difficulties in reaching the vicinity of the feasible global optimum or best known solution. DE/rand/1/bin provided the most consistent approach to the best feasible solution (average P value of 0.74), followed by DE/best/1/bin (average P value of 0.68). Finally, DE/target-to-best/1 was competitive in high-dimensionality problems and in the presence of only inequality constraints.
6.1.3. AFES measure
The results are presented in Table 7.
[TABLE 7 AROUND HERE]
Dimensionality-based analysis: DE/best/1/bin was the most competitive variant in high-dimensionality problems with an average AFES value of 7.89E+04 in five test problems (out of six), followed by DE/target-to-best/1 with 6.87E+04 but in only four test problems. From the statistical test results, the performance of these two best-based variants was not significantly different in problems g07 and g19. The two rand-based variants were less competitive: DE/target-to-rand/1 with 3.14E+05 and DE/rand/1/bin with 3.23E+05, both in four test problems. A similar behavior was found in medium-dimensionality problems. DE/best/1/bin was the best variant with an average value of 7.26E+04 in eight problems (out of nine), followed by DE/target-to-best/1 with 8.82E+04 in eight problems (no significant differences were found for these two best-based variants in problems g09, g10, g21, and g23). DE/target-to-rand/1 obtained an average value of 1.35E+05 and DE/rand/1/bin presented a value of 1.58E+05, both in seven test problems. In low-dimensionality problems the results were quite similar as well: DE/best/1/bin was the best variant with an average value of 2.71E+04 in the seven test problems, DE/target-to-best/1 obtained a value of 4.47E+04 in the seven problems (no significant differences were found in problems g05 and g15), and DE/rand/1/bin and DE/target-to-rand/1 achieved average values of 5.60E+04 and 9.64E+04, respectively, in the seven test problems.
Constraint-based analysis: In problems with only inequality constraints, the four variants succeeded in twelve out of thirteen test problems. However, DE/target-to-best/1 was the most competitive with an AFES average value of 3.09E+04; DE/best/1/bin was the second best with a value of 3.65E+04 (from the statistical tests, no significant differences were found in problems g07, g09, g10, and g19). The two rand-based variants were less competitive: DE/target-to-rand/1 with 1.33E+05 and DE/rand/1/bin with 1.40E+05. The presence of only equality constraints did not prevent DE/best/1/bin from being the most competitive, with an average AFES value of 8.48E+04 in five (out of six) test problems. The second best performance was obtained by DE/target-to-best/1 with 1.57E+05 in four test problems (no significant differences were observed in problem g15). DE/target-to-rand/1 obtained an average value of 1.99E+05 in four problems. DE/rand/1/bin reached a value of 1.18E+05 but in only three test problems. Finally, in problems with both types of constraints, DE/target-to-best/1 was the most competitive with an average value of 9.82E+04 in the three test problems, followed by DE/best/1/bin with a value of 1.01E+05, also in the three test problems (no significant differences were exhibited in problems g05, g21, and g23). DE/rand/1/bin obtained a value of 2.52E+05 in three test problems, while DE/target-to-rand/1 reached an average value of 2.42E+05 but in only two problems.
The overall results regarding AFES suggest that the best-based variants found the vicinity of the best known or optimal solution faster than the rand-based variants. DE/best/1/bin was the most competitive variant. Figure 6 shows a radial graphic where the AFES values are shown and each axis is associated with one DE variant. For a better visualization, only those test problems with a value below 80,000 for the AFES measure are presented, but the overall behavior is represented. A point near the origin is better, because it represents a lower AFES value. This figure highlights that both best-based variants required fewer evaluations with respect to the two rand-based variants.
[FIGURE 6 AROUND HERE]
6.1.4. SP measure
The results are presented in Table 8.
[TABLE 8 AROUND HERE]
Dimensionality-based analysis: In a similar way with respect to the AFES measure, the best-based variants performed better in the SP measure in this classification of problems. For high-dimensionality problems,
DE/best/1/bin obtained an SP average value of 7.18E+05 in five (out of six) test problems, DE/target-to-best/1 reached a value of 1.01E+05 in four problems, followed by DE/target-to-rand/1 with 1.61E+06 in four problems, and DE/rand/1/bin with 3.24E+06, also in four problems. DE/best/1/bin obtained the best SP average value in medium-dimensionality problems with 1.97E+05 in eight (out of nine) test problems, DE/target-to-best/1 was second with 1.28E+06, also in eight problems, DE/rand/1/bin was third with 2.19E+05 in only seven problems, and DE/target-to-rand/1 was fourth with 1.25E+06 in seven test problems. In low-dimensionality problems, DE/best/1/bin presented the lowest SP average value in the seven test problems (2.72E+04). DE/rand/1/bin was second with 5.60E+04, also in the seven problems. DE/target-to-best/1 was third with 1.32E+05 in the seven problems. DE/target-to-rand/1 was last with an average SP of 1.37E+05, also in the seven test problems.
Constraint-based analysis: DE/target-to-best/1 showed the lowest SP
average (3.36E+04) for the test problems with only inequality constraints
(twelve out of thirteen), followed by DE/best/1/bin with 7.31E+04 in twelve
problems, DE/target-to-rand/1 with 3.89E+05 and DE/rand/1/bin with
1.11E+06, both computed in twelve problems. The best SP average value
in five (out of six) problems with only equality constraints was obtained
by DE/best/1/bin with 7.93E+05, followed by DE/target-to-best/1 with
2.59E+06 in four test problems, DE/target-to-rand/1 with 2.67E+06 in four
test problems. The worst SP values were obtained by DE/rand/1/bin with
1.23E+05, but in only three test problems. Finally, in problems with both equality and inequality constraints, DE/best/1/bin dominated the remaining DE variants with an average SP value (in the three test problems) of 1.72E+05, followed by DE/target-to-best/1 with 2.58E+05, also in the three problems. DE/rand/1/bin was third with 3.95E+05 in the three problems, while DE/target-to-rand/1 was fourth with 3.97E+05, but computed in only two test problems.
The overall performance presented in the SP measure resembles that found in the AFES measure. The best-based DE variants outperformed the rand-based ones. This is also noted in Figure 7, where the lowest SP values, i.e., the best combinations of computational cost (evaluations) and successful trials (reaching the vicinity of the feasible best known or optimal solution), are shown.
Additional results were obtained with the same structure of this experiment but varying the NP value and keeping the same limit of evaluations (500,000). The values for the population size were NP = 30 and NP = 150. The results are included in [41] and confirmed the findings previously discussed.
Those findings motivated a more in-depth analysis of the two most competitive variants: DE/rand/1/bin, as the most consistent in reaching the vicinity of the feasible optimum solution, and DE/best/1/bin, with the best combination of the number of evaluations required and the number of successful trials. The corresponding experiments and results are described in the next section.
7. DE parameter study
The second set of experiments aims to determine convenient values for two DE parameters, and their relationship with the performance of the two DE variants which exhibited different competitive behaviors in constrained numerical search spaces, based on the findings of the first set of experiments (DE/rand/1/bin and DE/best/1/bin).
The parameters considered are F and NP. As the stepsize in differential
mutation is controlled by F, the convergence speed depends on its value.
Regarding global unconstrained optimization, for low F values (to speed up
convergence) an increase in the NP value may be required to avoid premature
convergence [51]. The question remains open for numerical search spaces in
the presence of constraints. It is known in advance, from the set of
experiments in Section 6, that DE/rand/1/bin is more consistent in reaching
the vicinity of the feasible best solution, and that DE/best/1/bin is also
capable of doing so, albeit less frequently, at a lower computational cost.
The experimental design now focuses on determining the best values of the
two aforementioned parameters for these variants and evaluating whether the
behavior is similar to that found in unconstrained search spaces.
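For reference, the mutation operators of the two variants and the binomial crossover can be sketched as follows, using the standard DE definitions (this is illustrative code, not the implementation used in the paper):

```python
import numpy as np

def mutate_rand_1(pop, F, rng):
    # DE/rand/1: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct
    r1, r2, r3 = rng.choice(len(pop), size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def mutate_best_1(pop, fitness, F, rng):
    # DE/best/1: v = x_best + F * (x_r1 - x_r2); search directions
    # always depart from the best vector found so far
    best = pop[np.argmin(fitness)]
    r1, r2 = rng.choice(len(pop), size=2, replace=False)
    return best + F * (pop[r1] - pop[r2])

def binomial_crossover(target, mutant, CR, rng):
    # trial takes each component from the mutant with probability CR;
    # index jrand guarantees at least one mutant component is inherited
    n = len(target)
    jrand = rng.integers(n)
    mask = rng.random(n) <= CR
    mask[jrand] = True
    return np.where(mask, mutant, target)
```

With CR = 1.0 (the setting used later for DECV) the trial vector equals the mutant vector, so the stepsize is controlled entirely by F.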
Six representative test problems are used in this second part of the research: g02, g06, g10, g13, g21, and g23. They were selected based on their
different characteristics and were organized as follows:
Test problems with different dimensionality. They are shown in Table 9 a).
Regarding the average number of evaluations required by DE/rand/1/bin, Figure 10 (b) exhibits an increment of the AFES value as the scale factor
and population size values increase, i.e., this variant did not require larger
populations or high F values to provide a competitive performance.
Figure 10 (c) confirms the behavior observed in the results of the AFES
measure, because the best mix of computational cost and convergence was
obtained with small populations (30 ≤ NP ≤ 60) combined with scale
factor values 0.6 ≤ F ≤ 0.8.
DE/best/1/bin presented a behavior similar to that of DE/rand/1/bin in the P measure. However, high scale factor values (0.7 ≤ F ≤ 1.0) provided high P values when combined with medium-large population
sizes (90 ≤ NP ≤ 150), see Figure 11 (a).
The results for the AFES measure in Figure 11 (b) present the lowest values with NP = 30. However, the P values for this population size (Figure 11
(a)) were very poor. Therefore, the better AFES values were obtained by
DE/best/1/bin with medium size populations (60 ≤ NP ≤ 90) combined
with F = 0.7. It is worth noticing that medium size populations presented
the highest AFES values in combination with the highest F value used (1.0).
The SP values in Figure 11 (c) reveal that the best compound effect of
speed and convergence was provided by small-medium size populations (60 ≤
NP ≤ 90) with F = 0.8.
Both DE variants were competitive in this problem with medium dimensionality. However, DE/rand/1/bin was less sensitive to the two parameters
under study, working well with small and medium population sizes combined with three different scale factor values. DE/best/1/bin was also
competitive with small and medium size populations, but with only one scale
factor value.
7.3. Low-dimensionality test problem
Figures 12 and 13 present the results for the three measures obtained by
both compared variants in test problem g06.
[FIGURE 12 AROUND HERE]
[FIGURE 13 AROUND HERE]
DE/rand/1/bin obtained some feasible trials with low scale factor values
(0.4 ≤ F ≤ 0.5) combined with larger population sizes (120 ≤ NP ≤ 150) in
Figure 12 (a). Values of P = 1.0 were consistently obtained with 0.6 ≤ F ≤
1.0 combined with the four remaining NP values (i.e., except NP = 30).
Figure 12 (b) includes the results for the AFES measure. The observed
behavior indicates an increment in the value of the measure as the population
size and the scale factor values are also increased.
Most of the NP-F combinations provided low SP values, as indicated in Figure 12 (c) (only low F values with larger populations obtained
poor SP values). However, a small population (NP = 30) combined with
0.6 ≤ F ≤ 1.0 gives the most convenient values to solve this problem.
DE/best/1/bin required slightly higher scale factor values (0.7 ≤ F ≤
1.0) to consistently reach the vicinity of the global optimum (P = 1.0),
except with NP = 30 (see Figure 13 (a)).
A similar effect to that of DE/rand/1/bin was obtained by DE/best/1/bin
in the AFES measure (higher F and larger NP values caused higher
AFES values). However, with NP = 30 and F = 1.0 the highest AFES
value was obtained (see Figure 13 (b)).
The summary of results for the SP measure in Figure 13 (c) indicates
that a small population (NP = 30) combined with 0.7 ≤ F ≤ 0.8 provided
the best blend between number of evaluations and convergence probability.
Both DE variants presented an almost similar behavior in this problem
with only two decision variables, i.e., they required a small population to
provide a consistent approach to the vicinity of the feasible global optimum.
However, DE/best/1/bin was slightly more sensitive to the F parameter.
7.4. Test problem with only inequality constraints
Figures 14 and 15 include the results provided by both variants when
solving problem g10.
[FIGURE 14 AROUND HERE]
[FIGURE 15 AROUND HERE]
DE/rand/1/bin provided high P values with more consistency by using
medium size populations (60 ≤ NP ≤ 90) combined with high scale factor
values (0.6 ≤ F ≤ 1.0) in Figure 14 (a). Increasing the population size
(NP = 150) with lower scale factor values (F = 0.5) allowed DE/rand/1/bin
to maintain high P values.
The results for the AFES measure in Figure 14 (b) confirm the convenience of using medium size populations, mostly with 0.6 ≤ F ≤ 0.7. Other
combinations of values for these two parameters increased the AFES value
(with the only exception of NP = 30 and F = 0.8).
Figure 14 (c) also confirms the findings previously discussed for DE/rand/1/bin in this test problem. The lowest SP values were found with
NP = 60 and F = 0.6.
In Figure 15 (a), DE/best/1/bin required larger populations (120 ≤ NP ≤
150) combined with high scale factor values (0.8 ≤ F ≤ 0.9) to get successful
trials more consistently. DE/best/1/bin was clearly affected by small
populations, regardless of the scale factor value used.
The overall results for the AFES measure in Figure 15 (b) suggest that
DE/best/1/bin increased the number of evaluations as the F and NP values
also increased. Furthermore, the lowest AFES values were obtained with
medium size populations (60 ≤ NP ≤ 90). However, the P values for this
population size were poor.
This last finding was confirmed in Figure 15 (c), where the lowest SP
values were obtained with medium to larger populations (90 ≤ NP ≤ 150)
combined with 0.7 ≤ F ≤ 0.9.
DE/rand/1/bin was less sensitive to both parameters analyzed and performed better with small-medium size populations. However, DE/best/1/bin
required fewer evaluations to provide competitive results by using larger populations.
7.5. Test problem with only equality constraints
Figures 16 and 17 present the summary of results in problem g13 by
DE/rand/1/bin and DE/best/1/bin, respectively.
[FIGURE 16 AROUND HERE]
[FIGURE 17 AROUND HERE]
Similar to the problem with a high dimensionality, the performance of
both variants was clearly affected in this test function.
DE/rand/1/bin was able to provide its best P values (below 0.5) only
with high scale factor values (0.6 ≤ F ≤ 1.0) combined with small and
irregular behavior was observed with large and small populations. The results
for the P measure were very poor with F = 1.0 (except with NP = 60).
The lowest average numbers of evaluations on successful trials were attained by DE/rand/1/bin with small to medium population sizes (30 ≤
NP ≤ 60) combined with 0.8 ≤ F ≤ 0.9 and 0.6 ≤ F ≤ 0.7, respectively
(see Figure 18 (b)). Larger populations (NP = 150) caused an increment in
the AFES value.
The best SP values were obtained with the same combination of parameter
values observed for the AFES measure (see Figure 18 (c)).
In contrast to DE/rand/1/bin, DE/best/1/bin, in Figure 19 (a), obtained
more successful trials with larger populations (NP = 150) combined with high
scale factor values (0.7 ≤ F ≤ 1.0). The vicinity of the feasible best known
solution was not reached with a small population (NP = 30).
The lowest AFES values were obtained by DE/best/1/bin with NP = 90
for all the F values where P > 0 was obtained (0.7 ≤ F ≤ 1.0), see
Figure 19 (b).
Regarding the SP values (Figure 19 (c)), the best values were found with
NP = 90 combined with 0.8 ≤ F ≤ 0.9.
In this last test problem, DE/rand/1/bin performed better with small to
medium size populations combined with different scale factor values. In contrast, DE/best/1/bin provided its best performance with medium to larger
populations coupled only with high scale factor values.
7.7. Conclusions of the second experiment
The findings of this second experiment are summarized in the following
list:

- DE/rand/1/bin was clearly the most competitive variant in the high-dimensionality test problem.

- DE/rand/1/bin was the variant least sensitive to NP and F in all six test problems.

- DE/best/1/bin required fewer evaluations to reach the feasible global optimum in all the test problems, but it was less reliable than DE/rand/1/bin.

- The most useful F values for both variants were 0.6 ≤ F ≤ 0.9, regardless of the type of test problem.

- DE/rand/1/bin performed better with small to medium size populations (30 ≤ NP ≤ 90), while DE/best/1/bin required more vectors in its population (90 ≤ NP ≤ 150) to provide competitive results.

- Regarding the convergence behavior reported for DE in unconstrained numerical optimization, a different behavior was observed in the constrained case, because low scale factor values (F ≤ 0.5) prevented both DE variants from converging, even with larger populations. The exception was the test problem with a low dimensionality.
Additional experiments were performed on test problems with features similar to those presented in the paper: g19 for high dimensionality, g09 for
medium dimensionality, g08 for low dimensionality, g07 with only inequality constraints, g17 with only equality constraints, and g05 with both types
of constraints. Those results can be found in [41] and they confirm the
findings mentioned above.
The summary of findings suggests that the ability of DE/rand/1/bin
to generate search directions from different base vectors allows it to use
smaller populations. On the other hand, larger populations were required
by DE/best/1/bin, where the search directions are always based on the best
solution so far. Regarding the scale factor, the convenient values found in the
experiment showed that these two DE variants required a slow-convergence
behavior to approach the vicinity of the feasible global optimum or best
known solution. Speeding up convergence by decreasing the scale factor
value does not seem to be an option, even with larger populations. The
combination of larger populations and DE/rand/1/bin seems to be more suitable for high-dimensionality problems. Finally, DE/rand/1/bin presented
less sensitivity to the two parameters analyzed, while DE/best/1/bin,
which may require a more careful fine-tuning, can provide competitive results with a lower number of evaluations. Based on this last observation, the
drawback found in DE/best/1/bin may be treated with parameter control
techniques [13].
8. A combination of two DE variants
Considering the results in Experiment 1, which pointed out that DE/rand/1/bin and DE/best/1/bin had better performances but different behaviors
with respect to the other DE variants, these two variants were chosen to be
combined into one single approach.
The capacity observed in the four DE variants to efficiently reach the feasible region of the search space in Experiment 1, coupled with the ability of
DE/rand/1/bin to generate a more diverse set of search directions, suggested
the use of this variant as the first search algorithm. As the feasible region is
reached faster, the criterion to switch to the other DE variant was to obtain
10% of feasible vectors. In this way, DE/best/1/bin could focus the search
in the vicinity of the current best feasible vector, expecting a low number of
evaluations to reach competitive solutions. This percentage was chosen after some tests reported in [41]. The approach was called Differential
Evolution Combined Variants (DECV).
Based on the fact that Experiment 2 revealed that DE/rand/1/bin performed better with small to medium size populations and that DE/best/1/bin
required medium to large size populations, the number of vectors in DECV
was fixed to 90. The convenience of using larger F values, also observed in
Experiment 2, suggested a value of 0.9. The CR parameter was kept fixed
at 1.0 and the number of generations was set to 2666 in order to perform
240,000 evaluations.
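The switching logic described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the helper names are hypothetical, and the feasibility rules are assumed as the selection criterion, since the paper does not restate it in this section.

```python
import numpy as np

def violation(x, constraints):
    # total constraint violation; each g in constraints satisfies g(x) <= 0
    return sum(max(0.0, g(x)) for g in constraints)

def better(xa, xb, f, cons):
    # feasibility rules (assumed): feasible beats infeasible, feasible
    # solutions compare by f, infeasible ones by total violation
    va, vb = violation(xa, cons), violation(xb, cons)
    if va == 0.0 and vb == 0.0:
        return f(xa) <= f(xb)
    if va == 0.0 or vb == 0.0:
        return va == 0.0
    return va <= vb

def decv(f, cons, bounds, NP=90, F=0.9, CR=1.0, gens=2666, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = map(np.asarray, zip(*bounds))
    n = len(lo)
    pop = rng.uniform(lo, hi, size=(NP, n))
    for _ in range(gens):
        # switch to DE/best/1/bin once 10% of the vectors are feasible
        use_best = np.mean([violation(x, cons) == 0.0 for x in pop]) >= 0.10
        best = pop[0]
        for x in pop[1:]:
            if better(x, best, f, cons):
                best = x
        for i in range(NP):
            r = rng.choice(NP, size=3, replace=False)
            if use_best:
                v = best + F * (pop[r[0]] - pop[r[1]])        # DE/best/1
            else:
                v = pop[r[0]] + F * (pop[r[1]] - pop[r[2]])   # DE/rand/1
            jrand = rng.integers(n)
            mask = rng.random(n) <= CR
            mask[jrand] = True                                 # binomial crossover
            trial = np.clip(np.where(mask, v, pop[i]), lo, hi)
            if better(trial, pop[i], f, cons):
                pop[i] = trial
    best = pop[0]
    for x in pop[1:]:
        if better(x, best, f, cons):
            best = x
    return best
```

The trial vectors are clipped to the bounds here for simplicity; the paper does not state which boundary-handling scheme was used.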
Thirty independent runs were computed for each of the 24 test problems
used in Experiment 1. Statistics on the final results are summarized in
Tables 10 and 11 for DECV.
[TABLE 10 AROUND HERE]
[TABLE 11 AROUND HERE]
Those final results for the first 13 test problems and the corresponding
computational cost, measured by the number of evaluations required, were
compared with those reported by some state-of-the-art DE-based algorithms:
The superiority of feasible points (EXDE) [24], the feasibility rules [11] in DE
(RDE) [38], the DE with ability to generate more than one trial vector per
target vector [43] (DDE), the adaptive DE (A-DDE) [42], and the dynamic
stochastic selection in DE (DSS-MDE) [70]. The comparison in the last
11 test problems was made with respect to A-DDE in Table 11, which has
provided a highly competitive performance in such problems. Both EXDE
[24] and RDE [38] were chosen because they keep the DE mutation operator
intact with respect to its original version. DDE [43] was chosen as a DE
with modifications to the original mutation operator and very competitive
results and, finally, DSS-MDE [70] was chosen as a recent
aims to show how the variants can be combined to get a different behavior
which may result in a better performance in some types of problems.
[TABLE 12 AROUND HERE]
From the knowledge provided by the results of the first two experiments,
the simple combination of DE/rand/1/bin and DE/best/1/bin into one single
approach (DECV) was proposed, and the results obtained in 24 test problems
were compared with those obtained by some DE-based approaches to solve
CNOPs. The performance obtained by DECV was similar to that of the
algorithms used for comparison. DECV did not add extra complex mechanisms. Instead, it first used DE/rand/1/bin's ability to generate a diverse
set of search directions in the whole search space, switching to DE/best/1/bin's
ability to generate search directions from the best solution when 10% of the
population is feasible. Furthermore, an alternative combination of variants
allowed to improve the performance of DECV in problems where the first
combination was not very competitive, showing the flexibility of use of the
empirical information provided in this work.
The conclusions obtained in this work remark the good (or bad) influence of the parameter values in DE to solve CNOPs. Therefore, the
future paths of research include the empirical study of the CR parameter
and the number of difference vector pairs in constrained numerical optimization. Furthermore, the performance of other DE variants (e.g., DE/target-to-rand/1/bin, DE/target-to-best/1/bin, DE/rand/1/exp) and other constraint-handling mechanisms (e.g., penalty functions) will be analyzed. Finally, a
parameter control mechanism will be added to deal with the percentage of
feasible solutions utilized in DECV and A-DECV, and more test problems
will be considered.
Acknowledgments
The first author acknowledges support from CONACyT through project
Number 79809.
Appendix A.
The details of the 24 test problems utilized in this work are the following:
g01
Minimize:

f(x) = 5 Σ_{i=1}^{4} x_i − 5 Σ_{i=1}^{4} x_i² − Σ_{i=5}^{13} x_i   (A.1)

Subject to:

g1(x) = 2x1 + 2x2 + x10 + x11 − 10 ≤ 0
g2(x) = 2x1 + 2x3 + x10 + x12 − 10 ≤ 0
g3(x) = 2x2 + 2x3 + x11 + x12 − 10 ≤ 0
g4(x) = −8x1 + x10 ≤ 0
g5(x) = −8x2 + x11 ≤ 0
g6(x) = −8x3 + x12 ≤ 0
g7(x) = −2x4 − x5 + x10 ≤ 0
g8(x) = −2x6 − x7 + x11 ≤ 0
g9(x) = −2x8 − x9 + x12 ≤ 0

where 0 ≤ xi ≤ 1 (i = 1, . . . , 9), 0 ≤ xi ≤ 100 (i = 10, 11, 12), and 0 ≤ x13 ≤ 1. The feasible global
optimum is located at x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1) where f(x*) = −15. Constraints g1, g2, g3, g7,
g8, and g9 are active.
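As an illustration, g01 translates directly into code. The sketch below follows the standard definition of this benchmark (an assumption where minus signs were lost in the listing above); at the optimum given above, all nine constraints hold and six of them are active.

```python
def g01_f(x):
    # f(x) = 5*sum_{i=1..4} x_i - 5*sum_{i=1..4} x_i^2 - sum_{i=5..13} x_i
    return 5 * sum(x[:4]) - 5 * sum(v * v for v in x[:4]) - sum(x[4:13])

def g01_constraints(x):
    # the nine inequality constraints, each of the form g_j(x) <= 0
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13 = x
    return [
        2 * x1 + 2 * x2 + x10 + x11 - 10,
        2 * x1 + 2 * x3 + x10 + x12 - 10,
        2 * x2 + 2 * x3 + x11 + x12 - 10,
        -8 * x1 + x10,
        -8 * x2 + x11,
        -8 * x3 + x12,
        -2 * x4 - x5 + x10,
        -2 * x6 - x7 + x11,
        -2 * x8 - x9 + x12,
    ]

# the optimum listed above: f = -15, with g1, g2, g3, g7, g8, g9 active
x_star = [1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1]
```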
g02
Minimize:

f(x) = − | ( Σ_{i=1}^{n} cos⁴(x_i) − 2 Π_{i=1}^{n} cos²(x_i) ) / √( Σ_{i=1}^{n} i·x_i² ) |   (A.2)

Subject to:

g1(x) = 0.75 − Π_{i=1}^{n} x_i ≤ 0
g2(x) = Σ_{i=1}^{n} x_i − 7.5n ≤ 0
g03
Minimize:

f(x) = −(√n)ⁿ Π_{i=1}^{n} x_i   (A.3)

Subject to:

h(x) = Σ_{i=1}^{n} x_i² − 1 = 0

where 0 ≤ x_i ≤ 1 (i = 1, . . . , n), with f(x*) = −1.00050010001000.
g04
Minimize:

f(x) = 5.3578547x3² + 0.8356891x1x5 + 37.293239x1 − 40792.141   (A.4)

Subject to six inequality constraints:

g1(x) ≤ 0, g2(x) ≤ 0, g3(x) ≤ 0, g4(x) ≤ 0, g5(x) ≤ 0, g6(x) ≤ 0
g05
Minimize:

f(x) = 3x1 + 0.000001x1³ + 2x2 + (0.000002/3)x2³   (A.5)

Subject to:

g1(x) = −x4 + x3 − 0.55 ≤ 0
g2(x) = −x3 + x4 − 0.55 ≤ 0
h3(x) = 0, h4(x) = 0, h5(x) = 0

where 0 ≤ x1 ≤ 1200, 0 ≤ x2 ≤ 1200, −0.55 ≤ x3 ≤ 0.55, and −0.55 ≤ x4 ≤ 0.55. The best
known solution is located at x* = (679.945148297028709, 1026.06697600004691, 0.118876369094410433,
−0.396233485215178266) where f(x*) = 5126.4967140071.
g06
Minimize:

f(x) = (x1 − 10)³ + (x2 − 20)³   (A.6)

Subject to:

g1(x) ≤ 0
g2(x) ≤ 0

where 13 ≤ x1 ≤ 100 and 0 ≤ x2 ≤ 100. The feasible global optimum is located at
x* = (14.09500000000000064, 0.8429607892154795668) where f(x*) = −6961.81387558015. Both constraints are active.
g07
Minimize:

f(x) = x1² + x2² + x1x2 − 14x1 − 16x2 + (x3 − 10)² + 4(x4 − 5)² + (x5 − 3)² +
2(x6 − 1)² + 5x7² + 7(x8 − 11)² + 2(x9 − 10)² + (x10 − 7)² + 45   (A.7)

Subject to eight inequality constraints:

g1(x) ≤ 0, . . . , g8(x) ≤ 0
g08
Minimize:

f(x)   (A.8)

Subject to:

g1(x) = x1² − x2 + 1 ≤ 0
g2(x) = 1 − x1 + (x2 − 4)² ≤ 0

where 0 ≤ x1 ≤ 10 and 0 ≤ x2 ≤ 10. The feasible global optimum is located at x* = (1.22797135260752599,
4.24537336612274885) with f(x*) = −0.0958250414180359.
g09
Minimize:

f(x) = (x1 − 10)² + 5(x2 − 12)² + x3⁴ + 3(x4 − 11)² + 10x5⁶ + 7x6² + x7⁴ − 4x6x7 − 10x6 − 8x7   (A.9)

Subject to four inequality constraints:

g1(x) ≤ 0, g2(x) ≤ 0, g3(x) ≤ 0, g4(x) ≤ 0
g10
Minimize:

f(x) = x1 + x2 + x3   (A.10)

Subject to:

g1(x) = −1 + 0.0025(x4 + x6) ≤ 0
g2(x) = −1 + 0.0025(x5 + x7 − x4) ≤ 0
g3(x) = −1 + 0.01(x8 − x5) ≤ 0
g4(x) = −x1x6 + 833.33252x4 + 100x1 − 83333.333 ≤ 0
g5(x) = −x2x7 + 1250x5 + x2x4 − 1250x4 ≤ 0
g6(x) = −x3x8 + 1250000 + x3x5 − 2500x5 ≤ 0

where 100 ≤ x1 ≤ 10000, 1000 ≤ xi ≤ 10000 (i = 2, 3), and 10 ≤ xi ≤ 1000 (i = 4, . . . , 8). The feasible
global optimum is located at x* = (579.306685017979589, 1359.97067807935605, 5109.97065743133317,
182.01769963061534, 295.601173702746792, 217.982300369384632, 286.41652592786852,
395.601173702746735) with f(x*) = 7049.24802052867. g1, g2, and g3 are active constraints.
g11
Minimize:

f(x) = x1² + (x2 − 1)²   (A.11)

Subject to:

h(x) = x2 − x1² = 0

where −1 ≤ x1 ≤ 1 and −1 ≤ x2 ≤ 1. The feasible global optimum is located at x* = (±1/√2, 1/2) with
f(x*) = 0.7499.
g12
Minimize:

f(x) = −(100 − (x1 − 5)² − (x2 − 5)² − (x3 − 5)²)/100   (A.12)

Subject to:

g1(x) = (x1 − p)² + (x2 − q)² + (x3 − r)² − 0.0625 ≤ 0

where 0 ≤ xi ≤ 10 (i = 1, 2, 3) and p, q, r = 1, 2, . . . , 9. The feasible region consists of 9³ disjoint spheres.
A point (x1, x2, x3) is feasible if and only if there exist p, q, r such that the above inequality holds. The
feasible global optimum is located at x* = (5, 5, 5) with f(x*) = −1.
g13
Minimize:

f(x) = e^(x1 x2 x3 x4 x5)   (A.13)

Subject to three equality constraints:

h1(x) = 0, h2(x) = 0, h3(x) = 0

where −2.3 ≤ xi ≤ 2.3 (i = 1, 2) and −3.2 ≤ xi ≤ 3.2 (i = 3, 4, 5). The feasible global optimum is
at x* = (−1.71714224003, 1.59572124049468, 1.8272502406271, −0.763659881912867, −0.76365986736498)
with f(x*) = 0.053941514041898.
g14
Minimize:

f(x) = Σ_{i=1}^{10} x_i ( c_i + ln( x_i / Σ_{j=1}^{10} x_j ) )   (A.14)

Subject to three equality constraints:

h1(x) = 0, h2(x) = 0, h3(x) = 0

where 0 < xi ≤ 10 (i = 1, . . . , 10), and c1 = −6.089, c2 = −17.164, c3 = −34.054, c4 = −5.914, c5 = −24.721,
c6 = −14.986, c9 = −26.662, c10 = −22.179. The best known solution is at x* = (0.0406684113216282,
0.147721240492452, 0.783205732104114, 0.00141433931889084, 0.485293636780388, 0.000693183051556082,
0.0274052040687766, 0.0179509660214818, 0.0373268186859717, 0.0968844604336845) with
f(x*) = −47.7648884594915.
g15
Minimize:

f(x) = 1000 − x1² − 2x2² − x3² − x1x2 − x1x3   (A.15)

Subject to two equality constraints:

h1(x) = 0, h2(x) = 0
g16
Minimize:

f(x)   (A.16)

Subject to:

g1(x) = (0.28/0.72)y5 − y4 ≤ 0
g2(x) = x3 − 1.5x2 ≤ 0
g3(x) = 3496 y2/c12 − 21 ≤ 0
g4(x) = 110.6 + y1 − 62212/c17 ≤ 0

and the remaining constraints g5(x), . . . , g38(x) ≤ 0 impose lower and upper bounds on the intermediate
variables:

213.1 ≤ y1 ≤ 405.23, 17.505 ≤ y2 ≤ 1053.6667, 11.275 ≤ y3 ≤ 35.03,
214.228 ≤ y4 ≤ 665.585, 7.458 ≤ y5 ≤ 584.463, 0.961 ≤ y6 ≤ 265.916,
1.612 ≤ y7 ≤ 7.046, 0.146 ≤ y8 ≤ 0.222, 107.99 ≤ y9 ≤ 273.366,
922.693 ≤ y10 ≤ 1286.105, 926.832 ≤ y11 ≤ 1444.046, 18.766 ≤ y12 ≤ 537.141,
1072.163 ≤ y13 ≤ 3247.039, 8961.448 ≤ y14 ≤ 26844.086, 0.063 ≤ y15 ≤ 0.386,
71084.33 ≤ y16 ≤ 140000, 2802713 ≤ y17 ≤ 12146108

where

y1 = x2 + x3 + 41.6
c1 = 0.024x4 − 4.62
y2 = 12.5/c1 + 12
c2 = 0.0003535x1² + 0.5311x1 + 0.08705y2x1
c3 = 0.052x1 + 78 + 0.002377y2x1
y3 = c2/c3
y4 = 19y3
c4 = 0.04782(x1 − y3) + 0.1956(x1 − y3)²/x2 + 0.6376y4 + 1.594y3
c5 = 100x2
c6 = x1 − y3 − y4
c7 = 0.950 − c4/c5
y5 = c6c7
y6 = x1 − y5 − y4 − y3
c8 = (y5 + y4)0.995
y7 = c8/y1
y8 = c8/3798
c9 = y7 − 0.0663y7/y8 − 0.3153
y9 = 96.82/c9 + 0.321y1
y10 = 1.29y5 + 1.258y4 + 2.29y3 + 1.71y6
y11 = 1.71x1 − 0.452y4 + 0.580y3
c10 = 12.3/752.3
c11 = (1.75y2)(0.995x1)
c12 = 0.995y10 + 1998
y12 = c10x1 + c11/c12
y13 = c12 + 1.75y2
y14 = 3623 + 64.4x2 + 58.4x3 + 146312/(y9 + x5)
c13 = 0.995y10 + 60.8x2 + 48x4 − 0.1121y14 − 5095
y15 = y13/c13
y16 = 148000 − 331000y15 + 40y13 − 61y15y13
c14 = 2324y10 − 28740000y2
y17 = 14130000 − 1328y10 − 531y11 + c14/c12
c15 = y13/y15 − y13/0.52
c16 = 1.104 − 0.72y15
c17 = y9 + x5

and where 704.4148 ≤ x1 ≤ 906.3855, 68.6 ≤ x2 ≤ 288.88, 0 ≤ x3 ≤ 134.75, 193 ≤ x4 ≤ 287.0966, and
25 ≤ x5 ≤ 84.1988.
The best known solution is at x* = (705.174537070090537, 68.5999999999999943,
102.899999999999991, 282.324931593660324, 37.5841164258054832) with f(x*) = −1.90515525853479.
g17
Minimize:

f(x) = f1(x1) + f2(x2)   (A.17)

where

f1(x1) = 30x1 if 0 ≤ x1 < 300;  31x1 if 300 ≤ x1 < 400
f2(x2) = 28x2 if 0 ≤ x2 < 100;  29x2 if 100 ≤ x2 < 200;  30x2 if 200 ≤ x2 < 1000

Subject to:

h1(x) = −x1 + 300 − (x3x4/131.078) cos(1.48477 − x6) + (0.90798x3²/131.078) cos(1.47588) = 0
h2(x) = −x2 − (x3x4/131.078) cos(1.48477 + x6) + (0.90798x4²/131.078) cos(1.47588) = 0
h3(x) = −x5 − (x3x4/131.078) sin(1.48477 + x6) + (0.90798x4²/131.078) sin(1.47588) = 0
h4(x) = 200 − (x3x4/131.078) sin(1.48477 − x6) + (0.90798x4²/131.078) sin(1.47588) = 0

and where 0 ≤ x1 ≤ 400, 0 ≤ x2 ≤ 1000, 340 ≤ x3 ≤ 420, 340 ≤ x4 ≤ 420, −1000 ≤ x5 ≤ 1000, and
0 ≤ x6 ≤ 0.5236. The best known solution is at x* = (201.784467214523659, 99.9999999999999005,
383.071034852773266, 420, −10.9076584514292652, 0.0731482312084287128) with
f(x*) = 8853.53967480648.
g18
Minimize:

f(x) = −0.5(x1x4 − x2x3 + x3x9 − x5x9 + x5x8 − x6x7)   (A.18)

Subject to:

g1(x) = x3² + x4² − 1 ≤ 0
g2(x) = x9² − 1 ≤ 0
g3(x) = x5² + x6² − 1 ≤ 0
g4(x) = x1² + (x2 − x9)² − 1 ≤ 0
g5(x) = (x1 − x5)² + (x2 − x6)² − 1 ≤ 0
g6(x) = (x1 − x7)² + (x2 − x8)² − 1 ≤ 0
g7(x) = (x3 − x5)² + (x4 − x6)² − 1 ≤ 0
g8(x) = (x3 − x7)² + (x4 − x8)² − 1 ≤ 0
g9(x) = x7² + (x8 − x9)² − 1 ≤ 0
g10(x) = x2x3 − x1x4 ≤ 0
g11(x) = −x3x9 ≤ 0
g12(x) = x5x9 ≤ 0
g13(x) = x6x7 − x5x8 ≤ 0
g19
Minimize:

f(x) = Σ_{j=1}^{5} Σ_{i=1}^{5} c_ij x_{10+i} x_{10+j} + 2 Σ_{j=1}^{5} d_j x_{10+j}³ − Σ_{i=1}^{10} b_i x_i   (A.19)

Subject to:

g_j(x) = −2 Σ_{i=1}^{5} c_ij x_{10+i} − 3d_j x_{10+j}² − e_j + Σ_{i=1}^{10} a_ij x_i ≤ 0,  j = 1, . . . , 5

where b = [−40, −2, −0.25, −4, −4, −1, −40, −60, 5, 1] and the remaining values are taken from Table A.1,
0 ≤ xi ≤ 10 (i = 1, . . . , 15). The best known solution is at x* = (1.66991341326291344e−17,
3.95378229282456509e−16, 3.94599045143233784, 1.06036597479721211e−16, 3.2831773458454161,
9.99999999999999822, 1.12829414671605333e−17, 1.2026194599794709e−17, 2.50706276000769697e−15,
2.24624122987970677e−15, 0.370764847417013987, 0.278456024942955571, 0.523838487672241171,
0.388620152510322781, 0.298156764974678579) with f(x*) = 32.6555929502463.

Table A.1:

j    |   1   |   2   |   3   |   4   |   5
ej   |  -15  |  -27  |  -36  |  -18  |  -12
c1j  |   30  |  -20  |  -10  |   32  |  -10
c2j  |  -20  |   39  |   -6  |  -31  |   32
c3j  |  -10  |   -6  |   10  |   -6  |  -10
c4j  |   32  |  -31  |   -6  |   39  |  -20
c5j  |  -10  |   32  |  -10  |  -20  |   30
dj   |    4  |    8  |   10  |    6  |    2
a1j  |  -16  |    2  |    0  |    1  |    0
a2j  |    0  |   -2  |    0  |  0.4  |    2
a3j  | -3.5  |    0  |    2  |    0  |    0
a4j  |    0  |   -2  |    0  |   -4  |   -1
a5j  |    0  |   -9  |   -2  |    1  | -2.8
a6j  |    2  |    0  |   -4  |    0  |    0
a7j  |   -1  |   -1  |   -1  |   -1  |   -1
a9j  |    1  |    2  |    3  |    4  |    5
a10j |    1  |    1  |    1  |    1  |    1
g20
Minimize:

f(x) = Σ_{i=1}^{24} a_i x_i   (A.20)

Subject to:

g_i(x) = (x_i + x_{i+12}) / ( Σ_{j=1}^{24} x_j + e_i ) ≤ 0,  i = 1, 2, 3
g_i(x) = (x_{i+3} + x_{i+15}) / ( Σ_{j=1}^{24} x_j + e_i ) ≤ 0,  i = 4, 5, 6
h_i(x) = x_{i+12} / ( b_{i+12} Σ_{j=13}^{24} x_j/b_j ) − c_i x_i / ( 40 b_i Σ_{j=1}^{12} x_j/b_j ) = 0,  i = 1, . . . , 12
h13(x) = Σ_{i=1}^{24} x_i − 1 = 0
h14(x) = Σ_{i=1}^{12} x_i/d_i + k Σ_{i=13}^{24} x_i/b_i − 1.671 = 0

with the constants a_i, b_i, c_i, d_i, and e_i given below (c_i and d_i for i = 1, . . . , 12; e_i for i = 1, . . . , 6):

i  | ai     | bi      | ci    | di
1  | 0.0693 | 44.094  | 123.7 | 31.244
2  | 0.0577 | 58.12   | 31.7  | 36.12
3  | 0.05   | 58.12   | 45.7  | 34.784
4  | 0.2    | 137.4   | 14.7  | 92.7
5  | 0.26   | 120.9   | 84.7  | 82.7
6  | 0.55   | 170.9   | 27.7  | 91.6
7  | 0.06   | 62.501  | 49.7  | 56.708
8  | 0.1    | 84.94   | 7.1   | 82.7
9  | 0.12   | 133.425 | 2.1   | 80.8
10 | 0.18   | 82.507  | 17.7  | 64.517
11 | 0.1    | 46.07   | 0.85  | 49.4
12 | 0.09   | 60.097  | 0.64  | 49.1
13 | 0.0693 | 44.094  |       |
14 | 0.0577 | 58.12   |       |
15 | 0.05   | 58.12   |       |
16 | 0.2    | 137.4   |       |
17 | 0.26   | 120.9   |       |
18 | 0.55   | 170.9   |       |
19 | 0.06   | 62.501  |       |
20 | 0.1    | 84.94   |       |
21 | 0.12   | 133.425 |       |
22 | 0.18   | 82.507  |       |
23 | 0.1    | 46.07   |       |
24 | 0.09   | 60.097  |       |

e1 = 0.1, e2 = 0.3, e3 = 0.4, e4 = 0.3, e5 = 0.6, e6 = 0.3
g21
Minimize:

f(x) = x1   (A.21)

Subject to:

g1(x) = −x1 + 35x2^0.6 + 35x3^0.6 ≤ 0
h1(x) = 0, h2(x) = 0
h3(x) = −x5 + ln(−x4 + 900) = 0
h4(x) = −x6 + ln(x4 + 300) = 0
h5(x) = −x7 + ln(−2x4 + 700) = 0

where 0 ≤ x1 ≤ 1000, 0 ≤ x2, x3 ≤ 40, 100 ≤ x4 ≤ 300, 6.3 ≤ x5 ≤ 6.7, 5.9 ≤ x6 ≤ 6.4, and
4.5 ≤ x7 ≤ 6.25. The best known solution is at x* = (193.724510070034967, 5.56944131553368433e−27,
17.3191887294084914, 100.047897801386839, 6.68445185362377892, 5.99168428444264833,
6.21451648886070451) with f(x*) = 193.724510070035.
g22
Minimize:

f(x) = x1   (A.22)

Subject to one inequality constraint and nineteen equality constraints:

g1(x) ≤ 0
h_i(x) = 0,  i = 1, . . . , 19
g23
Minimize:

f(x)   (A.23)

Subject to:

g1(x) = x9x3 + 0.02x6 − 0.025x5 ≤ 0
g2(x) = x9x4 + 0.02x7 − 0.015x8 ≤ 0
h1(x) = x1 + x2 − x3 − x4 = 0
h2(x) = 0.03x1 + 0.01x2 − x9(x3 + x4) = 0
h3(x) = x3 + x6 − x5 = 0
h4(x) = x4 + x7 − x8 = 0
g24
Minimize:

f(x)   (A.24)

Subject to:

g1(x) ≤ 0
g2(x) ≤ 0
References
[1] H.J. Barbosa, A.C. Lemonge, A new adaptive penalty scheme for genetic algorithms, Information Sciences 156 (3–4) (2003) 215–251.
[2] H.S. Bernardino, H.J.C. Barbosa, A.C.C. Lemonge, L.G. Fonseca, A new hybrid AIS-GA for constrained optimization problems in mechanical engineering, in: 2008 Congress on Evolutionary Computation (CEC2008), IEEE Service Center, Hong Kong, 2008, pp. 1455–1462.
[3] J. Brest, V. Zumer, M.S. Maucec, Self-adaptive differential evolution algorithm in constrained real-parameter optimization, in: 2006 IEEE Congress on Evolutionary Computation (CEC2006), IEEE, Vancouver, BC, Canada, 2006, pp. 919–926.
[4] L. Cagnina, S. Esquivel, C.A. Coello-Coello, A bi-population PSO with a shake-mechanism for solving constrained numerical optimization, in: 2007 IEEE Congress on Evolutionary Computation (CEC2007), IEEE Press, Singapore, 2007, pp. 670–676.
[5] C.A. Coello Coello, Use of a self-adaptive penalty approach for engineering optimization problems, Computers in Industry 41 (2) (2000) 113–127.
[6] C.A. Coello Coello, Theoretical and numerical constraint handling techniques used with evolutionary algorithms: A survey of the state of the art, Computer Methods in Applied Mechanics and Engineering 191 (11–12) (2002) 1245–1287.
[7] W. Conover, Practical Nonparametric Statistics, John Wiley and Sons, 3rd edition, 1999.
[8] N. Cruz-Cortes, Handling constraints in global optimization using artificial immune systems, in: E. Mezura-Montes (Ed.), Constraint-Handling in Evolutionary Optimization, volume 198, Springer-Verlag, Studies in Computational Intelligence Series, ISBN 978-3-642-00618-0, 2009, pp. 237–262.
[9] W. Daniel, Biostatistics: Basic Concepts and Methodology for the Health Sciences, John Wiley and Sons, 2002.
[10] K. Deb, Optimization for Engineering Design, Prentice-Hall, India, 1995.
[20] F.Z. Huang, L. Wang, Q. He, A hybrid differential evolution with double populations for constrained optimization, in: 2008 Congress on Evolutionary Computation (CEC2008), IEEE Service Center, Hong Kong, 2008, pp. 18–25.
[21] V.L. Huang, A.K. Qin, P.N. Suganthan, Self-adaptive differential evolution algorithm for constrained real-parameter optimization, in: 2006 IEEE Congress on Evolutionary Computation (CEC2006), IEEE, Vancouver, BC, Canada, 2006, pp. 324–331.
[22] S. Kukkonen, J. Lampinen, Constrained real-parameter optimization with generalized differential evolution, in: 2006 IEEE Congress on Evolutionary Computation (CEC2006), IEEE, Vancouver, BC, Canada, 2006, pp. 911–918.
[23] H. Kumar-Singh, T. Ray, W. Smith, C-PSA: Constrained Pareto simulated annealing for constrained multi-objective optimization, Information Sciences 180 (13) (2010) 2499–2513.
[24] J. Lampinen, A constraint handling approach for the differential evolution algorithm, in: Proceedings of the Congress on Evolutionary Computation 2002 (CEC2002), volume 2, IEEE Service Center, Piscataway, New Jersey, 2002, pp. 1468–1473.
[25] R. Landa Becerra, C.A. Coello Coello, Cultured differential evolution for constrained optimization, Computer Methods in Applied Mechanics and Engineering 195 (33–36) (2006) 4303–4322.
[26] G. Leguizamon, C.A. Coello Coello, A boundary search based ACO algorithm coupled with stochastic ranking, in: 2007 IEEE Congress on Evolutionary Computation (CEC2007), IEEE Press, Singapore, 2007, pp. 165–172.
[27] L.D. Li, X. Li, X. Yu, A multi-objective constraint-handling method with PSO algorithm for constrained engineering optimization problems, in: 2008 Congress on Evolutionary Computation (CEC2008), IEEE Service Center, Hong Kong, 2008, pp. 1528–1535.
[28] J.J. Liang, T. Runarsson, E. Mezura-Montes, M. Clerc, P. Suganthan, C.A. Coello Coello, K. Deb, Problem Definitions and Evaluation Criteria for the CEC 2006 Special Session on Constrained Real-Parameter
volume 1, Washington DC, USA, ACM Press, New York, 2005, pp. 225–232. ISBN 1-59593-010-8.
[44] E. Mezura-Montes, J. Velazquez-Reyes, C.A. Coello Coello, Comparing differential evolution models for global optimization, in: 2006 Genetic and Evolutionary Computation Conference (GECCO2006), volume 1, pp. 485–492.
[45] E. Mezura-Montes, J. Velazquez-Reyes, C.A. Coello Coello, Modified differential evolution for constrained optimization, in: 2006 IEEE Congress on Evolutionary Computation (CEC2006), IEEE, Vancouver, BC, Canada, 2006, pp. 332–339.
[46] Z. Michalewicz, M. Schoenauer, Evolutionary algorithms for constrained parameter optimization problems, Evolutionary Computation 4 (1) (1996) 1–32.
[47] A.E. Muñoz-Zavala, A. Hernandez-Aguirre, E.R. Villa-Diharce, S. Botello-Rionda, PESO+ for constrained optimization, in: 2006 IEEE Congress on Evolutionary Computation (CEC2006), IEEE, Vancouver, BC, Canada, 2006, pp. 935–942.
[48] A. Oyama, K. Shimoyama, K. Fujii, New constraint-handling method for multi-objective and multi-constraint evolutionary optimization, Transactions of the Japan Society for Aeronautical and Space Sciences 50 (167) (2007) 56–62.
[49] A.I. Oyman, K. Deb, H.G. Beyer, An alternative constraint handling method for evolution strategies, in: Proceedings of the Congress on Evolutionary Computation 1999 (CEC99), volume 1, IEEE Service Center, Piscataway, New Jersey, 1999, pp. 612–619.
[50] K. Price, R. Storn, J. Lampinen, Differential Evolution: A Practical Approach to Global Optimization, Natural Computing Series, Springer-Verlag, 2005.
[51] K.V. Price, J.I. Ronkkonen, Comparing the uni-modal scaling performance of global and local selection in a mutation-only differential evolution algorithm, in: 2006 IEEE Congress on Evolutionary Computation (CEC2006), IEEE Press, Vancouver, Canada, 2006, pp. 7387–7394.
[69] S. Zeng, H. Shi, H. Li, G. Chen, L. Ding, L. Kang, A lower-dimensional-search evolutionary algorithm and its application in constrained optimization problems, in: 2007 IEEE Congress on Evolutionary Computation (CEC2007), IEEE Press, Singapore, 2007, pp. 1255–1260.
[70] M. Zhang, W. Luo, X. Wang, Differential evolution with dynamic stochastic selection for constrained optimization, Information Sciences 178 (15) (2008) 3043–3074.
[71] Q. Zhang, S. Zeng, R. Wang, H. Shi, G. Chen, L. Ding, L. Kang, Constrained optimization by the evolutionary algorithm with lower dimensional crossover and gradient-based mutation, in: 2008 Congress on Evolutionary Computation (CEC2008), IEEE Service Center, Hong Kong, 2008, pp. 273–279.
[72] Y. Zhou, J. He, A runtime analysis of evolutionary algorithms for constrained optimization problems, IEEE Transactions on Evolutionary Computation 11 (5) (2007) 608–619.
[73] K. Zielinski, R. Laur, Constrained single-objective optimization using differential evolution, in: 2006 IEEE Congress on Evolutionary Computation (CEC2006), IEEE, Vancouver, BC, Canada, 2006, pp. 927–934.
[74] K. Zielinski, R. Laur, Stopping criteria for differential evolution in constrained single-objective optimization, in: U.K. Chakraborty (Ed.), Advances in Differential Evolution, Springer, Berlin, 2008, pp. 111–138. ISBN 978-3-540-68827-3.
[75] K. Zielinski, S.P. Vudathu, R. Laur, Influence of different deviations allowed for equality constraints on particle swarm optimization and differential evolution, in: N. Krasnogor, G. Nicosia, M. Pavone, D. Pelta (Eds.), Nature Inspired Cooperative Strategies for Optimization, Springer, Berlin, 2008, pp. 249–259. ISBN 978-3-540-78986-4.
[76] K. Zielinski, X. Wang, R. Laur, Comparison of adaptive approaches for differential evolution, in: G. Rudolph, T. Jansen, S. Lucas, C. Poloni, N. Beume (Eds.), Parallel Problem Solving from Nature–PPSN X, Springer, Lecture Notes in Computer Science Vol. 5199, Dortmund, Germany, 2008, pp. 641–650.
Table Captions
Table 1: DE variants used in this study. jrand is a random integer
generated between [1, n], where n is the number of variables of the problem.
randj [0, 1] is a real number generated at random between 0 and 1. Both
numbers are generated using a uniform distribution. ~ui,g+1 is the trial vector
(child vector), ~xr0,g is the base vector chosen at random from the current
population, ~xbest,g is the best vector in the population, ~xi,g is the target vector
(parent vector), and ~xr1,g and ~xr2,g are used to generate the difference vector.
Table 2: Details of the 24 test problems [28]. n is the number of decision
variables, ρ = |F| / |S| is the estimated ratio between the feasible region
and the search space, LI is the number of linear inequality constraints, NI is
the number of nonlinear inequality constraints, LE is the number of linear
equality constraints, and NE is the number of nonlinear equality constraints.
a is the number of active constraints at the optimum.
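The ρ column can, in principle, be reproduced by uniform Monte Carlo sampling of the search space. The sketch below illustrates the idea under that assumption; the exact sampling protocol is the one described in [28], and the function and argument names here (estimate_rho, is_feasible) are ours, for illustration only.

```python
import random

def estimate_rho(bounds, is_feasible, samples=100_000, seed=1):
    """Estimate rho = |F|/|S| by uniform sampling of the box `bounds`
    (one (low, high) pair per decision variable)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        hits += is_feasible(x)  # bool counts as 0/1
    return hits / samples

# Example: the unit disc inside the square [-1, 1]^2, so rho should be
# close to pi/4 (about 0.785).
rho = estimate_rho([(-1.0, 1.0)] * 2, lambda x: x[0] ** 2 + x[1] ** 2 <= 1.0)
```

The very small ρ values in Table 2 (e.g. 0.0000% for the equality-constrained problems) show why such sampling reports essentially zero feasible volume when equality constraints are present.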
Table 3: Classification of problems for the first experiment based on the
number of decision variables.
Table 4: Classification of problems for the first experiment based on the
type of constraints.
Table 5: FP values obtained by each DE variant on each test problem.
Best results are marked in boldface.
Table 6: P values obtained by each DE variant on each test problem.
Best results are marked in boldface.
Table 7: AFES values obtained by each DE variant on each test problem.
Best results are marked in boldface. A dash (-) means that the performance
measure was not defined for this problem/variant.
Table 8: SP values obtained by each DE variant on each test problem.
Best results are marked in boldface. A dash (-) means that the performance
measure was not defined for this problem/variant.
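As a rough guide to how the four measures in Tables 5-8 relate to each other, the sketch below computes them from a set of independent runs. This is our reading of the usual CEC-style definitions (FP: rate of runs reaching the feasible region; P: rate of successful runs; AFES: average evaluations over successful runs; SP: success performance), not necessarily the paper's exact formulas, and all names are illustrative.

```python
def performance_measures(runs):
    """Compute FP, P, AFES, and SP for one (variant, problem) pair.
    `runs` holds one (found_feasible, success, evals) tuple per
    independent run."""
    total = len(runs)
    feasible = [r for r in runs if r[0]]
    successful = [r for r in runs if r[1]]
    FP = len(feasible) / total           # rate of runs reaching feasibility
    P = len(successful) / total          # rate of runs reaching the optimum
    if not successful:                   # AFES and SP undefined: "-" in Tables 7-8
        return FP, P, None, None
    AFES = sum(r[2] for r in successful) / len(successful)
    SP = AFES * total / len(successful)  # equivalently AFES / P
    return FP, P, AFES, SP

# Example: 30 runs, all feasible, 27 successful at 100,000 evaluations each.
runs = [(True, True, 100_000)] * 27 + [(True, False, None)] * 3
FP, P, AFES, SP = performance_measures(runs)
```

Note how SP penalizes unreliable variants: two variants with equal AFES but different P get different SP, which is why the SP rankings in Table 8 can differ from the AFES rankings in Table 7.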
Table 9: a) Test problems with different dimensionality. b) Test problems
with different types of constraints.
Table 10: Statistical results (B: Best, M: Mean, W: Worst, SD: Standard
Deviation) obtained by DECV with respect to those provided by state-of-the-art
approaches on 13 benchmark problems. Values in boldface mean
that the global optimum or best known solution was reached; values in italics
mean that the obtained result is better (but not the optimal or best known)
with respect to the approaches compared.
Table 11: Statistical results (B: Best, M: Mean, W: Worst, SD: Standard
Deviation) obtained by DECV and A-DDE on the last 11 benchmark problems.
Values in boldface mean that the global optimum or best known
solution was reached.
Table 12: Statistical results (B: Best, M: Mean, W: Worst, SD: Standard
Deviation) obtained by the alternative DECV (called A-DECV) on those problems
where the original DECV provided good but not competitive results.
Values in boldface mean that the global optimum or best known solution was
reached.
Variant
DE/rand/1/bin:
uj,i,g+1 = xj,r0,g + F (xj,r1,g - xj,r2,g)   if randj [0, 1] < CR or j = jrand
uj,i,g+1 = xj,i,g                            otherwise
DE/best/1/bin:
uj,i,g+1 = xj,best,g + F (xj,r1,g - xj,r2,g) if randj [0, 1] < CR or j = jrand
uj,i,g+1 = xj,i,g                            otherwise
DE/target-to-rand/1:
uj,i,g+1 = xj,i,g + F (xj,r0,g - xj,i,g) + F (xj,r1,g - xj,r2,g)
DE/target-to-best/1:
uj,i,g+1 = xj,i,g + F (xj,best,g - xj,i,g) + F (xj,r1,g - xj,r2,g)
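The four trial-vector rules above can be sketched in a single routine, assuming the standard DE conventions (mutually distinct random indices and binomial crossover for the /bin variants). This is an illustrative sketch, not the authors' code; all names are ours.

```python
import random

def trial_vector(variant, pop, i, best, F, CR):
    """Generate one trial vector under the four rules of Table 1.
    `pop` is a list of vectors (lists of floats), `i` the target index,
    `best` the index of the best vector, F and CR the DE parameters."""
    n = len(pop[i])
    # r0, r1, r2: mutually distinct indices, all different from i
    r0, r1, r2 = random.sample([k for k in range(len(pop)) if k != i], 3)
    jrand = random.randrange(n)      # guarantees at least one mutated gene
    u = []
    for j in range(n):
        diff = F * (pop[r1][j] - pop[r2][j])
        if variant == "rand/1/bin":
            v = pop[r0][j] + diff
        elif variant == "best/1/bin":
            v = pop[best][j] + diff
        elif variant == "target-to-rand/1":
            v = pop[i][j] + F * (pop[r0][j] - pop[i][j]) + diff
        else:  # "target-to-best/1"
            v = pop[i][j] + F * (pop[best][j] - pop[i][j]) + diff
        if variant.endswith("/bin"):  # binomial crossover, /bin variants only
            u.append(v if (random.random() < CR or j == jrand) else pop[i][j])
        else:                         # arithmetic variants use no crossover
            u.append(v)
    return u

# Example: one trial vector for individual 0 of a toy 4-member population.
pop = [[0.0, 0.0], [1.0, 2.0], [3.0, 1.0], [2.0, 2.0]]
u = trial_vector("rand/1/bin", pop, i=0, best=1, F=0.5, CR=0.9)
```

The key structural difference is visible here: the two /bin variants recombine the mutant with the target vector gene by gene, while the two target-to variants are purely arithmetic and always move the target vector along the two difference directions.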
Table 2
Prob.  n   Type of function  ρ          LI  NI  LE  NE  a
g01    13  quadratic         0.0111%     9   0   0   0  6
g02    20  nonlinear         99.9971%    0   2   0   0  1
g03    10  polynomial        0.0000%     0   0   0   1  1
g04     5  quadratic         52.1230%    0   6   0   0  2
g05     4  cubic             0.0000%     2   0   0   3  3
g06     2  cubic             0.0066%     0   2   0   0  2
g07    10  quadratic         0.0003%     3   5   0   0  6
g08     2  nonlinear         0.8560%     0   2   0   0  0
g09     7  polynomial        0.5121%     0   4   0   0  2
g10     8  linear            0.0010%     3   3   0   0  6
g11     2  quadratic         0.0000%     0   0   0   1  1
g12     3  quadratic         4.7713%     0   1   0   0  0
g13     5  nonlinear         0.0000%     0   0   0   3  3
g14    10  nonlinear         0.0000%     0   0   3   0  3
g15     3  quadratic         0.0000%     0   0   1   1  2
g16     5  nonlinear         0.0204%     4  34   0   0  4
g17     6  nonlinear         0.0000%     0   0   0   4  4
g18     9  quadratic         0.0000%     0  12   0   0  6
g19    15  nonlinear         33.4761%    0   5   0   0  0
g20    24  linear            0.0000%     0   6   2  12  16
g21     7  linear            0.0000%     0   1   0   5  6
g22    22  linear            0.0000%     0   1   8  11  19
g23     9  linear            0.0000%     0   2   3   1  6
g24     2  linear            79.6556%    0   2   0   0  2
Table 3
Class   Number of variables  Problems
High    10 - 20              g01, g02, g03, g07, g14, g19, g20, g22
Medium  5 - 9                g04, g09, g10, g13, g16, g17, g18, g21, g23
Low     2 - 4                g05, g06, g08, g11, g12, g15, g24
Table 4
Type of constraints          Problems
Only inequalities            g01, g02, g04, g06, g07, g08, g09, g10, g12,
                             g16, g18, g19, g24
Only equalities              g03, g11, g13, g14, g15, g17
Inequalities and equalities  g05, g20, g21, g22, g23
Table 5
Problem  DE/rand/1/bin  DE/best/1/bin  DE/target-to-rand/1  DE/target-to-best/1
g01      1              1              1                    1
g02      1              1              1                    1
g03      1              0.9            0.83                 1
g04      1              1              1                    1
g05      0.97           0.93           0.77                 1
g06      1              1              1                    1
g07      1              1              1                    1
g08      1              1              1                    1
g09      1              1              1                    1
g10      1              1              1                    1
g11      1              1              1                    1
g12      1              1              1                    1
g13      0.87           0.87           0.3                  0.97
g14      1              0.93           0.43                 1
g15      1              1              1                    1
g16      1              1              1                    1
g17      1              0.93           0.87                 1
g18      1              1              1                    1
g19      1              1              1                    1
g21      1              0.97           0.97                 1
g23      0.90           0.90           0.17                 0.97
g24      1              1              1                    1
Table 6
Problem  DE/rand/1/bin  DE/best/1/bin  DE/target-to-rand/1  DE/target-to-best/1
g01      1              0.8            1                    0.87
g02      0.03           0              0.13                 0
g03      0              0.03           0                    0
g04      1              1              1                    1
g05      1              1              0.6                  0.27
g06      1              1              1                    1
g07      1              0.37           1                    1
g08      1              1              1                    1
g09      1              0.93           1                    1
g10      1              0.2            1                    0.67
g11      1              0.97           1                    1
g12      1              1              1                    1
g13      0              0.27           0                    0.03
g14      0.93           0.67           0.1                  0.43
g15      1              1              0.8                  0.3
g16      1              1              1                    1
g17      0              0              0.03                 0
g18      1              0.8            1                    0.97
g19      0              0.93           0                    1
g21      0.9            0.5            0.63                 0.43
g23      0.5            0.5            0                    0.43
g24      1              1              1                    1
Table 7
Problem  DE/rand/1/bin  DE/best/1/bin  DE/target-to-rand/1  DE/target-to-best/1
g01      361679.33      37135.17       311840.07            33770.04
g02      401419.00      -              472004.25            -
g03      -              104859.00      -                    -
g04      41756.13       22949.40       40342.37             21687.03
g05      233141.67      88544.47       340144.89            64530.75
g06      16902.00       11886.30       16677.87             18429.37
g07      298298.50      59669.55       237150.83            59828.50
g08      2597.90        1732.83        2553.07              1832.67
g09      77154.70       28459.21       62958.03             27866.17
g10      205181.10      75590.50       170220.43            51686.05
g11      22051.90       9649.76        76508.90             31771.77
g12      7976.63        3903.03        11003.67             7330.00
g13      -              169700.63      -                    316734.00
g14      229520.11      70108.05       233873.33            95154.08
g15      101919.40      69464.63       221034.54            184372.00
g16      36011.43       16112.70       32605.73             15506.43
g17      -              -              266434.00            -
g18      221071.67      53699.04       229835.00            42043.41
g19      -              122569.11      -                    86005.70
g21      105550.63      44098.13       143494.00            47643.38
g23      416715.00      170003.73      -                    182492.77
g24      7165.30        4780.37        6994.37              4669.67
Table 8
Problem  DE/rand/1/bin  DE/best/1/bin  DE/target-to-rand/1  DE/target-to-best/1
g01      3.62E+05       4.64E+04       3.12E+05             3.90E+04
g02      1.20E+07       -              3.54E+06             -
g03      -              3.15E+06       -                    -
g04      4.18E+04       2.29E+04       4.03E+04             2.17E+04
g05      2.33E+05       8.85E+04       5.67E+05             2.42E+05
g06      1.69E+04       1.19E+04       1.67E+04             1.84E+04
g07      2.98E+05       1.63E+05       2.37E+05             5.98E+04
g08      2.60E+03       1.73E+03       2.55E+03             1.83E+03
g09      7.72E+04       3.05E+04       6.30E+04             2.79E+04
g10      2.05E+05       3.78E+05       1.70E+05             7.75E+04
g11      2.21E+04       9.98E+03       7.65E+04             3.18E+04
g12      7.98E+03       3.90E+03       1.10E+04             7.33E+03
g13      -              6.36E+05       -                    9.50E+06
g14      2.46E+05       1.05E+05       2.34E+06             2.20E+05
g15      1.02E+05       6.95E+04       2.76E+05             6.15E+05
g16      3.60E+04       1.61E+04       3.26E+04             1.55E+04
g17      -              -              7.99E+06             -
g18      2.21E+05       6.71E+04       2.30E+05             4.35E+04
g19      -              1.31E+05       -                    8.60E+04
g21      1.17E+05       8.82E+04       2.27E+05             1.10E+05
g23      8.33E+05       3.40E+05       -                    4.21E+05
g24      7.17E+03       4.78E+03       6.99E+03             4.67E+03
Table 9
a)
Problem  Dimensionality
g02      High
g21      Medium
g06      Low

b)
Problem  Type of constraints
g10      Inequalities
g13      Equalities
g23      Both of them
Table 10
Problem (BKS)     Stat  RDE [38]    EXDE [24]   DDE [43]    A-DDE [42]  DSS-MDE [70]  DECV
g01 (-15.000)     B     -15.000     -15.000     -15.000     -15.000     -15.000       -15.000
                  M     -14.792     -15.000     -15.000     -15.000     -15.000       -14.855
                  W     -12.743     -15.000     -15.000     -15.000     -15.000       -13.000
                  SD    NA          NA          1.00E-09    7.00E-06    1.30E-10      4.59E-01
g02 (-0.803619)   B     -0.803619   NA          -0.803619   -0.803605   -0.803619     -0.704009
                  M     -0.746236   NA          -0.798079   -0.771090   -0.786970     -0.569458
                  W     -0.302179   NA          -0.751742   -0.609853   -0.728531     -0.238203
                  SD    NA          NA          1.01E-02    3.66E-02    1.50E-02      9.51E-02
g03 (-1.000)      B     -1.000      -1.025      -1.000      -1.000      -1.005        -0.461
                  M     -0.640      -1.025      -1.000      -1.000      -1.005        -0.134
                  W     -0.029      -1.025      -1.000      -1.000      -1.005        -0.002
                  SD    NA          NA          0           9.30E-12    1.90E-08      1.17E-01
g04 (-30665.539)  B     -30665.539  -31025.600  -30665.539  -30665.539  -30665.539    -30665.539
                  M     -30592.154  -31025.600  -30665.539  -30665.539  -30665.539    -30665.539
                  W     -29986.214  -31025.600  -30665.539  -30665.539  -30665.539    -30665.539
                  SD    NA          NA          0           3.20E-13    2.70E-11      1.56E-06
g05 (5126.497)    B     5126.497    5126.484    5126.497    5126.497    5126.497      5126.497
                  M     5218.729    5126.484    5126.497    5126.497    5126.497      5126.497
                  W     5502.410    5126.484    5126.497    5126.497    5126.497      5126.497
                  SD    NA          NA          0           2.10E-11    0             0
g06 (-6961.814)   B     -6961.814   -6961.814   -6961.814   -6961.814   -6961.814     -6961.814
                  M     -6367.575   -6961.814   -6961.814   -6961.814   -6961.814     -6961.814
                  W     -2236.950   -6961.814   -6961.814   -6961.814   -6961.814     -6961.814
                  SD    NA          NA          0           2.11E-12    0             0
g07 (24.306)      B     24.306      24.306      24.306      24.306      24.306        24.306
                  M     104.599     24.306      24.306      24.306      24.306        24.794
                  W     1120.541    24.307      24.306      24.306      24.306        29.511
                  SD    NA          NA          8.22E-09    4.20E-05    7.50E-07      1.37E+00
g08 (-0.095825)   B     -0.095825   -0.095825   -0.095825   -0.095825   -0.095825     -0.095825
                  M     -0.091292   -0.095825   -0.095825   -0.095825   -0.095825     -0.095825
                  W     -0.027188   -0.095825   -0.095825   -0.095825   -0.095825     -0.095825
                  SD    NA          NA          0           9.10E-10    4.00E-17      4.23E-17
g09 (680.630)     B     680.630     680.630     680.630     680.630     680.630       680.630
                  M     692.472     680.630     680.630     680.630     680.630       680.630
                  W     839.780     680.630     680.630     680.630     680.630       680.630
                  SD    NA          NA          0           1.15E-10    2.90E-13      3.45E-07
g10 (7049.248)    B     7049.248    7049.248    7049.248    7049.248    7049.248      7049.248
                  M     8842.660    7049.248    7049.266    7049.248    7049.249      7103.548
                  W     15580.370   7049.248    7049.617    7049.248    7049.255      7808.980
                  SD    NA          NA          4.45E-02    3.23E-04    1.40E-03      1.48E+02
g11 (0.75)        B     0.75        0.75        0.75        0.75        0.749         0.75
                  M     0.76        0.75        0.75        0.75        0.749         0.75
                  W     0.87        0.75        0.75        0.75        0.749         0.75
                  SD    NA          NA          0           5.35E-15    0             1.12E-16
g12 (-1.000)      B     -1.000      NA          -1.000      -1.000      -1.000        -1.000
                  M     -1.000      NA          -1.000      -1.000      -1.000        -1.000
                  W     -1.000      NA          -1.000      -1.000      -1.000        -1.000
                  SD    NA          NA          0           4.10E-09    0             0
g13 (0.053942)    B     0.053866    NA          0.053941    0.053942    0.053942      0.059798
                  M     0.747227    NA          0.069336    0.079627    0.053942      0.382401
                  W     2.259875    NA          0.438803    0.438803    0.053942      0.999094
                  SD    NA          NA          7.58E-02    1.00E-13    1.00E-13      2.68E-01
Table 11
Problem (BKS)      Stat  A-DDE [42]   DECV
g14 (-47.764888)   B     -47.764888   -47.764888
                   M     -47.764131   -47.722542
                   W     -47.764064   -47.036510
                   SD    9.00E-06     1.62E-01
g15 (961.715022)   B     961.715022   961.715022
                   M     961.715022   961.715022
                   W     961.715022   961.715022
                   SD    0            2.31E-13
g16 (-1.905155)    B     -1.905155    -1.905155
                   M     -1.905155    -1.905155
                   W     -1.905155    -1.905149
                   SD    0            1.10E-06
g17 (8853.539675)  B     8853.540000  8853.541289
                   M     8854.664     8919.936362
                   W     8858.874     8938.571060
                   SD    1.43E+00     2.59E+01
g18 (-0.866025)    B     -0.866025    -0.866025
                   M     -0.866025    -0.859657
                   W     -0.866025    -0.674981
                   SD    0            3.48E-02
g19 (32.655593)    B     32.655593    32.655593
                   M     32.658000    32.660587
                   W     32.665000    32.785360
                   SD    1.72E-03     2.37E-02
g20 (NA)           B     NA           NA
                   M     NA           NA
                   W     NA           NA
                   SD    NA           NA
g21 (193.724510)   B     193.724510   193.724510
                   M     193.724510   198.090578
                   W     193.726000   324.702842
                   SD    2.60E-04     2.39E+01
g22 (236.430976)   B     NA           NA
                   M     NA           NA
                   W     NA           NA
                   SD    NA           NA
g23 (-400.0551)    B     -400.055052  -400.055093
                   M     -391.415000  -392.029610
                   W     -367.452000  -342.524522
                   SD    9.13E+00     1.24E+01
g24 (-5.508013)    B     -5.508013    -5.508013
                   M     -5.508013    -5.508013
                   W     -5.508013    -5.508013
                   SD    3.12E-14     2.71E-15
Table 12
Problem (BKS)    Stat  A-DECV
g01 (-15.000)    B     -15.000
                 M     -14.999
                 W     -14.999
                 SD    1.00E-06
g02 (-0.803619)  B     -0.803592
                 M     -0.785055
                 W     -0.748354
                 SD    0.012178
g03 (-1.000)     B     -1.000
                 M     -0.331
                 W     -0.000
                 SD    3.45E-01
g07 (24.306)     B     24.306
                 M     24.306
                 W     24.306
                 SD    0
g10 (7049.248)   B     7049.248
                 M     7049.248
                 W     7049.248
                 SD    4.63E-12
g13 (0.053942)   B     0.053942
                 M     0.336336
                 W     0.443497
                 SD    1.73E-01
Figure Captions
Figure 1: DE/rand/1/bin pseudocode. randj [0, 1] is a function that
returns a real number between 0 and 1. randint[min, max] is a function that
returns an integer number between min and max. NP, MAX GEN, CR,
and F are user-defined parameters. n is the dimensionality of the problem.
Steps indicated with an asterisk (*) may be changed from variant to variant
as indicated in Table 1.
Figure 2: DE/rand/1/bin graphical example. ~xi is the target vector, ~xr0
is the base vector chosen at random, and ~xr1 and ~xr2 (also chosen at random)
are used to generate the difference vector that defines a search direction. The
black square represents the mutant vector, which is one possible location of
the trial vector generated after performing recombination. The other two
squares represent the two remaining possible locations for the trial vector
after recombination.
Figure 3: DE/best/1/bin graphical example. ~xi is the target vector, ~xbest
is the base vector (the best vector so far in the population), and ~xr1 and ~xr2
(chosen at random) are used to generate the difference vector that defines a
search direction. The black square represents the mutant vector, which is
one possible location of the trial vector generated after performing
recombination. The other two squares represent the two remaining possible
locations for the trial vector after recombination.
Figure 4: DE/target-to-rand/1 graphical example. ~xi is the target vector,
~xr0 is the base vector chosen at random, and the difference between them
defines a first search direction. ~xr1 and ~xr2 (also chosen at random) are used
to generate the difference vector that defines a second search direction. The
trial vector is located at the black square.
Figure 5: DE/target-to-best/1 graphical example. ~xi is the target vector,
~xbest is the base vector (the best vector so far in the population), and
the difference between them defines a first search direction. ~xr1 and ~xr2
(chosen at random) are used to generate the difference vector that defines
a second search direction. The trial vector is located at the black square.
Figure 6: Radial graphic for those test problems where the AF ES values
were less than 80,000 for all variants.
Figure 7: Radial graphic for those test problems where the SP values
were less than 80,000 for all variants.
Figure 8: Results obtained in the three performance measures by DE/rand/1/bin
in problem g02.
Figure 9: Results obtained in the three performance measures by DE/best/1/bin
in problem g02.
Figure 10: Results obtained in the three performance measures by DE/rand/1/bin
in problem g21.
Figure 11: Results obtained in the three performance measures by DE/best/1/bin
in problem g21.
Figure 12: Results obtained in the three performance measures by DE/rand/1/bin
in problem g06.
Figure 13: Results obtained in the three performance measures by DE/best/1/bin
in problem g06.
Figure 14: Results obtained in the three performance measures by DE/rand/1/bin
in problem g10.
Figure 15: Results obtained in the three performance measures by DE/best/1/bin
in problem g10.
Figure 16: Results obtained in the three performance measures by DE/rand/1/bin
in problem g13.
Figure 17: Results obtained in the three performance measures by DE/best/1/bin
in problem g13.
Figure 18: Results obtained in the three performance measures by DE/rand/1/bin
in problem g23.
Begin
    g = 0
    Create a random initial population ~xi,g, i = 1, . . . , NP
    Evaluate f(~xi,g), i = 1, . . . , NP
    For g = 1 to MAX GEN Do
        For i = 1 to NP Do
            Select randomly r0 ≠ r1 ≠ r2 ≠ i (*)
            jrand = randint[1, n]
            For j = 1 to n Do
                If (randj [0, 1] < CR or j = jrand) Then (*)
                    uj,i,g+1 = xj,r0,g + F (xj,r1,g - xj,r2,g)
                Else
                    uj,i,g+1 = xj,i,g
                End If
            End For
            If (f(~ui,g+1) ≤ f(~xi,g)) Then
                ~xi,g+1 = ~ui,g+1
            Else
                ~xi,g+1 = ~xi,g
            End If
        End For
        g = g + 1
    End For
End
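For reference, the pseudocode above translates into, e.g., the following minimal Python sketch. This is our illustration, not the authors' code: it is unconstrained for brevity (the paper couples this loop with constraint-handling rules), and it replaces individuals in place rather than building an explicit g+1 population, a common simplification.

```python
import random

def de_rand_1_bin(f, bounds, NP=30, F=0.7, CR=0.9, max_gen=200, seed=1):
    """Minimal DE/rand/1/bin minimizing f over the box `bounds`
    (one (low, high) pair per variable)."""
    rng = random.Random(seed)
    n = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    fit = [f(x) for x in pop]
    for _ in range(max_gen):
        for i in range(NP):
            # mutually distinct random indices, all different from i
            r0, r1, r2 = rng.sample([k for k in range(NP) if k != i], 3)
            jrand = rng.randrange(n)  # at least one gene comes from the mutant
            u = [pop[r0][j] + F * (pop[r1][j] - pop[r2][j])
                 if (rng.random() < CR or j == jrand) else pop[i][j]
                 for j in range(n)]
            fu = f(u)
            if fu <= fit[i]:          # greedy one-to-one selection
                pop[i], fit[i] = u, fu
    b = min(range(NP), key=fit.__getitem__)
    return pop[b], fit[b]

# Usage: minimize the 5-dimensional sphere function.
x, fx = de_rand_1_bin(lambda v: sum(t * t for t in v), [(-5.0, 5.0)] * 5)
```

Swapping the mutation line for one of the other three rules in Table 1 (and dropping the crossover test for the arithmetic variants) yields the remaining variants studied in the first experiment.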
[Figures 2-5: vector diagrams in the (x1, x2) plane illustrating the four DE
variants (DE/rand/1/bin, DE/best/1/bin, DE/target-to-rand/1, and
DE/target-to-best/1); see the figure captions above.]
[Figures 6-18: radial graphics of the AFES and SP values, and three-panel
(a)/(b)/(c) plots of the performance measures for the problems listed in the
figure captions above.]