Neural Comput & Applic (2019) 31:171–188
DOI 10.1007/s00521-017-2988-6

ORIGINAL ARTICLE

Feature selection via a novel chaotic crow search algorithm

Gehad Ismail Sayed · Aboul Ella Hassanien · Ahmad Taher Azar

Received: 15 November 2016 / Accepted: 27 March 2017 / Published online: 25 April 2017
© The Natural Computing Applications Forum 2017
Abstract Crow search algorithm (CSA) is a new nature-inspired algorithm proposed by Askarzadeh in 2016. The main inspiration of CSA comes from the mechanism crows use to hide their food. Like most optimization algorithms, CSA suffers from a low convergence rate and entrapment in local optima. In this paper, a novel meta-heuristic optimizer, namely the chaotic crow search algorithm (CCSA), is proposed to overcome these problems. The proposed CCSA is applied to the feature selection problem on 20 benchmark datasets. Ten chaotic maps are employed during the optimization process of CSA. The performance of CCSA is compared with other well-known and recent optimization algorithms. Experimental results reveal the capability of CCSA to find an optimal feature subset that maximizes the classification performance and minimizes the number of selected features. Moreover, the results show that CCSA is superior to CSA and the other algorithms. In addition, the experiments show that the sine chaotic map is the appropriate map to significantly boost the performance of CSA.

Keywords Crow search algorithm · Feature selection · Optimization algorithm · Chaos theory

Gehad Ismail Sayed (gehad.ismail@egyptscience.net) · Aboul Ella Hassanien (aboitcairo@gmail.com) · Ahmad Taher Azar (ahmad.azar@fci.bu.edu.eg; ahmad_t_azar@ieee.org)
Scientific Research Group in Egypt, http://www.egyptscience.net
1 Faculty of Computers and Information, Cairo University, Cairo, Egypt
2 Faculty of Computers and Information, Benha University, Banha, Egypt
3 Nanoelectronics Integrated Systems Center (NISC), Nile University, Giza, Egypt

1 Introduction

Recently, evolutionary algorithms (EAs) have attracted great interest and have proved their efficiency for solving optimization problems. Some of these algorithms are the genetic algorithm (GA) [15], particle swarm optimization (PSO) [26], many-objective particle swarm optimization (MOPSO) [9], and differential evolution (DE) [44]. Despite the different structures of EAs, they usually start from a random population and then evaluate it through the iterations. EAs are also similar in dividing the search into two main phases: exploitation and exploration. In many cases, EAs get stuck in local minima. This is due to an improper balance between exploitation and exploration and to the stochastic nature of EAs. Several studies have been presented in the literature to overcome these problems and to improve the performance of EAs. Chaos is one of the mathematical approaches recently employed to boost the performance of EAs.

In the last decade, a branch of mathematics and system science, namely chaos, has been developed. It has been applied intensively in different scientific fields such as synchronization [60], chaos control [57], various optimization studies [45, 51], and so on. Chaos has three main dynamic properties: (1) the quasi-stochastic property, (2) sensitivity to initial conditions, and (3) ergodicity. The application of chaos in optimization research disciplines has
attracted great attention in recent years. The chaotic optimization algorithm (COA) is one of the applications of chaos, and it makes use of the nature of chaotic sequences [29]. It has been proved that replacing random variables with chaotic variables can enhance the performance of COA [29]. Thus, several studies have combined chaos with other algorithms in order to improve performance. Some of them are chaotic ant swarm optimization (CASO) [5], the chaotic genetic algorithm (CGA) [1, 49], chaotic particle swarm optimization (CPSO) [20, 21, 50], chaotic pattern search (CPS) [54], the chaotic simulated annealing algorithm (CSA) [33], the chaotic differential evolution algorithm (CDEA) [22], chaotic biogeography-based optimization (CBBO) [39], the chaotic krill herd algorithm (CKHA) [47], and the chaotic firefly algorithm (CFA) [13]. These algorithms have been proposed in the literature and applied in different domains.

Feature selection is considered a preprocessing step in machine learning. Selecting the most relevant feature subset is a challenging task for complex or large datasets. Discovering hidden patterns or valuable knowledge in large-scale data has become a pressing matter [23]. Feature selection has been proven to effectively remove irrelevant and redundant features. In addition, it can improve the performance of classifiers, reduce the computational cost, and reduce the required storage [28]. In recent years, data have become increasingly large in both the number of attributes/features and the number of instances. Feature selection has been applied successfully in many applications such as text categorization [52], genome projects [4], customer relationship management [35], and image retrieval [37]. Thus, nowadays, feature selection for high-dimensional data has become necessary for machine learning tasks.

Feature selection has emerged as an effective tool in medical classification and diagnostic support systems. Microarray technology is one of the recent breakthroughs in experimental molecular biology. This technology helps scientists monitor gene expression on a genomic scale, which can significantly increase the possibility of cancer classification and diagnosis. However, many factors can degrade the outcome of the analysis. One of these factors is the huge number of genes found in the original data. Thus, determining the most discriminatory genes is considered a critical task to improve the speed and accuracy of prediction systems [16, 43]. The authors in [43] present a hybrid forward selection algorithm for cardiovascular disease diagnosis. The experimental results demonstrate that the proposed approach finds smaller feature subsets with higher diagnostic accuracy compared to backward elimination and forward inclusion algorithms.

Feature selection can have a great impact on clustering performance, as selecting relevant features can improve the learning quality and reduce computational time and memory. In addition, it can help in better data understanding and interpretation. Several studies on applying feature selection before clustering are found in the literature [6, 53]. It has been proven that clusters built on a subset of salient features are more practical and interpretable than clusters built using all of the features, as some of these features include noise. Moreover, feature selection plays an important role in multi-label learning. Class label dependence in multi-label learning is another aspect that can be noisy and incomplete. Thus, the high dimensionality of multi-labeled data not only increases the computational costs and memory storage requirements of many learning algorithms but also, in real applications, limits the usage of these algorithms. Feature selection can be used to improve the performance of multi-label ELM-RBF by reducing the feature dimensionality while performing clustering analysis [58]. The authors in [24] proposed a multi-label informed feature selection approach; they used label correlations to select the most discriminating features across multiple labels.

Feature selection algorithms are divided into two main categories, namely wrapper-based algorithms and filter-based algorithms [27]. Wrapper-based algorithms depend on machine learning algorithms for their evaluation, whereas filter-based algorithms use statistical methods to select a feature subset. Although wrapper-based algorithms obtain better results, they are computationally expensive. Thus, an intelligent search algorithm is needed to reduce the computational time [17]. Several studies have introduced feature selection algorithms such as sequential backward selection (SBS) and sequential forward selection (SFS). However, these algorithms suffer from getting stuck in local optima and from high computational time. EAs, through their search agents, can search the feature space adaptively to find the optimal solution. Some of these algorithms are the gradient descent algorithm [10], discrete particle swarm optimization (DPSO) [46], tabu search [56], elephant herding optimization (EHO) [11], the firefly algorithm (FA) [12], harmony search (HS) [14], charged system search (CSS) [25], the bird swarm algorithm (BSA) [32], animal migration optimization (AMO) [30], teaching–learning-based optimization (TLBO) [36], moth-flame optimization (MFO) [55], and the gray wolf optimizer (GWO) [34].

Although the selection of the best feature subset has been significantly improved, there is still a need to push the presented work further. This work proposes a novel hybrid approach in which chaos is embedded in CSA. The main contribution of this paper is a chaotic binary version of CSA, called CCSA, proposed to enhance the performance of CSA. In this hybrid approach, the chaotic search methodology is adopted to select the optimal feature subset that maximizes the classification accuracy and minimizes the feature subset length. Ten one-dimensional chaotic maps are
adopted to replace the random movement parameters of CSA. In this study, CCSA is used as a feature selection algorithm. The performance of the proposed approach is tested on 20 benchmark datasets. In addition, the performance of CCSA is compared with seven other meta-heuristic algorithms.

The organization of this paper is as follows. A brief overview of the basic CSA algorithm and the ten chaotic maps is given in Section 2. A detailed description of the proposed CCSA approach is provided in Section 3. Experimental results and discussion of the proposed CCSA are presented in Section 4. Finally, a summary of the proposed work is given in Section 5.

at iteration $t$ for crow $j$; $N^{j,t}$ is defined as the best position obtained so far by crow $j$.

At iteration $t$, crow $j$ wants to follow crow $z$ to its hiding place. In this situation, two cases may happen:

Case 1: Crow $z$ does not know that crow $j$ is following it. Crow $j$ therefore approaches the hiding place of crow $z$, and the position update of crow $j$ is defined as follows:

$y^{j,t+1} = y^{j,t} + R_j \times fl^{j,t} \times (N^{z,t} - y^{j,t})$  (1)
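To make the update rule concrete, the following illustrative Python sketch performs one CSA iteration (the authors' implementation is in MATLAB). Here $R_j$ is assumed to be a uniform random draw in [0, 1] and fl is the flight length; Case 2 (relocation to a random position when crow $z$ notices it is followed, selected via an awareness probability AP) is not shown in the surviving text and is taken from the original CSA paper [2], so those details are assumptions:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def csa_step(y, memory, fl=2.0, ap=0.1):
    """One CSA iteration over all crows (sketch of Eq. (1)).

    y      : (n_crows, n_dims) array of current positions
    memory : (n_crows, n_dims) array of best positions found so far (N in the text)
    fl     : flight length; ap : awareness probability (default values assumed)
    """
    n_crows, n_dims = y.shape
    y_next = y.copy()
    for j in range(n_crows):
        z = rng.integers(n_crows)       # crow j randomly picks a crow z to follow
        if rng.random() >= ap:          # Case 1: crow z does not notice it is followed
            r_j = rng.random()          # R_j in Eq. (1); CCSA replaces this draw
                                        # with the next value of a chaotic map
            y_next[j] = y[j] + r_j * fl * (memory[z] - y[j])
        else:                           # Case 2 (per the original CSA paper [2]):
            y_next[j] = rng.random(n_dims)  # crow j flies to a random position,
                                            # assuming a [0, 1]^d search space
    return y_next
```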
position is reported as the optimal solution. The pseudo-code of CSA is given in Algorithm 1.

algorithms. The main idea of COA is to transform the parameters/variables from the chaos space to the solution space. Its search for the global optimum relies on the properties of chaotic motion, such as ergodicity, regularity, and quasi-stochasticity. The main advantages of COA are its fast convergence rate and its capability to avoid local minima. All of these advantages can significantly improve the performance of evolutionary algorithms [59].
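As a minimal sketch of that chaos-to-solution-space transformation (assuming the chaotic variable lies in [0, 1], with lb and ub the lower and upper bounds of a decision variable; concrete COA variants may use other carrier transformations):

```python
def chaos_to_solution(p, lb, ub):
    """Map a chaotic value p in [0, 1] linearly onto the decision interval [lb, ub]."""
    return lb + p * (ub - lb)
```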
Chaotic maps are deterministic: no random factors are used. In this work, ten distinguished non-invertible one-dimensional maps are adopted to obtain chaotic sets. The adopted chaotic maps are defined in Table 1. In this table, $q$ denotes the index into the chaotic sequence $p$, and $p_q$ is the $q$th number in the chaotic sequence. The remaining parameters, $d$, $c$, and $\mu$, are the control parameters, which determine the chaotic behavior of the dynamical system. The initial point $p_0$ is set to 0.7 for all chaotic maps, as the initial value of a chaotic map can have a great influence on its fluctuation pattern. We used the initial values as in [39]. Figure 1 shows a visualization of the values produced by these maps.
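Since Table 1 itself is not reproduced above, the sketch below uses common textbook forms of two such maps (the logistic map and the sine map, the latter being the one the experiments later identify as the best choice); the exact parameterizations in Table 1 may differ, but the initial point p0 = 0.7 matches the setting described in the text:

```python
import numpy as np

def logistic_map(n, p0=0.7, mu=4.0):
    """Logistic map: p_{q+1} = mu * p_q * (1 - p_q) (common textbook form)."""
    p = np.empty(n)
    p[0] = p0
    for q in range(n - 1):
        p[q + 1] = mu * p[q] * (1.0 - p[q])
    return p

def sine_map(n, p0=0.7, a=4.0):
    """Sine map: p_{q+1} = (a / 4) * sin(pi * p_q) (common textbook form)."""
    p = np.empty(n)
    p[0] = p0
    for q in range(n - 1):
        p[q + 1] = (a / 4.0) * np.sin(np.pi * p[q])
    return p

# In CCSA, successive values of such a sequence replace the random movement
# parameter R_j in Eq. (1).
```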
Fig. 2 Flowchart of the proposed CCSA: the new position is checked for feasibility, its fitness (Fn) is compared against the fitness of the memory position, and the loop terminates when the iteration limit is reached
value for a feature given a class is used to replace the missing value:

$s_{i,j} = \mathrm{median}_{i:\, s_{i,j} \in W_r}\; s_{i,j}$  (8)

4.2 Performance metrics

In this subsection, six different statistical measurements are adopted. These measurements are the worst, best, and mean fitness values, the standard deviation (SD), the average selection size (ASS), and the P value of Wilcoxon's rank-sum test.
Table 3 Description of the 20 benchmark datasets (ID, dataset name, number of features, number of instances, number of classes, missing values, and type)
$\text{Worst fitness} = \min_{i=1}^{t_{Max}} BS_i$  (11)

Table 4 Detailed settings of the experimental environment (software: operating system, Windows 7; language, MATLAB R2012R)
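A sketch of how these run statistics and the Wilcoxon rank-sum P value (used throughout Table 5) can be computed; the arrays here are placeholder data standing in for the per-run best fitness values BS_i, and the ASS formula is not shown above and is therefore omitted:

```python
import numpy as np
from scipy.stats import ranksums

def run_statistics(best_scores):
    """Best, worst (Eq. (11)), mean, and SD of the best fitness BS_i over t_Max runs."""
    bs = np.asarray(best_scores, dtype=float)
    return {
        "best": bs.max(),
        "worst": bs.min(),      # Worst fitness = min_i BS_i, Eq. (11)
        "mean": bs.mean(),
        "sd": bs.std(ddof=1),   # sample standard deviation
    }

# Wilcoxon's rank-sum test, as used for the P values comparing CSA with each
# CCSA variant in Table 5 (placeholder data):
csa_runs = 1.0 + 0.1 * np.random.rand(20)
ccsa_runs = 1.1 + 0.1 * np.random.rand(20)
statistic, p_value = ranksums(ccsa_runs, csa_runs)
print(run_statistics(ccsa_runs), p_value)
```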
Table 5 Comparison of CCSA with the ten chaotic maps (CCSA1–CCSA10) against the original CSA in terms of worst, best, and mean fitness, SD, ASS, and P value, on datasets D1–D20

D1 Worst Best Mean SD ASS P val. D2 Worst Best Mean SD ASS P val.
CSA 1 1.39 1.23 0.07 0.96 CSA 0.77 1.05 0.97 0.07 1
CCSA1 1.04 1.32 1.28 0.06 0.53 2.05E-07 CCSA1 0.74 1.12 1.07 0.06 0.49 1.29E-05
CCSA2 1.09 1.38 1.31 0.07 0.92 1.25E-07 CCSA2 0.81 1.08 1.04 0.07 0.93 2.76E-06
CCSA3 1.02 1.32 1.27 0.07 0.45 7.23E-07 CCSA3 0.81 1.07 1.01 0.07 0.45 0.517
CCSA4 1.04 1.43 1.31 0.1 0.8 1.03E-05 CCSA4 0.81 1.07 1.03 0.06 0.8 9.20E-04
CCSA5 1.03 1.32 1.28 0.06 0.12 8.18E-07 CCSA5 0.82 1.12 1.04 0.06 0.86 4.78E-06
CCSA6 0.97 1.38 1.28 0.06 0.35 1.56E-04 CCSA6 0.69 1.12 1.04 0.07 0.55 4.00E-04
CCSA7 1.07 1.35 1.29 0.05 0.63 4.33E-07 CCSA7 0.82 1.13 1.08 0.06 0.43 3.04E-07
CCSA8 1.04 1.35 1.29 0.07 0.21 1.90E-06 CCSA8 0.77 1.12 1.07 0.07 0.62 1.06E-09
CCSA9 1.03 1.34 1.29 0.06 0.2 8.55E-08 CCSA9 0.76 1.13 1.06 0.08 0.59 1.07E-06
CCSA10 1.1 1.35 1.28 0.07 0.02 4.70E-02 CCSA10 0.82 1.12 1.04 0.07 0.41 1.19E-14
D3 Worst Best Mean SD ASS P val. D4 Worst Best Mean SD ASS P val.
CSA 0.97 1.4 1.33 0.12 0.36 CSA 1.14 1.27 1.24 0.04 1
CCSA1 1.03 1.47 1.42 0.08 0.54 1.57E-08 CCSA1 1.08 1.37 1.33 0.05 0.73 2.05E-14
CCSA2 0.96 1.47 1.41 0.11 0.8 2.28E-09 CCSA2 1.02 1.42 1.34 0.08 0.99 6.89E-10
CCSA3 1.02 1.45 1.35 0.14 0.57 0.002 CCSA3 1.02 1.43 1.33 0.09 0.95 1.85E-08
CCSA4 0.97 1.47 1.42 0.12 0.73 3.77E-12 CCSA4 1 1.42 1.33 0.09 0.99 1.44E-05
CCSA5 1 1.47 1.4 0.12 0.44 3.64E-09 CCSA5 0.99 1.39 1.33 0.08 0.8 3.59E-11
CCSA6 0.94 1.47 1.4 0.13 0.08 3.01E-08 CCSA6 1.15 1.48 1.37 0.08 0.94 1.25E-07
CCSA7 1.04 1.47 1.41 0.1 0.04 3.52E-05 CCSA7 1.05 1.4 1.31 0.05 0.68 1.20E-07
CCSA8 0.99 1.47 1.42 0.1 0.07 2.11E-09 CCSA8 1 1.35 1.31 0.07 0.32 5.22E-10
CCSA9 1.01 1.47 1.41 0.1 0.09 3.78E-06 CCSA9 1 1.39 1.32 0.08 0.48 1.55E-08
CCSA10 0.98 1.47 1.37 0.14 0.13 1.00E-02 CCSA10 1.06 1.37 1.32 0.06 0.12 4.29E-17
D5 Worst Best Mean SD ASS P val. D6 Worst Best Mean SD ASS P val.
CSA 1.01 1.37 1.3 0.11 0.21 CSA 0.46 0.52 0.77 0.1 0.5
CCSA1 1.03 1.4 1.36 0.08 0.07 1.11E-09 CCSA1 0.47 0.9 0.86 0.09 0.12 5.24E-09
CCSA2 1 1.45 1.38 0.09 0.26 9.86E-09 CCSA2 0.48 0.9 0.84 0.1 0.78 1.64E-05
CCSA3 0.99 1.44 1.32 0.14 0.75 6.00E-03 CCSA3 0.47 0.88 0.82 0.09 0.59 1.37E-08
CCSA4 0.98 1.45 1.37 0.1 0.82 4.38E-06 CCSA4 0.48 0.89 0.87 0.07 0.18 9.13E-14
CCSA5 1.01 1.39 1.34 0.08 0.46 1.26E-04 CCSA5 0.48 0.9 0.84 0.09 0.22 1.88E-09
CCSA6 1 1.41 1.34 0.1 0.46 8.48E-06 CCSA6 0.48 0.9 0.85 0.09 0.8 4.56E-08
CCSA7 1.02 1.45 1.34 0.08 0.18 1.27E-07 CCSA7 0.55 0.9 0.87 0.07 0.16 6.60E-13
CCSA8 1.02 1.41 1.36 0.08 0.02 6.00E-07 CCSA8 0.49 0.9 0.87 0.07 0.13 6.66E-13
CCSA9 1.01 1.41 1.36 0.08 0.06 5.43E-07 CCSA9 0.49 0.9 0.87 0.07 0.13 6.66E-13
CCSA10 1 1.45 1.4 0.09 1 1.24E-09 CCSA10 0.4 0.9 0.82 0.1 0.38 4.03E-07
D7 Worst Best Mean SD ASS P val. D8 Worst Best Mean SD ASS P val.
CSA 1.01 1.66 1.58 0.16 0.28 CSA 1.08 1.3 1.25 0.06 0.97
CCSA1 0.95 1.72 1.63 0.14 0.37 1.38E-05 CCSA1 1.13 1.3 1.25 0.05 0.66 5.14E-04
CCSA2 1.04 1.73 1.59 0.18 0.37 2.68E-02 CCSA2 1.07 1.36 1.29 0.07 0.99 1.08E-04
CCSA3 1.05 1.72 1.49 0.19 0.34 5.62E-02 CCSA3 1.09 1.28 1.24 0.04 1 4.79E-05
CCSA4 1.32 1.73 1.63 0.12 0.36 7.43E-04 CCSA4 0.09 1.3 1.25 0.05 0.69 1.70E-05
CCSA5 1.25 1.73 1.62 0.16 0.48 1.28E-04 CCSA5 1.05 1.34 1.26 0.07 0.83 5.40E-01
CCSA6 1.07 1.73 1.63 0.12 0.3 1.90E-02 CCSA6 1.06 1.35 1.28 0.07 0.95 2.50E-05
CCSA7 1.13 1.73 1.63 0.12 0.22 2.10E-05 CCSA7 1.13 1.36 1.28 0.05 0.79 1.20E-05
CCSA8 1.15 1.73 1.62 0.12 0.31 9.50E-03 CCSA8 1.3 1.33 1.25 0.68 0.68 1.13E-01
CCSA9 1.07 1.66 1.62 0.12 0.25 8.05E-04 CCSA9 1.06 1.29 1.24 0.05 0.89 9.13E-05
CCSA10 1.06 1.52 1.39 0.11 0.99 5.60E-03 CCSA10 0.98 1.36 1.27 0.09 1 3.40E-03
Table 5 (continued)
D9 Worst Best Mean SD ASS P val. D10 Worst Best Mean SD ASS P val.
CSA 1.05 1.3 1.25 0.07 0.55 CSA 1.18 1.49 1.39 0.1 0.54
CCSA1 1.06 1.36 1.29 0.07 0.68 1.99E-04 CCSA1 1.18 1.53 1.45 0.08 0.7 2.34E-04
CCSA2 1.05 1.38 1.32 0.08 0.98 5.47E-09 CCSA2 1.21 1.54 1.46 0.09 0.9 7.20E-04
CCSA3 1.01 1.36 1.31 0.08 0.97 6.17E-08 CCSA3 1.19 1.54 1.41 0.08 0.44 7.47E-02
CCSA4 1.04 1.38 1.33 0.08 0.94 6.46E-07 CCSA4 1.13 1.43 1.43 0.09 0.83 2.13E-04
CCSA5 1.07 1.37 1.29 0.08 0.78 4.23E-05 CCSA5 1.2 1.47 1.41 0.06 0.37 8.20E-01
CCSA6 1.07 1.38 1.31 0.07 0.84 7.04E-07 CCSA6 1.17 1.45 1.4 0.07 0.41 2.64E-04
CCSA7 1.08 1.38 1.32 0.07 0.65 4.46E-08 CCSA7 1.2 1.55 1.46 0.06 0.18 9.56E-05
CCSA8 1.04 1.36 1.31 0.07 0.74 2.06E-08 CCSA8 1.24 1.5 1.44 0.07 0.26 3.13E-05
CCSA9 1.03 1.4 1.3 0.08 0.78 1.21E-05 CCSA9 1.21 1.49 1.45 0.08 0.45 2.34E-05
CCSA10 1.03 1.36 1.32 0.07 1 3.35E-06 CCSA10 1.26 1.49 1.44 0.07 1 4.31E-02
D11 Worst Best Mean SD ASS P val. D12 Worst Best Mean SD ASS P val.
CSA 1.16 1.62 1.52 0.13 0.15 CSA 1.27 1.7 1.59 0.14 0.21
CCSA1 1.27 1.69 1.64 0.09 0.36 3.77E-08 CCSA1 1.26 1.75 1.66 0.1 0.2 9.90E-04
CCSA2 1.2 1.69 1.61 0.1 0.72 8.05E-06 CCSA2 1.27 1.73 1.63 0.09 0.18 4.15E-03
CCSA3 1.22 1.66 1.57 0.13 0.9 4.21E-01 CCSA3 1.31 1.58 1.5 0.07 0.33 9.19E-02
CCSA4 1.28 1.66 1.58 0.1 0.1 8.76E-05 CCSA4 1.29 1.7 1.61 0.1 0.26 2.79E-03
CCSA5 1.21 1.68 1.59 0.09 0.51 4.60E-02 CCSA5 1.27 1.77 1.68 0.13 0.25 1.23E-05
CCSA6 1.22 1.68 1.63 0.09 0.5 3.90E-07 CCSA6 1.25 1.77 1.67 0.13 0.19 2.62E-04
CCSA7 1.25 1.69 1.65 0.08 0.2 2.13E-08 CCSA7 1.3 1.76 1.68 0.09 0.27 6.14E-05
CCSA8 1.25 1.66 0.09 0.09 0.15 3.21E-02 CCSA8 1.33 1.77 1.67 0.1 0.3 9.81E-04
CCSA9 1.17 1.69 1.63 0.1 0.53 8.25E-08 CCSA9 1.26 1.74 1.65 0.11 0.12 3.90E-03
CCSA10 1.23 1.67 1.6 0.12 0.16 2.36E-04 CCSA10 1.3 1.62 1.54 0.09 1 1.65E-03
D13 Worst Best Mean SD ASS P val. D14 Worst Best Mean SD ASS P val.
CSA 1.07 1.52 1.4 0.12 0.39 CSA 1.17 1.5 1.42 0.1 0.33
CCSA1 1.18 1.64 1.59 0.1 0.04 2.86E-13 CCSA1 1.23 1.55 1.51 0.07 0.44 2.95E-02
CCSA2 1.09 1.64 1.57 0.11 0.25 5.64E-10 CCSA2 1.15 1.57 1.53 0.09 0.98 2.74E-08
CCSA3 1.15 1.58 1.51 0.12 0.51 1.78E-08 CCSA3 1.16 1.57 1.48 0.1 0.93 7.31E-02
CCSA4 1.13 1.64 1.55 0.13 0.18 9.12E-08 CCSA4 1.15 1.58 1.53 0.09 0.07 1.00E-06
CCSA5 1.16 1.64 1.57 0.11 0.14 4.72E-11 CCSA5 1.19 1.57 1.49 0.08 0.69 2.15E-02
CCSA6 1.16 1.64 1.55 0.15 0.21 3.31E-08 CCSA6 1.19 1.57 1.5 0.08 0.43 4.33E-03
CCSA7 1.18 1.64 1.59 0.1 0.2 2.04E-11 CCSA7 1.19 1.58 1.53 0.07 0.67 1.20E-06
CCSA8 1.2 1.63 1.56 0.09 0.09 3.95E-11 CCSA8 1.22 1.58 1.51 0.07 0.31 1.84E-03
CCSA9 1.18 1.64 1.58 0.1 0.08 1.16E-12 CCSA9 1.16 1.58 1.51 0.08 0.19 6.26E-03
CCSA10 1.14 1.63 1.52 0.14 0.47 3.70E-08 CCSA10 1.16 1.58 1.5 0.1 0.69 4.70E-04
D15 Worst Best Mean SD ASS P val. D16 Worst Best Mean SD ASS P val.
CSA 1.14 1.72 1.52 0.22 0.3 CSA 0.69 1.83 1.52 0.36 0.1
CCSA1 1.18 1.76 1.7 0.11 0.14 7.71E-04 CCSA1 0.71 1.84 1.64 0.25 0.1 4.24E-02
CCSA2 1.13 1.8 1.66 0.18 0.17 9.80E-04 CCSA2 0.9 1.84 1.49 0.31 0.32 9.12E-04
CCSA3 1.18 1.8 1.62 0.17 0.21 1.48E-04 CCSA3 0.83 1.86 1.58 0.3 0.06 4.55E-04
CCSA4 1.1 1.8 1.68 0.19 0.97 3.28E-06 CCSA4 0.81 1.85 1.59 0.36 0.16 2.27E-04
CCSA5 1.19 1.8 1.68 0.21 0.3 6.22E-07 CCSA5 0.89 1.84 1.62 0.28 0.1 4.55E-03
CCSA6 1.18 1.76 1.65 0.15 0.13 7.72E-03 CCSA6 0.75 1.85 1.62 0.3 0.15 3.32E-04
CCSA7 1.18 1.8 1.69 0.11 0.18 1.20E-04 CCSA7 0.91 1.85 1.64 0.24 0.11 2.58E-04
CCSA8 1.08 1.8 1.68 0.15 0.09 9.45E-03 CCSA8 0.59 1.84 1.64 0.29 0.09 6.06E-04
CCSA9 1.04 1.76 1.65 0.17 0.1 5.76E-03 CCSA9 0.77 1.84 1.69 0.29 0.11 9.52E-04
CCSA10 1.02 1.8 1.65 0.2 0.23 5.14E-05 CCSA10 0.62 1.85 1.66 0.34 0.1 2.27E-05
Table 5 (continued)
D17 Worst Best Mean SD ASS P val. D18 Worst Best Mean SD ASS P val.
CSA 1.04 1.73 1.5 0.2 0.12 CSA 1.09 1.61 1.51 0.15 0.37
CCSA1 1.06 1.81 1.63 0.2 0.49 6.21E-05 CCSA1 1.11 1.76 1.66 0.14 0.11 4.49E-08
CCSA2 1.03 1.81 1.66 0.21 0.06 6.66E-05 CCSA2 1.2 1.76 1.62 0.15 0.4 9.26E-05
CCSA3 1.08 1.77 1.43 0.15 0.43 1.31E-02 CCSA3 1.17 1.61 1.52 0.12 0.28 9.69E-02
CCSA4 1.08 1.81 1.65 0.2 0.17 1.99E-04 CCSA4 1.16 1.76 1.64 0.15 0.78 4.10E-07
CCSA5 1.08 1.81 1.66 0.19 0.11 6.29E-05 CCSA5 1.09 1.76 1.62 0.15 0.94 4.77E-05
CCSA6 1.12 1.81 1.69 0.19 0.07 3.38E-06 CCSA6 1.12 1.76 1.62 0.15 0.15 6.92E-05
CCSA7 1.14 1.81 1.69 0.17 0.07 5.36E-06 CCSA7 1.19 1.76 1.7 0.12 0.64 9.80E-09
CCSA8 1.09 1.53 1.49 0.09 0.04 7.82E-04 CCSA8 1.2 1.76 1.66 0.15 0.2 1.15E-08
CCSA9 1.1 1.75 1.65 0.16 0.07 2.81E-04 CCSA9 1.17 1.76 1.69 0.14 0.88 8.30E-09
CCSA10 1.04 1.81 1.65 0.21 0.15 5.25E-04 CCSA10 1.18 1.76 1.64 0.15 0.47 2.62E-06
D19 Worst Best Mean SD ASS P val. D20 Worst Best Mean SD ASS P val.
CSA 0.91 1.37 1.3 0.13 0.27 CSA 0.86 1.33 1.27 0.15 0.25
CCSA1 0.99 1.45 1.39 0.1 0.74 8.47E-08 CCSA1 0.98 1.44 1.39 0.13 0.19 3.02E-11
CCSA2 0.95 1.45 1.37 0.12 0.76 1.30E-05 CCSA2 0.94 1.44 1.37 0.14 0.29 2.63E-09
CCSA3 1 1.36 1.3 0.09 0.41 6.65E-05 CCSA3 0.96 1.41 1.32 0.14 0.19 7.11E-08
CCSA4 0.96 1.45 1.38 0.12 0.73 2.47E-08 CCSA4 0.92 1.44 1.4 0.11 0.33 3.81E-10
CCSA5 0.94 1.45 1.36 0.13 0.39 1.77E-05 CCSA5 0.99 1.44 1.38 0.11 0.18 4.34E-10
CCSA6 0.92 1.45 1.37 0.12 0.6 1.07E-07 CCSA6 0.93 1.44 1.38 0.12 0.45 2.18E-08
CCSA7 0.95 1.45 1.39 0.1 0.84 1.87E-08 CCSA7 0.94 1.44 1.41 0.11 0.17 1.98E-13
CCSA8 0.94 1.44 1.39 0.12 0.6 6.10E-10 CCSA8 0.95 1.44 1.39 0.1 0.59 1.99E-09
CCSA9 0.95 1.45 1.4 0.12 0.42 1.94E-10 CCSA9 1.03 1.44 1.41 0.1 0.15 2.66E-13
CCSA10 0.9 1.43 1.36 0.13 0.76 3.33E-05 CCSA10 0.97 1.44 1.38 0.12 0.26 4.42E-08
algorithms. All of these experiments are performed on the same PC with the same specifications. The detailed settings are presented in Table 4.

4.3.1 The performance of CCSA with different chaotic maps

The main objective of this experiment is to evaluate the performance of CCSA with different chaotic maps and to determine the optimal chaotic map.

Table 5 compares CCSA with different chaotic maps against the original CSA in terms of the mean, best, and worst fitness, ASS, SD, and the P value of Wilcoxon's rank-sum test. The P value is used to compare CSA with the different versions of CCSA. The best results for worst, mean, and best fitness, standard deviation, and average selection size are underlined, as are the significant P values (P < 0.05). It must be pointed out that CCSA1, CCSA2, ..., CCSA10 in Table 5 refer to the ten adopted chaotic maps as listed in Table 1, and that the marks D1, D2, ..., D20 refer to the adopted benchmark datasets as listed in Table 3. As can be observed from Table 5, CCSA with the different chaotic maps overtakes the standard CSA. Moreover, it can be noticed that in most cases CCSA7 obtains the best statistical results compared with the others. In addition, the obtained P values confirm this improvement, which means the sine chaotic map can improve the performance of CSA. Thus, these results prove that the sine map is statistically significantly better than CSA, as it obtains good classification performance with the minimum number of features. The highest classification performance with a small set of features is highly desired in biology and medicine, since fewer experiments are then needed to detect a disease or cancer in a patient; it is also useful for reducing the cost of each experiment. However, CCSA3 obtains the worst results in most of the cases. Moreover, the P values show that there is no large difference between CSA and CCSA3, because the P value exceeds 0.05. According to these results, the sine chaotic map is selected as the appropriate map for CCSA. In the following section, CCSA with the sine chaotic map is evaluated further in more detail.

For further comparison of CCSA with the different chaotic maps, graphical representations of the convergence curves of the CCSA versions are analyzed as well. The convergence curves of CCSA with different chaotic maps on the 20 benchmark datasets are shown in Fig. 3, where the number of iterations is 50. As can be observed from this figure, almost all CCSA variants with different chaotic maps obtain better results, as their curves lie above that of the original algorithm (colored black). In addition, it can be noticed that in most cases the CCSA variants converge faster towards the global optimum than CSA. Additionally, the CCSA7 curve obtains the highest results in most cases, whereas CCSA3 obtains the lowest. These results are consistent with those in Table 5. Such an improvement comes from embedding chaotic sequences in the search iterations of CSA, which helps the algorithm avoid local optima and reach the global optimum faster.

Fig. 3 Convergence curves of CCSA with different chaotic maps on the 20 benchmark datasets (panels D1–D16)

4.3.2 CCSA vs. other optimization algorithms

In this subsection, the performance of CCSA with the sine chaotic map is compared with other optimization algorithms that have been proposed in the literature for solving the feature selection problem. These algorithms are particle swarm optimization (PSO) [31], artificial bee colony (ABC) [42], chicken swarm optimization (CSO) [18], the flower pollination algorithm (FPA) [41], moth-flame optimization (MFO) [55], the gray wolf optimizer (GWO) [8],
Comparison of CCSA with the other optimization algorithms in terms of mean fitness on datasets D1–D20:

PSO ABC CSO FPA MFO GWO WOA SCA CSA CCSA
D1 1.24 1.16 1.31 1.4 1.24 1.28 1.32 1.3 1.23 1.29
D2 1.04 1.03 0.98 1.07 0.95 0.99 0.99 1.04 0.97 1.08
D3 1.14 1.16 1.2 1.35 1.14 1.16 1.38 1.3 1.33 1.41
D4 1.35 1.34 1.29 1.38 1.35 1.28 1.38 1.34 1.24 1.31
D5 1.2 1.09 1.2 1.39 1.12 1.12 1.35 1.24 1.3 1.34
D6 0.76 0.74 0.77 0.81 0.64 0.69 0.81 0.77 0.77 0.87
D7 1.43 1.51 1.57 1.66 1.43 1.5 1.66 1.56 1.58 1.63
D8 1.24 1.18 1.15 1.19 1.19 1.24 1.28 1.18 1.25 1.28
D9 1.28 1.17 1.17 1.3 1.23 1.24 1.29 1.23 1.25 1.32
D10 1.4 1.35 1.32 1.47 1.28 1.36 1.48 1.37 1.39 1.46
D11 1.37 1.36 1.4 1.57 1.3 1.35 1.58 1.48 1.52 1.65
D12 1.42 1.26 1.42 1.59 1.28 1.32 1.61 1.5 1.59 1.68
D13 1.25 1.35 1.31 1.52 1.23 1.29 1.56 1.42 1.4 1.59
D14 1.35 1.35 1.36 1.5 1.21 1.3 1.52 1.4 1.42 1.53
D15 1.37 1.4 1.32 1.68 1.26 1.39 1.67 1.5 1.52 1.69
D16 1 1.12 1.18 1.42 0.93 1.02 1.7 1.3 1.52 1.64
D17 1.12 1.15 1.27 1.61 1.13 1.15 1.67 1.37 1.5 1.69
D18 1.42 1.48 1.37 1.6 1.35 1.41 1.47 1.42 1.51 1.7
D19 1.19 1.14 1.13 1.29 1.07 1.12 1.39 1.27 1.3 1.39
D20 1.24 1.26 1.22 1.36 1.14 1.2 1.37 1.28 1.27 1.41
Comparison of CCSA with the other optimization algorithms in terms of best fitness on datasets D1–D20:

PSO ABC CSO FPA MFO GWO WOA SCA CSA CCSA
D1 1.28 1.17 1.34 1.4 1.26 1.35 1.35 1.32 1.39 1.35
D2 1.05 1.08 1.05 1.09 0.96 1.05 1.08 1.08 1.05 1.13
D3 1.16 1.31 1.23 1.43 1.17 1.25 1.47 1.34 1.4 1.47
D4 1.41 1.43 1.35 1.46 1.36 1.36 1.46 1.4 1.27 1.4
D5 1.28 1.15 1.25 1.44 1.14 1.15 1.39 1.28 1.37 1.45
D6 0.82 0.8 0.8 0.87 0.66 0.71 0.88 0.78 0.52 0.9
D7 1.5 1.57 1.61 1.75 1.45 1.55 1.71 1.6 1.7 1.73
D8 1.28 1.23 1.15 1.21 1.2 1.28 1.34 1.2 1.33 1.36
D9 1.32 1.22 1.2 1.34 1.24 1.33 1.36 1.25 1.3 1.38
D10 1.44 1.39 1.37 1.53 1.3 1.42 1.57 1.4 1.51 1.55
D11 1.41 1.47 1.45 1.62 1.31 1.38 1.71 1.51 1.68 1.7
D12 1.45 1.3 1.45 1.69 1.29 1.38 1.74 1.55 1.73 1.76
D13 1.28 1.4 1.35 1.6 1.24 1.32 1.62 1.45 1.52 1.64
D14 1.38 1.41 1.38 1.55 1.29 1.35 1.58 1.45 1.5 1.58
D15 1.4 1.47 1.39 1.8 1.29 1.42 1.81 1.55 1.72 1.8
D16 1.02 1.18 1.25 1.78 1.02 1.07 1.8 1.38 1.86 1.85
D17 1.19 1.15 1.32 1.79 1.14 1.2 1.78 1.4 1.73 1.81
D18 1.48 1.53 1.42 1.7 1.36 1.42 1.58 1.48 1.66 1.76
D19 1.24 1.22 1.18 1.32 1.09 1.17 1.43 1.32 1.37 1.45
D20 1.3 1.32 1.24 1.4 1.18 1.3 1.45 1.3 1.33 1.44
Comparison of CCSA with the other optimization algorithms in terms of standard deviation (SD) on datasets D1–D20:

PSO ABC CSO FPA MFO GWO WOA SCA CSA CCSA
D1 0.03 0.01 0.05 0.01 0.01 0.03 0.03 0.05 0.07 0.05
D2 0.01 0.05 0.05 0.03 0.03 0.04 0.05 0.06 0.07 0.06
D3 0.02 0.1 0.09 0.1 0.04 0.06 0.15 0.09 0.12 0.1
D4 0.09 0.08 0.1 0.08 0.03 0.06 0.13 0.06 0.04 0.05
D5 0.06 0.07 0.07 0.05 0.03 0.05 0.1 0.08 0.11 0.08
D6 0.04 0.06 0.04 0.05 0.03 0.04 0.06 0.05 0.1 0.07
D7 0.06 0.07 0.08 0.07 0.02 0.05 0.09 0.09 0.16 0.12
D8 0.05 0.06 0 0.02 0.02 0.05 0.04 0.02 0.06 0.05
D9 0.06 0.06 0.04 0.04 0.02 0.09 0.06 0.05 0.07 0.07
D10 0.07 0.06 0.05 0.05 0.04 0.06 0.09 0.05 0.1 0.06
D11 0.06 0.08 0.07 0.09 0.01 0.04 0.09 0.08 0.13 0.08
D12 0.06 0.05 0.06 0.05 0.01 0.06 0.1 0.08 0.14 0.09
D13 0.05 0.09 0.04 0.07 0.02 0.06 0.1 0.08 0.12 0.1
D14 0.04 0.08 0.06 0.07 0.05 0.06 0.09 0.07 0.1 0.07
D15 0.05 0.06 0.06 0.07 0.02 0.08 0.18 0.1 0.22 0.11
D16 0.05 0.1 0.12 0.13 0.04 0.08 0.23 0.18 0.36 0.24
D17 0.06 0 0.08 0.21 0.01 0.07 0.24 0.1 0.2 0.17
D18 0.06 0.11 0.07 0.11 0.02 0.01 0.09 0.07 0.15 0.12
D19 0.09 0.1 0.06 0.05 0.02 0.08 0.1 0.08 0.13 0.1
D20 0.09 0.1 0.06 0.07 0.03 0.11 0.1 0.09 0.15 0.11
Table 11 Comparison between the results before and after applying CCSA

the CCSA for finding the feature subset in the feature space better than the other evolutionary algorithms. This is because the adopted evolutionary algorithms employ random parameters in their search process, which can significantly influence their performance. Moreover, the random behavior of agents such as particles, ants, and fireflies can, with high probability, cause stagnation of the optimization process.
In this subsection, the selected features of CCSA with the sine chaotic map are evaluated using three criteria: classification accuracy, number of selected features, and CPU processing time. A comparison between using the whole feature set and using the selected subset is shown in Table 11, where Lt indicates the full feature length of the dataset and Lf indicates the length of the feature subset selected by CCSA. The tenfold cross-validation method is employed, and a KNN classifier with k = 3 and Euclidean distance is adopted. The mean accuracy over the ten folds, the CPU processing time in seconds, and the number of features are recorded. As can be seen, using only a subset of the features can greatly increase the classification performance and reduce the computational time and memory storage. In addition, it can be observed that in some cases the accuracy reaches 100% with less computational time. These obtained results prove the superiority of the CCSA feature selection algorithm. The selected features can be used further for improving clustering performance: as mentioned before, clusters built on a subset of salient features are more practical and interpretable than clusters built using all of the features, which include noise [6]. Thus, CCSA can help in better data understanding and interpretation.
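An illustrative sklearn-based sketch of this evaluation protocol (tenfold cross-validation of a KNN classifier with k = 3 and Euclidean distance); the dataset and the selected-feature mask below are placeholders, not the authors' data or MATLAB code:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def subset_accuracy(X, y, mask):
    """Mean tenfold cross-validation accuracy of KNN (k = 3, Euclidean
    distance) restricted to the feature columns selected by the boolean mask."""
    knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
    return cross_val_score(knn, X[:, mask], y, cv=10).mean()

# Placeholder data: 100 instances, 30 features (Lt = 30); pretend CCSA kept 8 (Lf = 8)
X = np.random.rand(100, 30)
y = np.repeat([0, 1], 50)
full = np.ones(30, dtype=bool)
selected = np.zeros(30, dtype=bool)
selected[:8] = True
print(subset_accuracy(X, y, full), subset_accuracy(X, y, selected))
```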
5 Conclusions and future work

In this paper, a novel hybridization of chaos with the CSA algorithm, namely CCSA, is proposed. Ten chaotic maps are used in this study to enhance the performance and convergence speed of CSA. CCSA is applied to one of the challenging problems, namely feature selection. The proposed CCSA feature selection algorithm has been validated on 20 benchmark datasets. Six different evaluation criteria are adopted in this study: the best, worst, and mean fitness values, SD, ASS, and P value. In addition, the performance of CCSA is compared with popular and recent meta-heuristic algorithms: PSO, ABC, CSO, FPA, MFO, GWO, WOA, SCA, and CSA. The experimental results show that CCSA outperforms the other algorithms in terms of best and mean fitness value. Moreover, the results show that CCSA with the sine map can significantly enhance CSA in terms of classification performance, stability, number of selected features, and convergence speed. Further work on embedding chaotic maps in other meta-heuristic algorithms will be considered. For further verification, the performance of CCSA will be applied to more complex scientific and real-world engineering problems. Moreover, other chaotic maps are also worth applying to CSA.

Compliance with Ethical Standards

Conflict of interest The authors declare that they have no conflict of interest.

References

1. Abdullah A, Enayatifar R, Lee M (2012) A hybrid genetic algorithm and chaotic function model for image encryption. Journal of Electronics and Communication 66(1):806–816
2. Askarzadeh A (2016) A novel metaheuristic method for solving constrained engineering optimization problems: crow search algorithm. Comput Struct 169:1–12
3. Bache K, Lichman M. UCI machine learning repository. http://archive.ics.uci.edu/ml. Retrieved July 19, 2016
4. Blum A, Langley P (1997) Selection of relevant features and examples in machine learning. Artif Intell 97:245–271
5. Cai JJ, Ma XQ, Li X (2007) Chaotic ant swarm optimization to economic dispatch. Electr Power Syst Res 77(10):1373–1380
6. Chen CH (2014) A hybrid intelligent model of analyzing clinical breast cancer data using clustering techniques with feature selection. Appl Soft Comput 20:4–14
7. Derrac J, García S, Molina D, Herrera F (2011) A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol Comput 1(1):3–18
8. Emary E, Zawbaa H, Hassanien A (2016) Binary gray wolf optimization approaches for feature selection. Neurocomputing 172:371–381
9. Figueiredo E, Ludermir T, Bastos C (2016) Many objective particle swarm optimization. Inf Sci 374:115–134
10. Gadat S, Younes L (2007) A stochastic algorithm for feature selection in pattern recognition. J Mach Learn Res 8:509–547
11. Gai-Ge W, Suash D, Leandro D, Coelho S (2015) Elephant herding optimization. In: 3rd international symposium on computational and business intelligence (ISCBI), Bali, pp 1–5
12. Gandomi A, Yang X, Alavi A (2011) Mixed variable structural optimization using firefly algorithm. Comput Struct 89:2325–2336
13. Gandomi AH, Yang XS, Talatahari S, Alavi AH (2013) Firefly algorithm with chaos. Commun Nonlinear Sci Numer Simul 18(1):89–98
14. Geem Z, Kim J, Loganathan G (2001) A new heuristic optimization algorithm: harmony search. Simulation 76(2):60–68
15. Goldberg D (1989) Genetic algorithms in search, optimization and machine learning, 1st edn. Addison-Wesley Longman, Boston, MA, USA
16. Golub TR (1999) Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science 286:531–537
17. Guyon I, Elisseeff A (2003) An introduction to variable and feature selection. J Mach Learn Res 3:1157–1182
18. Hafez AI, Zawbaa HM, Emary E, Mahmoud HA, Hassanien AE (2015) An innovative approach for feature selection based on chicken swarm optimization. In: 7th international conference of soft computing and pattern recognition (SoCPaR), pp 19–24
19. Hafez AI, Zawbaa HM, Emary E, Hassanien AE (2016) Sine cosine optimization algorithm for feature selection. In: International symposium on innovations in intelligent systems and applications (INISTA), pp 1–5
20. He YY, Zhou JZ, Zhou XQ (2009) Comparison of different chaotic maps in particle swarm optimization algorithm for long term cascaded hydroelectric system scheduling. Chaos Solitons Fractals 42:3169–3176
21. He YY, Zhou JZ, Li CS (2008) A precise chaotic particle swarm optimization algorithm based on improved tent map. ICNC 7:569–573
22. He Y, Zhou J, Lu N, Qin H, Lu Y (2010) Differential evolution algorithm combined with chaotic pattern search. Kybernetika 46(4):684–696
23. Jia H, Ding S, Du M, Xue Y (2016) Approximate normalized cuts without eigen-decomposition. Inf Sci 374:135–150
24. Jian L, Li J, Shu K, Liu H (2016) Multi-label informed feature selection. In: Proceedings of the twenty-fifth international joint conference on artificial intelligence, pp 1627–1633
25. Kaveh A, Talatahari S (2010) A novel heuristic optimization method: charged system search. Acta Mech 213:267–289
26. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: IEEE international conference on neural networks, vol 4, pp 1942–1948
27. Kohavi R, John G (1997) Wrappers for feature subset selection. Artif Intell 97(1):273–324
28. Lei Y, Huan L (2003) Feature selection for high-dimensional data: a fast correlation-based filter solution. In: Proceedings of the 20th international conference on machine learning (ICML-03), pp 856–863
29. Li B, Jiang W (1998) Optimizing complex functions by chaos search. Cybern Syst 29:409–419
30. Li X, Zhang J, Yin M (2013) Animal migration optimization: an optimization algorithm inspired by animal migration behavior. Neural Comput & Applic, pp 1–11
31. Lin S, Ying KS-C, Lee Z (2008) Particle swarm optimization for parameter determination and feature selection of support vector machines. Expert Syst Appl 35(4):1817–1824
32. Meng X, Gao XZ, Lu L, Liu Y, Zhang H (2016) A new bio-inspired optimisation algorithm: bird swarm algorithm. J Exp Theor Artif Intell 28(4):673–687
33. Mingjun J, Tang HW (2004) Application of chaos in simulated annealing optimization. Chaos Solitons Fractals 21:933–941
34. Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
35. Ng K, Liu H (2000) Customer retention via data mining. AI Review 14:569–590
36. Repinsek M, Liu S, Mernik L (2012) A note on teaching–learning-based optimization algorithm. Inf Sci 212:79–93
37. Rui Y, Huang TS, Chang S (1999) Image retrieval: current techniques, promising directions and open issues. J Vis Commun Image Represent 10:39–62
38. Sarafrazi S (2013) Facing the classification of binary problems with a GSA-SVM hybrid system. Math Comput Model 57:270–278
39. Saremi S, Mirjalili S, Lewis A (2014) Biogeography-based optimization with chaos. Neural Comput & Applic 25(5):1077–1097
40. Sayed G, Darwish A, Hassanien A, Pan S (2016) Breast cancer diagnosis approach based on meta-heuristic optimization algorithm inspired by bubble-net hunting strategy of whales. In: 10th international conference on genetic and evolutionary computing (ICGEC), Fujian, China, pp 306–313
41. Sayed S, Nabil E, Badr A (2016) A binary clonal flower pollination algorithm for feature selection. Pattern Recogn Lett 77:21–27
42. Schiezaro M, Pedrini H (2013) Data feature selection based on artificial bee colony algorithm. EURASIP Journal on Image and Video Processing 2013(1):1–8
43. Shilaskar S, Ghatol A (2013) Feature selection for medical diagnosis: evaluation for cardiovascular diseases. Expert Syst Appl 40(10):4146–4153
44. Storn R, Price K (1997) Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim 11(4):341–359
45. Tavazoei MS, Haeri M (2007) Comparison of different one-dimensional maps as chaotic search pattern in chaos optimization algorithms. Appl Math Comput 187:1076–1085
46. Unler A, Murat A (2010) A discrete particle swarm optimization method for feature selection in binary classification problems. Eur J Oper Res 206:528–539
47. Wang G, Guo L, Gandomi A, Hao G, Wang H (2014) Chaotic krill herd algorithm. Inf Sci 274:17–34
48. Wilcoxon F (1945) Individual comparisons by ranking methods. Biom Bull 1:80–83
49. Yuan XH, Yuan YB, Zhang YC (2002) A hybrid chaotic genetic algorithm for short-term hydro system scheduling. Math Comput Simul 59(4):319–327
50. Xiang T, Liao XF, Wong KW (2007) Comparison of different chaotic maps in particle swarm optimization algorithm for long term cascaded hydroelectric system scheduling. Appl Math Comput 190:1637–1645
51. Yang DX, Li G, Cheng GD (2007) On the efficiency of chaos optimization algorithms for global optimization. Chaos Solitons Fractals 34:1366–1375
52. Yang Y, Pedersen JO (1997) A comparative study on feature selection in text categorization. In: Proceedings of the fourteenth international conference on machine learning, pp 412–420
53. Yu Z (2014) Hybrid clustering solution selection strategy. Pattern Recogn 47:3362–3375
54. Yuan XF, Wang YN, Wu LH (2007) Pattern search algorithm using chaos and its application. Journal of Hunan University, Natural Sciences 34(9):30–33
55. Zawbaa H, Emary E, Parv B, Shaarawi M (2016) Feature selection approach based on moth-flame optimization algorithm. In: IEEE congress on evolutionary computation, Vancouver, Canada, pp 24–29
56. Zhang H, Sun G (2002) Feature selection using tabu search method. Pattern Recogn 35:701–711
57. Zhang L, Zhang CJ (2008) Hopf bifurcation analysis of some hyperchaotic systems with time-delay controllers. Kybernetika 44(1):35–42
58. Zhang N, Ding S, Zhang J (2016) Multi-layer ELM-RBF for multi-label learning. Appl Soft Comput 43:535–545
59. Zhang Q, Li Z, Zhou CJ, Wei XP (2013) Bayesian network structure learning based on the chaotic particle swarm optimization algorithm. Genet Mol Res 12(4):4468–4479
60. Zhu ZL, Li SP, Yu H (2008) A new approach to generalized chaos synchronization based on the stability of the error system. Kybernetika 44(4):492–500