Article info

Keywords:
Evolutionary Algorithms
Memetic search
Rule extraction
Artificial immune systems

Abstract
In this paper, an Evolutionary Memetic Algorithm (EMA), which uses a local search intensity scheme to complement the global search capability of Evolutionary Algorithms (EAs), is proposed for rule extraction. Two schemes for local search are studied, namely EMA-µGA, which uses a micro-Genetic Algorithm-based (µGA) technique, and EMA-AIS, which is inspired by Artificial Immune Systems (AIS) and uses clonal selection for cell proliferation. The evolutionary memetic algorithm is complemented with the use of a variable-length chromosome structure, which allows the flexibility to model the number of rules required. In addition, advanced variation operators are used to improve different aspects of the algorithm. Real-world benchmarking problems are used to validate the performance of EMA, and results from simulations show that the proposed algorithm is effective.
© 2009 Elsevier Ltd. All rights reserved.
1. Introduction
Emphasis on information technology has created a greater reliance by experts on the knowledge extracted using automated systems. In recent years, the field of automated data mining has emerged as an important area of applied research in order to deal with the voluminous amount of data collected in various industries due to the low cost and availability of larger storage devices. Hence, this paper focuses on one of the major data mining tasks, i.e. classification. Classification is inherently evident in our daily life, in the fields of medical diagnosis, criminal investigation, finance, etc. (Bojarczuk, Lopes, Freitas, & Michalkiewicz, 2004; Doumpos & Zopounidis, 2002; Tan, Yu, Heng, & Lee, 2003). The use of automated systems for classification provides the advantages of lower processing cost, prompt analysis, and the provision of a second opinion where human decisions might be biased. In areas such as medical diagnosis and management decision support, where human experts play a critical role, information derived from automated systems is particularly useful to complement and expedite their decisions.
Several automated and statistical techniques for classification have been studied in the literature, including K-nearest neighbour (Lepistö, Kunttu, & Visa, 2006), discriminant analysis, computational intelligence methods such as Neural Networks (NNs) and EAs, decision trees, etc. Among these, though methods like neural networks (Guan & Li, 2001; Stathakis & Vasilakos, 2006) are able to provide high predictive accuracies, they are not able to give comprehensible knowledge, which is of utmost importance to decision makers.
* Corresponding author.
E-mail address: eletankc@nus.edu.sg (K.C. Tan).
0957-4174/$ - see front matter © 2009 Elsevier Ltd. All rights reserved.
doi:10.1016/j.eswa.2009.06.028
Experimental studies have shown that EA-Local Search (LS) hybrids, or hybrid EAs, are capable of more efficient search (Merz & Freisleben, 1999; Ong & Keane, 2004).
With these in mind, this paper proposes an Evolutionary Memetic Algorithm (EMA) for rule extraction. The main EA global search algorithm evolves the architecture of the rule set while the LS is applied at every generation to fine-tune the rule parameters. This scheme is achievable through the use of a variable-length chromosome representation, which gives flexibility in the representation of the rule set and allows easy manipulation by the advanced variation operators, i.e. structural mutation and structural crossover (Tan, Chew, & Lee, 2006). Two local improvement search algorithms are used in this paper; the first scheme uses a micro-Genetic Algorithm (µGA). This simple method provides efficient and guided improvement over the generations. The second method incorporates ideas from AIS (Alves, Delgado, Lopes, & Freitas, 2004; Tan, Goh, Mamun, & Ei, 2008). AIS have been widely used for data analysis in unsupervised machine learning algorithms (Timmis & Neal, 2001; Timmis, Neal, & Hunt, 2000). Watkins and Boggess built on these further and developed Artificial Immune Recognition Systems (AIRS) (Watkins & Boggess, 2002a) as a supervised classifier; these systems follow a K-nearest neighbour scheme. AIRS as a classification tool has been shown to be very competitive with the existing algorithms in the literature in terms of classifier performance (Freitas & Timmis, 2007; Goodman, Boggess, & Watkins, 2002; Watkins & Boggess, 2002b).
The major evaluation metric for classification rule sets in the literature has been classification accuracy. However, other aspects are equally important when considering the performance of rule sets. Apart from how accurately the rule sets are able to classify a dataset, their coverage of the dataset is also important. The ideas of support and confidence level, borrowed from association rule mining (Savasere, Omiecinski, & Navathe, 1995), are incorporated into the fitness function evaluation. The effects of these two factors are analyzed and discussed.
The necessary background information and fundamental knowledge to appreciate the significance of the work presented in this paper are provided in Section 2. The details of the proposed features and operators of EMA are described in Section 3, while an algorithm overview is given in Section 4. Section 5 presents the local search algorithms. The experimental setup and datasets used are described in Section 6. Results from simulations on real-world benchmarking datasets are given and analyzed in Section 7. Conclusions, together with possible future work, are drawn in Section 8.
2. Background information
2.1. Rule-based knowledge
The discovered knowledge in rule-based systems is represented in the form of IF-THEN prediction rules, commonly known as the Michigan encoding (Michalewicz, 1994). These rules have the advantage of being high-level symbolic statements. Such rules state that the presence of one or more observations (antecedents) implies or predicts a certain consequence. A typical rule has the form: IF (antecedent conditions) THEN (predicted class).
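As an illustration of this IF-THEN form (the attribute bounds, thresholds and class labels below are hypothetical, not taken from the paper), a rule over interval antecedents can be checked programmatically:

```python
# Hypothetical sketch of a Michigan-style IF-THEN rule: the rule fires when
# every feature value lies within its [low, high] antecedent interval.
def rule_fires(rule, instance):
    """rule: (bounds, cls), where bounds is a list of (low, high) per feature."""
    bounds, _cls = rule
    return all(lo <= x <= hi for (lo, hi), x in zip(bounds, instance))

# IF x1 in [0, 0.3] AND x2 in [0.5, 1.0] THEN class is "benign"
rule = ([(0.0, 0.3), (0.5, 1.0)], "benign")
print(rule_fires(rule, [0.1, 0.7]))  # True: both antecedents hold
print(rule_fires(rule, [0.4, 0.7]))  # False: x1 falls outside [0, 0.3]
```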
3. Algorithm operators
3.1. Variable length chromosome
The rule set chromosome structure representation often used in the literature is the fixed-length structure representation.
[Fig. 1. Genotype representation of a rule set chromosome: each of rules 1…M carries per-attribute genes (Lmn, Mmn, Omn, Umn) for attributes 1…n, followed by a class field.]
metrics for association rules are the support and the confidence level (Savasere et al., 1995). The support of A, sup(A), is a measure of the number of transactions that contain the item set A within the database, whereas the confidence of an association rule is

confidence = sup(A U C) / sup(A),

where sup(A U C) is the support of the item set containing both A (the antecedent) and C (the consequent).
In the context of this paper, instead of applying support and confidence level to a single rule, they are applied to a Pittsburgh rule set. The support is then modified to measure the coverage of the entire rule set, i.e. the ratio of the number of instances covered by the rule set to the total number of instances in the dataset:

support(RS_i) = FA_i / N,

where RS_i is the ith rule set, i ∈ {1, ..., M}, M is the number of rule sets to be evaluated, FA_i is the number of instances that fit the antecedents of the rule set, and N is the total number of instances.
The confidence factor measures how accurate a rule set is on those instances that are captured by its antecedents:

confidence(RS_i) = (CR_i / FA_i) x 100%,

where CR_i is the number of captured instances that are correctly classified.
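The two rule-set measures can be sketched in a few lines (the interval rule encoding and the labels below are hypothetical; the support and confidence ratios follow the definitions in the text):

```python
# A rule here is (low, high, cls): IF low <= x <= high THEN cls.
def fires(rule, x):
    lo, hi, _cls = rule
    return lo <= x <= hi

def predict(rule_set, x, default="neg"):
    # Rules are tried top-down; the first rule that fires classifies x.
    for rule in rule_set:
        if fires(rule, x):
            return rule[2]
    return default

def support(rule_set, data):
    """support(RS_i) = FA_i / N: fraction of instances captured by any rule."""
    fa = sum(1 for x, _y in data if any(fires(r, x) for r in rule_set))
    return fa / len(data)

def confidence(rule_set, data):
    """confidence(RS_i) = CR_i / FA_i: accuracy on captured instances only."""
    cap = [(x, y) for x, y in data if any(fires(r, x) for r in rule_set)]
    cr = sum(1 for x, y in cap if predict(rule_set, x) == y)
    return cr / len(cap) if cap else 0.0

rs = [(0.0, 0.5, "pos"), (0.5, 1.0, "neg")]
data = [(0.2, "pos"), (0.6, "neg"), (0.9, "pos"), (1.5, "pos")]
print(support(rs, data))     # 3 of 4 instances captured -> 0.75
print(confidence(rs, data))  # 2 of the 3 captured are classified correctly
```

Note that an accurate but narrow rule set can score high confidence yet low support, which is why both terms enter the fitness evaluation.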
[Fig. 2. Structural shuffling crossover: the rules of two parent rule sets are pooled and redistributed randomly between the two children.]
The crossover process starts by combining the rules of both selected parents into a common pool. The first selected rule is assigned to the first child and the second selected rule to the second child. The remaining rules are then distributed randomly between the two children. Since rules are considered sequentially within a rule set, rules at the bottom might not have a chance to be considered, even if they are good rules, because the rules at the top would already have classified the instance. One major advantage, and a necessity, of the structural shuffling crossover is that it can bring rules from the bottom to the top of the rule set, giving them the opportunity to classify instances. Fig. 2 shows an example of the crossover process. It can be seen that after the crossover, the rules are randomly ordered and the number of rules in a rule set may vary too.
These structural mutations can translate into a very different rule set, ensuring that new areas of the search space are constantly being explored. Fig. 3a shows an example of rule addition while Fig. 3b shows an example of rule deletion. The positions of insertion and deletion of the rules are randomly chosen.
The algorithm needs to decide whether addition or deletion is to be done at each generation. The number of rules to be added or deleted is then based on a Gaussian distribution (Ang, Tan, & Mamun, 2008). Using the Gaussian distribution has the advantage of allowing more flexibility in the number of rules to be deleted or added, while keeping the distribution centered at the zero mean. Fig. 4 shows the decision based on the Gaussian distribution.
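The Gaussian-based decision can be sketched as follows (the standard deviation is an assumed parameter, not given in the text; the sign of the draw selects addition vs. deletion and its magnitude gives the count):

```python
import random

def structural_mutation_count(sigma=2.0):
    """Draw from a zero-mean Gaussian: a positive draw means rules are added,
    a negative draw means rules are deleted, and the magnitude gives how many.
    A sketch of the scheme attributed to Ang et al. (2008); sigma is assumed."""
    n = round(random.gauss(0.0, sigma))
    action = "add" if n > 0 else "delete" if n < 0 else "none"
    return action, abs(n)

random.seed(1)
print(structural_mutation_count())
```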
3.6. Probability of mutation
[Fig. 3. Structural mutation (a) addition and (b) deletion.]
[Fig. 4. Gaussian-distributed decision on the number of rules to add or delete.]
Holland (1992) proposed a time-dependent, deterministic rate-scheduled mutation operator in which the mutation probability decreases in value over time. Fogarty (1989) used a varying mutation rate and showed the effectiveness of a mutation rate that decreases exponentially over the generations. Similarly, in Tan, Goh, Yang, and Lee (2006), the mutation rate starts with a high value to produce a diverse set of solutions for an effective genetic exploration search, and this value decreases as a function of the generation to meet the exploitation requirements of local fine-tuning. This mutation scheme differs from other adaptive mutation operators, where the mutation rate decreases proportionally as the generation increases.
In this paper, the probability of mutation P_m changes based on the scheme of Tan et al. (2006), modified to suit the problem discussed in this paper. The equation used in this paper is given in Eq. (5):
P_m = 0.7 (1 - n/genNum)^2,           for 0 <= n <= 0.2 genNum,
P_m = 0.2 (1 - n/genNum)^2 + 0.05,    for 0.2 genNum < n <= genNum,     (5)

where n is the current generation number and genNum is the total number of generations.
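A short sketch of this generation-dependent schedule (the coefficients 0.7, 0.2 and 0.05 follow our reading of the garbled Eq. (5) in the extracted text, so treat them as a reconstruction rather than definitive values):

```python
def mutation_probability(n, gen_num):
    """Decreasing mutation rate P_m over generations n: a high-rate quadratic
    decay for the first 20% of the run (exploration), then a gentler decay
    that levels off at 0.05 (exploitation). Coefficients are a reconstruction."""
    frac = 1.0 - n / gen_num
    if n <= 0.2 * gen_num:
        return 0.7 * frac ** 2
    return 0.2 * frac ** 2 + 0.05

print(mutation_probability(0, 100))    # 0.7 at the start of the run
print(mutation_probability(100, 100))  # 0.05 at the final generation
```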
[Fig. 5. Probability of mutation over the generations, decreasing from 0.7.]
[Fig. 6. Evolutionary memetic rule extraction algorithm overview: initialization, binary tournament selection, structural crossover and mutation, offspring creation, evaluation and archiving, local search and elitism of the parent population, repeated until termination.]
[Fig. 7. Flowchart of the µGA local search: archive, genetic crossover, micro-mutation and rule set selection, repeated until the stopping criterion is met.]
The rule set that has the highest fitness on the training dataset is applied to the test dataset. Each instance in the test dataset is presented to the rule set for classification. If the first rule does not capture the instance, the instance is considered by the subsequent rules. If none of the rules is able to classify the instance, it is assigned the default class, which in this paper is specified as the majority class. Hence, it is important that rule sets have a high support level during the training phase, as a high support level on the training data will most probably translate to higher coverage on the testing dataset.
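The sequential, default-class classification procedure described above can be sketched directly (the interval encoding and class names here are hypothetical):

```python
def classify(rule_set, instance, fires, default_class):
    """Try rules top-down; the first rule whose antecedents capture the
    instance classifies it, otherwise fall back to the default (major) class."""
    for antecedent, cls in rule_set:
        if fires(antecedent, instance):
            return cls
    return default_class

# Interval antecedents (a hypothetical encoding): a rule fires when every
# feature lies within its [low, high] bounds.
def interval_fires(bounds, x):
    return all(lo <= v <= hi for (lo, hi), v in zip(bounds, x))

rs = [([(0.0, 0.4)], "malignant"), ([(0.4, 0.8)], "benign")]
print(classify(rs, [0.9], interval_fires, "benign"))  # no rule fires -> default
```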
5. Local search algorithm
Evolutionary Algorithms (EAs) have the ability to escape from local optima due to their inherent global search capability. Their concurrent search enables them to promptly explore and identify new promising regions of the solution space. Though EAs see the macro situation well, they do not exploit the search space thoroughly. Hence, local search is often used as a complement to EA optimization, which concentrates mainly on global exploration. Many researchers have complemented the global exploration ability of EAs by incorporating dedicated learning or local search heuristics. Experimental studies have shown that EA-LS hybrids, or memetic EAs, are capable of more efficient search (Merz & Freisleben, 1999; Ong & Keane, 2004).
This section introduces two variants of local search operators that are incorporated into the main rule extraction algorithm. The main difference between these two local search operators lies in the way the children are created. The first local search operator uses the µGA, which involves crossover of chromosome alleles. The second local search operator, inspired by AIS, uses clonal selection for proliferation.
5.1. Micro-genetic algorithm (µGA) local search
The framework of the µGA is similar to a normal evolution process; however, it differs in its purpose, implementation and operation. The target of the main EA is to optimize the structure of rule sets and exchange good rules among the rule sets through the use of structural mutation and structural crossover. In the micro-genetic algorithm local search, the structure of the rule sets is maintained throughout the evolution process while the parameters of the rules are exchanged and mutated. In this manner, the parameter values are optimized to fit the rule set structure.
Fig. 7 shows the overview of the µGA local search. Each individual from the archive of the main algorithm goes through one round of the local search process. A parent goes through genetic crossover to produce the children population. After the crossover process, all the children undergo micro-mutation (Section 5.3). These mutated children are evaluated and the fittest child is selected as the parent to be used for the crossover process in the next generation. This process is repeated until the overall stopping criterion, the maximum number of generations of the local search, is met. When the local search process ends, the fittest child is returned to the main algorithm to be a parent for the next generation of the main algorithm.
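The µGA round can be sketched as a small loop (crossover, mutation and fitness are passed in as callables; the toy demo below, with an identity "crossover" and Gaussian micro-mutation, is purely illustrative, while offspring=5 and iterations=10 follow Table 2):

```python
import random

def micro_ga_local_search(parent, fitness, crossover, micro_mutate,
                          offspring=5, iterations=10):
    """One round of the µGA local search, as a sketch: the archived parent
    produces `offspring` children by crossover, each child is micro-mutated,
    and the fittest child becomes the parent for the next generation."""
    for _ in range(iterations):
        children = [micro_mutate(crossover(parent)) for _ in range(offspring)]
        parent = max(children, key=fitness)
    return parent

random.seed(0)
target = [0.2, 0.8, 0.5]
fit = lambda ind: -sum((a - b) ** 2 for a, b in zip(ind, target))
cross = lambda ind: ind[:]  # placeholder: rule-set structure is kept intact
mut = lambda ind: [min(1.0, max(0.0, g + random.gauss(0, 0.05))) for g in ind]
best = micro_ga_local_search([0.5, 0.5, 0.5], fit, cross, mut)
```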
5.1.1. Genetic crossover
The genetic crossover is done among the rules of a rule set, as depicted in Fig. 8. From one rule set, multiple different children can be created by choosing different rules and different crossover points. In the exceptional case where the chosen rule set contains only one rule, no crossover can be done and it is replaced by the cloning local search. If the chosen rule set contains two rules, different children are created by choosing random crossover points between the two parent rules. The allowed crossover points of the boundary strings are at the intersections between attributes, to prevent separating the lower and upper boundaries of an attribute. This constraint is applied to avoid any infeasible solution after the crossover. For the crossover to proceed, two random rules and one crossover point are chosen. Once the crossover point is selected, it is the same for both rules. Fig. 8 shows a rule set containing three rules for a four-input-feature problem. Rules 1 and 3 are selected for the crossover process and the selected crossover point is between feature 1 and feature 2. No change is made to the class field of either rule.
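The boundary constraint can be expressed by treating each attribute's genes as one indivisible tuple, so that a cut can only fall between attributes (the tuple encoding below is an assumed simplification of the L/M/O/U genes of Fig. 1):

```python
import random

def rule_crossover(rule_a, rule_b, rng=random):
    """Single-point crossover between two rules of a rule set. Each rule is
    (genes, cls) with genes a list of per-attribute tuples, so cutting between
    list entries never separates a lower bound from its upper bound. The same
    cut point is used in both rules; class fields are untouched."""
    genes_a, cls_a = rule_a
    genes_b, cls_b = rule_b
    cut = rng.randrange(1, len(genes_a))  # a boundary between two attributes
    child_a = (genes_a[:cut] + genes_b[cut:], cls_a)
    child_b = (genes_b[:cut] + genes_a[cut:], cls_b)
    return child_a, child_b

random.seed(3)
r1 = ([(0.0, 0.2), (0.3, 0.5), (0.6, 0.9), (0.1, 0.4)], "class1")
r2 = ([(0.1, 0.3), (0.2, 0.6), (0.5, 0.8), (0.2, 0.7)], "class2")
c1, c2 = rule_crossover(r1, r2)
print(c1[1], c2[1])  # class fields are unchanged: class1 class2
```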
5.2. AIS local search
Though AIS are widely used for classification, they are seldom used to provide high-level linguistic rules. A number of works, including those of Carvalho and Freitas (2000, 2001), use an immunological algorithm to classify examples belonging to small disjuncts. The set of small disjuncts is then able to cover a large set of examples. In Timmis and Neal (2001), AIS is used to discover fuzzy classification rules, whereas in Castro, Coelho, Caetano, and von Zuben (2005) an antibody represents a set of fuzzy classification rules. In addition, AIS are usually used as the main algorithm for classification; hence, the construction of an AIS local search operator to fine-tune the rules in a rule set opens up new and interesting avenues in the hybridization of EAs with AIS.
Inspired by AIS, the local search here incorporates the idea of clonal selection. The principle of clonal selection forms the core of AIS, where the antibodies that best fit the antigens are cloned in order to replicate good individuals.
[Fig. 8. Genetic crossover among the rules of a rule set: the genes beyond the crossover point are exchanged between the two selected parent rules, while the class fields are unchanged.]
In this paper, the immunological memory cell is the best antibody solution from the cloned
cell. The antigen is regarded as the optimal solution for the training dataset in terms of accuracy. The analogue of the antibodies of natural immune systems is the solution to the problem, i.e. the rule sets. The objective is for the antibodies to fit the antigen as closely as possible; in this case, for the evolving rule sets to get as close to the optimal solution as possible.
The flowchart of the AIS local search is given in Fig. 9. Similarly to the µGA, a chromosome from the archive of the main algorithm is taken as a good antibody. This antibody proliferates by cloning based on a clone rate, C. In cloning, an exact replication of the antibody is made. Hyper-mutation, in the form of micro-mutation, is then applied to the cloned cells. At every encounter/iteration, the affinity between the antibodies and the antigen is evaluated. The affinity measurement is calculated based on their Euclidean distance in the objective space. A smaller Euclidean distance means a higher match, hence higher affinity. The cell with the highest affinity is selected as the antibody to be cloned in the next encounter/iteration. No crossover operation is done here.
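A sketch of one run of this clonal-selection loop (the antigen position, mutation size and objective space below are hypothetical; clone rate 5 and 10 iterations follow Table 2):

```python
import math
import random

def ais_local_search(antibody, affinity, micro_mutate, clone_rate=5,
                     iterations=10):
    """Clonal-selection local search, as a sketch: the antibody is cloned
    `clone_rate` times, each clone is hyper-mutated (micro-mutation), and the
    clone with the highest affinity (smallest Euclidean distance to the
    antigen in objective space) survives to the next encounter. No crossover."""
    for _ in range(iterations):
        clones = [micro_mutate(list(antibody)) for _ in range(clone_rate)]
        antibody = min(clones, key=affinity)  # smaller distance = higher affinity
    return antibody

random.seed(0)
antigen = [1.0, 1.0]  # hypothetical optimum in a 2-D objective space
dist = lambda ab: math.dist(ab, antigen)
mut = lambda ab: [g + random.gauss(0, 0.1) for g in ab]
best = ais_local_search([0.0, 0.0], dist, mut)
```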
5.3. Micro-mutation
By using the mutation operator, the algorithm is able to explore new areas of the search space and has the possibility of finding better solutions.
[Fig. 9. Flowchart of the AIS local search: archive, cloning, micro-mutation and clonal selection, repeated until the stopping criterion is met.]
1310
Pm 0:2
1
0:05;
n
fn 2 Z : 1; genNumg
vq
1; if
0; if
vq > 1
vq < 0
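A sketch of the micro-mutation with this boundary repair (the Gaussian perturbation size is an assumed parameter; the clamping rule is the one stated above):

```python
import random

def micro_mutate(genes, p_m, rng=random):
    """Micro-mutation sketch: each normalized gene v_q is perturbed with
    probability p_m, then clamped back into [0, 1] (v_q = 1 if v_q > 1,
    v_q = 0 if v_q < 0), so rules stay within the normalized feature range."""
    out = []
    for v in genes:
        if rng.random() < p_m:
            v += rng.gauss(0.0, 0.1)       # assumed perturbation size
        out.append(min(1.0, max(0.0, v)))  # the boundary repair rule
    return out

random.seed(7)
print(micro_mutate([0.05, 0.98, 0.5], p_m=1.0))  # every gene perturbed, all in [0, 1]
```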
6.1. Datasets
Datasets from the medical and botanical fields are used to validate the proposed algorithm. The cancer, diabetes and iris datasets represent both binary and multi-class classification problems. These datasets are taken from the University of California, Irvine Machine Learning Repository (Blake & Merz, 1998) benchmark collection, and they are data collected from real-world problems. For each dataset, 75% of the data is used for training while the remaining 25% is used for testing. Values of the datasets are normalized into the range [0, 1].
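The [0, 1] normalization mentioned here is the usual min-max scaling per attribute column (the sample values are illustrative):

```python
def min_max_normalize(column):
    """Scale one attribute column into [0, 1]: (v - min) / (max - min)."""
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

print(min_max_normalize([2.0, 4.0, 10.0]))  # [0.0, 0.25, 1.0]
```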
The objective of the cancer problem is to diagnose breast cancer in patients by classifying a tumor as benign or malignant. The Breast Cancer Wisconsin dataset was originally collected at the University of Wisconsin Hospitals, Madison, by Dr. William H. Wolberg (Mangasarian & Wolberg, 1990). 458 (65.5%) of the patterns in the dataset are benign while 241 (34.5%) are malignant. There are nine attributes/inputs (clump thickness, uniformity of cell size and shape, marginal adhesion, single epithelial cell size, bare nuclei, bland chromatin, normal nucleoli and mitoses).
The diabetes problem is to diagnose a Pima Indian individual as diabetic or not based on personal data and a medical examination. There are eight attributes/inputs (no. of times pregnant, plasma glucose concentration, diastolic blood pressure, triceps skin fold thickness, serum insulin, BMI and age) and two output classes (diabetes positive or diabetes negative).
Table 1
Parameter settings for EMA.

                   Cancer                      Diabetes                    Iris
Populations        Parent: 30, Offspring: 120  Parent: 25, Offspring: 150  Parent: 30, Offspring: 120
Archive size       30                          25                          30
Crossover prob.    0.7                         0.7                         0.7
Crossover type     Chromosome                  Chromosome                  Chromosome
Generations (G)    50                          100                         50
Mutation prob.     Generational                Generational                Generational
Mutation type      Structural                  Structural                  Structural
Elitism            Yes                         Yes                         Yes
Table 2
Parameter settings for local search.

                            µGA                      AIS
Populations                 Parent: 1, Offspring: 5  Parent: 1, Offspring: 5
Crossover prob.             0.9                      –
Crossover type              Rule                     –
Generations (G)/Iterations  10                       10
Cloning                     No                       Yes
Mutation prob.              Generational             Iteration-based
Mutation type               Micro-mutation           Micro-mutation
[Fig. 10. Cancer dataset: (a) fitness level, (b) accuracy, (c) support and (d) confidence over 50 generations, for EMA-µGA, EMA-AIS and no local search.]
[Fig. 11. Diabetes dataset: (a) fitness level, (b) accuracy, (c) support and (d) confidence over 100 generations, for EMA-µGA, EMA-AIS and no local search.]
[Fig. 12. Iris dataset: (a) fitness level, (b) accuracy, (c) support and (d) confidence over 50 generations, for EMA-µGA, EMA-AIS and no local search.]
1313
if
petal length<=2.9575
elseif
elseif
elseif
petal length>=4.9726
(4.7692 <= sepal length<=6.6939) and (petal
width>=1.9455)
elseif
else
class is Versicolour
Fig. 13. Example rule set on iris dataset.
The box plots give the median (line within the box) and the lower and upper quartiles (lower and upper boundaries of the box). The smallest and largest observations are also given, with possible outliers marked by a cross. Generally, EMA-µGA and EMA-AIS have fewer rules in a rule set for all problems compared to No LS. In connection with the fitness graphs, an appropriate number of rules is required, rather than a larger rule set that does not guarantee better classification abilities. For all the datasets, EMA-µGA and EMA-AIS evolve rule sets that are more consistent in terms of the number of rules, while No LS has larger deviations from the median. In addition, without local search, outliers are evident for all three datasets. The differences between the median numbers of rules for EMA-µGA and EMA-AIS are not very great: 13.25 vs. 12, 21.8 vs. 21 and 7.15 vs. 7.8 for the cancer, diabetes and iris datasets, respectively. Using local search results in less complex rule sets while achieving better performance.
[Fig. 14. Average number of rules in a rule set: (a) cancer, (b) diabetes and (c) iris, for EMA-µGA, EMA-AIS and No LS.]
Table 3
Support for the datasets.

                     Mean support (%)   Standard deviation
Cancer    EMA-µGA    99.71              0.659
          EMA-AIS    99.86              0.255
          No LS      99.60              0.497
Diabetes  EMA-µGA    99.87              0.373
          EMA-AIS    99.95              0.160
          No LS      99.84              0.382
Iris      EMA-µGA    100                0.00
          EMA-AIS    100                0.00
          No LS      100                0.00
Table 4
Generalization accuracy.

                     Mean          Maximum       Minimum       Standard
                     accuracy (%)  accuracy (%)  accuracy (%)  deviation
Cancer    EMA-µGA    97.414        99.425        95.402        0.942
          EMA-AIS    97.557        99.425        95.402        1.002
          No LS      96.236        99.425        93.678        1.532
          PART       94.9          –             –             0.4
Diabetes  EMA-µGA    75.052        80.208        71.875        2.292
          EMA-AIS    75.235        77.604        72.396        1.439
          No LS      73.229        77.083        68.75         2.133
          PART       74.0          –             –             0.5
Iris      EMA-µGA    97.297        100           94.595        0.877
          EMA-AIS    97.027        97.297        94.595        0.832
          No LS      94.595        100           83.784        3.921
          PART       93.7          –             –             1.6
It should be noted that the minimum accuracies for EMA-µGA and EMA-AIS are still higher than the mean accuracy of PART. Hence, for EMA-µGA and EMA-AIS, deviation from the mean accuracy will not give results that are lower than PART.
PART obtained a higher mean accuracy than No LS and has the lowest standard deviation for the diabetes problem. The algorithms presented in this paper have rather large standard deviations on this problem. The best algorithm among those presented in this paper is EMA-AIS, which has the highest mean accuracy and the lowest standard deviation. EMA-µGA is able to achieve the highest maximum accuracy at 80.2%, which is a large difference from the mean accuracies of the rest of the algorithms.
The best algorithm on the iris dataset is EMA-µGA, which obtained the highest mean accuracy and the highest maximum and minimum accuracies. It also has a lower standard deviation compared to No LS and PART. Both EMA-µGA and EMA-AIS have outperformed PART significantly, since they have higher mean accuracies with lower standard deviations.
Generally, across all datasets, the following observations are made:
- No LS has the lowest minimum accuracies and large standard deviations; hence local search is important for a more robust algorithm.
- EMA-µGA and EMA-AIS have the best mean accuracies.
Luh, G. C., Chueh, C. H., & Liu, W. W. (2003). MOIA: Multi-objective immune algorithm. Engineering Optimization, 35(2), 143–164.
Mangasarian, O. L., & Wolberg, W. H. (1990). Cancer diagnosis via linear programming. SIAM News, 23(5), 1, 18.
Merz, P., & Freisleben, B. (1999). A comparison of memetic algorithms, Tabu search, and ant colonies for the quadratic assignment problem. In Proceedings of the congress on evolutionary computation (Vol. 1, pp. 2063–2070).
Michalewicz, Z. (1994). Genetic algorithms + data structures = evolution programs. London: Kluwer Academic Publishers.
Noda, E., Freitas, A. A., & Lopes, H. S. (1999). Discovering interesting prediction rules with a genetic algorithm. In Proceedings of IEEE congress on evolutionary computation (pp. 1322–1329). Washington, DC, USA.
Ong, Y. S., & Keane, A. J. (2004). Meta-Lamarckian learning in memetic algorithms. IEEE Transactions on Evolutionary Computation, 8(2), 99–110.
Quinlan, J. R. (1993). C4.5: Programs for machine learning. Morgan Kaufmann Publishers.
Stathakis, D., & Vasilakos, A. (2006). Satellite image classification using granular neural networks. International Journal of Remote Sensing, 27(18), 3991–4003.
Savasere, A., Omiecinski, E., & Navathe, S. (1995). An efficient algorithm for mining association rules in large databases. In Proceedings of the 21st international conference on very large data bases (pp. 432–444).
Tan, K. C., Goh, C. K., Yang, Y. J., & Lee, T. H. (2006). Evolving better population distribution and exploration in evolutionary multi-objective optimization. European Journal of Operational Research, 171, 463–495.
Tan, K. C., Chew, Y. H., & Lee, T. H. (2006). A hybrid multi-objective evolutionary algorithm for solving vehicle routing problem with time windows. Computational Optimization and Applications, 34, 115–151.
Tan, K. C., Yu, Q., Heng, C. M., & Lee, T. H. (2003). Evolutionary computing for knowledge discovery in medical diagnosis. Artificial Intelligence in Medicine, 27(2), 129–154.
Tan, K. C., Yu, Q., & Ang, J. H. (2006a). A coevolutionary algorithm for rules discovery in data mining. International Journal of Systems Science, 37(12), 835–864.
Tan, K. C., Yu, Q., & Ang, J. H. (2006b). A dual-objective evolutionary algorithm for rules extraction in data mining. Computational Optimization and Applications, 34, 115–151.
Tan, K. C., Goh, C. K., Mamun, A. A., & Ei, E. Z. (2008). An evolutionary artificial immune system for multi-objective optimization. European Journal of Operational Research, 187(2), 371–392.
Timmis, J., & Neal, M. (2001). A resource limited artificial immune system for data analysis. Knowledge-Based Systems, 14, 121–130.
Timmis, J., Neal, M., & Hunt, J. (2000). An artificial immune system for data analysis. BioSystems, 55, 143–150.
Watkins, A. B., & Boggess, L. C. (2002a). A resource limited artificial immune classifier. In Proceedings of the congress on evolutionary computation (Vol. 1, pp. 926–931).
Watkins, A. B., & Boggess, L. C. (2002b). A new classifier based on resource limited artificial immune systems. In Proceedings of the congress on evolutionary computation (Vol. 2, pp. 1546–1551).