
Evolving Agent-based Simulations in the Clouds

James Decraene, Yong Yong Cheng, Malcolm Yoke Hean Low, Suiping Zhou, Wentong Cai and Chwee Seng Choo
Abstract - Evolving agent-based simulations enables one to automate the difficult iterative process of modeling complex adaptive systems to exhibit pre-specified/desired behaviors. Nevertheless this emerging technology, combining research advances in agent-based modeling/simulation and evolutionary computation, requires significant computing resources (i.e., high performance computing facilities) to evaluate simulation models across a large search space. Moreover, such experiments are typically conducted in an infrequent fashion and may occur when the computing facilities are not fully available. The user may thus be confronted with a computing budget limiting the use of these evolvable simulation techniques. We propose the use of the cloud computing paradigm to address these budget and flexibility issues. To assist this research, we utilize a modular evolutionary framework coined CASE (for Complex Adaptive System Evolver) which is capable of evolving agent-based models using nature-inspired search algorithms. In this paper, we present an adaptation of this framework which supports the cloud computing paradigm. An example evolutionary experiment, which examines a simplified military scenario modeled with the agent-based simulation platform MANA, is presented. This experiment refers to Automated Red Teaming: a vulnerability assessment tool employed by defense analysts to study combat operations (which are regarded here as complex adaptive systems). The experimental results suggest promising research potential in exploiting the cloud computing paradigm to support computing-intensive evolvable simulation experiments. Finally, we discuss an additional extension to our cloud computing compliant CASE in which we propose to incorporate a distributed evolutionary approach, e.g., the island-based model, to further optimize the evolutionary search.

I. INTRODUCTION

EXAMINING complex adaptive systems (CAS) remains problematic as the traditional analytical and statistical modeling methods appear to limit the study of CAS [1]. To overcome these issues, Holland proposed the use of evolutionary agent-based simulations to examine the emergent and complicated phenomena characterizing CAS. In evolutionary agent-based simulations, multiple and interacting evolvable agents (e.g., neurons, traders, soldiers, etc.) determine, as a whole, the behavior of the system (e.g., brain, financial market, warfare, etc.). The evolution of agents is conducted through the use of evolutionary computation techniques (e.g., learning classifier systems, genetic programming, evolution strategies, etc.). The evolution of CAS can be driven to exhibit pre-specified and desired system behaviors (e.g., to identify critical conditions leading to the emergence of specific system-level phenomena such as a financial crisis or battlefield outcomes). Although this method appears to be satisfactory for studying CAS, it is limited by the requirement of significant computational resources. Indeed, in evolvable simulation experiments, many simulation models are iteratively generated and evaluated. Due to the stochastic nature of both evolutionary algorithms and agent-based simulations, experiment replications are also required to account for statistical fluctuations. As a result, the experimental process is computationally highly demanding. Moreover, such experiments are typically conducted occasionally when the computing facilities may not be fully available. To address these computing budget issues, involving both scalability and flexibility constraints, we examine the cloud computing paradigm [2]. This distributed computing paradigm has recently been introduced to specifically address such computing budget issues where large datasets and considerable computational requirements are dealt with. To assist this research, we propose to modify a modular evolutionary framework, coined CASE (for Complex Adaptive System Evolver), to support cloud computing facilities. In the remainder of this paper, we first provide introductions to both evolutionary agent-based simulations and cloud computing. Following this, we present the CASE framework. The latter is then extended to support the cloud computing paradigm. A series of experiments is described to evaluate our cloud computing compliant framework in terms of scalability. The experiments involve a simplified military simulation which is modeled with the agent-based simulation platform MANA [3]. Finally, we discuss an additional extension to CASE which would incorporate a distributed evolutionary approach [4] to further optimize the search process.

James Decraene, Yong Yong Cheng, Malcolm Yoke Hean Low, Suiping Zhou and Wentong Cai are with the Parallel and Distributed Computing Center at the School of Computer Engineering, Nanyang Technological University, Singapore (email: jdecraene@ntu.edu.sg). Chwee Seng Choo is with DSO National Laboratories, 20 Science Park Drive, Singapore. This R&D work was supported by the Defence Research and Technology Office, Ministry of Defence, Singapore under the EVOSIM Project (Evolutionary Computing Based Methodologies for Modeling, Simulation and Analysis).

II. EVOLUTIONARY AGENT-BASED SIMULATIONS

Agent-based systems (ABSs) are computational methods which can model the intricate and non-linear dynamics of complex adaptive systems. ABSs are commonly implemented with object-oriented programming environments in which agents are instantiations of object classes. ABSs typically involve a large number of autonomous agents which are executed in a concurrent or pseudo-concurrent manner (i.e., using a time-slicing algorithm). Each agent possesses its own distinct state variables, can be dynamically deleted and is capable of interacting with the other agents. The agents' computational methods may include stochastic processes resulting in stochastic behavior at the system level.

To study ABSs, the data farming method was proposed as a means to identify the landscape of possibilities [5], i.e., the spectrum of possible simulation outcomes. In data farming experiments, specific simulation model parameters are selected and varied (according to pre-specified boundary values). This exploratory analysis of parameters enables one to examine the effects of the parameters on the simulation outcomes. Several techniques [6] have been introduced to reduce the search space where each solution/design point is a distinct simulation model. The search space can be reduced even further when one is interested in a single (or target) system behavior. Evolutionary computation (EC) techniques can here be used to drive the generation/evaluation of simulation models. In this paper, we examine such an objective-based data farming approach using evolutionary agent-based simulations [7]. In evolutionary ABSs, EC techniques are utilized to evolve simulation models to exhibit a desirable output/behavior. This method differs from simulation optimization techniques [8] as it relies on the simulation of autonomous and concurrent agents whose (inter)actions may include stochastic elements. Therefore the evaluation of the simulation models is also stochastic by nature.

III. CLOUD COMPUTING

Cloud computing [2] is a novel high performance computing (HPC) paradigm which has recently attracted considerable attention. The computing capabilities (i.e., compute and storage clouds) are typically provided as a service via the Internet. This web approach enables users to access HPC services without requiring expertise in the technology that supports them. In other words, the user does not need expertise in mainframe administration and maintenance, distributed systems, networking, etc.
The key benefits of cloud computing are identified as follows:

Reduced cost: Cloud computing infrastructures are provided by a third party and do not need to be purchased for potentially infrequent computing tasks. Users pay for the resources on a utility computing basis. This enables users with limited financial and computing resources to exploit high performance computing facilities (e.g., the Amazon Elastic Compute Cloud, the Sun Grid) without having to invest in personal and expensive computing facilities.

Scalability: Multiple computing clouds (which can be distant from each other) can be aggregated to form a single virtual entity enabling users to conduct very large scale experiments. The computing resources are dynamically provided and self-managed by the cloud computing server.

Cloud computing is an HPC paradigm; in other words, it aims at enabling users to exploit large amounts of computing power in a short period of time (in minutes or hours). Thus, cloud computing differs from High Throughput Computing approaches, such as Condor [9] (note that Condor is being adapted to support cloud computing [10]), which aim at provisioning large amounts of computing power over longer periods of time (in days or weeks).

One of the core technologies underlying cloud computing, enabling the above benefits, is the MapReduce programming model [11]. This model is composed of two distinct phases:

Map: The input data is partitioned into subsets and distributed across multiple compute nodes. The data subsets are processed in parallel by the different nodes. A set of intermediate files results from the Map phase and is processed during the Reduce phase.

Reduce: Multiple compute nodes process the intermediate files which are then collated to produce the output files. Similarly to the Map processes, the Reduce operations are distributed (and executed in parallel) over multiple compute nodes.

The relative simplicity of the MapReduce programming model facilitates the efficient parallel distribution of computationally expensive jobs. This parallelism also enables recovery from failure during the operations (this is particularly relevant when considering a distributed environment where some nodes may fail during a run). Map/Reduce operations may be replicated (if a distinct operation fails, its replica is retrieved). Also, failed operations may automatically be rescheduled. These fault-tolerant features are inherent properties of cloud computing frameworks such as Apache Hadoop. Thus the user is not required to handle such issues. We suggest that evolutionary agent-based simulations can be expressed as MapReduce computations, and consequently, may exploit the benefits provided by the cloud computing paradigm. In the next section we briefly present some related studies which examined the combination of the MapReduce programming model with evolutionary algorithms.

IV. RELATED STUDIES

Recent studies have combined evolutionary computation and the MapReduce programming model. In [12], Jin et al. claimed that, as devised, the MapReduce model cannot directly support the implementation of parallel genetic algorithms (i.e., a specific island-based model). As a result, MapReduce was extended to include an additional Reduce process. The iterative cycle is as follows. During the Map phase, multiple instances of the genetic algorithm are executed in parallel. The local optimal solutions of each population are collected during the first Reduce phase. An additional collection and sorting of the local optimal solutions is conducted during the second Reduce phase. The resulting set of global optimal solutions is then utilized to initiate the next generation.

Llora et al. [13] presented a different approach where several evolutionary algorithms were adapted to support the MapReduce model (in contrast with Jin et al., who adapted the MapReduce model and not the evolutionary algorithm itself). The parallelization of the evolutionary algorithms was here conducted using a decentralized and distributed selection approach [14]. This method avoided the requirement of a second Reduce process (i.e., a single selection operation is conducted over the aggregation of the different pools of solutions). The above studies provide guidance for translating evolutionary algorithms into MapReduce operations. The approach proposed by Llora et al. is further examined in Section VI. Note that in contrast with Jin et al.'s and Llora et al.'s approaches, the objective function is here the simulation of stochastic agent-based models. The resolution (i.e., level of abstraction) of the simulations is the key factor (i.e., the bulk of the work) determining the computational requirements of the evolutionary experiments. In the next section, a description of the CASE framework is provided.

V. THE CASE FRAMEWORK

CASE is a recently developed framework which enables one to evolve simulation models using nature-inspired search algorithms. This system was constructed in a modular manner (using the Ruby programming language) to accommodate the user's specific requirements (e.g., use of different simulation engines or evolutionary algorithms, etc.). This framework can be regarded as a simplification of the Automated Red Teaming framework [15] which was developed by the DSO National Laboratories of Singapore. CASE is composed of three main components which are distinguished as follows:

1) The model generator: This component takes as inputs a base simulation model specified in the eXtensible Markup Language (XML) and a set of model specification text files. According to these inputs, novel XML simulation models are generated and sent to the simulation engine for evaluation. Thus, as currently devised, CASE only supports simulation models specified in XML. Moreover, the model generator may consider constraints over the evolvable parameters (this feature is optional).
These constraints are specified in a text file by the user. These constraints (due, for instance, to interactions between evolvable simulation parameters) aim at increasing the plausibility of generated simulation models (e.g., through introducing cost trade-offs for specific parameter values).

2) The simulation engine: The set of XML simulation models is received and executed by the stochastic simulation engine. Each simulation model is replicated a number of times to account for statistical fluctuations. A set of result files detailing the outcomes of the simulations (in the form of numerical values, for instance) is generated. These measurements are used to evaluate the generated models, i.e., these figures are the fitness (or cost) values utilized by the evolutionary algorithm (EA) to direct the search.

3) The evolutionary algorithm: The set of simulation results and associated model specification files are received by the evolutionary algorithm, which, in turn, processes the results and produces a new generation of model specification files. The generation of these new model specifications is driven by the user-specified (multi)objectives (e.g., maximize/minimize some quantitative values capturing the target system behavior). The algorithm iteratively generates models which would incrementally, through the evolutionary search, best exhibit the desired outcome behavior. The model specification files are sent back to the model generator; this completes the search iteration. This component is the key module responsible for the automated analysis and modeling of simulations.

Communications between the three components are conducted via text files for simplicity and flexibility. Note that the flexible nature of CASE allows one to develop and integrate different simulation platforms (using models specified in XML) and search algorithms. In the next section, we propose a cloud computing compliant version of CASE.

VI. MAPREDUCE CASE

We present our adaptation of the CASE framework to support the MapReduce programming model. This adaptation is conducted using the Apache Hadoop framework, which relies on the Map and Reduce functions devised in functional programming languages such as Lisp. During initialization, the CASE modules (simple Ruby scripts and the simulation engine executable) are sent to the compute nodes. Then, at each search iteration, only the model specification files are transmitted to the compute nodes, where, locally, the generation and evaluation of simulation models are conducted. The motivation of this approach is to decrease the network traffic and distribute the computational effort (moving computation is cheaper than moving data). Also, note that only a single Reduce process is conducted to retrieve the intermediate result files.
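This distribution scheme can be sketched in plain Ruby (CASE's implementation language). The sketch below only illustrates the pattern described above, with model specifications fanned out to Map tasks that generate and evaluate models locally and a single Reduce collecting the results; all names and the toy fitness computation are illustrative assumptions, not Hadoop or CASE APIs.

```ruby
# Illustrative sketch (assumptions, not actual CASE/Hadoop code): each Map
# task receives a batch of model specifications, generates and evaluates the
# models locally, and emits intermediate (spec, fitness) pairs; a single
# Reduce task collates them for the evolutionary algorithm.

# Map task: runs on one compute node over its batch of specifications.
def map_task(spec_batch)
  spec_batch.map do |spec|
    model = "<model>#{spec}</model>"  # stand-in for XML model generation
    fitness = model.length            # stand-in for replicated simulation runs
    [spec, fitness]
  end
end

# Reduce task: collates the intermediate results from all Map tasks.
def reduce_task(intermediate_batches)
  intermediate_batches.flatten(1).to_h
end

specs = %w[s1 s22 s333]
batches = specs.each_slice(2).map { |batch| map_task(batch) }  # "parallel" Map
results = reduce_task(batches)
```

In the real setting, each Map task would invoke the simulation engine executable already staged on its node, so only the small specification files cross the network at each iteration.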
Future work will consider exploiting the Reduce phase through analyzing intermediate result files (to assist the evolutionary algorithm) using multiple compute nodes. This relatively straightforward implementation illustrates the simplicity of the MapReduce programming model.

VII. EXPERIMENT

We present an example experiment in which the CASE framework is utilized for Automated Red Teaming (ART), a simulation-based military methodology utilized to uncover weaknesses of operation plans. Here, combat is conceptually regarded as a complex adaptive system whose outcomes result from complex non-linear dynamics [16]. The agent-based simulation platform MANA [3], developed by the Defence Technology Agency of the New Zealand Defence Force, is employed to model and perform the simulations.

A. Automated Red Teaming

Automated Red Teaming (ART) was originally proposed by the defense research community as a vulnerability assessment tool to automatically uncover critical weaknesses

of operational plans [7]. Using this computer/simulation-based approach, defense analysts may subsequently resolve the identified tactical plan loopholes. A stochastic agent-based simulation is typically used to model and simulate the behavioral and dynamical features of the environment/agents. The agents are specified with a set of properties which defines their intrinsic capabilities and personality such as sensor range, fire range, movement range, communications range, aggressiveness, response to injured teammates and cohesion. A review of ABS systems applied to various military applications is provided by Cioppa et al. [17]. In ART experiments, a defensive Blue team (a set of agents) is subjected to repeated attacks, where multiple scenarios may be examined, from a belligerent Red team. Thus, ART aims at anticipating the adversary's behavior through the simulation of various potential scenarios.

B. Setting

A maritime anchorage protection scenario is examined. In this scenario, a Blue team (composed of 7 vessels) conducts patrols to protect an anchorage (in which 10 Green commercial vessels are anchored) against threats. Red forces (5 vessels) attempt to break Blue's defense strategy and inflict damage on anchored vessels. The aim of the study is to discover Red strategies that are able to breach Blue's defensive tactic. We detail the model, evolutionary algorithm and cloud computing facilities utilized in the experiments:

The model: Figure 1 depicts the scenario which was modeled using the ABS platform MANA.

Fig. 1. MANA model of the maritime anchorage protection scenario adapted from [18]. The map covers an area of 100 by 50 nautical miles (1 nm = 1.852 km). The dashed lines depict the patrolling paths of the different Blue vessels.

The Blue patrolling strategy is composed of two layers: an outer (with respect to the anchorage area, 30 by 10 nm) and an inner patrol. The outer patrol consists of four smaller but faster boats. They provide the first layer of defence, whereas the larger and heavily armored ships inside the anchorage are the second defensive layer.

In CASE, each candidate solution is represented by a vector of real values defining the different evolvable Red behavioral parameters (Table I). As the number of decision variables increases, the search space becomes significantly larger. According to the number of evolvable properties and associated ranges given for this experiment, the search space contains approximately 1.007 x 10^28 distinct candidate solutions (i.e., variants of the original simulation model).

TABLE I
EVOLVABLE RED PARAMETERS

Red property                     Min       Max
Team 1 initial position (x,y)    (0,0)     (399,39)
Team 2 initial position (x,y)    (0,160)   (399,199)
Intermediate waypoints (x,y)     (0,40)    (399,159)
Team 1 final position (x,y)      (0,160)   (399,199)
Team 2 final position (x,y)      (0,0)     (399,39)
Aggressiveness                   -100      100
Cohesiveness                     -100      100
Determination                    20        100

The home and final positions together with the intermediate waypoint define the trajectory of each distinct Red vessel. Three of the Red craft (Team 1) were set up to initiate their attack from the north while the remaining two (Team 2) attack from the south. This allows Red to perform a multi-directional attack on the anchorage. In addition, the final positions of the Red craft are constrained to the region opposite their initial area to simulate escapes from the anchorage following successful attacks. Psychological elements are included in the decision variables to address their potential effects on the Red force. The aggressiveness determines the reaction of individual Red craft upon detecting a Blue patrol. Cohesiveness influences the propensity of Red to maneuver as a group or not, whereas determination stands for Red's willingness to follow the defined trajectories. The Red craft's aggressiveness against the Blue force is varied from unaggressive (-100) to very aggressive (100). Likewise, the cohesiveness of the Red craft is varied from independent (-100) to very cohesive (100). Finally, a minimum value of 20 is set for determination to prevent inaction from occurring.

The evolutionary algorithm: The Non-dominated Sorting Genetic Algorithm II (NSGA-II) [19] is employed to conduct the evolutionary search using the parameter values listed in Table II:

TABLE II
EVOLUTIONARY ALGORITHM SETTINGS

Parameter                     Value
Population size               100
Number of search iterations   50
Mutation probability          0.1
Mutation index                20
Crossover rate                0.9
Crossover index               20

The NSGA-II population size and number of search iterations indicate that 5000 distinct MANA simulation

models are generated and evaluated for each experimental run. Each individual simulation model is executed/replicated 30 times to account for statistical fluctuations. The efficiency of the algorithm is measured by the number of Green casualties with respect to the number of Red casualties. In other words, the objectives are:

To minimize the number of Green (commercial) vessels alive.
To minimize the number of Red casualties.
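As a minimal illustration of how NSGA-II compares candidate Red plans under these two minimization objectives, the Pareto-dominance test at the core of its non-dominated sorting can be sketched as follows. The plan names and objective values are hypothetical; this is only the comparison operator, not the full NSGA-II implementation used in CASE.

```ruby
# Pareto dominance for minimization: plan a dominates plan b if a is no worse
# on every objective and strictly better on at least one. Objective vectors
# here are [green_vessels_alive, red_casualties], both to be minimized.

def dominates?(a, b)
  a.zip(b).all? { |x, y| x <= y } && a.zip(b).any? { |x, y| x < y }
end

# Hypothetical Red plans with illustrative objective values.
red_plans = { p1: [2, 1], p2: [4, 1], p3: [1, 3] }

# The non-dominated front: plans no other plan dominates.
front = red_plans.reject do |_, objs|
  red_plans.any? { |_, other| dominates?(other, objs) }
end.keys
# p1 dominates p2 (fewer Green vessels alive, equal Red losses), while p1 and
# p3 trade off against each other, so both remain on the front.
```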


The cloud computing facilities: The cloud computing cluster is composed of 30 laboratory workstations located at the Parallel and Distributed Computing Center, Nanyang Technological University. Note that the hardware of the workstations may vary from one machine to another, thus a heterogeneous environment is considered. Moreover, as these workstations may also occasionally be utilized by students, the performance of the workstations may be affected during experiments. This exemplifies the hazards (e.g., a student may reboot a compute node) that may occur in a distributed environment. We purposely utilize such a computing environment to test the fault-tolerant features of Hadoop.
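The fault tolerance relied upon here is provided by Hadoop itself (failed tasks are automatically rescheduled, as noted in Section III). Purely to illustrate that behavior, a retry wrapper can be sketched in plain Ruby; the retry limit and simulated node failures are invented for the example and none of this corresponds to Hadoop's actual API.

```ruby
# Plain-Ruby illustration (not Hadoop) of the fault-tolerant behavior
# described above: a task that fails on an unreliable node is rescheduled
# until it succeeds or a retry limit is reached.

def run_with_retries(max_attempts)
  attempts = 0
  begin
    attempts += 1
    yield attempts
  rescue RuntimeError
    retry if attempts < max_attempts  # reschedule the failed task
    raise                             # give up after max_attempts failures
  end
end

# A model evaluation that fails on its first two "nodes" (e.g., a student
# reboots the workstation) and succeeds on the third attempt.
result = run_with_retries(4) do |attempt|
  raise "node failure" if attempt < 3
  "evaluated on attempt #{attempt}"
end
```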

C. Results

Figure 2 presents the running times of two experiments where we incrementally increased the number of available compute nodes. In the first experimental run, a relatively fast version of the simulation model is employed (it requires 5 seconds to execute 30 replications on a compute node). In the second case, the model execution time is increased from 5 to 90 seconds to reflect real-life military simulation models, which typically require such an amount of time. It can be observed that as the number of available compute nodes increases, the time required to perform the experiment decreases accordingly. Nevertheless, we note that this relationship (i.e., number of nodes versus time) is not exactly scalable for the first model (most remarkably when the number of compute nodes is higher than 10), whereas in the second experimental run, the running time scales with the number of utilized compute nodes. The results suggest that, depending on the execution time of the simulation model, an optimal (from a computing cost point of view) number of compute nodes exists. A number of issues causing overheads were identified:

1) The iterative nature of the evolutionary algorithm requires the synchronization of the search iterations. As a result, compute nodes equipped with a relatively slower CPU (or having a higher computational load due to external factors such as students using the computer) may cause a delay.

2) Delays may also occur due to network traffic. The latter may lead the model evaluations to occur with differing start times (this issue may thus aggravate the previous one).
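The scalability observation above can be quantified with the usual speedup and parallel-efficiency measures, where efficiency below 1 reflects the synchronization and network overheads just listed. The run times used below are invented for illustration, not the measurements plotted in Figure 2.

```ruby
# Speedup S(n) = T(1)/T(n) and parallel efficiency E(n) = S(n)/n for a run
# on n compute nodes. The times are made-up illustrative values (hours),
# not the measurements reported in Figure 2.

def speedup(t1, tn)
  t1 / tn
end

def efficiency(t1, tn, n)
  speedup(t1, tn) / n
end

t1 = 12.0                            # single-node running time (illustrative)
times = { 4 => 3.4, 10 => 1.5, 25 => 0.8 }
eff = times.map { |n, tn| [n, efficiency(t1, tn, n).round(2)] }.to_h
# Efficiency typically drops as n grows, since per-iteration synchronization
# makes every node wait for the slowest one.
```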

Fig. 2. Running times of MapReduce CASE experiments with an increasing number of compute nodes using the fast (top) and slow (bottom) variants of the base simulation model.

Future work will consider the utilization of an asynchronous model accommodating a heterogeneous computing environment to resolve the above issues. Also, note that some experiments were conducted while laboratory demonstrations were occurring. Nevertheless, no significant deteriorations of the experiments were observed (apart from the occasional slowdown of some model evaluations). All experiments were thus successfully completed using this heterogeneous and relatively hazardous computing environment. This supports the robustness qualities of the cloud computing paradigm. In the next section we discuss the integration of distributed evolutionary computation techniques within our CASE MapReduce model.

VIII. FUTURE WORK

Our simplistic adaptation of CASE did not exploit some features (e.g., the shuffling process, multiple Reduce processes) of the MapReduce model. We discuss future directions, examining distributed evolutionary computation, which may potentially address this deficit:

Island-based model: The island-based model [4] is a popular and efficient way to implement evolutionary algorithms on distributed systems. In this model, each compute node executes an independent evolutionary algorithm over its own sub-population. The nodes work in consort by periodically exchanging solutions in a process called migration. It has been reported that such models often exhibit better search performance in terms of both accuracy and speed. This approach may thus further optimize the evolutionary search given a limited computing budget. We may, for instance, devise Reduce processes that would carry out the computations required during the migrations (e.g., selection of the most promising solutions to be transferred).

Self-adaptive mechanisms: Similarly to the parameter setting of evolutionary algorithms, the performance of distributed evolutionary approaches may vary according to the specific migration scheme employed. Numerous parameters (as mentioned above) are to be pre-specified by the user and ultimately determine the efficiency of the distributed evolutionary search. This parameter tuning process is thus a critical step which typically requires series of preliminary experiments to identify a satisfactory set of parameter values. Consequently, running such preliminary experiments conflicts with our intention to resolve computing budget issues. Recent studies [20], [21] have addressed this issue with self-adaptive methods that automate the parameter tuning process. We suggest that these computations may also be expressed as Reduce processes.

The above directions are currently being investigated using our seminal work on combining CASE and the MapReduce model.

IX. CONCLUSION

We first briefly presented the fields of evolutionary agent-based simulations and cloud computing. To date, the work reported here is among the very first attempts to combine evolutionary agent-based simulations with the MapReduce programming model. To assist this research, we utilized the modular evolutionary framework CASE. The latter was adapted to support the MapReduce model.
To test our novel framework, we presented an evolutionary experiment which involved Automated Red Teaming, a method originating from the defense research community where warfare is conceptually regarded as a complex adaptive system. The experimental results demonstrated the benefits of the MapReduce approach in terms of both scalability and robustness. Finally, we discussed a future research direction in which self-adaptive distributed evolutionary algorithms are considered to further optimize the evolutionary search.

ACKNOWLEDGMENTS

We would like to thank the following organizations that helped make this R&D work possible: the Defence Research and Technology Office, Ministry of Defence, Singapore, for sponsoring the Evolutionary Computing Based Methodologies for Modeling, Simulation and Analysis project, which is part of the Defence Innovative Research Programme FY08; the Defence Technology Agency, New Zealand Defence Force, for sharing the agent-based model MANA;

the Parallel and Distributed Computing Center, School of Computer Engineering, Nanyang Technological University, Singapore; and DSO National Laboratories, Singapore.

REFERENCES
[1] J. Holland, "Studying Complex Adaptive Systems," Journal of Systems Science and Complexity, vol. 19, no. 1, pp. 1-8, 2006.
[2] A. Weiss, "Computing in the Clouds," netWorker, vol. 11, no. 4, pp. 16-25, 2007.
[3] M. Lauren and R. Stephen, "Map-aware Non-uniform Automata (MANA) - A New Zealand Approach to Scenario Modelling," Journal of Battlefield Technology, vol. 5, pp. 27-31, 2002.
[4] E. Cantu-Paz, Efficient and Accurate Parallel Genetic Algorithms. Kluwer Academic Publishers, 2000.
[5] P. Barry and M. Koehler, "Simulation in Context: Using Data Farming for Decision Support," in Proceedings of the 36th Winter Simulation Conference, 2004, pp. 814-819.
[6] T. Cioppa and T. Lucas, "Efficient Nearly Orthogonal and Space-filling Latin Hypercubes," Technometrics, vol. 49, no. 1, pp. 45-55, 2007.
[7] C. Chua, C. Sim, C. Choo, and V. Tay, "Automated Red Teaming: an Objective-based Data Farming Approach for Red Teaming," in Proceedings of the 40th Winter Simulation Conference, 2008, pp. 1456-1462.
[8] S. Olafsson and J. Kim, "Simulation Optimization," in Proceedings of the 34th Winter Simulation Conference, vol. 1, 2002, pp. 79-84.
[9] M. Litzkow, M. Livny, and M. Mutka, "Condor - a Hunter of Idle Workstations," in Proceedings of the 8th International Conference on Distributed Computing Systems, 1988, pp. 104-111.
[10] D. Thain and C. Moretti, "Abstractions for Cloud Computing with Condor," in Cloud Computing and Software Services, S. Ahson and M. Ilyas, Eds. CRC Press, 2010, to appear.
[11] J. Dean and S. Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters," Communications of the ACM, vol. 51, no. 1, pp. 107-113, 2008.
[12] C. Jin, C. Vecchiola, and R. Buyya, "MRPGA: An Extension of MapReduce for Parallelizing Genetic Algorithms," in eScience '08: Proceedings of the 2008 Fourth IEEE International Conference on eScience. Washington, DC, USA: IEEE Computer Society, 2008, pp. 214-221.
[13] X. Llora, A. Verma, R. Campbell, and D. Goldberg, "When Huge Is Routine: Scaling Genetic Algorithms and Estimation of Distribution Algorithms via Data-Intensive Computing," in Parallel and Distributed Computational Intelligence, pp. 11-41, 2010.
[14] K. De Jong and J. Sarma, "On Decentralizing Selection Algorithms," in Proceedings of the Sixth International Conference on Genetic Algorithms, 1995, pp. 17-23.
[15] C. S. Choo, C. L. Chua, and S.-H. V. Tay, "Automated Red Teaming: a Proposed Framework for Military Application," in Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation. New York, NY, USA: ACM, 2007, pp. 1936-1942.
[16] A. Ilachinski, Artificial War: Multiagent-based Simulation of Combat. World Scientific Publishing, 2004.
[17] T. Cioppa, T. Lucas, and S. Sanchez, "Military Applications of Agent-based Simulations," in Proceedings of the 36th Winter Simulation Conference, 2004, pp. 171-180.
[18] M. Low, M. Chandramohan, and C. Choo, "Multi-Objective Bee Colony Optimization Algorithm to Automated Red Teaming," in Proceedings of the 41st Winter Simulation Conference, 2009, pp. 1798-1808.
[19] K. Deb, S. Agrawal, A. Pratap, and T. Meyarivan, "A Fast Elitist Non-dominated Sorting Genetic Algorithm for Multi-objective Optimization: NSGA-II," Lecture Notes in Computer Science, pp. 849-858, 2000.
[20] K. Srinivasa, K. Venugopal, and L. Patnaik, "A Self-adaptive Migration Model Genetic Algorithm for Data Mining Applications," Information Sciences, vol. 177, no. 20, pp. 4295-4313, 2007.
[21] C. Leon, G. Miranda, and C. Segura, "A Memetic Algorithm and a Parallel Hyperheuristic Island-based Model for a 2D Packing Problem," in GECCO '09: Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation. New York, NY, USA: ACM, 2009, pp. 1371-1378.
