SPRINGER BRIEFS IN OPERATIONS RESEARCH

Silja Meyer-Nieberg • Nadiia Leopold • Tobias Uhlig

Natural Computing for Simulation-Based Optimization and Beyond

SpringerBriefs in Operations Research

SpringerBriefs present concise summaries of cutting-edge research and practical applications across a wide spectrum of fields. Featuring compact volumes of 50 to 125 pages, the series covers a range of content from professional to academic. Typical topics might include:
• A timely report of state-of-the-art analytical techniques
• A bridge between new research results, as published in journal articles, and a contextual literature review
• A snapshot of a hot or emerging topic
• An in-depth case study or clinical example
• A presentation of core concepts that students must understand in order to make independent contributions

SpringerBriefs in Operations Research showcase emerging theory, empirical research, and practical application in the various areas of operations research, management science, and related fields, from a global author community. Briefs are characterized by fast, global electronic dissemination, standard publishing contracts, standardized manuscript preparation and formatting guidelines, and expedited production schedules.

More information about this series at http://www.springer.com/series/11467

Silja Meyer-Nieberg, ITIS GmbH, Neubiberg, Bayern, Germany
Nadiia Leopold, Bundeswehr University Munich, Neubiberg, Bayern, Germany
Tobias Uhlig, Bundeswehr University Munich, Neubiberg, Bayern, Germany

ISSN 2195-0482    ISSN 2195-0504 (electronic)
SpringerBriefs in Operations Research
ISBN 978-3-030-26214-3    ISBN 978-3-030-26215-0 (eBook)
https://doi.org/10.1007/978-3-030-26215-0

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

This brief bridges the gap between the areas of simulation studies on the one hand and optimization with natural computing on the other. Most overviews concerning the connecting area of simulation-based or simulation optimization do not focus
on natural computing. While they often mention the area briefly as one of the sources of potential techniques, they concentrate on methods stemming from classical optimization. Since natural computing methods have been applied with great success in several application areas, a review concerning potential benefits and pitfalls for simulation studies is merited. The brief presents such an overview and combines it with an introduction to natural computing and selected major approaches as well as a concise treatment of general simulation-based optimization. As such, it is the first review that covers both the methodological background and recent application cases. Therefore, it will be of interest to practitioners from either field as well as to people starting their research.
The brief is intended to serve two purposes. First, it can be used to gain more information concerning natural computing, its major dialects, and their usage for simulation studies. Here, we also cover the areas of multi-objective optimization and neuroevolution. While the latter is only seldom mentioned in connection with simulation studies, it is a powerful potential technique, as is pointed out below. Second, the reader is provided with an overview of several areas of simulation-based optimization, which range from logistics problems to engineering tasks. Additionally, the brief focuses on the usage of surrogate and meta-models. It takes two research directions into close consideration which are rarely considered in simulation-based optimization: (evolutionary) data farming and digital games.
Data farming is a relatively new and lively subarea of exploratory simulation studies. As it often aims to find weaknesses in the simulated systems, it benefits from direct search and as such from natural computing. The brief presents recent application examples. Digital games, which are also termed soft simulations, are interesting from several vantage points. First of all, they represent a vibrant and rapidly progressing research field in the area of natural computing. So far, however, the communities are disjoint, resulting in a slow migration of concepts and ideas from one area to the other. Notwithstanding, both fields may profit from each other. Therefore, the brief contains a concise review concerning natural computing and digital games. Second, one of the major research directions in digital games focuses on the development of convincing non-player characters or, in other words, on deriving good controllers. Often employed methods comprise, for example, genetic programming and neuroevolution. Here, we arrive at another point where the brief diverges from traditional overviews: behavioral and controller learning. Despite the abundance of approaches for games, it has only seldom been considered in the related area of simulation. It is our belief that it offers great potential benefits, especially if simulation-based optimization is used to identify weaknesses or to conduct stress tests.
Overall, the brief will appeal to two major research communities in operations research—optimization and simulation. It is of interest to both experienced practitioners and newcomers to the field.

Neubiberg, Germany
July 2019
Silja Meyer-Nieberg
Nadiia Leopold
Tobias Uhlig

Contents

1 Introduction to Simulation-Based Optimization
  1.1 Natural Computing and Simulation
  1.2 Simulation-Based Optimization
    1.2.1 From Task to Optimization
    1.2.2 A Brief Classification of Simulation-Based Optimization
  References
2 Natural Computing and Optimization
  2.1 Evolutionary Algorithms
    2.1.1 Genetic Algorithms
    2.1.2 Evolution Strategies
    2.1.3 Differential Evolution
    2.1.4 Genetic Programming
  2.2 Swarm-Based Methods
    2.2.1 Ant Colony Optimization
    2.2.2 Particle Swarm Optimization
  2.3 Neuroevolution
  2.4 Natural Computing and Multi-Objective Optimization
  References

3 Simulation-Based Optimization
  3.1 On Using Natural Computing
  3.2 Simulation-Based Optimization: From Industrial Optimization to Urban Transportation
  3.3 Simplifying Matters: Surrogate Assisted Evolution
  3.4 Evolutionary Data Farming
  3.5 Soft Simulations: Digital Games and Natural Computing
  References

4 Conclusions

Chapter 1
Introduction to Simulation-Based Optimization

Abstract Natural computing techniques first appeared in the 1960s and gained more and more importance with the increase of computing resources. Today they are among the established techniques for black-box optimization, which characterizes tasks where an analytical model cannot be obtained and the optimization technique can only utilize the function evaluations themselves. A classical application area is simulation-based optimization. Here, natural computing techniques have been applied with great success. But before we can focus on the application areas, we first have to take a closer look at what we mean when we refer to optimization, simulation, and natural computing. The present chapter is devoted to a concise introduction to the field.

1.1 Natural Computing and Simulation

Natural computing (NC) comprises approaches that adopt principles found in nature, mimicking evolutionary and other natural processes, e.g., implementing simple brain models or simulating swarm behavior [1]. Methods belonging to natural computing are therefore quite diverse, ranging across evolutionary algorithms, swarm-based techniques, and neural networks. Further examples include artificial immune systems [2], DNA computing [3], quantum systems (e.g., see the respective sections in [1]), or even slime moulds [4]. Simulation-based analyses and simulation-based optimization (SBO) are among the earliest application areas. Today, success stories of natural computing include examples from the engineering or industrial domain [5], computational red teaming, and evolutionary data farming [6].
This book presents an overview of current natural computing techniques as well as their applications in the broad area of simulation. We will refer to this area as simulation-based optimization, but it should be noted that the term simulation optimization is also common. In general, two main applications can be distinguished. The first uses natural computing to optimize control parameters of a simulated system, see Fig. 1.1. Usually, this does not change the intrinsic structures or behavioral routines of the system itself. Commonly used NC methods for this application scenario are genetic algorithms, evolution strategies, or particle swarm optimization.

Fig. 1.1 Optimizing simulation parameters
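Read as pseudo-code, the parameter-optimization loop of Fig. 1.1 fits in a few lines. The following Python sketch is purely illustrative and not taken from the brief: run_simulation is a hypothetical stand-in for the expensive simulation model, and the simple mutation-plus-truncation-selection loop stands for any of the NC methods named above.

import random

def run_simulation(params):
    # Stand-in for an expensive, stochastic simulation run (hypothetical).
    # Here: a noisy quadratic whose optimum lies at (2.0, -1.0, 0.5).
    target = (2.0, -1.0, 0.5)
    error = sum((p - t) ** 2 for p, t in zip(params, target))
    return error + random.gauss(0.0, 0.01)

def optimize(dim=3, pop_size=20, generations=50, sigma=0.3):
    # Minimal loop: the NC method proposes parameter vectors, the simulation
    # evaluates them, and the simulation output serves directly as the fitness
    # (to be minimized).
    population = [[random.uniform(-5.0, 5.0) for _ in range(dim)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=run_simulation)        # fitness = output
        parents = ranked[: pop_size // 2]                      # truncation selection
        offspring = [[x + random.gauss(0.0, sigma) for x in random.choice(parents)]
                     for _ in range(pop_size)]
        population = parents + offspring                       # plus-selection scheme
    return min(population, key=run_simulation)

print("best parameters found:", optimize())

In a real study, each call to run_simulation would trigger one or several replicated simulation runs, which is exactly why the number of evaluations becomes the limiting factor discussed below.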
The second approach transforms the system itself: for example, the task may be to find a suitable controller for an agent in the simulation. This requires identifying appropriate behavior patterns, see Fig. 1.2. This approach has even greater potential, with vast application areas ranging across computer games, collision warning systems, and evolutionary robotics. Concerning the area of natural computing, evolving neural networks and genetic programming are commonly used.

Fig. 1.2 Behavioral learning

The present survey is structured as follows: Sect. 1.2 provides a brief overview of simulation-based optimization in general. The following sections cover the most common natural computation approaches for optimization and their fundamental working principles. Here, we present the large field of evolutionary algorithms, swarm-based methods, and evolutionary neural networks. Special attention is paid to the growing field of multi-objective optimization in Sect. 2.4. Afterwards, exemplary applications of simulation-based optimization with natural computing are described in Sect. 3. The section in turn consists of five parts: first, we discuss the general applicability of NC approaches. Afterwards, we display the spectrum of application cases in Sect. 3.2. The third part zooms in on the use of meta-models or surrogate-assisted approaches. These approaches have been introduced to reduce the impact of the expensive evaluations. As direct search methods, the NC methods require the computation of a performance measure, the so-called fitness, to assess the quality of a potential solution. In the case of simulation-based optimization, evaluating an individual is based on conducting simulation runs. Since nearly all approaches operate with several solutions at a time, using natural computing can be time-consuming, especially when used together with stochastic multi-agent systems or finite element models.
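One common remedy, treated in more detail in Sect. 3.3, is to let a cheap data-driven surrogate pre-screen candidate solutions so that only the most promising ones are passed on to the simulation. The sketch below is a deliberately simple illustration under assumed names (simulate, surrogate, and evaluate_generation are hypothetical helpers; the surrogate is a plain k-nearest-neighbour average), not a reproduction of any of the surveyed approaches.

import random

def simulate(x):
    # Expensive ground-truth evaluation (hypothetical stand-in).
    return sum(v * v for v in x) + random.gauss(0.0, 0.01)

def surrogate(x, archive, k=5):
    # Cheap k-nearest-neighbour estimate built from previous simulation runs.
    def sqdist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(archive, key=lambda rec: sqdist(rec[0], x))[:k]
    return sum(f for _, f in nearest) / len(nearest)

def evaluate_generation(candidates, archive, budget=5):
    # Pre-screen all candidates with the surrogate, then spend the limited
    # simulation budget only on the most promising ones.
    ranked = sorted(candidates, key=lambda x: surrogate(x, archive))
    fitness = {}
    for x in ranked[:budget]:              # true simulation for the best few
        f = simulate(x)
        archive.append((x, f))             # every real run improves the surrogate
        fitness[tuple(x)] = f
    for x in ranked[budget:]:              # surrogate value for all others
        fitness[tuple(x)] = surrogate(x, archive)
    return fitness

random.seed(1)
seed_points = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(10)]
archive = [(x, simulate(x)) for x in seed_points]    # initial design, truly evaluated
offspring = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(30)]
scores = evaluate_generation(offspring, archive)
print("evaluated", len(scores), "candidates with", len(archive), "simulation runs in total")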
3.4 Evolutionary Data Farming

… and has, for this application purpose, strong similarities to evolution strategies. Based on their work, Choo et al. [77] developed the Automated Red Teaming framework (ART), which was used, for instance, in maritime and urban scenarios. In [78], the framework was extended, resulting in the modular evolutionary framework CASE. The framework, written in Ruby, consists of three components: a model generator (XML), a simulation engine, and the evolutionary algorithm itself. The case studies presented used the agent-based simulation system MANA and applied the multi-objective NSGA-II. The basis scenario was taken from [77]. It represents an anchorage scenario where the blue team is tasked with the protection of a commercial fleet. The NSGA-II was used to evolve behavioral parameters of the attacking red team (e.g., aggressiveness, determination) as well as waypoints for the path trajectory of the attacks. The goal for the optimization was bi-objective: maximize the casualties of the blue team and minimize the casualties of the red attackers. In [79], the CASE framework was applied in an urban scenario.
Liang and Wang [80] used an evolutionary algorithm to learn successful anti-torpedo tactics for submarines. A tactic was represented as a mixed-integer vector with real entries coding, for instance, the launch time of a decoy. They used Gaussian mutations with fixed mutation strengths and applied discrete or dominant recombination of two parents.
Low et al. developed a multi-objective bee colony optimization (MOBCO) and applied it to evolutionary data farming [81]. The algorithm is based on the behavior of honey bees and the waggle dance used by the bees in communication. The MOBCO is based on the concept of non-dominated solutions, determining the rank within the non-dominated set with the help of the crowding distances. Only the best-ranked solutions are allowed to “dance”. The MOBCO was integrated into the ART framework. After comparing the performance with that of the NSGA-II, the bee colony optimization was used to tune the parameters of the red team attackers in the maritime scenario of [77].
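Non-dominated ranking combined with the crowding distance, as used by the MOBCO and the NSGA-II above, can be illustrated with a small generic routine. The following sketch computes the crowding distance within a single front for minimization problems; it is an illustration of the textbook mechanism, not code from the ART or CASE frameworks.

def crowding_distance(front):
    # Crowding distance of each solution in one non-dominated front.
    # `front` is a list of objective vectors (minimization); boundary points
    # receive an infinite distance so that they are always kept.
    n = len(front)
    if n == 0:
        return []
    n_obj = len(front[0])
    dist = [0.0] * n
    for obj in range(n_obj):
        order = sorted(range(n), key=lambda i: front[i][obj])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][obj] - front[order[0]][obj]
        if span == 0.0:
            continue
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1]][obj]
                               - front[order[k - 1]][obj]) / span
    return dist

# Example: four points on a bi-objective front (both objectives minimized).
print(crowding_distance([(1.0, 4.0), (2.0, 3.0), (3.0, 2.0), (4.0, 1.0)]))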
Zeng et al. [82] addressed high-dimensional evolutionary data farming. Optimization had so far considered only a subset of the decision variables, due to the fact that the dimensionality of the full search space can be quite large, with over 100 variables. In [83], the authors again considered multi-objective evolutionary algorithms for computational red teaming. They compared their algorithm, which applied a diversity enhancement scheme (DES), with several approaches, among which were the SPEA2 and the NSGA-II. The DES estimates the uniformity of the solution distribution and shall fulfill two main goals: exploitation of non-dominated solutions and enhancing the population diversity in function and solution space [83]. The approach was tested using two scenarios: an urban scenario and a maritime anchorage scenario, both with two conflicting objectives. Compared to the competitors (a parameter exploration was not performed), the DES method performed well, reaching similar performance with respect to solution quality but with increased diversity.
Overviews of further approaches and research in the military application of CRT can be found in [84]. However, CRT is not limited to the military sector. An example in the area of air traffic control is presented in [85]. The increasing traffic represents challenges for the human operators. Therefore, there is considerable interest in supporting the air traffic controller with automated methods that alert him to critical situations. A test was conducted showing good results. However, it was found that in some cases the automated methods tended towards missed alarms or wrongly raised alarms. This led to the research question tackled in [85], which aimed at a closer analysis of the underlying causes. The authors used genetic algorithms to generate critical scenarios. For each scenario, two conflict detection algorithms, the fixed-threshold conflict method and the covariance method, were applied. As the aim was to investigate causes for failures (false alarms and missed conflicts) and to examine the robustness of the methods for medium-term conflict detection, the evolution process was geared towards rewarding the respective candidate solutions. The scenarios are described by a group of parameters, and each individual of the GA population stands for a complete scenario, whereas a gene denotes a possible conflict pair. The scenarios were executed in an air traffic simulator and evolved with a genetic algorithm optimizing a fitness function derived from both goals. Critical situations that could be identified are, for instance, planes in steep climb and a wide angle between possibly conflicting planes. Both situations lead towards an increase of false alarms. The latter also goes along with more undetected conflicts.

3.5 Soft Simulations: Digital Games and Natural Computing

The use of evolutionary algorithms and related methods in games has attracted more and more interest in recent years. An indication is the introduction of dedicated journals and conferences, e.g., the IEEE Transactions on Computational Intelligence and AI in Games and the IEEE conference Computational Intelligence in Games. Natural computing methods allow the adaptation of the bots during the game and therefore to specific player behavior. While there is increasing research interest, applications in commercial games are scarce. Among the exceptions are, e.g., Black and White, see, e.g., [86], or Creatures, see, e.g., [87]. The field of applicable methods is vast and includes nearly every variant of natural computing, single-objective as well as multi-objective. Overviews can be found in [87] for the popular field of neuroevolution and in [88] for the general class of computational and artificial intelligence. In the following, selected publications illustrate the variety of the methods applied and the tasks considered before special attention is given to the application of natural computing in car racing games and simulation. Table 3.3 provides an overview of the publications considered in the current section and the methods used in them.

Table 3.3 Applications of natural computing in digital games (SO—single-objective optimization, MO—multi-objective optimization)

Author | Year | Reference | Method | Optimization
Doherty and O'Riordan | 2006 | [89] | Genetic programming | MO
Agapitos et al. | 2007 | [105] | Genetic programming, neuroevolution | SO
Agapitos et al. | 2007 | [106] | Genetic programming, NSGA-II | MO
Cardamone et al. | 2009 | [107–109] | Neuroevolution | SO
Ebner and Tiede | 2009 | [104] | Genetic programming | SO
Cardamone et al. | 2010 | [110] | Neuroevolution | SO
Quadflieg et al. | 2010 | [111] | CMA-ES | SO
Keaveney and O'Riordan | 2011 | [96] | Genetic programming | MO
Quadflieg et al. | 2011 | [112] | CMA-ES | SO
Othman et al. | 2012 | [92] | SPEA2 | MO
Pena et al. | 2012 | [91] | Differential evolution, other EA | SO
Perez et al. | 2013 | [98] | NSGA-II | MO
Perez et al. | 2015 | [99] | NSGA-II | MO
Perez et al. | 2015 | [90] | Hybrid (EA + game tree search) | SO
Schmitt et al. | 2015 | [93] | Genetic algorithms | SO
Andrade et al. | 2016 | [114] | Genetic algorithms | SO
Martinez-Arellano et al. | 2016 | [95] | Genetic programming | SO
Gaina et al. | 2017 | [100] | Genetic algorithms | SO
Justesen and Risi | 2017 | [94] | Online evolutionary planning | MO

Doherty and O'Riordan [89] used genetic programming for evolving team tactics of agents in action games. They used a 2D game engine and a simple environment. The team consisted of five agents with distinct behavioral trees; each GP individual codes the complete team.
Perez et al. [90] presented an application of evolutionary techniques in the field of general video game playing, a sub-field of game artificial intelligence seeking algorithms that are able to play multiple real-time games—including unknown ones. They proposed a combination of an evolutionary algorithm with a game tree search to find a better action plan (sequence of actions) of the playing agent while examining the candidate plans by means of a forward model. The performance of the evolutionary algorithm was compared to other tree search approaches.
Pena et al. [91] evolved combat game controllers with the help of hybrid approaches which combined evolutionary algorithms, mainly estimation of distribution algorithms and differential evolution, with algorithms stemming from reinforcement learning. The evolutionary algorithm adapts the control parameters of these techniques.
An example for adapting the parameters of a controller or a bot is provided by [92], which used multi-objective optimization. The authors improved a tactical artificial intelligence (AI) for a real-time strategy game. The game considered was StarCraft, in which two teams construct buildings and compete against each other. The real-time strategy game is interesting since it includes fog-of-war-like effects where the players cannot see the complete map.
Thus, decision making under uncertainty is required. Furthermore, the simulation contains random effects. The authors used the CASE framework for the evolutionary algorithm and developed their own simulation framework for StarCraft based on the Brood War API [92]. Two case studies were conducted. In the first, the starting point of the evolution was based on a well-performing bot with a total of 28 adjustable parameters, of which twelve were subjected to the evolution. The evolving bots competed against the original version of the AI. The objectives were set to maximize the casualties of the blue team and to decrease the losses of the red units. The algorithm used was the SPEA2. The success rate of the resulting non-dominated solutions was measured, showing that for the test scenario considered, the evolved versions resulted in an increased success rate (mean 58.5% in comparison to 50%) [92]. In the second, the goal was to find a viable attack path which maximizes the losses of the blue team while the path length remained as short as possible. Again, the SPEA2 served as the multi-objective evolutionary algorithm. Using the blue casualties for the final assessment of the solution quality, the authors found that the light units of the blue team had been completely eliminated in the majority of cases.
Schmitt et al. [93] applied evolutionary algorithms to optimize the behavior of opposing groups in the real-time strategy game StarCraft II. They used a single-objective genetic algorithm to obtain the optimal set of parameter values that define the movement strategy of each opposing unit. Another example of an evolutionary algorithm application for StarCraft is presented by Justesen and Risi in [94]. They argued that existing bots only switch between predefined strategies but are not able to adapt to in-game situations. Therefore, they introduced a variation of online evolutionary planning for dynamic change of a build order to adapt to the opponent's strategy and showed the bot's ability to outperform others as well as to compete against some scripted opening strategies.
Martinez-Arellano et al. in [95] proposed an approach to generate a playing character for a fighting game using genetic programming. The advantage of this method is that no prior knowledge on coding of strategies for such characters is required. The authors present and analyze testing results of such a player against standard AI characters and against humans. The characters developed using evolutionary processes appeared to be significantly better in tests against hand-coded artificial intelligence characters. Although the developed characters were not able to outperform humans, they ended up with a much better rating than hand-coded characters in the games against humans.
Keaveney and O'Riordan [96] also considered real-time strategy games, although their approach focused on coordination, and instead of adapting control parameters they applied genetic programming to modify the behavior routines directly. They used an abstract real-time strategy game with imperfect information as a test bed. For more information concerning real-time strategy games, the reader is referred to [97].
Perez et al. [98, 99] introduced and analyzed a multi-objective algorithm that is based on Monte Carlo tree search (MCTS) for reinforcement learning and compared the performance with the results of an NSGA-II. In reinforcement learning, the goal is to identify a good decision policy that applies potential actions of an agent to particular situations, optimizing the reward of the agent. The state of the typically stochastic system the agent resides in is influenced by its actions. According to [98], a Monte Carlo tree search represents a combination of Monte Carlo simulations with a search tree. Based on a tree selection policy, the method moves from the current state in the tree root towards a leaf, which is then expanded. Here, a new node is spawned and evaluated with the help of Monte Carlo runs. The results are used to update the information and with it the policy decision parameters.
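The selection–expansion–simulation–backpropagation cycle sketched in the previous paragraph can be written down as a compact UCT-style routine. The toy example below (a one-player game with a trivial reward) is a generic, single-objective illustration and not the multi-objective variant of [98, 99].

import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state            # here: a tuple of moves taken so far
        self.parent = parent
        self.children = {}            # move -> Node
        self.visits = 0
        self.value = 0.0              # sum of rollout rewards

MOVES = (0, 1)
HORIZON = 5

def rollout(state):
    # Monte Carlo run: play random moves to the horizon, return the reward.
    # Toy reward: fraction of '1' moves (stands in for a simulation outcome).
    while len(state) < HORIZON:
        state = state + (random.choice(MOVES),)
    return sum(state) / HORIZON

def select_child(node, c=1.4):
    # UCB1 tree policy: exploit high mean reward, explore rarely tried moves.
    return max(node.children.values(),
               key=lambda ch: ch.value / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(iterations=500):
    root = Node(())
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while len(node.children) == len(MOVES) and len(node.state) < HORIZON:
            node = select_child(node)
        # 2. Expansion: add one untried move (unless the horizon is reached).
        if len(node.state) < HORIZON:
            move = random.choice([m for m in MOVES if m not in node.children])
            node.children[move] = Node(node.state + (move,), parent=node)
            node = node.children[move]
        # 3. Simulation: evaluate the new node with a Monte Carlo run.
        reward = rollout(node.state)
        # 4. Backpropagation: update statistics along the path to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print("recommended first move:", mcts())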
Gaina et al. [100] also referred to MCTS-based controllers as well as to those based on a genetic algorithm, the Rolling Horizon Evolutionary Algorithm, and other techniques in their paper. They give a comprehensive overview of the controllers that participated in the first Two-Player General Video Game AI competition and point out possible directions for improvement in this area.
A very interesting test case is the Simulated Car Racing Championship, which has been conducted for several years, usually hosted by some of the main conferences in the area of natural computing, e.g., GECCO and CEC. For participation in this competition, a controller for a racing car bot must be developed using methods from artificial intelligence or natural computing. The resulting bots can be entered into the competition. The best performing bots are determined with races against time and then tested against each other in several races. The competition requires the controller to deal with various different tracks and necessitates several capabilities: steering, accelerating, braking, gear shifting, recovering from leaving the track, and overtaking. Many methods have been applied in recent years [101], which include among others evolutionary neural networks [102, 103] and genetic programming [104, 105].
Agapitos et al. [105] focused on the question of a good controller representation, noting that many approaches use neural networks, either in their feed-forward or recurrent form. Therefore, the authors raised the question why genetic programming was not used as often as neurocontrollers. To this end, [105] provides a comparison of genetic programming and neuroevolution, finding advantages for neuroevolution. In [106], the authors considered multi-objective variants based on the principles of the NSGA-II. The results were found to be encouraging; however, they used a different racing car simulator.
Ebner and Tiede [104] also used genetic programming to evolve a controller, focusing on steering and acceleration/deceleration of a car racing bot. They conducted several experiment series aimed at gaining insights into whether genetic programming may improve upon a human-designed bot, which was indeed possible. They stress their finding that, as is typical for learning tasks, safeguards have to be implemented that prevent overfitting; in their case, the bot should be evaluated on several track types.
In a series of papers [107–110], the usage of neuroevolution was investigated. The focus lay on online approaches for stochastic simulation problems, which requires the adaptation of the evaluation measures since the objective is to improve the performance during the learning process. Aside from racing car simulations, neuroevolution has been applied to various tasks; see [87] for an overview.
Quadflieg et al. [111, 112] argued that incorporating an estimate of the curvature may improve the driver's performance considerably. They fitted a logistic model of the curvature to the optimal target speed and use this value to control acceleration and braking. For this, they use simple rules. The model contains several free parameters which are optimized with a CMA-ES for two track models which included several different types of curves. This should increase the ability of the bot to generalize.
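A logistic mapping from curvature to target speed of the kind described above might look as follows. All constants are illustrative placeholders; in [111, 112] the free parameters were obtained by a CMA-ES rather than set by hand.

import math

def target_speed(curvature, v_min=60.0, v_max=280.0, steepness=80.0, midpoint=0.08):
    # Logistic mapping from track curvature (1/m) to a target speed (km/h):
    # straights approach v_max, tight corners approach v_min. The constants are
    # illustrative placeholders for parameters that could be tuned, e.g., by a CMA-ES.
    return v_min + (v_max - v_min) / (1.0 + math.exp(steepness * (curvature - midpoint)))

def pedal(current_speed, curvature):
    # Simple rule: accelerate below the target speed, brake above it.
    return "accelerate" if current_speed < target_speed(curvature) else "brake"

print(pedal(190.0, 0.002))   # gentle bend  -> accelerate
print(pedal(190.0, 0.25))    # tight corner -> brake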
The approach [112] is another example where the natural computing method is used for parameter adaptation—so far offline. However, in some cases, the offline learning was found to be insufficient if the track differed too strongly from the learned example. Therefore, the authors considered an online learning model with several stages. The resulting bots were compared to the best performing drivers from the 2009 and 2010 competitions. The comparison was performed for seven demanding tracks following the competition rules. The results were mixed: while the controller outperformed other controllers and a human player on the tracks that had been used for offline learning, it is not the best driver on the other tracks. While it performed well on most, further research is seen as necessary.
Natural computing is also used in the area of serious games, which focus on additional goals aside from the entertainment factor, see [113]. Serious games may focus on training some cognitive capabilities, e.g., problem solving, or may even be tasked with rehabilitation training. For example, Andrade et al. [114] focused on dynamic difficulty adaptation in the area of rehabilitation robotics with the aim of training hands, wrists, and arms. They used a generic evolutionary algorithm together with a player model in order to demonstrate the applicability of the approach. An overview concerning the application of artificial intelligence in serious games can be found in [115].

References

1 Eiben, A.E., Michalewicz, Z., Schoenauer, M., Smith, J.E.: Parameter control in evolutionary algorithms. In: Parameter Setting in Evolutionary Algorithms, pp. 19–46. Springer (2007)
2 Meyer-Nieberg, S., Beyer, H.G.: Self-adaptation in evolutionary algorithms. In: Lobo, F., Lima, C., Michalewicz, Z. (eds.) Parameter Setting in Evolutionary Algorithms, pp. 47–76. Springer, Heidelberg (2007)
3 Smit, S., Eiben, A.: Comparing parameter tuning methods for evolutionary algorithms. In: IEEE Congress on Evolutionary Computation, 2009. CEC '09, pp. 399–406 (2009). https://doi.org/10.1109/CEC.2009.4982974
4 Santner, T.J., Williams, B.J., Notz, W.I.: The Design and Analysis of Computer Experiments. Springer Series in Statistics. Springer (2003)
5 Kleijnen, J.: Design and Analysis of Simulation Experiments. Springer (2008)
6 Bartz-Beielstein, T., Lasarczyk, C.W., Preuß, M.: Sequential parameter optimization. In: The 2005 IEEE Congress on Evolutionary Computation, 2005, vol. 1, pp. 773–780. IEEE (2005)
7 Bartz-Beielstein, T., Lasarczyk, C., Preuss, M.: The sequential parameter optimization toolbox. In: Bartz-Beielstein, T., Chiarandini, M., Paquete, L., Preuss, M. (eds.) Experimental Methods for the Analysis of Optimization Algorithms, pp. 337–362. Springer, Berlin, Heidelberg (2010). https://doi.org/10.1007/978-3-642-02538-9_14
8 López-Ibánez, M., Stützle, T.: Automatically improving the anytime behaviour of optimisation algorithms. Eur. J. Oper. Res. 235(3), 569–582 (2014)
9 Clerc, M.: Discrete Particle Swarm Optimization, illustrated by the Traveling Salesman Problem, pp. 219–239. Springer, Berlin, Heidelberg (2004). https://doi.org/10.1007/978-3-540-39930-8_8
10 Strasser, S., Goodman, R., Sheppard, J., Butcher, S.: A new discrete particle swarm optimization algorithm. In: Proceedings of the Genetic and Evolutionary Computation Conference 2016, GECCO '16, pp. 53–60. ACM, New York, NY, USA (2016). https://doi.org/10.1145/2908812.2908935
11 Socha, K., Dorigo, M.: Ant colony optimization for continuous domains. Eur. J. Oper. Res. 185, 1155–1173 (2008)
12 Oduguwa, V., Tiwari, A., Roy, R.: Evolutionary computing in manufacturing industry: an overview of recent applications. Appl. Soft Comput. 5, 281–299 (2005)
13 Montagna, S., Viroli, M., Roli, A.: A framework supporting multi-compartment stochastic simulation and parameter optimisation for investigating biological system development. Simulation 91(7), 666–685 (2015). https://doi.org/10.1177/0037549715585569. http://sim.sagepub.com/content/91/7/666.abstract
14 Syberfeldt, S., Grimm, H., Ng, A., Andersson, M., Karlsson, I.: Simulation-based optimization of a complex mail transportation network. In: Simulation Conference, 2008. WSC 2008. Winter, pp. 2625–2631 (2008). https://doi.org/10.1109/WSC.2008.4736377
15 Kuo, R., Yang, C.: Simulation optimization using particle swarm optimization algorithm with application to assembly line design. Appl. Soft Comput. 11(1), 605–613 (2011). https://doi.org/10.1016/j.asoc.2009.12.020. http://www.sciencedirect.com/science/article/pii/S1568494609002749
16 Vonolfen, S., Affenzeller, M., Beham, A., Wagner, S., Lengauer, E.: Simulation-based evolution of municipal glass-waste collection strategies utilizing electric trucks. In: 2011 3rd IEEE International Symposium on Logistics and Industrial Informatics (LINDI), pp. 177–182 (2011). https://doi.org/10.1109/LINDI.2011.6031142
17 Lässig, J., Hochmuth, C.A., Thiem, S.: Simulation-based evolutionary optimization of complex multi-location inventory models. In: Chiong, R., Weise, T., Michalewicz, Z. (eds.) Variants of Evolutionary Algorithms for Real-World Applications, pp. 95–141. Springer, Berlin, Heidelberg (2012). https://doi.org/10.1007/978-3-642-23424-8_4
18 Korytkowski, P., Wisniewski, T., Rymaszewski, S.: An evolutionary simulation-based optimization approach for dispatching scheduling. Simul. Model. Pract. Theory 35, 69–85 (2013). https://doi.org/10.1016/j.simpat.2013.03.006. http://www.sciencedirect.com/science/article/pii/S1569190X13000427
19 Ammeri, A., Dammak, M., Chabchoub, H., Hachicha, W., Masmoudi, F.: A simulation optimization approach-based genetic algorithm for lot sizing problem in a MTO sector. In: 2013 International Conference on Advanced Logistics and Transport (ICALT), pp. 476–481 (2013). https://doi.org/10.1109/ICAdLT.2013.6568505
20 Reehuis, E., Bäck, T.: Mixed-integer evolution strategy using multiobjective selection applied to warehouse design optimization. In: Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation, GECCO '10, pp. 1187–1194. ACM, New York, NY, USA (2010). https://doi.org/10.1145/1830483.1830700
21 Vonolfen, S., Affenzeller, M., Beham, A., Lengauer, E., Wagner, S.: Simulation-based evolution of resupply and routing policies in rich vendor-managed inventory scenarios. Cent. Eur. J. Oper. Res. 21(2), 379–400 (2013). https://doi.org/10.1007/s10100-011-0232-5
22 Kaufmann, P., Shen, C.: Generator start-up sequences optimization for network restoration using genetic algorithm and simulated annealing. In: Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, GECCO '15, pp. 409–416. ACM, New York, NY, USA (2015). https://doi.org/10.1145/2739480.2754647
23 Xanthopoulos, I., Goulas, G., Gogos, C., Alefragis, P., Housos, E.: Highway rest areas simultaneous energy optimization and user satisfaction. In: Proceedings of the 20th Pan-Hellenic Conference on Informatics, PCI '16, pp. 6:1–6:4. ACM, New York, NY, USA (2016). https://doi.org/10.1145/3003733.3003793
24 Nguyen, S., Mei, Y., Zhang, M.: Genetic programming for production scheduling: a survey with a unified framework. Complex Intell. Syst. 3(1), 41–66 (2017). https://doi.org/10.1007/s40747-017-0036-x
25 Kroll, J., Friboim, S., Hemmati, H.: An empirical study of search-based task scheduling in global software development. In: Proceedings of the 39th International Conference on Software Engineering: Software Engineering in Practice Track, ICSE-SEIP '17, pp. 183–192. IEEE Press, Piscataway, NJ, USA (2017). https://doi.org/10.1109/ICSE-SEIP.2017.30
26 Clerc, M.: Standard particle swarm optimization. http://hal.archives-ouvertes.fr/hal-00764996 (2012). Accessed 19 Nov 2013
27 Gökçe, M.A., Öner, E., Işık, G.: Traffic signal optimization with particle swarm optimization for signalized roundabouts. Simulation 91(5), 456–466 (2015). https://doi.org/10.1177/0037549715581473
28 Ripon, K.S.N., Dissen, H., Solaas, J.: Real Time Traffic Intersection Management Using Multiobjective Evolutionary Algorithm, pp. 110–121. Springer International Publishing, Cham (2016). https://doi.org/10.1007/978-3-319-49001-4_9
29 Kitak, P., Pihler, J., Ticar, I., Stermecki, A., Biro, O., Preis, K.: Potential control inside switch device using FEM and stochastic optimization algorithm. IEEE Trans. Magn. 43(4), 1757–1760 (2007). https://doi.org/10.1109/TMAG.2007.892511
30 Marčič, T., Štumberger, G., Štumberger, B., Hadžiselimović, M., Virtič, P.: Determining parameters of a line-start interior permanent magnet synchronous motor model by the differential evolution. IEEE Trans. Magn. 44(11),
4385–4388 (2008) https://doi.org/10.1109/ TMAG.2008.2001530 31 Glotic, A., Pihler, J., Ribic, J., Stumberger, G.: Determining a gas-discharge arrester model’s parameters by measurements and optimization IEEE Trans Power Deliv 25(2), 747–754 (2010) https://doi.org/10.1109/TPWRD.2009.2038386 32 Marˇciˇc, T., Štumberger, B., Štumberger, G.: Differential-evolution-based parameter identification of a line-start IPM synchronous motor IEEE Trans Ind Electron 61(11), 5921–5929 (2014) https://doi.org/10.1109/TIE.2014.2308160 33 Vasan, A., Simonovic, S.: Optimization of water distribution network design using differential evolution J Water Resour Plan Manag 136(2), 279–287 (2010) https://doi.org/10.1061/ (ASCE)0733-9496 http://ascelibrary.org/doi/abs/10.1061/ 34 Tosi, G., Mucchi, E., d’Ippolito, R., Dalpiaz, G.: Dynamic behavior of pumps: an efficient approach for fast robust design optimization Meccanica 50(8), 2179–2199 (2015) https:// doi.org/10.1007/s11012-015-0142-z 35 Li, R., Emmerich, M.T.M., Eggermont, J., Bäck, T., Schütz, M., Dijkstra, J., Reiber, J.H.C.: Mixed integer evolution strategies for parameter optimization Evol Comput 21(1), 29–64 (2013) https://doi.org/10.1162/EVCO_a_00059 36 Hansen, N., Niederberger, A.S.P., Guzzella, L., Koumoutsakos, P.: A method for handling uncertainty in evolutionary optimization with an application to feedback control of combustion IEEE Trans Evol Comput 13(1), 180–197 (2009) 37 Clarke, J., McLay, L., McLesky Jr., J.T.: Comparison of genetic algorithm to particle swarm for constrained simulation-based optimization of a geothermal power plant Adv Eng Inform 28, 81–90 (2014) 38 Duzinkiewicz, K., Piotrowski, R., Brdys, M., Kurek, W.: Genetic hybrid predictive controller for optimized dissolved-oxygen tracking at lower control level IEEE Trans Control Syst Technol 17(5), 1183–1192 (2009) https://doi.org/10.1109/TCST.2008.2004499 References 53 39 Santarelli, S., Yu, T.L., Goldberg, D.E., Altshuler, E., O’Donnell, T., Southall, H., Mailloux, R.: Military antenna design using simple and competent genetic algorithms Math Comput Model 43(9-10), 990–1022 (2006) https://doi.org/10.1016/j.mcm.2005.05.024 http:// www.sciencedirect.com/science/article/pii/S0895717705005315 Optimization and Control for Military Applications 40 Khattak, A., Yangsheng, J., Lu, H., Juanxiu, Z.: Width design of urban rail transit station walkway: a novel simulation-based optimization approach Urban Rail Transit (2017) https:// doi.org/10.1007/s40864-017-0061-5 41 Filippone, G., D’ambrosio, D., Marocco, D., Spataro, W.: Morphological coevolution for fluid dynamical-related risk mitigation ACM Trans Model Comput Simul 26(3), 18:1– 18:26 (2016) https://doi.org/10.1145/2856694 42 Foli, K., Okabe, T., Olhofer, M., Jin, Y., Sendhoff, B.: Optimization of micro heat exchanger: CFD, analytical approach and multi-objective evolutionary algorithms Int J Heat Mass Transf 49(5), 1090–1099 (2006) 43 Liu, X., Li, F., Ding, Y., Wang, L., Hao, K.: Mechanical modeling with particle swarm optimization algorithm for braided bicomponent ureteral stent In: Proceedings of the 2016 on Genetic and Evolutionary Computation Conference Companion, GECCO ’16 Companion, pp 129–130 ACM, New York, NY, USA (2016) https://doi.org/10.1145/2908961.2908983 44 Meier, C., Yassine, A.A., Browning, T.R., Walter, U.: Optimizing time-cost trade-offs in product development projects with a multi-objective evolutionary algorithm Res Eng Des 27(4), 347–366 (2016) https://doi.org/10.1007/s00163-016-0222-7 45 Atilgan, E., Hu, J.: A 
combinatorial genetic algorithm for computational doping based material design In: Proceedings of the Companion Publication of the 2015 Annual Conference on Genetic and Evolutionary Computation, GECCO Companion ’15, pp 1349–1350 ACM, New York, NY, USA (2015) https://doi.org/10.1145/2739482.2764700 46 Schwartz, Y., Raslan, R., Mumovic, D.: Implementing multi objective genetic algorithm for life cycle carbon footprint and life cycle cost minimisation: a building refurbishment case study Energy 97, 58–68 (2016) https://doi.org/10.1016/j.energy.2015.11.056 http://www sciencedirect.com/science/article/pii/S0360544215016199 47 Khadka, S., Tumer, K., Colby, M., Tucker, D., Pezzini, P., Bryden, K.: Neuroevolution of a hybrid power plant simulator In: Proceedings of the Genetic and Evolutionary Computation Conference 2016, GECCO ’16, pp 917–924 ACM, New York, NY, USA (2016) https://doi org/10.1145/2908812.2908948 48 Arias-Montano, A., Coello Coello, C.A., Mezura Montes, E.: Multiobjective evolutionary algorithms in aeronautical and aerospace engineering IEEE Trans Evol Comput 16(5), 662–694 (2012) https://doi.org/10.1109/TEVC.2011.2169968 49 Gazzola, M., Vasilyev, O.V., Koumoutsakos, P.: Shape optimization for drag reduction in linked bodies using evolution strategies Comput Struct 89(11–12), 1224–1231 (2011) https://doi.org/10.1016/j.compstruc.2010.09.001 50 Iuliano, E., Quagliarella, D.: Efficient aerodynamic optimization of a very light jet aircraft using evolutionary algorithms and RANS flow models In: Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2010, Barcelona, Spain, 18–23 July 2010, pp 1–10 IEEE (2010) https://doi.org/10.1109/CEC.2010.5586171 51 Arias-Montano, A., Coello, C.A.C., Mezura-Montes, E.: Evolutionary algorithms applied to multi-objective aerodynamic shape optimization In: Computational Optimization, Methods and Algorithms, pp 211–240 Springer (2011) 52 Cohen, B., Legge, R.: Optimization of a small satellite tridyne propulsion system In: Aerospace Conference, 2014 IEEE, pp 1–20 (2014) https://doi.org/10.1109/AERO.2014 6836182 53 Noilublao, N., Bureerat, S.: Simultaneous topology, shape, and sizing optimisation of plane trusses with adaptive ground finite elements using MOEAs Math Probl Eng 2013, (2013) 54 Varcol, C.M., Emmerich, M.M.T.: Metamodel-assisted evolution strategies applied in electromagnetic compatibility design In: Evolutionar and Determinitsic Methods for Design, Optimization and Control with Applications to Industrial and Societal Problems, EUROGEN 2005 FLM (2005) 54 Simulation-Based Optimization 55 Yan, S., Minsker, B.: Applying dynamic surrogate models in noisy genetic algorithms to optimize groundwater remediation designs J Water Resour Plan Manag 137(3), 284–292 (2011) https://doi.org/10.1061/(ASCE)WR.1943-5452.0000106 http://ascelibrary.org/doi/ abs/10.1061/ 56 Kunakote, T., Bureerat, S.: Surrogate-assisted multiobjective evolutionary algorithms for structural shape and sizing optimisation Math Probl Eng 2013 (2013) 57 Liu, B., Zhang, Q., Gielen, G.: A gaussian process surrogate model assisted evolutionary algorithm for medium scale expensive optimization problems IEEE Trans Evol Comput 18(2), 180–192 (2014) https://doi.org/10.1109/TEVC.2013.2248012 58 Syberfeldt, S., Grimm, H., Ng, A., John, R.: A parallel surrogate-assisted multi-objective evolutionary algorithm for computationally expensive optimization problems In: IEEE Congress on Evolutionary Computation, 2008 CEC 2008 (IEEE World Congress on Computational Intelligence), pp 3177–3184 
(2008) https://doi.org/10.1109/CEC.2008.4631228 59 Barton, R.R.: Simulation optimization using metamodels In: Winter Simulation Conference, WSC ’09, pp 230–238 Winter Simulation Conference (2009) http://dl.acm.org/citation cfm?id=1995456.1995494 60 Santana-Quintero, L.V., Montano, A.A., Coello, C.A.C.: A review of techniques for handling expensive functions in evolutionary multi-objective optimization In: Computational Intelligence in Expensive Optimization Problems, pp 29–59 Springer (2010) 61 Jin, Y.: Surrogate-assisted evolutionary computation: recent advances and future challenges Swarm Evol Comput 1(2), 61–70 (2011) 62 Jin, Y.: A comprehensive survey of fitness approximation in evolutionary computation Soft Comput 9(1), 3–12 (2005) https://doi.org/10.1007/s00500-003-0328-5 63 Emmerich, M.T., Giannakoglou, K.C., Naujoks, B.: Single- and multiobjective evolutionary optimization assisted by gaussian random field metamodels Trans Evol Comp 10(4), 421– 439 (2006) https://doi.org/10.1109/TEVC.2005.859463 64 Kern, S., Hansen, N., Koumoutsakos, P.: Local meta-models for optimization using evolution strategies In: Runarsson, T., Beyer, H.G., Burke, E., Merelo-Guervos, J., Whitley, L., Yao, X (eds.) Parallel Problem Solving from Nature—PPSN IX, Lecture Notes in Computer Science, vol 4193, pp 939–948 Springer, Berlin, Heidelberg (2006) https://doi.org/10.1007/ 11844297_95 65 Fonseca, L.G., Bernardino, H.S., Barbosa, H.J.C.: A genetic algorithm assisted by a locally weighted regression surrogate model In: Proceedings of the 12th International Conference on Computational Science and Its Applications—Volume Part I, ICCSA’12, pp 125–135 Springer, Berlin, Heidelberg (2012) https://doi.org/10.1007/978-3-642-31125-3_10 66 Loshchilov, I., Schoenauer, M., Sebag, M.: Self-adaptive surrogate-assisted covariance matrix adaptation evolution strategy In: Proceedings of the Fourteenth International Conference on Genetic and Evolutionary Computation Conference, pp 321–328 ACM (2012) 67 Bischl, B., Mersmann, O., Trautmann, H., Weihs, C.: Resampling methods for meta-model validation with recommendations for evolutionary computation Evol Comput 20(2), 249– 275 (2012) 68 Syberfeldt, A., Ng, A., John, R.I., Moore, P.: Evolutionary optimisation of noisy multiobjective problems using confidence-based dynamic resampling Eur J Oper Res 204(3), 533–544 (2010) 69 Pickl, S., Meyer-Nieberg, S., Wellbrink, J.: Reducing complexity with evolutionary data farming SCS M&S Mag 2, 47–53 (2012) 70 Brandstein, A.G., Horne, G.E.: Data farming: A meta-technique for research in the 21st century Maneuver warfare science 1988, US Marine Corps Combat Development Command Publication (1998) 71 Chua, C., Sim, W., Choo, C., Tay, V.: Automated red teaming: an objective-based data farming approach for red teaming In: Simulation Conference, 2008 WSC 2008 Winter, pp 1456–462 (2008) https://doi.org/10.1109/WSC.2008.4736224 72 Abbass, H., Bender, A., Gaidow, S., Whitbread, P.: Computational red teaming: past, present and future Comput Intell Mag IEEE 6(1), 30–42 (2011) https://doi.org/10.1109/MCI.2010 939578 References 55 73 Hingston, P., Preuss, M.: Red teaming with coevolution In: A.E Smith (ed.) 
Proceedings of the 2011 IEEE Congress on Evolutionary Computation, pp 1160–1168 IEEE Computational Intelligence Society, IEEE Press, New Orleans, USA (2011) 74 Luke, S.: Essentials of Metaheuristics (2009) http://cs.gmu.edu/~sean/book/metaheuristics/ 75 Upton, S.C., McDonald, M.J.: Automated red teaming using evolutionary algorithms In: WG31—Computing Advances in Military OR (2003) 76 Eiben, A.E., Smith, J.E.: Introduction to Evolutionary Computing Natural Computing Series Springer, Berlin (2003) 77 Choo, C.S., Chua, C.L., Tay, S.H.V.: Automated red teaming: a proposed framework for military application In: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, GECCO ’07, pp 1936–1942 ACM, New York, NY, USA (2007) https://doi org/10.1145/1276958.1277345 78 Decraene, J., Chandramohan, M., Low, M., Choo, C.S.: Evolvable simulations applied to automated red teaming: a preliminary study In: Simulation Conference (WSC), Proceedings of the 2010 Winter, pp 1444–1455 (2010) https://doi.org/10.1109/WSC.2010.5679047 79 Decraene, J., Low, M., Zeng, F., Zhou, S., Cai, W.: Automated modeling and analysis of agent-based simulations using the case framework In: Control Automation Robotics Vision (ICARCV), 2010 11th International Conference on, pp 346–351 (2010) https://doi.org/10 1109/ICARCV.2010.5707764 80 Liang, K.H., Wang, K.M.: Using simulation and evolutionary algorithms to evaluate the design of mix strategies of decoy and jammers in anti-torpedo tactics In: Simulation Conference, 2006 WSC 06 Proceedings of the Winter, pp 1299–1306 (2006) https://doi.org/10.1109/ WSC.2006.323228 81 Low, M.Y.H., Chandramohan, M., Choo, C.S.: Application of multi-objective bee colony optimization algorithm to automated red teaming In: Winter Simulation Conference, WSC ’09, pp 1798–1808 Winter Simulation Conference (2009) http://dl.acm.org/citation.cfm? 
id=1995456.1995704 82 Zeng, F., Decraene, J., Low, M., Wentong, C., Hingston, P., Zhou, S.: High-dimensional objective-based data farming In: 2011 IEEE Symposium on Computational Intelligence for Security and Defense Applications (CISDA), pp 80–87 (2011) https://doi.org/10.1109/ CISDA.2011.5945942 83 Zeng, F., Decraene, J., Low, M., Zhou, S., Cai, W.: Evolving optimal and diversified military operational plans for computational red teaming Syst J IEEE 6(3), 499–509 (2012) https:// doi.org/10.1109/JSYST.2012.2190693 84 Gowlett, P.: Moving forward with computaional red teaming Technical report DSTO-GD0630 Defense Science and Technology Organisation, Canberra, Australia (2010) 85 Alam, S., Abbass, H.A., Lokan, C., Ellejmi, M., Kirby, S.: Computational red teaming to investigate failure patterns in medium term conflict detection In: 8th Eurocontrol Innovative Research Workshop Bretigny-sur-Orge, France (2009) 86 Charles, D., Mcglinchey, S.: The past, present and future of artificial neural networks in digital games In: Proceedings of the 5th International Conference on Computer Games: Artificial Intelligence, Design and Education, pp 163–169 (2004) 87 Risi, S., Togelius, J.: Neuroevolution in games: state of the art and open challenges IEEE Trans Comput Intell AI Games 9(1), 25–41 (2017) https://doi.org/10.1109/TCIAIG.2015 2494596 88 Yannakakis, G.N., Togelius, J.: A panorama of artificial and computational intelligence in games IEEE Trans Comput Intell AI Games 7(4), 317–335 (2015) https://doi.org/10.1109/ TCIAIG.2014.2339221 89 Doherty, D., O’Riordan, C.: Evolving tactical behaviours for teams of agents in single player action games In: Mehdi, Q., Mtenzi, F., Duggan, B., McAtamney, H (eds.) Proceedings of the 9th International Conference on Computer Games: AI, Animation, Mobile, Educational & Serious Games, pp 121–126 Dublin Institute of Technology (2006) http://netserver.it nuigalway.ie/darrendoherty/publications/cgames2006.pdf 56 Simulation-Based Optimization 90 Perez Liebana, D., Dieskau, J., Hunermund, M., Mostaghim, S., Lucas, S.: Open loop search for general video game playing In: Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, GECCO ’15, pp 337–344 ACM, New York, NY, USA (2015) https://doi.org/10.1145/2739480.2754811 91 Pena, L., Ossowski, S., Pena, J.M., Lucas, S.: Learning and evolving combat game controllers In: 2012 IEEE Conference on Computational Intelligence and Games (CIG), pp 195–202 (2012) https://doi.org/10.1109/CIG.2012.6374156 92 Othman, N., Decraene, J., Cai, W., Hu, N., Low, M., Gouaillard, A.: Simulation-based optimization of StarCraft tactical AI through evolutionary computation In: 2012 IEEE Conference on Computational Intelligence and Games (CIG), pp 394–401 (2012) https://doi.org/ 10.1109/CIG.2012.6374182 93 Schmitt, J., Seufert, S., Zoubek, C., Köstler, H.: Potential-field-based unit behavior optimization for balancing in StarCraft II In: Proceedings of the Companion Publication of the 2015 Annual Conference on Genetic and Evolutionary Computation, GECCO Companion ’15, pp 1481–1482 ACM, New York, NY, USA (2015) https://doi.org/10.1145/2739482.2764643 94 Justesen, N., Risi, S.: Continual online evolutionary planning for in-game build order adaptation in StarCraft In: Proceedings of the Genetic and Evolutionary Computation Conference, GECCO ’17, pp 187–194 ACM, New York, NY, USA (2017) https://doi.org/10.1145/ 3071178.3071210 95 Martinez-Arellano, G., Cant, R., Woods, D.: Creating AI characters for fighting games using genetic 
programming IEEE Trans Comput Intell AI Games PP(99), 1–1 (2016) https:// doi.org/10.1109/TCIAIG.2016.2642158 96 Keaveney, D., O’Riordan, C.: Evolving coordination for real-time strategy games IEEE Trans Comput Intell AU Games 3(2), 155–168 (2011) 97 Lara-Cabrera, R., Cotta, C., Fernandez-Leiva, A.: A review of computational intelligence in RTS games In: 2013 IEEE Symposium on Foundations of Computational Intelligence (FOCI), pp 114–121 (2013) https://doi.org/10.1109/FOCI.2013.6602463 98 Perez, D., Samothrakis, S., Lucas, S.: Online and offline learning in multi-objective Monte Carlo tree search In: 2013 IEEE Conference on Computational Intelligence in Games (CIG), pp 1–8 (2013) https://doi.org/10.1109/CIG.2013.6633621 99 Perez, D., Mostaghim, S., Samothrakis, S., Lucas, S.: Multiobjective monte carlo tree search for real-time games IEEE Trans Comput Intell AI Games 7(4), 347–360 (2015) https:// doi.org/10.1109/TCIAIG.2014.2345842 100 Gaina, R.D., Couetoux, A., Soemers, D., Winands, M.H.M., Vodopivec, T., Kirchgebner, F., Liu, J., Lucas, S.M., Perez, D.: The 2016 Two-Player GVGAI competition IEEE Trans Comput Intell AI Games PP(99), 1–1 (2017) https://doi.org/10.1109/TCIAIG.2017.2771241 101 Loiacono, D., Lanzi, P.L., Togelius, J., Onieva, E., Pelta, D.A., Butz, M.V., Lönneker, T.D., Cardamone, L., Perez, D., Sáez, Y., Preuss, M., Quadflieg, J.: The 2009 simulated car racing championship IEEE Trans Comput Intellig AI Games 2(2), 131–147 (2010) 102 Cardamone, L.: On-line and off-line learning of driving tasks for the open racing car simulator (TORCS) using neuroevolution Master’s thesis, Politecnico di Milano (2008) 103 Cardamone, L., Loiacono, D., Lanzi, P.L.: Learning to drive in the open racing car simulator using online neuroevolution IEEE Trans Comput Intellig AI Games 2(3), 176–190 (2010) 104 Ebner, M., Tiede, T.: Evolving driving controllers using genetic programming In: 2009 IEEE Symposium on Computational Intelligence and Games CIG, pp 279 –286 (2009) https:// doi.org/10.1109/CIG.2009.5286465 105 Agapitos, A., Togelius, J., Lucas, S.M.: Evolving controllers for simulated car racing using object oriented genetic programming In: Lipson, H (ed.) GECCO, pp 1543–1550 ACM (2007) 106 Agapitos, A., Togelius, J., Lucas, S.M.: Multiobjective techniques for the use of state in genetic programming applied to simulated car racing In: IEEE Congress on Evolutionary Computation, pp 1562–1569 IEEE (2007) 107 Cardamone, L., Loiacono, D., Lanzi, P.L.: Evolving competitive car controllers for racing games with neuroevolution In: Rothlauf, F (ed.) 
GECCO, pp. 1179–1186. ACM (2009)
108 Cardamone, L., Loiacono, D., Lanzi, P.L.: On-line neuroevolution applied to the open racing car simulator. In: IEEE Congress on Evolutionary Computation, pp. 2622–2629. IEEE (2009)
109 Cardamone, L., Loiacono, D., Lanzi, P.: Learning drivers for TORCS through imitation using supervised methods. In: 2009 IEEE Symposium on Computational Intelligence and Games, CIG 2009, pp. 148–155 (2009). https://doi.org/10.1109/CIG.2009.5286480
110 Cardamone, L., Loiacono, D., Lanzi, P.L.: Applying cooperative coevolution to compete in the 2009 TORCS endurance world championship. In: IEEE Congress on Evolutionary Computation, pp. 1–8. IEEE (2010)
111 Quadflieg, J., Preuss, M., Kramer, O., Rudolph, G.: Learning the track and planning ahead in a car racing controller. In: 2010 IEEE Symposium on Computational Intelligence and Games (CIG), pp. 395–402 (2010). https://doi.org/10.1109/ITW.2010.5593327
112 Quadflieg, J., Preuss, M., Rudolph, G.: Driving faster than a human player. In: Chio, C.D., Cagnoni, S., Cotta, C., Ebner, M., Ekárt, A., Esparcia-Alcázar, A., Merelo, J.J., Neri, F., Preuss, M., Richter, H., Togelius, J., Yannakakis, G.N. (eds.) EvoApplications (1), Lecture Notes in Computer Science, vol. 6624, pp. 143–152. Springer (2011)
113 Dörner, R., Göbel, S., Effelsberg, W., Wiemeyer, J. (eds.): Serious Games: Foundations, Concepts and Practice. Springer (2016)
114 de Andrade, K.O., Pasqual, T.B., Caurin, G.A.P., Crocomo, M.K.: Dynamic difficulty adjustment with evolutionary algorithm in games for rehabilitation robotics. In: 2016 IEEE International Conference on Serious Games and Applications for Health (SeGAH), pp. 1–8 (2016). https://doi.org/10.1109/SeGAH.2016.7586277
115 Frutos-Pascual, M., Zapirain, B.G.: Review of the use of AI techniques in serious games: decision making and machine learning. IEEE Trans. Comput. Intell. AI Games 9(2), 133–152 (2017). https://doi.org/10.1109/TCIAIG.2015.2512592

Chapter 4
Conclusions

Natural computing comprises methods that are influenced by principles stemming from nature. Examples include natural evolution as introduced by Wallace and Darwin or swarming behavior observed in bird flocks or insect swarms. Going back to the 1960s, when the first approaches originated, the area has today emerged as a wide and mature research field with many application areas. One of the first and still one of the most important is the usage of natural computing techniques in the context of simulation studies. However, although the so-called simulation-based optimization plays such an important role in natural computing, and methods stemming from this field have been applied with great success, reviews and overviews in the area of simulation rarely cover these techniques in depth.
This brief serves to bridge this gap by putting the natural computing methods into the context of simulation-based optimization. As such, it provides a treatise of the main dialects of natural computing. Here, two important concepts appear: evolutionary computation and swarm-based techniques. In addition, it covers the areas of multi-objective optimization and surrogate-based optimization.
We presented an overview of the interesting and challenging field of simulation-based optimization with natural computing methods. First, a short introduction and motivation to simulation-based optimization was given. Afterwards, some modern and well-established natural computing approaches were presented. Here, newer approaches, such as natural evolution strategies, were also discussed.
Most overviews focus on the task of parameter optimization, that is, searching for optimal combinations of control variables. However, another task is also of interest: the question of controller or behavior learning. It originally stems from the area of digital games, where research often focuses on deriving good non-player characters. However, this task has importance beyond digital games, especially if the simulation studies aim to identify weaknesses in designs or plans. Here, behavior learning offers more degrees of freedom and thus the potential to find solutions beyond the traditional way if used appropriately. For this reason, the areas of genetic programming and neuroevolution are also covered.
The methodology section is followed by exemplary applications of natural computing for simulations. To summarize: the application area for natural computing coupled with simulations is vast and continues to grow. While genetic algorithms are most often applied, other types of evolutionary algorithms, especially specialized variants for continuous optimization, are also used. Multi-objective approaches are quite common, which stresses the common difficulty of defining a single objective for a real-life problem. Learning the form of controllers by natural computing represents a very promising and challenging task. So far, most approaches stem from the area of computer games. Other areas, especially evolutionary data farming, may also benefit from using the vast potential of genetic programming and evolving neural networks. In recent years, several hybrids have been introduced in natural computing, for example neuroevolution. Hybrid methods combine at least two approaches, aiming to compensate for the weaknesses each singular approach may have. Hybrids have appeared between several natural computing approaches and between natural computing and more traditional heuristics and metaheuristics. Augmenting natural computing with local search has given rise, for example, to the well-performing memetic algorithms. Hybridization has also been observed in simulation-based optimization and will probably play an even more important role in the future.