Frontiers in Evolutionary Robotics 192

We denote by SRT the algorithm which operates the robot with soft real-time considerations, influencing its behavior to avoid a dangerous battery level. This algorithm differs from NRT mainly in that:
• the battery level influences the robot's fitness evaluation used by the GA, and
• a new input neuron is connected to a battery level sensor.
Finally, we denote by HRT the hybrid algorithm which operates the robot with hard real-time considerations, i.e., the same as SRT but incorporating critical battery level sensing and also having the capacity to change the robot's operation from normal to mission oriented, guaranteeing its survivability (if at least one charging zone was previously found).

4.1 Experimental Setup

As mentioned before, the experiments are performed using a modified version of YAKS. This simulation system has several elements, including: the robot simulator, neural networks, the GA, and the fuzzy logic based fitness.

Khepera Robot

For these simulations, a Khepera robot was chosen. The robot configuration has two DC motors and eight (six front and two back) infrared proximity sensors used to detect nearby obstacles. These sensors provide 10-bit output values (with 5% random noise), which allow the robot to estimate the distance to local obstacles. The YAKS simulator provides the readings for the robot's sensors according to the robot's position and the map (room) it is in. The simulator also keeps track of the different areas that the robot visits and the various obstacles (walls) or zones (home, charging zones) detected in the room. In order to navigate, the robot executes up to 1000 steps in each simulation, but not every step produces forward motion, as some only rotate the robot. If the robot has no more energy, it freezes and the simulation stops.
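The sensing and energy-limited navigation loop described above can be sketched as follows. This is only an illustrative sketch, not the YAKS implementation: the robot object, its methods, and the uniform ±5% noise model are our own assumptions.

```python
import random

MAX_READING = 1023  # 10-bit sensor output ceiling

def noisy_reading(ideal_reading):
    """Apply an assumed 5% random noise to an ideal 10-bit IR reading."""
    noise = random.uniform(-0.05, 0.05)
    return max(0, min(MAX_READING, int(ideal_reading * (1 + noise))))

def run_episode(robot, max_steps=1000):
    """Run one navigation episode; the robot freezes when its battery is empty.
    Returns the number of steps completed before freezing (or max_steps)."""
    for step in range(max_steps):
        if robot.battery <= 0:
            return step            # simulation stops: robot froze
        sensors = [noisy_reading(r) for r in robot.ideal_ir_readings()]
        robot.act(sensors)         # may only rotate; not every step moves forward
    return max_steps               # survived the full episode
```

Here `robot` stands for any object exposing a `battery` level, ideal sensor readings, and an `act` method, all hypothetical names for illustration.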
Artificial Neural Network

The original neural network (NN) used has eight input neurons connected to the infrared sensors, five neurons in the hidden layer, and two output neurons directly connected to the motors that produce the robot's movement. Additionally, in our real-time extensions we introduce another input neuron connected to the battery sensor (activated by SRT and HRT).

Genetic Algorithm

A GA is used to find an optimal configuration of weights for the neural network. Each individual in the GA represents a NN which evolves over successive generations. The GA uses the following parameters:
• Population size: 200
• Crossover operator: random crossover
• Selection method: elite strategy selection
• Mutation rate: 1%
• Generations: 100
For each room (see Fig. 7) we trained a robot for up to 400 steps, considering only configurations with 2 or 3 charging zones, i.e., shutting down zone 3 for the 2-zone simulations. The startup battery level allows the robot to finish this training phase without needing to recharge. Finally, we tested our algorithms in each room for up to 1000 steps, using the previously trained NN for each respective room. The startup battery level was set to 80 (less than 50% of its capacity), which was insufficient to complete the whole test without recharging.

Applying Real-Time Survivability Considerations in Evolutionary Behavior Learning by a Mobile Robot 193

4.2 Experimental Results

Figure 8. Exploration behaviour: a) S-ROOM 2 charging zones, b) S-ROOM 3 charging zones, c) H-ROOM 2 charging zones, d) H-ROOM 3 charging zones

We chose the S-ROOM and H-ROOM to show results for a simple and a complex room respectively, which are representative behaviors of our approach. In Fig. 8 we show the robot's exploration behavior for the selected rooms. Each curve in the graph shows the average value of 10 executions of the same experiment (deviation between experiment iterations was very small, justifying only 10 executions).
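The controller described above can be sketched as a plain feedforward pass from the (up to nine) sensor inputs through the five hidden units to the two motor outputs. The tanh activation, the absence of bias terms, and the weight layout are assumptions for illustration, since the chapter does not specify them; the GA chromosome would then simply be these weight matrices flattened into one vector.

```python
import math

def forward(weights_ih, weights_ho, inputs):
    """One forward pass of a candidate controller.
    weights_ih: 5 x len(inputs) input-to-hidden matrix (9 inputs with battery neuron).
    weights_ho: 2 x 5 hidden-to-output matrix.
    Returns the two motor commands in (-1, 1)."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs)))
              for row in weights_ih]
    motors = [math.tanh(sum(w * h for w, h in zip(row, hidden)))
              for row in weights_ho]
    return motors
```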
Let surv(A)_i be the survivability of experiment instance i of algorithm A. We define surv(A), the survivability of an experiment applying algorithm A, as the worst-case survivability instance of the experiment, i.e.,

surv(A) = min_{i=1,...,10} surv(A)_i     (1)

Please note that the end of each curve in Fig. 8 denotes the survivability of the respective algorithm (for better readability, we mark the NRT survivability with a vertical line). Reaching step 1000 (the maximum duration of our experiments) means that the robot using the algorithm survives the navigation experiment. Finally, in Fig. 9 we show a representative robot battery level. Monitoring was performed during the test phase in an H-ROOM with 3 charging zones.

Figure 9. Battery behaviour: a) S-ROOM 3 charging zones, b) H-ROOM 2 charging zones

4.3 Discussion

The results of our experiments are summarized below:
Survivability: As shown in Fig. 8, the SRT and HRT algorithms give better reliability of completing missions than the NRT method, independently of the rooms (environments) we use for testing (see Fig. 7). As expected, if fewer charging zones are provided, NRT behaves less reliably. Please note that, as shown in Fig. 9, NRT is also prone to battery depletion risk and does not survive in any case. When varying the room complexity, i.e., 8(b) and 8(d), real-time considerations have a significant impact. Using SRT, a purely behavior-based driven robot (with the additional neuron and motivation) improves its performance. The SRT method does not guarantee survivability: without changing the robot's operation from behavior based to mission oriented, the robot is prone to dying even with a greater number of recharge zones (as seen in 8(d)). Finally, we conclude that despite the uncertainty introduced by soft-computing methods, HRT (i.e., the hybrid algorithm) is in general the best and safest robot control method from a real-time point of view.
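The worst-case survivability measure of Eq. (1) amounts to taking the minimum over the repeated runs of one algorithm; a minimal sketch with hypothetical per-run step counts:

```python
def survivability(steps_survived):
    """Worst-case survivability of an algorithm (Eq. 1): the minimum number of
    steps survived across its experiment instances (10 runs in the chapter)."""
    return min(steps_survived)

# Hypothetical step counts for ten runs of one algorithm (not measured data):
runs = [1000, 1000, 840, 1000, 1000, 1000, 910, 1000, 1000, 1000]
# survivability(runs) == 840: the algorithm is only as good as its worst run
```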
Exploration Environment: As can be seen in Fig. 8, safer behavior means slower (more conservative) exploration rates, up to 12% slower in our experiments. When comparing NRT with SRT, the exploration rates are almost equal in simple environments. In more complex rooms, SRT exploration is slower than NRT (due to battery observance). However, because SRT has better survivability, on the whole its performance wins over NRT. If we compare NRT with HRT, exploration performance also favors NRT, which could be explained by HRT's conservative battery management (see Fig. 9). Given 2 charging zones, HRT behaves differently in environments of varying complexity (by up to 25%), which could be attributed to the complexity of the return path to the nearest charging zone and the steps lost to further exploration. This phenomenon becomes less noticeable as the number of charging zones increases (more options for recharging).

5. Conclusions and Future Work

In this work we investigate real-time adaptive extensions of our fuzzy logic based approach for providing biologically based motivations to be used in evolutionary mobile robot learning. We introduce active battery level sensors and recharge zones to improve the robot's survivability in environment exploration. In order to achieve this goal, we propose an improvement of our previously defined model (SRT), as well as a hybrid controller for a mobile robot (HRT) combining behavior-based and mission-oriented control mechanisms. These methods are implemented and tested on action-sequence based environment exploration tasks in a Khepera mobile robot simulator. Experimental results show that the hybrid method is, in general, the best and safest robot control method from a real-time point of view.
Also, our preliminary results show a significant improvement in the robot's survivability obtained through minor changes to the robot's motivations and NN. Currently we are implementing a real robot for environment exploration to validate our model, moving from simulation to experimentation. We are also introducing dynamic motivation schedules toward robotic behavior enhancement. By improving the dependability of HRT, we want to extend this control algorithm to safety-critical domains.

6. References

T. Arredondo, W. Freund, C. Muñoz, N. Navarro, & F. Quirós. (2006). Fuzzy motivations for evolutionary behavior learning by a mobile robot. Lecture Notes in Artificial Intelligence, 4031:462–471.
Mohannad Al-Khatib & Jean J. Saade. (2003). An efficient data-driven fuzzy approach to the motion planning problem of a mobile robot. Fuzzy Sets Syst., 134(1):65–82.
Ronald C. Arkin. (1998). Behavior-Based Robotics. MIT Press.
Humberto Martínez Barberá & Antonio Gómez Skarmeta. (2002). A framework for defining and learning fuzzy behaviors for autonomous mobile robots. International Journal of Intelligent Systems, 17(1):1–20.
W. Freund, T. Arredondo, C. Muñoz, N. Navarro, & F. Quirós. (2006). Real-time adaptive fuzzy motivations for evolutionary behavior learning by a mobile robot. Lecture Notes in Artificial Intelligence, 4293:101–111.
S. Goodrige, M. Kay, & R. Luo. (1997). Multi-layered fuzzy behavior fusion for reactive control of an autonomous mobile robot. In Proceedings of the Sixth IEEE International Conference on Fuzzy Systems, pages 573–578, July 1997.
Ahmed Gheith & Karsten Schwan. (1993). Chaosarc: kernel support for multiweight objects, invocations, and atomicity in real-time multiprocessor applications. ACM Transactions on Computer Systems, 11(1):33–72, February 1993.
Frank Hoffmann. (2000). Soft computing techniques for the design of mobile robot behaviors. Inf. Sci., 122(2-4):241–258.
W. Huitt. (2001). Motivation to learn: An overview.
Technical report, Educational Psychology Interactive, Valdosta State University.
Kiyotaka Izumi & Keigo Watanabe. (2000). Fuzzy behavior-based control trained by module learning to acquire the adaptive behaviors of mobile robots. Math. Comput. Simul., 51(3-4):233–243.
J.-S. R. Jang, C.-T. Sun, & E. Mizutani. (1997). Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence. NJ: Prentice-Hall. ISBN: 0-13-261066-3.
Rahul Kumar Jha, Balvinder Singh, & Dilip Kumar Pratihar. (2005). On-line stable gait generation of a two-legged robot using a genetic-fuzzy system. Robotics and Autonomous Systems, 53(1):15–35, October 2005.
Kurt Konolige, Karen Meyers, & Alessandro Saffiotti. (1992). Flakey, an autonomous mobile robot. Technical report, Stanford Research Institute International, July 1992.
Hermann Kopetz. (1997). Real-Time Systems: Design Principles for Distributed Embedded Applications. Kluwer Academic Publishers, Norwell, MA, USA.
Makoto Kern & Peng-Yung Woo. (2005). Implementation of a hexapod mobile robot with a fuzzy controller. Robotica, 23(6):681–688.
Ji-Hong Li, Bong-Huan Jun, Pan-Mook Lee, & Seok-Won Hong. (2005). A hierarchical real-time control architecture for a semi-autonomous underwater vehicle. Ocean Engineering, 32(13):1631–1641, September 2005.
G. Motet & J.-C. Geffroy. (2003). Dependable computing: an overview. Theor. Comput. Sci., 290(2):1115–1126, 2003.
Guido Maione & David Naso. (2003). A soft computing approach for task contracting in multi-agent manufacturing control. Comput. Ind., 52(3):199–219.
Homayoun Seraji & Ayanna Howard. (2002). Behavior-based robot navigation on challenging terrain: A fuzzy logic approach. IEEE Transactions on Robotics and Automation, 18(3):308–321, June 2002.
Bernhard Sick, Markus Keidl, Markus Ramsauer, & Stefan Seltzsam. (1998). A comparison of traditional and soft-computing methods in a real-time control application. In ICANN 98, Proc. of the 8th Int. Conference on Artificial Neural Networks, pages 725–730, Sweden, September 1998.
Paul Tompkins, Anthony Stentz, & David Wettergreen. (2006). Mission-level path planning and re-planning for rover exploration. Robotics and Autonomous Systems, 54(2):174–183, February 2006.
Jeen-Shing Wang & C.S. George Lee. (2003). Self-adaptive recurrent neuro-fuzzy control of an autonomous underwater vehicle. IEEE Transactions on Robotics and Automation, 19(2):283–295, April 2003.
YAKS simulator website: http://r2d2.ida.his.se/.
S. Yamada. (2005). Evolutionary behavior learning for action-based environment modeling by a mobile robot. Applied Soft Computing, 5(2):245–257, January 2005.
C. Zhou. (2002). Robot learning with GA-based fuzzy reinforcement learning agents. Information Sciences, 145(1), August 2002.
Tom Ziemke, Dan-Anders Jirenhed, & Germund Hesslow. (2005). Internal simulation of perception: a minimal neuro-robotic model. Neurocomputing, 68:85–104.

10. An Evolutionary MAP Filter for Mobile Robot Global Localization

L. Moreno(1), S. Garrido(1), M. L. Muñoz(2), and D. Blanco(1)
(1) Robotics Laboratory, Universidad Carlos III, Madrid
(2) Facultad de Informática, Universidad Politécnica, Madrid
Spain

1. Introduction

This chapter presents a new evolutionary algorithm for mobile robot global localization. The Evolutive Localization Filter (ELF) presented here is a non-linear filter algorithm which is able to solve the global localization problem in a robust and efficient way. The proposed algorithm searches the configuration space for the best robot pose estimate. The elements of each generation are the set of pose solutions and represent the areas with higher probability according to the perception and motion information gathered up to date.
The population evolves according to the observation and motion errors derived from the comparison between the observed data and the data predicted by the probabilistic perception and motion models. The algorithm has been tested using a mobile robot with a laser range finder to demonstrate the effectiveness, robustness and computational efficiency of the proposed approach.
Mobile robot localization is the problem of finding a robot's coordinates relative to its environment, assuming that one is provided with a map of the environment. Localization is a key component of navigation and is required to execute a trajectory successfully. We can distinguish two different cases: the re-localization case and the global localization case. Re-localization, or the tracking problem, tries to keep track of the mobile robot's pose: the robot knows its initial position (at least approximately) and therefore has to keep itself localized along the given mission. The global localization problem does not assume any knowledge about the robot's initial position, and the robot therefore has to localize itself globally. The two most important aspects that have to be dealt with when designing a localization system are how to represent the uncertain information about the environment and about the robot's pose. Among the many ways to represent knowledge about an environment, this article deals with geometrical localization methods and assumes that the environment is modelled geometrically as an occupancy grid map. Regarding the robot's pose uncertainty representation and estimation techniques, the vast majority of existing algorithms address only the position tracking problem. In this case the small incremental errors produced along the robot's motion and the initial knowledge of the robot's pose make classical approaches such as Kalman filters or scan matching techniques applicable.
If we consider the robot's pose estimation as a Bayesian recursive problem, Kalman filters estimate the posterior distribution of the robot's poses conditioned on sensor data. Based on the Gaussian noise assumption and a Gaussian-distributed initial uncertainty, this method represents posterior distributions by Gaussians. The Kalman filter constitutes an efficient solution for re-localization problems. However, the nature of this uncertainty representation assumption makes Kalman filters not robust in global localization problems. Scan matching techniques are also iterative local minimization techniques and cannot be used for global localization.
Different families of algorithms can solve the global localization problem; frequently used ones are: multi-hypothesis Kalman filters, grid-based probabilistic filters and Monte Carlo localization methods. These methods can be included in the wider group of Bayesian estimation methods. Multi-hypothesis Kalman filters (Arras et al., 2002; Austin & Jensfelt, 2000; Jensfelt & Kristensen, 1999; Cox & Leonard, 1994; Roumeliotis et al., 2000) represent distributions using mixtures of Gaussians, enabling them to keep track of multiple hypotheses, each of which is represented by a separate Gaussian. This solution presents some initialization problems: one of them is the determination of the initial hypotheses (their number can be very high and is not bounded), which leads the algorithm to a high computational cost in the initial stages. Besides, the Kalman filter is essentially a gradient-based method and consequently poorly robust if the initial hypothesis is bad or the noise assumptions fail. Grid-based localization algorithms (Fox et al., 1999; Burgard et al., 1996; Reuter, 2000) represent distributions by a discrete set of point probabilities distributed over the space of all possible poses. This group of algorithms is capable of representing multi-modal probability distributions.
A third group is the Monte Carlo localization algorithms (Jensfelt et al., 2000; Thrun et al., 2001; Dellaert et al., 1999). These algorithms represent the probability distribution by means of a set of samples drawn according to the posterior distribution over the robot's poses. They can manage arbitrary noise distributions and non-linearities in the system and observation models. These methods present a high computational cost because their probabilistic nature requires a high number of samples to draw the posterior probability density function properly. Their main advantage is their statistical robustness.
This article presents a localization algorithm based on a non-linear filter called the Evolutionary Localization Filter (ELF). ELF solves the global robot localization problem in a robust and efficient way. The algorithm can deal with arbitrary noise distributions and non-linear state space systems. The key idea of ELF is to represent the uncertainty about the robot's pose by a set of possible pose estimates weighted by a fitness function. The state is recursively estimated using a set of solutions selected according to the weight associated with each possible solution in the set. The set of solutions evolves in time to integrate the sensor information and the robot motion information. The adaptation engine of the ELF method is an evolutive adaptation mechanism which combines a stochastic gradient search with a probabilistic search to find the most promising pose candidates.

2. Differential evolutionary filter

Evolutive optimization techniques constitute a series of probabilistic search methods that avoid derivatives or probability density estimations when estimating the best solution to a localization problem. In the method proposed here, each individual in the evolutive algorithm represents a possible solution to the localization problem, and the value of the loss function represents the error in explaining the perceptual and motion data.
The search for this solution is done stochastically, employing an evolutive search technique based on the differential evolution method proposed by Storn and Price (Storn & Price, 2001) for global optimization problems over continuous spaces. The Evolutive Filter uses a parallel direct search method which utilizes n-dimensional parameter vectors x_i^k = (x_{i,1}^k, ..., x_{i,n}^k)^T to represent each candidate solution i to the optimization problem at iteration step k. This method utilizes N parameter vectors {x_i^k ; i = 0, 1, ..., N} as a population for each generation of the optimization process. Each element of the population set represents a possible solution, but no probability value is associated with it (in the particle filter case, each element of the particle set has an associated probability value). The initial population is chosen randomly to cover the entire parameter space uniformly. In the absence of a priori information, the entire parameter space has the same probability of containing the optimum parameter vector, and a uniform probability distribution is assumed. The differential evolution filter generates new parameter vectors by adding the weighted difference vector between two population members to a third member. If the resulting vector yields a lower objective function value than a predetermined population member, the newly generated vector replaces the vector with which it was compared; otherwise, the old vector is retained. This basic idea is extended by perturbing an existing vector through the addition of one or more weighted difference vectors to it.
The perturbation scheme generates a variation v according to the following expression:

v = x_i^k + L (x_b^k - x_i^k) + F (x_{r2}^k - x_{r3}^k)     (1)

where x_i^k is the parameter vector to be perturbed at iteration k, x_b^k is the best parameter vector of the population at iteration k, and x_{r2}^k and x_{r3}^k are parameter vectors chosen randomly from the population, different from the running index i. L and F are real, constant factors which control the amplification of the differential variations (x_b^k - x_i^k) and (x_{r2}^k - x_{r3}^k). This expression has two different terms: the (x_b^k - x_i^k) term is a kind of stochastic gradient, while (x_{r2}^k - x_{r3}^k) is a kind of random search.
In order to increase the diversity of the new generation of parameter vectors, crossover is introduced. Denote by u_i^k = (u_{i,1}^k, u_{i,2}^k, ..., u_{i,D}^k)^T the new parameter vector with

u_{i,j}^k = v_{i,j}^k  if p_{i,j}^k < δ,  and  u_{i,j}^k = x_{i,j}^k  otherwise     (2)

where p_{i,j}^k is a value chosen randomly from the interval [0, 1] for each parameter j of population member i at step k, and δ is the crossover probability, which constitutes the crossover control variable. The random values p_{i,j}^k are drawn anew for each trial vector i.
To decide whether or not vector u_i^k should become a member of generation k+1, it is compared to x_i^k. If vector u_i^k yields a better value of the objective fitness function than x_i^k, then x_i^{k+1} = u_i^k; otherwise, the old value x_i^k is retained for the new generation. The general ideas behind this mechanism (mutation, crossover and selection) are well known and can be found in the literature (Goldberg, 1989).

Figure 1. New population member generation

A. Fitness function

Since we are trying to localize a mobile robot, the natural choice for the fitness function is the sum of squared errors.
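The perturbation, crossover and selection steps above can be sketched as a single generation update. This is a sketch under stated assumptions: the parameter values for L, F and δ are illustrative, not the authors' settings, and `fitness` stands for any loss to minimize (e.g. the squared observation error used by ELF).

```python
import random

def de_step(population, fitness, L=0.6, F=0.8, delta=0.9):
    """One generation of the differential-evolution update described above.
    population: list of parameter vectors (lists of floats), at least 3 members.
    fitness: loss function to minimize. L, F, delta: illustrative constants."""
    best = min(population, key=fitness)  # x_b^k
    new_pop = []
    for i, x in enumerate(population):
        # Two random members distinct from the running index i (x_r2, x_r3).
        r2, r3 = random.sample([p for j, p in enumerate(population) if j != i], 2)
        # Perturbation: v = x_i + L(x_b - x_i) + F(x_r2 - x_r3)
        v = [xi + L * (bi - xi) + F * (a - b)
             for xi, bi, a, b in zip(x, best, r2, r3)]
        # Crossover: take v_j with probability delta, otherwise keep x_j.
        u = [vj if random.random() < delta else xj for vj, xj in zip(v, x)]
        # Selection: the trial vector replaces x_i only if it improves the loss.
        new_pop.append(u if fitness(u) < fitness(x) else x)
    return new_pop
```

Because selection is greedy, the best loss in the population can never get worse from one generation to the next, which is the convergence property the filter relies on.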
If the observation vector at time t is z_t = (z_{1,t}, ..., z_{p,t})^T and the predicted observation according to the estimated robot pose is ẑ_t = (ẑ_{1,t}, ..., ẑ_{p,t})^T, then the penalty function can be stated as

L(x_t) = (z_t - ẑ_t)^T (z_t - ẑ_t)     (3)

In the global localization problem there exist some aspects that make this fitness function difficult to manage:
• The range and accuracy of the sensors limit the possibility of discriminating between different poses, leading the fitness function to a high number of global maxima.
• The number of sensors limits the possibility of discriminating between robot poses, leading to multiple global maxima in the fitness function.

[...] obtained given a character string of 16 × 4 = 64 bits if the position in the character string is considered to be the condition part. The 0th 4 bits are the action part corresponding to condition part 0000, the first 4 bits are the action part corresponding to condition part 0001, and so on. The merit of this method is that only character strings of a fixed length are necessary (64 bits), which allows easy adjustment [...]

12. Evolutionary Morphology for Polycube Robots

Takahiro Tohge [...]

[...] condition parts are 0, are highly influential; therefore the location of each gene is a key factor, and a 64-bit chromosome has only 16 genes. To solve this problem, we consider providing more than one pair of condition and action parts, i.e., a basic model and its corresponding rule. For instance, the length of a chromosome of an 8-bit gene consisting of a 4-bit condition part and a 4-bit action part [...]
[...] an insignificant rule, such as the action part 1100 corresponding to the condition part 1100, could be generated. Although such an insignificant action part can be considered an intron in a chromosome, the action part is disassembled as follows by increasing the condition parts, to reduce the impact of one gene on the entire development process. Genes originally have a one-directional condition part. Now focusing on one block [...]

[...] Conference on Artificial Intelligence AAAI-96, Portland, Oregon, USA, pp. 896–901.
Cox, I.J., & Leonard, J.J. (1994). Modelling a dynamic environment using a Bayesian multi-hypothesis approach. Artificial Intelligence, 66, pp. 311–344.
Dellaert, F., Fox, D., Burgard, W., & Thrun, S. (1999). Monte Carlo localization for mobile robots. Proceedings of the 1999 International Conference on Robotics and Automation, pp. 1322–1328.
[...] the Int. Conference on Robotics and Automation ICRA-02, Washington D.C., USA, pp. 1371–1377.
Austin, D.J., & Jensfelt, P. (2000). Using multiple Gaussian hypotheses to represent probability distributions for mobile robot localization. Proc. of the Int. Conference on Robotics and Automation ICRA-00, San Francisco, CA, USA, pp. 1036–1041.
Burgard, W., Fox, D., Henning, D., & Schmidt, T. (1996). Estimating the absolute [...]

[...] configuration by automatically recombining modules to fit a particular environment. However, this chapter deals with cube-shaped modular robots that can be manually recombined to achieve significant configurations that can adjust to actual environments using evolutionary morphology. This chapter describes how successfully EC is applied to the morphology of real cubic [...]

[...] prepared both for the condition and the action parts. If the flag for the condition part is 1 (meaning true), the block must be connected in that direction. A 0 (meaning false) flag shows that the block must not be connected in that direction. When the condition part fits in each of the four directions, a block is connected in a direction for which the condition part flag is 1. This is the basic idea of the [...]