Robot Localization and Map Building
Edited by Hanafiah Yussof
In-Tech, intechweb.org

Published by In-Teh
In-Teh, Olajnica 19/2, 32000 Vukovar, Croatia

Abstracting and non-profit use of the material is permitted with credit to the source. Statements and opinions expressed in the chapters are those of the individual contributors and not necessarily those of the editors or publisher. No responsibility is accepted for the accuracy of information contained in the published articles. The publisher assumes no responsibility or liability for any damage or injury to persons or property arising out of the use of any materials, instructions, methods or ideas contained inside. After this work has been published by In-Teh, authors have the right to republish it, in whole or in part, in any publication of which they are an author or editor, and to make other personal use of the work.

© 2010 In-Teh, www.intechweb.org
Additional copies can be obtained from: publication@intechweb.org

First published March 2010. Printed in India.
Technical Editor: Sonja Mujacic. Cover designed by Dino Smrekar.

Robot Localization and Map Building, Edited by Hanafiah Yussof
p. cm. ISBN 978-953-7619-83-1

Preface

Navigation of mobile platforms is a broad topic, covering a large spectrum of different technologies and applications. As one of the important technologies highlighting the 21st century, autonomous navigation is currently used across a broad spectrum, ranging from basic mobile platforms operating on land, such as wheeled robots, legged robots, automated guided vehicles (AGV) and unmanned ground vehicles (UGV), to new applications underwater and airborne, such as underwater robots, autonomous underwater vehicles (AUV), unmanned maritime vehicles (UMV), flying robots and unmanned aerial vehicles (UAV). Localization and mapping are the essence of successful navigation in mobile platform technology.
Localization is a fundamental task for achieving high levels of autonomy in robot navigation and robustness in vehicle positioning. Robot localization and mapping are commonly related to cartography, combining science, technique and computation to build a trajectory map so that reality can be modelled in ways that communicate spatial information effectively. The goal is for an autonomous robot to be able to construct (or use) a map or floor plan and to localize itself in it. This technology enables a robot platform to analyze its motion and build some kind of map, so that the robot's locomotion is traceable for humans and future motion trajectory generation in the robot control system is eased. At present, we have robust methods for self-localization and mapping within environments that are static, structured, and of limited size. Localization and mapping within unstructured, dynamic, or large-scale environments remain largely an open research problem. Localization and mapping in outdoor and indoor environments are challenging tasks in autonomous navigation technology. The famous Global Positioning System (GPS), based on satellite technology, may be the best choice for localization and mapping in outdoor environments. Since this technology is not applicable indoors, the problem of indoor navigation is rather complex. Nevertheless, the introduction of the Simultaneous Localization and Mapping (SLAM) technique has become the key enabling technology for mobile robot navigation in indoor environments. SLAM addresses the problem of acquiring a spatial map of a mobile robot's environment while simultaneously localizing the robot relative to this model. The solution methods for the SLAM problem, which are mainly introduced in this book, consist of three basic approaches. The first is known as extended Kalman filter (EKF) SLAM. The second uses sparse nonlinear optimization methods based on graphical representations.
The final method uses nonparametric statistical filtering techniques known as particle filters. Nowadays, the application of SLAM has been extended to outdoor environments, for use in outdoor robots, autonomous vehicles and aircraft. Several interesting works related to this issue are presented in this book. The recent rapid progress in sensor and fusion technology has also benefited mobile platforms performing navigation, in terms of improving environment recognition quality and mapping accuracy. As one of the important elements in robot localization and map building, this book presents interesting reports related to sensing fusion and networks for optimizing environment recognition in autonomous navigation. This book gives a comprehensive introduction to, and presents theories and applications of, localization, positioning and map building in mobile robot and autonomous vehicle platforms. It is organized in twenty-eight chapters. Each chapter is rich with different degrees of detail and approaches, supported by unique and current resources that make it possible for readers to explore and learn up-to-date knowledge in robot navigation technology. Understanding the theory and principles described in this book requires a multidisciplinary background in robotics, nonlinear systems, sensor networks, network engineering, computer science, physics, etc. The book first explores SLAM problems through extended Kalman filter, sparse nonlinear graphical representation and particle filter methods. Next, fundamental theories of motion planning and map building are presented to provide a useful platform for applying SLAM methods in real mobile systems. This is then followed by applications of high-end sensor networks and fusion technology that give useful inputs for realizing autonomous navigation in both indoor and outdoor environments. Finally, some interesting results of map building and tracking can be found in 2D, 2.5D and 3D models.
The actual motion of robots and vehicles when the proposed localization and positioning methods are deployed in the system can also be observed, together with tracking maps and trajectory analysis. Since SLAM techniques mostly deal with static environments, this book provides a good reference for future understanding of the interaction of moving and non-moving objects in SLAM, which still remains an open research issue in autonomous navigation technology.

Hanafiah Yussof
Nagoya University, Japan
Universiti Teknologi MARA, Malaysia

Contents

Preface V
1. Visual Localisation of quadruped walking robots 001
Renato Samperio and Huosheng Hu
2. Ranging fusion for accurating state of the art robot localization 027
Hamed Bastani and Hamid Mirmohammad-Sadeghi
3. Basic Extended Kalman Filter – Simultaneous Localisation and Mapping 039
Oduetse Matsebe, Molaletsa Namoshe and Nkgatho Tlale
4. Model based Kalman Filter Mobile Robot Self-Localization 059
Edouard Ivanjko, Andreja Kitanov and Ivan Petrović
5. Global Localization based on a Rejection Differential Evolution Filter 091
M.L. Muñoz, L. Moreno, D. Blanco and S. Garrido
6. Reliable Localization Systems including GNSS Bias Correction 119
Pierre Delmas, Christophe Debain, Roland Chapuis and Cédric Tessier
7. Evaluation of aligning methods for landmark-based maps in visual SLAM 133
Mónica Ballesta, Óscar Reinoso, Arturo Gil, Luis Payá and Miguel Juliá
8. Key Elements for Motion Planning Algorithms 151
Antonio Benitez, Ignacio Huitzil, Daniel Vallejo, Jorge de la Calleja and Ma. Auxilio Medina
9. Optimum Biped Trajectory Planning for Humanoid Robot Navigation in Unseen Environment 175
Hanafiah Yussof and Masahiro Ohka
10. Multi-Robot Cooperative Sensing and Localization 207
Kai-Tai Song, Chi-Yi Tsai and Cheng-Hsien Chiu Huang
11. Filtering Algorithm for Reliable Localization of Mobile Robot in Multi-Sensor Environment 227
Yong-Shik Kim, Jae Hoon Lee, Bong Keun Kim, Hyun Min Do and Akihisa Ohya
12. Consistent Map Building Based on Sensor Fusion for Indoor Service Robot 239
Ren C. Luo and Chun C. Lai
13. Mobile Robot Localization and Map Building for a Nonholonomic Mobile Robot 253
Songmin Jia and Akira Yasuda
14.
Robust Global Urban Localization Based on Road Maps 267
Jose Guivant, Mark Whitty and Alicia Robledo
15. Object Localization using Stereo Vision 285
Sai Krishna Vuppala
16. Vision based Systems for Localization in Service Robots 309
Paulraj M.P. and Hema C.R.
17. Floor texture visual servo using multiple cameras for mobile robot localization 323
Takeshi Matsumoto, David Powers and Nasser Asgari
18. Omni-directional vision sensor for mobile robot navigation based on particle filter 349
Zuoliang Cao, Yanbin Li and Shenghua Ye
19. Visual Odometry and mapping for underwater Autonomous Vehicles 365
Silvia Botelho, Gabriel Oliveira, Paulo Drews, Mônica Figueiredo and Celina Haffele
20. A Daisy-Chaining Visual Servoing Approach with Applications in Tracking, Localization, and Mapping 383
S.S. Mehta, W.E. Dixon, G. Hu and N. Gans
21. Visual Based Localization of a Legged Robot with a topological representation 409
Francisco Martín, Vicente Matellán, José María Cañas and Carlos Agüero
22. Mobile Robot Positioning Based on ZigBee Wireless Sensor Networks and Vision Sensor 423
Wang Hongbo
23. A WSNs-based Approach and System for Mobile Robot Navigation 445
Huawei Liang, Tao Mei and Max Q.-H. Meng
24. Real-Time Wireless Location and Tracking System with Motion Pattern Detection 467
Pedro Abreu, Vasco Vinhas, Pedro Mendes, Luís Paulo Reis and Júlio Garganta
25. Sound Localization for Robot Navigation 493
Jie Huang
26. Objects Localization and Differentiation Using Ultrasonic Sensors 521
Bogdan Kreczmer
27. Heading Measurements for Indoor Mobile Robots with Minimized Drift using a MEMS Gyroscope 545
Sung Kyung Hong and Young-sun Ryuh
28. Methods for Wheel Slip and Sinkage Estimation in Mobile Robots 561
Giulio Reina

Visual Localisation of quadruped walking robots

Renato Samperio and Huosheng Hu
School of Computer Science and Electronic Engineering, University of Essex, United Kingdom

1. Introduction

Recently, several solutions to the robot localisation problem have been proposed in the scientific community.
In this chapter we present the localisation of a visually guided quadruped walking robot in a dynamic environment. We investigate the quality of robot localisation and landmark detection, in which robots perform in the RoboCup competition (Kitano et al., 1997). The first part presents an algorithm to determine any entity of interest in a global reference frame, where the robot needs to locate landmarks within its surroundings. In the second part, a fast and hybrid localisation method is deployed to explore the characteristics of the proposed localisation method, such as processing time, convergence and accuracy.

In general, visual localisation of legged robots can be achieved by using artificial and natural landmarks. The landmark modelling problem has already been investigated by using predefined landmark matching and real-time landmark learning strategies, as in (Samperio & Hu, 2010). Also, by following the pre-attentive and attentive stages of the previously presented work of (Quoc et al., 2004), we propose a landmark model for describing the environment with "interesting" features, as in (Luke et al., 2005), and measure the quality of landmark description and selection over time, as shown in (Watman et al., 2004). Specifically, we implement visual detection and matching phases of a pre-defined landmark model, as in (Hayet et al., 2002) and (Sung et al., 1999), and recognise landmarks in real time in the frequency domain (Maosen et al., 2005), where they are addressed by a similarity evaluation process presented in (Yoon & Kweon, 2001). Furthermore, we have evaluated the performance of the proposed localisation methods: Fuzzy-Markov (FM), Extended Kalman Filter (EKF) and a combined Fuzzy-Markov-Kalman (FM-EKF) solution, as in (Samperio et al., 2007) and (Hatice et al., 2006). The proposed hybrid method integrates a probabilistic multi-hypothesis and grid-based maps with EKF-based techniques.
As presented in (Kristensen & Jensfelt, 2003) and (Gutmann et al., 1998), some methodologies require extensive computation but offer a reliable positioning system. By incorporating a Markov-based method into the localisation process (Gutmann, 2002), EKF positioning can converge faster even with inaccurate grid observations. Also, Markov-based techniques and grid-based maps (Fox et al., 1998) are classic approaches to robot localisation, but their computational cost is huge when the grid size in a map is small (Duckett & Nehmzow, 2000) and (Jensfelt et al., 2000) for a high-resolution solution. Even though the problem has been partially solved by the Monte Carlo Localization (MCL) technique (Fox et al., 1999), (Thrun et al., 2000) and (Thrun et al., 2001), it still has difficulties handling environmental changes (Tanaka et al., 2004). In general, the EKF maintains a continuous estimation of the robot position, but cannot manage multi-hypothesis estimations, as in (Baltzakis & Trahanias, 2002). Moreover, traditional EKF localisation techniques are computationally efficient, but they may also fail when quadruped walking robots present poor odometry, in situations such as leg slippage and loss of balance. We therefore propose a hybrid localisation method to eliminate inconsistencies and fuse inaccurate odometry data with low-cost visual data. The proposed FM-EKF localisation algorithm makes use of a fuzzy cell to speed up convergence and to maintain an efficient localisation. Subsequently, the performance of the proposed method was tested in three experimental comparisons: (i) simple movements along the pitch, (ii) localising while playing combined behaviours, and (iii) kidnapping the robot.

The rest of the chapter is organised as follows. Following the brief introduction of Section 1, Section 2 describes the proposed observer module as an updating process of a Bayesian localisation method.
Also, robot motion and measurement models are presented in this section for real-time landmark detection. Section 3 investigates the proposed localisation methods. Section 4 presents the system architecture. Some experimental results on landmark modelling and localisation are presented in Section 5 to show the feasibility and performance of the proposed localisation methods. Finally, a brief conclusion is given in Section 6.

2. Observer design

This section describes a robot observer model for processing motion and measurement phases. These phases, also known as Predict and Update, involve state estimation in a time sequence for robot localisation. Additionally, at each phase the state is updated by sensing information and modelling noise for each projected state.

2.1 Motion Model

The state-space process requires a state vector as the processing and positioning unit in an observer design problem. The state vector contains three variables for robot localisation, i.e., 2D position (x, y) and orientation (θ). Additionally, the prediction phase incorporates noise from robot odometry, as presented below:

  x⁻_t = x_{t−1} + (u^lin_t + w^lin_t) cos θ_{t−1} − (u^lat_t + w^lat_t) sin θ_{t−1}
  y⁻_t = y_{t−1} + (u^lin_t + w^lin_t) sin θ_{t−1} + (u^lat_t + w^lat_t) cos θ_{t−1}
  θ⁻_t = θ_{t−1} + u^rot_t + w^rot_t        (4.9)

where u^lat, u^lin and u^rot are the lateral, linear and rotational components of odometry, and w^lat, w^lin and w^rot are the corresponding odometry error components. Also, t−1 refers to the previous time step and t to the current step. In general, state estimation is a weighted combination of noisy states using both prior and posterior estimations.
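As a concrete illustration, the odometry prediction of Equation (4.9) can be sketched in a few lines of Python. This is a minimal sketch, not the chapter's implementation; the function and variable names are our own:

```python
import math

def predict_state(x, y, theta, u_lin, u_lat, u_rot,
                  w_lin=0.0, w_lat=0.0, w_rot=0.0):
    """Odometry-based prediction (cf. Eq. 4.9): propagate the pose
    (x, y, theta) by linear, lateral and rotational odometry,
    each corrupted by its noise term w."""
    lin = u_lin + w_lin
    lat = u_lat + w_lat
    x_new = x + lin * math.cos(theta) - lat * math.sin(theta)
    y_new = y + lin * math.sin(theta) + lat * math.cos(theta)
    theta_new = theta + u_rot + w_rot
    return x_new, y_new, theta_new

# Example: robot at the origin facing +x, moving 100 mm forward
pose = predict_state(0.0, 0.0, 0.0, 100.0, 0.0, 0.0)
```

Setting the w terms to zero gives the deterministic prediction; in a filter they would be sampled or folded into the covariance as in Section 2.1.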
Likewise, assuming that v is the measurement noise and w is the process noise, the expected values of the measurement noise covariance matrix R and the process noise covariance matrix Q are expressed separately as:

  R = E[v vᵀ]   (4.10)
  Q = E[w wᵀ]   (4.11)

The measurement noise in matrix R represents sensor errors, and the Q matrix is also a confidence indicator for the current prediction, which increases or decreases state uncertainty. An odometry motion model, u_{t−1}, is adopted as shown in Figure 1. Moreover, Table 1 describes all variables for the three-dimensional (linear, lateral and rotational) odometry information, where (x̂, ŷ) are the estimated values and (x, y) the actual states.

Fig. 1. The proposed motion model for the Aibo walking robot

According to the empirical experimental data, the odometry system presents a deviation of 30% on average, as shown in Equation (4.12). Therefore, by applying the transformation matrix W_t from Equation (4.13), noise can be addressed as robot uncertainty, where θ denotes the robot heading:

  Q_t = diag( (0.3 u^lin_t)²,  (0.3 u^lat_t)²,  (0.3 u^rot_t + √((u^lin_t)² + (u^lat_t)²) / 500)² )   (4.12)

  W_t = f_w = [ cos θ_{t−1}   −sin θ_{t−1}   0
                sin θ_{t−1}    cos θ_{t−1}   0
                0              0             1 ]   (4.13)

2.2 Measurement Model

In order to relate the robot to its surroundings, we make use of a landmark representation. The landmarks in the robot environment require a notational representation of a measured vector f^i_t for each i-th feature, as described in the following equation:

  f(z_t) = {f¹_t, f²_t, ...} = { (r¹_t, b¹_t, s¹_t), (r²_t, b²_t, s²_t), ... }   (4.14)

where landmarks are detected by an onboard active camera in terms of range r^i_t, bearing b^i_t and a signature s^i_t for identifying each landmark. A landmark measurement model is defined by a feature-based map m, which consists of a list of signatures and coordinate locations as follows:

  m = {m₁, m₂, ...} = {(m_{1,x}, m_{1,y}), (m_{2,x}, m_{2,y}), ...}   (4.15)

[...]
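The 30% odometry deviation of Equation (4.12) translates into a diagonal process-noise covariance that can be built directly from the odometry components. The sketch below follows the equation term by term; the function name is our own:

```python
import math

def process_noise_cov(u_lin, u_lat, u_rot):
    """Diagonal process noise covariance Q_t (cf. Eq. 4.12):
    30% of each odometry component, with an extra
    translation-dependent term on the rotational variance."""
    q_lin = (0.3 * u_lin) ** 2
    q_lat = (0.3 * u_lat) ** 2
    q_rot = (0.3 * u_rot + math.hypot(u_lin, u_lat) / 500.0) ** 2
    return [[q_lin, 0.0,   0.0],
            [0.0,   q_lat, 0.0],
            [0.0,   0.0,   q_rot]]
```

Note that even a pure translation (u_rot = 0) contributes heading uncertainty through the √((u^lin)² + (u^lat)²)/500 term, which matches the observation that walking induces rotational drift.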
[...] absolute differences visual landmark, IEEE International Conference on Robotics and Automation, Vol. 5, pp. 4827–4832.
Yoon, K. J. & Kweon, I. S. (2001). Artificial landmark tracking based on the color histogram, IEEE/RSJ Intl. Conference on Intelligent Robots and Systems, Vol. 4, pp. 1918–1923.

2. Ranging fusion for accurating state of the art robot localization [...]

[...] In Experiment 1, the robot follows a trajectory in order to localise and generate a set of visible ground truth points along the pitch. Figures 9 and 10 present the errors in the X and Y axes obtained by comparing the EKF, FM and FM-EKF methods with the ground truth. These graphs show a similar performance between the EKF and FM-EKF methods for the errors in X and Y. [...]

Fig. 9. [...]

Table 2. Mean and standard deviation for experiments 1, 2 and 3.

[...] trajectory with 20 cm of bandwidth radius, and whilst the robot's head is continuously panning. The robot is initially positioned 500 mm away from a coloured beacon situated at 0 degrees from the robot's mass centre. The robot is also located in between three defined and one undefined landmarks. Results obtained from dynamic landmark modelling [...]

[...] for state estimation in robotics. This approach makes use of a state vector for robot positioning, which is related to environment perception and robot odometry. For instance, the robot position is adapted using a vector s_t which contains (x, y) as the robot position and θ as orientation:

  s = [ x_robot, y_robot, θ_robot ]ᵀ   (4.17)

As a Bayesian filtering method, the EKF implements the Predict and Update steps described [...]
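The Predict and Update cycle mentioned above can be illustrated with the simplest possible Kalman filter, a one-dimensional one. This is only a didactic sketch of the two steps, not the chapter's EKF (which uses the full state vector of Eq. 4.17 and nonlinear models); all names are our own:

```python
def kalman_predict(x, p, u, q):
    """Predict: propagate the state estimate x and its variance p
    with control input u and process noise variance q."""
    return x + u, p + q

def kalman_update(x, p, z, r):
    """Update: fuse measurement z (noise variance r) into the
    estimate via the Kalman gain."""
    k = p / (p + r)            # Kalman gain
    x_new = x + k * (z - x)    # corrected estimate
    p_new = (1.0 - k) * p      # reduced uncertainty
    return x_new, p_new

# One predict/update cycle on a 1-D position estimate
x, p = kalman_predict(0.0, 1.0, u=1.0, q=0.5)
x, p = kalman_update(x, p, z=2.0, r=1.5)
```

The same structure carries over to the EKF: prediction uses the motion model of Equation (4.9) with covariance Q_t, and the update uses the landmark measurement model of Equation (4.14).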
[...] Press, pp. 826–831.
Fox, D., Burgard, W., Dellaert, F. & Thrun, S. (1999). Monte Carlo localization: Efficient position estimation for mobile robots, AAAI/IAAI, pp. 343–349. URL: http://citeseer.ist.psu.edu/36645.html
Fox, D., Burgard, W. & Thrun, S. (1998). Markov localization for reliable robot navigation and people detection, Sensor Based Intelligent Robots, pp. 1–20.
Gutmann, [...]

[...] swapping stage is modified for evaluating similarity values, and it also eliminates 10% of the landmarks.

Algorithm 1. Process for creating a landmark model from a list of observed features.
Require: Map of observed features {E}
Require: A collection of landmark models {L}
{The following operations generate the landmark model information.}
1: for all {E}_i ⊆ {E} do
2: [...]

[...] used for localization: parameters such as refresh rate, cost (power consumption, computation, price, infrastructure installation burden, ...), mobility and adaptivity to the environment (indoor, outdoor, space robotics, underwater vehicles, ...) and so on. From the mobile robotics perspective, localization and mapping are deeply correlated. There is the whole field of Simultaneous Localization and Mapping (SLAM), [...] containing multiple robots as the network nodes can be considered a feature providing higher positioning accuracy compared to the traditional methods. Generally speaking, such improvements are scalable and also provide robustness against inevitable disturbing environmental features and measurement noise.

2. State of the Art Robot Localization

Localization in [...]
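Algorithm 1 is only partially reproduced above. A hedged sketch of its overall idea, grouping observed features into candidate landmark models, scoring each model by a similarity measure, and pruning the weakest 10%, might look as follows. The data layout, names, and the externally supplied `similarity` function are all our own assumptions, not the chapter's code:

```python
def build_landmark_models(observed_features, similarity):
    """Group observed features into landmark models, score each model
    by the average similarity of its features, and drop the weakest
    ~10% (cf. the 10% elimination step mentioned in the text).
    `similarity` is an assumed callable returning a value in [0, 1]."""
    models = []
    for features in observed_features:      # one group per candidate landmark
        score = sum(similarity(f) for f in features) / max(len(features), 1)
        models.append({"features": features, "score": score})
    # Keep the best-scoring 90% of candidate landmark models
    models.sort(key=lambda m: m["score"], reverse=True)
    keep = max(1, int(len(models) * 0.9))
    return models[:keep]
```

In the chapter's setting the similarity score would come from the frequency-domain evaluation process cited from (Yoon & Kweon, 2001); here it is left abstract.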
Table 1. Variables of the three-dimensional (linear, lateral and rotational) odometry information. [...]

[...] the first and second experiments over a duration of five minutes. Initially, newly acquired landmarks are located at 500 mm and at an angle of 3π/4 rad from the robot's centre. Results are presented in Table ??. The tables for Experiments 1 and 2 illustrate the mean (x̃) and standard deviation (σ) of each entity's distance, angle and errors from the robot's perspective. In the third experiment, landmark [...]