Robot Localization and Map Building, Part 1


DOCUMENT INFORMATION

Basic information

Number of pages: 35
Size: 2.34 MB

Content

Robot Localization and Map Building
Edited by Hanafiah Yussof
In-Tech, intechweb.org

Published by In-Teh, Olajnica 19/2, 32000 Vukovar, Croatia. Abstracting and non-profit use of the material is permitted with credit to the source. Statements and opinions expressed in the chapters are those of the individual contributors and not necessarily those of the editors or publisher. No responsibility is accepted for the accuracy of information contained in the published articles. The publisher assumes no responsibility or liability for any damage or injury to persons or property arising out of the use of any materials, instructions, methods or ideas contained inside. After this work has been published by In-Teh, authors have the right to republish it, in whole or in part, in any publication of which they are an author or editor, and to make other personal use of the work.

© 2010 In-teh, www.intechweb.org. Additional copies can be obtained from: publication@intechweb.org. First published March 2010. Printed in India. Technical Editor: Sonja Mujacic. Cover designed by Dino Smrekar.

Robot Localization and Map Building, Edited by Hanafiah Yussof. p. cm. ISBN 978-953-7619-83-1

Preface

Navigation of mobile platforms is a broad topic, covering a large spectrum of different technologies and applications. As one of the important technologies of the 21st century, autonomous navigation is currently used in a broad range of settings, from basic mobile platforms operating on land, such as wheeled robots, legged robots, automated guided vehicles (AGV) and unmanned ground vehicles (UGV), to newer underwater and airborne applications such as underwater robots, autonomous underwater vehicles (AUV), unmanned maritime vehicles (UMV), flying robots and unmanned aerial vehicles (UAV). Localization and mapping are the essence of successful navigation in mobile platform technology.
Localization is a fundamental task for achieving high levels of autonomy in robot navigation and robustness in vehicle positioning. Robot localization and mapping are commonly related to cartography, combining science, technique and computation to build a trajectory map, so that reality can be modelled in ways that communicate spatial information effectively. The goal is for an autonomous robot to be able to construct (or use) a map or floor plan and to localize itself in it. This technology enables a robot platform to analyze its motion and build some kind of map, so that the robot's locomotion is traceable for humans and future motion trajectory generation in the robot control system is eased. At present, we have robust methods for self-localization and mapping within environments that are static, structured, and of limited size. Localization and mapping within unstructured, dynamic, or large-scale environments remain largely an open research problem. Localization and mapping in outdoor and indoor environments are challenging tasks in autonomous navigation technology. The famous Global Positioning System (GPS), based on satellite technology, may be the best choice for localization and mapping in outdoor environments. Since this technology is not applicable indoors, the problem of indoor navigation is rather complex. Nevertheless, the introduction of the Simultaneous Localization and Mapping (SLAM) technique has become the key enabling technology for mobile robot navigation in indoor environments. SLAM addresses the problem of acquiring a spatial map of a mobile robot environment while simultaneously localizing the robot relative to this model. The solution methods for the SLAM problem, which are mainly introduced in this book, consist of three basic SLAM approaches. The first is known as extended Kalman filter (EKF) SLAM. The second uses sparse nonlinear optimization methods based on graphical representation.
The nal method is using nonparametric statistical ltering techniques known as particle lters. Nowadays, the application of SLAM has been expended to outdoor environment, for use in outdoor’s robots and autonomous vehicles and aircrafts. Several interesting works related to this issue are presented in this book. The recent rapid VI progress in sensors and fusion technology has also benets the mobile platforms performing navigation in term of improving environment recognition quality and mapping accuracy. As one of important element in robot localization and map building, this book presents interesting reports related to sensing fusion and network for optimizing environment recognition in autonomous navigation. This book describes comprehensive introduction, theories and applications related to localization, positioning and map building in mobile robot and autonomous vehicle platforms. It is organized in twenty seven chapters. Each chapter is rich with different degrees of details and approaches, supported by unique and actual resources that make it possible for readers to explore and learn the up to date knowledge in robot navigation technology. Understanding the theory and principles described in this book requires a multidisciplinary background of robotics, nonlinear system, sensor network, network engineering, computer science, physics, etc. The book at rst explores SLAM problems through extended Kalman lters, sparse nonlinear graphical representation and particle lters methods. Next, fundamental theory of motion planning and map building are presented to provide useful platform for applying SLAM methods in real mobile systems. It is then followed by the application of high-end sensor network and fusion technology that gives useful inputs for realizing autonomous navigation in both indoor and outdoor environments. Finally, some interesting results of map building and tracking can be found in 2D, 2.5D and 3D models. 
The actual motion of robots and vehicles, when the proposed localization and positioning methods are deployed to the system, can also be observed together with tracking maps and trajectory analysis. Since SLAM techniques mostly deal with static environments, this book provides a good reference for future understanding of the interaction of moving and non-moving objects in SLAM, which still remains an open research issue in autonomous navigation technology.

Hanafiah Yussof
Nagoya University, Japan
Universiti Teknologi MARA, Malaysia

Contents

Preface V
1. Visual Localisation of quadruped walking robots 001
   Renato Samperio and Huosheng Hu
2. Ranging fusion for accurating state of the art robot localization 027
   Hamed Bastani and Hamid Mirmohammad-Sadeghi
3. Basic Extended Kalman Filter – Simultaneous Localisation and Mapping 039
   Oduetse Matsebe, Molaletsa Namoshe and Nkgatho Tlale
4. Model based Kalman Filter Mobile Robot Self-Localization 059
   Edouard Ivanjko, Andreja Kitanov and Ivan Petrović
5. Global Localization based on a Rejection Differential Evolution Filter 091
   M.L. Muñoz, L. Moreno, D. Blanco and S. Garrido
6. Reliable Localization Systems including GNSS Bias Correction 119
   Pierre Delmas, Christophe Debain, Roland Chapuis and Cédric Tessier
7. Evaluation of aligning methods for landmark-based maps in visual SLAM 133
   Mónica Ballesta, Óscar Reinoso, Arturo Gil, Luis Payá and Miguel Juliá
8. Key Elements for Motion Planning Algorithms 151
   Antonio Benitez, Ignacio Huitzil, Daniel Vallejo, Jorge de la Calleja and Ma. Auxilio Medina
9. Optimum Biped Trajectory Planning for Humanoid Robot Navigation in Unseen Environment 175
   Hanafiah Yussof and Masahiro Ohka
10. Multi-Robot Cooperative Sensing and Localization 207
    Kai-Tai Song, Chi-Yi Tsai and Cheng-Hsien Chiu Huang
11. Filtering Algorithm for Reliable Localization of Mobile Robot in Multi-Sensor Environment 227
    Yong-Shik Kim, Jae Hoon Lee, Bong Keun Kim, Hyun Min Do and Akihisa Ohya
12. Consistent Map Building Based on Sensor Fusion for Indoor Service Robot 239
    Ren C. Luo and Chun C. Lai
13. Mobile Robot Localization and Map Building for a Nonholonomic Mobile Robot 253
    Songmin Jia and Akira Yasuda
14. Robust Global Urban Localization Based on Road Maps 267
    Jose Guivant, Mark Whitty and Alicia Robledo
15. Object Localization using Stereo Vision 285
    Sai Krishna Vuppala
16. Vision based Systems for Localization in Service Robots 309
    Paulraj M.P. and Hema C.R.
17. Floor texture visual servo using multiple cameras for mobile robot localization 323
    Takeshi Matsumoto, David Powers and Nasser Asgari
18. Omni-directional vision sensor for mobile robot navigation based on particle filter 349
    Zuoliang Cao, Yanbin Li and Shenghua Ye
19. Visual Odometry and mapping for underwater Autonomous Vehicles 365
    Silvia Botelho, Gabriel Oliveira, Paulo Drews, Mônica Figueiredo and Celina Haffele
20. A Daisy-Chaining Visual Servoing Approach with Applications in Tracking, Localization, and Mapping 383
    S.S. Mehta, W.E. Dixon, G. Hu and N. Gans
21. Visual Based Localization of a Legged Robot with a topological representation 409
    Francisco Martín, Vicente Matellán, José María Cañas and Carlos Agüero
22. Mobile Robot Positioning Based on ZigBee Wireless Sensor Networks and Vision Sensor 423
    Wang Hongbo
23. A WSNs-based Approach and System for Mobile Robot Navigation 445
    Huawei Liang, Tao Mei and Max Q.-H. Meng
24. Real-Time Wireless Location and Tracking System with Motion Pattern Detection 467
    Pedro Abreu, Vasco Vinhas, Pedro Mendes, Luís Paulo Reis and Júlio Garganta
25. Sound Localization for Robot Navigation 493
    Jie Huang
26. Objects Localization and Differentiation Using Ultrasonic Sensors 521
    Bogdan Kreczmer
27. Heading Measurements for Indoor Mobile Robots with Minimized Drift using a MEMS Gyroscope 545
    Sung Kyung Hong and Young-sun Ryuh
28. Methods for Wheel Slip and Sinkage Estimation in Mobile Robots 561
GiulioReina VisualLocalisationofquadrupedwalkingrobots 1 VisualLocalisationofquadrupedwalkingrobots RenatoSamperioandHuoshengHu 0 Visual Localisation of quadruped walking robots Renato Samperio and Huosheng Hu School of Computer Science and Electronic Engineering, University of Essex United Kingdom 1. Introduction Recently, several solutions to the robot localisation problem have been proposed in the sci- entific community. In this chapter we present a localisation of a visual guided quadruped walking robot in a dynamic environment. We investigate the quality of robot localisation and landmark detection, in which robots perform the RoboCup competition (Kitano et al., 1997). The first part presents an algorithm to determine any entity of interest in a global reference frame, where the robot needs to locate landmarks within its surroundings. In the second part, a fast and hybrid localisation method is deployed to explore the characteristics of the proposed localisation method such as processing time, convergence and accuracy. In general, visual localisation of legged robots can be achieved by using artificial and natural landmarks. The landmark modelling problem has been already investigated by using prede- fined landmark matching and real-time landmark learning strategies as in (Samperio & Hu, 2010). Also, by following the pre-attentive and attentive stages of previously presented work of (Quoc et al., 2004), we propose a landmark model for describing the environment with "in- teresting" features as in (Luke et al., 2005), and to measure the quality of landmark description and selection over time as shown in (Watman et al., 2004). 
Specifically, we implement visual detection and matching phases of a pre-defined landmark model, as in (Hayet et al., 2002) and (Sung et al., 1999), and real-time recognised landmarks in the frequency domain (Maosen et al., 2005), where they are addressed by a similarity evaluation process presented in (Yoon & Kweon, 2001). Furthermore, we have evaluated the performance of the proposed localisation methods: Fuzzy-Markov (FM), Extended Kalman Filters (EKF) and a combined Fuzzy-Markov-Kalman (FM-EKF) solution, as in (Samperio et al., 2007) (Hatice et al., 2006). The proposed hybrid method integrates probabilistic multi-hypothesis and grid-based maps with EKF-based techniques. As presented in (Kristensen & Jensfelt, 2003) and (Gutmann et al., 1998), some methodologies require extensive computation but offer a reliable positioning system. By incorporating a Markov-based method into the localisation process (Gutmann, 2002), EKF positioning can converge faster with an inaccurate grid observation. Also, Markov-based techniques and grid-based maps (Fox et al., 1998) are classic approaches to robot localisation, but their computational cost is huge when the grid size in a map is small (Duckett & Nehmzow, 2000) and (Jensfelt et al., 2000) for a high-resolution solution. Even though the problem has been partially solved by the Monte Carlo Localisation (MCL) technique (Fox et al., 1999), (Thrun et al., 2000) and (Thrun et al., 2001), it still has difficulties handling environmental changes (Tanaka et al., 2004). In general, EKF maintains a continuous estimation of robot position, but cannot manage multi-hypothesis estimations, as in (Baltzakis & Trahanias, 2002). Moreover, traditional EKF localisation techniques are computationally efficient, but they may also fail with quadruped walking robots, which present poor odometry in situations such as leg slippage and loss of balance.
Furthermore, we propose a hybrid localisation method to eliminate inconsistencies and fuse inaccurate odometry data with inexpensive visual data. The proposed FM-EKF localisation algorithm makes use of a fuzzy cell to speed up convergence and to maintain an efficient localisation. Subsequently, the performance of the proposed method was tested in three experimental comparisons: (i) simple movements along the pitch, (ii) localising and playing combined behaviours, and (iii) kidnapping the robot. The rest of the chapter is organised as follows. Following the brief introduction of Section 1, Section 2 describes the proposed observer module as an updating process of a Bayesian localisation method. Also, robot motion and measurement models are presented in this section for real-time landmark detection. Section 3 investigates the proposed localisation methods. Section 4 presents the system architecture. Some experimental results on landmark modelling and localisation are presented in Section 5 to show the feasibility and performance of the proposed localisation methods. Finally, a brief conclusion is given in Section 6.

2. Observer design

This section describes a robot observer model for processing motion and measurement phases. These phases, also known as Predict and Update, involve state estimation in a time sequence for robot localisation. Additionally, at each phase the state is updated with sensing information, and noise is modelled for each projected state.

2.1 Motion Model

The state-space process requires a state vector as the processing and positioning unit in an observer design problem. The state vector contains three variables for robot localisation, i.e., 2D position (x, y) and orientation (θ).
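Before looking at the noise terms, the overall Predict/Update cycle over this three-variable state can be sketched in code. This is our own illustrative skeleton, not code from the chapter; in particular, the fixed blending gain stands in for the Kalman gain developed later.

```python
import numpy as np

def predict(state, odometry):
    """Predict phase: project the pose [x, y, theta] forward with
    odometry components (u_lin, u_lat, u_rot)."""
    x, y, theta = state
    u_lin, u_lat, u_rot = odometry
    return np.array([
        x + u_lin * np.cos(theta) - u_lat * np.sin(theta),
        y + u_lin * np.sin(theta) + u_lat * np.cos(theta),
        theta + u_rot,
    ])

def update(state, observed_pose):
    """Placeholder Update phase: nudge the state toward a sensed pose.
    A fixed gain replaces the Kalman gain for illustration only."""
    gain = 0.5
    return state + gain * (np.asarray(observed_pose) - state)

pose = np.zeros(3)
pose = predict(pose, (1.0, 0.0, 0.0))   # move one unit forward
pose = update(pose, (1.1, 0.05, 0.0))   # blend in a (noisy) observation
```

In a full observer the gain would come from the filter equations of the following subsections; the skeleton only shows how the two phases alternate over time.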
Additionally, the prediction phase incorporates noise from robot odometry, as presented below:

$$
\begin{bmatrix} x_t^- \\ y_t^- \\ \theta_t^- \end{bmatrix} =
\begin{bmatrix} x_{t-1} \\ y_{t-1} \\ \theta_{t-1} \end{bmatrix} +
\begin{bmatrix}
(u_t^{lin} + w_t^{lin})\cos\theta_{t-1} - (u_t^{lat} + w_t^{lat})\sin\theta_{t-1} \\
(u_t^{lin} + w_t^{lin})\sin\theta_{t-1} + (u_t^{lat} + w_t^{lat})\cos\theta_{t-1} \\
u_t^{rot} + w_t^{rot}
\end{bmatrix}
\tag{4.9}
$$

where $u^{lat}$, $u^{lin}$ and $u^{rot}$ are the lateral, linear and rotational components of odometry, and $w^{lat}$, $w^{lin}$ and $w^{rot}$ are the corresponding odometry error components. Also, $t-1$ refers to the previous time step and $t$ to the current step. In general, state estimation is a weighted combination of noisy states using both a priori and a posteriori estimations. Likewise, assuming that $v$ is the measurement noise and $w$ is the process noise, the expected values of the measurement noise covariance matrix $R$ and the process noise covariance matrix $Q$ are expressed separately as in the following equations:

$$R = E[v v^T] \tag{4.10}$$

$$Q = E[w w^T] \tag{4.11}$$

The measurement noise in matrix $R$ represents sensor errors, and the $Q$ matrix is also a confidence indicator for the current prediction, which increases or decreases state uncertainty. An odometry motion model $u_{t-1}$ is adopted as shown in Figure 1. Moreover, Table 1 describes all variables for three-dimensional (linear, lateral and rotational) odometry information, where $(\hat{x}, \hat{y})$ are the estimated values and $(x, y)$ the actual states.

Fig. 1. The proposed motion model for the Aibo walking robot

According to empirical experimental data, the odometry system presents a deviation of 30% on average, as shown in Equation (4.12). Therefore, by applying the transformation matrix $W_t$ from Equation (4.13), noise can be addressed as robot uncertainty, where $\theta$ points along the robot heading.
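Equation (4.9) can be written directly as code. In this sketch the odometry error terms w are passed in explicitly; the chapter defines them only as odometry errors, so sampling them from a particular distribution would be an additional assumption.

```python
import numpy as np

def motion_model(state, u, w=(0.0, 0.0, 0.0)):
    """Predicted pose per Equation (4.9).
    state = (x, y, theta) at time t-1,
    u = (u_lin, u_lat, u_rot) odometry components,
    w = (w_lin, w_lat, w_rot) odometry error components."""
    x, y, theta = state
    u_lin, u_lat, u_rot = u
    w_lin, w_lat, w_rot = w
    return np.array([
        x + (u_lin + w_lin) * np.cos(theta) - (u_lat + w_lat) * np.sin(theta),
        y + (u_lin + w_lin) * np.sin(theta) + (u_lat + w_lat) * np.cos(theta),
        theta + (u_rot + w_rot),
    ])

# Noise-free check: driving straight ahead from the origin.
print(motion_model((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # -> [1. 0. 0.]
```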
$$
Q_t = \begin{bmatrix}
(0.3\,u_t^{lin})^2 & 0 & 0 \\
0 & (0.3\,u_t^{lat})^2 & 0 \\
0 & 0 & \left(0.3\,u_t^{rot} + \dfrac{\sqrt{(u_t^{lin})^2 + (u_t^{lat})^2}}{500}\right)^{\!2}
\end{bmatrix}
\tag{4.12}
$$

$$
W_t = f_w = \begin{bmatrix}
\cos\theta_{t-1} & -\sin\theta_{t-1} & 0 \\
\sin\theta_{t-1} & \cos\theta_{t-1} & 0 \\
0 & 0 & 1
\end{bmatrix}
\tag{4.13}
$$

2.2 Measurement Model

In order to relate the robot to its surroundings, we make use of a landmark representation. The landmarks in the robot environment require a notational representation of a measured vector $f_t^i$ for each $i$-th feature, as described in the following equation:

$$
f(z_t) = \{f_t^1, f_t^2, \ldots\} = \left\{
\begin{bmatrix} r_t^1 \\ b_t^1 \\ s_t^1 \end{bmatrix},
\begin{bmatrix} r_t^2 \\ b_t^2 \\ s_t^2 \end{bmatrix}, \ldots \right\}
\tag{4.14}
$$

where landmarks are detected by an onboard active camera in terms of range $r_t^i$, bearing $b_t^i$ and a signature $s_t^i$ for identifying each landmark. A landmark measurement model is defined by a feature-based map $m$, which consists of a list of signatures and coordinate locations as follows:

$$
m = \{m_1, m_2, \ldots\} = \{(m_{1,x}, m_{1,y}), (m_{2,x}, m_{2,y}), \ldots\}
\tag{4.15}
$$

[...] the robot is moving along a circular [...]

Fig. 7. Landmark model recognition for Experiments 1, 2 and 3

Experiment 1          Mean      StdDev
Distance              489.02    293.14
Angle                 146.89    9.33
Error in Distance     256.46    133.44
Error in Angle        2.37      8.91

Experiment 2          Mean      StdDev
Distance              394.02    117.32
Angle                 48.63     2.91
Error in Distance     86.91     73.58 [...]
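Equations (4.12) and (4.13) from the motion model above translate to code as follows; the 30% deviation factor and the 1/500 cross-coupling term are taken directly from Equation (4.12), while the function names are our own.

```python
import numpy as np

def odometry_noise_cov(u_lin, u_lat, u_rot):
    """Q_t per Equation (4.12): odometry noise covariance built from the
    empirical 30% average deviation of the odometry system."""
    rot_dev = 0.3 * u_rot + np.sqrt(u_lin**2 + u_lat**2) / 500.0
    return np.diag([(0.3 * u_lin)**2, (0.3 * u_lat)**2, rot_dev**2])

def heading_transform(theta):
    """W_t per Equation (4.13): rotates odometry noise into the robot
    heading frame."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

Q = odometry_noise_cov(1.0, 0.0, 0.1)  # 1 unit forward, slight turn
W = heading_transform(np.pi / 2)       # robot currently facing +y
```

Transforming the diagonal noise through W Q Wᵀ expresses the robot-frame uncertainty in world coordinates, as used in the covariance prediction later in the chapter.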
[...] absolute differences visual landmark, IEEE International Conference on Robotics and Automation, Vol. 5, pp. 4827–4832.

Yoon, K. J. & Kweon, I. S. (2001). Artificial landmark tracking based on the color histogram, IEEE/RSJ Intl. Conference on Intelligent Robots and Systems, Vol. 4, pp. 1918–1923.

2. Ranging fusion for accurating state of the art robot localization

[...]
   (j) {Update modelled values}
9: Increase {L}_k frequency {Increase modelled frequency}
10: else {If modelled information does not exist}
11: Create {L}_{k+1}(j) {Create model}
12: Increase {L}_{k+1} frequency {Increase modelled frequency}
13: end if
14: if time > 1 min then {After one minute}
15: MergeList({L}) {Select best models}
16: end if
17: end for
18: end for

[...] candidates. The similarity values [...]

[...] Conference on Intelligent Robots and Systems (IROS), pp. 1205–1210.

Duckett, T. & Nehmzow, U. (2000). Performance comparison of landmark recognition systems for navigating mobile robots, Proc. 17th National Conf. on Artificial Intelligence (AAAI-2000), AAAI Press/The MIT Press, pp. 826–831.

Fox, D., Burgard, W., Dellaert, F. & Thrun, S. (1999). Monte Carlo localization: Efficient [...]

[...] predicted measurement $\hat{z}_t^i$ and is calculated as follows [...] Then $H_t^i$ describes changes in the robot position as follows:

$$
H_t^i = \frac{\partial h^i(m_i, s_{t-1})}{\partial s_{t-1}} =
\begin{bmatrix}
-\dfrac{m_{i,x}-s_{t-1,x}}{\sqrt{(m_{i,x}-s_{t-1,x})^2+(m_{i,y}-s_{t-1,y})^2}} &
-\dfrac{m_{i,y}-s_{t-1,y}}{\sqrt{(m_{i,x}-s_{t-1,x})^2+(m_{i,y}-s_{t-1,y})^2}} & 0 \\
\dfrac{m_{i,y}-s_{t-1,y}}{(m_{i,x}-s_{t-1,x})^2+(m_{i,y}-s_{t-1,y})^2} &
-\dfrac{m_{i,x}-s_{t-1,x}}{(m_{i,x}-s_{t-1,x})^2+(m_{i,y}-s_{t-1,y})^2} & -1 \\
0 & 0 & 0
\end{bmatrix}
\tag{4.27}
$$

where $R_t^i$ [...]
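The range-bearing feature model of Equation (4.14) and the Jacobian of Equation (4.27) can be sketched as below. This is our own implementation of the standard range-bearing measurement; the signature component is omitted since it does not depend on the pose.

```python
import numpy as np

def predicted_measurement(landmark, pose):
    """Expected range and bearing to landmark (m_x, m_y) from pose (x, y, theta)."""
    mx, my = landmark
    x, y, theta = pose
    dx, dy = mx - x, my - y
    q = dx**2 + dy**2
    return np.array([np.sqrt(q), np.arctan2(dy, dx) - theta])

def measurement_jacobian(landmark, pose):
    """H_t per Equation (4.27): partials of (range, bearing, signature)
    with respect to the pose (x, y, theta)."""
    mx, my = landmark
    x, y, _ = pose
    dx, dy = mx - x, my - y
    q = dx**2 + dy**2
    sq = np.sqrt(q)
    return np.array([[-dx / sq, -dy / sq,  0.0],
                     [ dy / q,  -dx / q,  -1.0],
                     [ 0.0,      0.0,      0.0]])

# Landmark at (3, 4) seen from the origin: range 5, bearing atan2(4, 3).
z_hat = predicted_measurement((3.0, 4.0), (0.0, 0.0, 0.0))
```

The innovation used in the EKF update is the difference between the actual feature measurement of Equation (4.14) and this predicted measurement, linearised through H_t.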
[...] Experiment 1, the robot follows a trajectory in order to localise and generate a set of visible ground truth points along the pitch. Figures 9 and 10 present the error in the X and Y axes, comparing the EKF, FM and FM-EKF methods against ground truth. These graphs show a similar performance between the EKF and FM-EKF methods for the error in X and Y [...]

Fig. 9. Error in X axis during a simple walk along the pitch in Experiment 1
Fig. 10. Error in Y axis during a simple walk along the pitch in Experiment 1
Fig. 11. Error in θ axis during a simple walk along the pitch in Experiment 1
Fig. 12. Time performance for localisation methods during a walk along the pitch in Exp. 1
Fig. 13. Robot trajectories for EKF, FM, FM-EKF and the overhead camera in Exp. 1

[...] stage modified for evaluating similarity values, and it also eliminates 10% of the landmark candidates.

Algorithm 1. Process for creating a landmark model from a list of observed features.
Require: Map of observed features {E}
Require: A collection of landmark models {L}
{The following operations generate the landmark model information.}
1: for all {E}_i ⊆ {E} do
2: Evaluate ColourCombination({ [...]

[...] $\bar{P}_t = A_t P_{t-1} A_t^T + W_t Q_{t-1} W_t^T$, where $A_t P_{t-1} A_t^T$ is a progression of $P_{t-1}$ along a new movement, and $A_t$ is defined as follows:

$$
A_t = f_s = \begin{bmatrix}
1 & 0 & -u_t^{lat}\cos\theta_{t-1} - u_t^{lin}\sin\theta_{t-1} \\
0 & 1 & u_t^{lin}\cos\theta_{t-1} - u_t^{lat}\sin\theta_{t-1} \\
0 & 0 & 1
\end{bmatrix}
\tag{4.19}
$$

and $W_t Q_{t-1} W_t^T$ represents odometry noise, where $W_t$ is the Jacobian motion state approximation and $Q_t$ is a covariance matrix as follows:

$$Q_t = E[w_t w_t^T] \tag{4.20}$$

The Sony AIBO robot may not be able to obtain a landmark observation [...]
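Putting the pieces together, the covariance prediction A_t P_{t-1} A_t^T + W_t Q_{t-1} W_t^T with A_t from Equation (4.19) can be sketched as follows; the numeric values at the bottom are illustrative, not taken from the chapter's experiments.

```python
import numpy as np

def motion_jacobian(u_lin, u_lat, theta_prev):
    """A_t per Equation (4.19): Jacobian of the motion model with
    respect to the state (x, y, theta)."""
    return np.array([
        [1.0, 0.0, -u_lat * np.cos(theta_prev) - u_lin * np.sin(theta_prev)],
        [0.0, 1.0,  u_lin * np.cos(theta_prev) - u_lat * np.sin(theta_prev)],
        [0.0, 0.0,  1.0],
    ])

def predict_covariance(P, A, W, Q):
    """P_bar = A P A^T + W Q W^T: propagate pose uncertainty through motion
    and inject heading-frame odometry noise."""
    return A @ P @ A.T + W @ Q @ W.T

# Illustrative values: small initial uncertainty, one unit forward at theta = 0.
P = np.eye(3) * 0.01
A = motion_jacobian(1.0, 0.0, 0.0)
W = np.eye(3)                       # heading transform at theta = 0
Q = np.diag([0.09, 0.0, 0.001])
P_bar = predict_covariance(P, A, W, Q)
```

Note how the off-diagonal terms of P_bar couple heading uncertainty into position uncertainty: a small error in theta grows into a lateral position error as the robot moves.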
239 RenC.Luo and ChunC.Lai VIII 13 . Mobile Robot Localization and Map Building foraNonholonomicMobile Robot 253 SongminJia and AkiraYasuda 14 . RobustGlobalUrban Localization BasedonRoadMaps. similarity values and it also eliminates 10 % of the landmark Robot Localization and Map Building6 Algorithm 1 Process for creating a landmark model from a list of observed features. Require: Map of observed
