Vision Systems: Applications, Part 10

... r is independent of α. Formulas are derived for computing the PDF (Probability Density Function) when an initial observation is made. The updating of the PDF when further observations arrive is explained in Section 5.2-A.

5.1 Method description

Let p(r, α) be the PDF of the landmark position in polar coordinates when only one observation has been made. We characterize p(r, α) when r and α are independent. Let β denote the measured landmark bearing, and assume that the error range for the bearing is ±ε. The landmark position is then contained in the vision cone formed by the two rays rooted at the observation point with bearings β − ε and β + ε (see Figure 11).

Figure 11. The vision cone is rooted at the observation point. The surface of the hashed area is approximately $r\,dr\,d\alpha$ for small $d\alpha$ and $dr$

The surface of the hashed area in Figure 11, for small $dr$ and $d\alpha$, can be computed as

$$\frac{d\alpha}{2\pi}\left[\pi(r+dr)^2 - \pi r^2\right] = \frac{1}{2}\left(2r\,dr + dr^2\right)d\alpha \approx r\,dr\,d\alpha$$

Because the probability of the landmark being in the vision cone is 1, we have

$$\int_{R_{\min}}^{R_{\max}}\int_{\beta-\varepsilon}^{\beta+\varepsilon} p(r,\alpha)\,r\,d\alpha\,dr = 1 \qquad (7)$$

In Equation (7), $R_{\max}$ and $R_{\min}$ are the bounds of the vision range interval. We define $F(R)$ as the probability of the landmark being in the area $\{(r,\alpha)\mid r\in[R_{\min},R],\ \alpha\in[\beta-\varepsilon,\beta+\varepsilon]\}$; $F(R)$ can be represented as

$$F(R) = \int_{R_{\min}}^{R}\int_{\beta-\varepsilon}^{\beta+\varepsilon} p(r,\alpha)\,r\,d\alpha\,dr \qquad (8)$$

We define $\Psi(R,\Delta)$ as the probability of the landmark being in the dotted area in Figure 11. Since $\Psi(R,\Delta) = F(R+\Delta) - F(R)$, we have

$$\Psi(R,\Delta) = \int_{R}^{R+\Delta}\int_{\beta-\varepsilon}^{\beta+\varepsilon} p(r,\alpha)\,r\,d\alpha\,dr \qquad (9)$$

If the range r and the angle α are independent, then $\Psi(R,\Delta)$ is constant with respect to R; that is, $\partial\Psi(R,\Delta)/\partial R = 0$. From Equation (9), we derive

$$\frac{\partial F(R+\Delta)}{\partial R} = \frac{\partial F(R)}{\partial R} \qquad (10)$$

Because of the independence of α and r, p(r, α) can be factored as

$$p(r,\alpha) = f(r)\,g(\alpha) \qquad (11)$$

Without loss of generality, we impose $\int g(\alpha)\,d\alpha = 1$. After factoring, Equation (8) becomes $F(R) = \int_{R_{\min}}^{R} f(r)\,r\,dr$. By the fundamental theorem of calculus,

$$\frac{\partial F(R)}{\partial R} = f(R)\,R \qquad (12)$$

From Equations (10) and (12), we deduce that $f(R+\Delta)\,(R+\Delta) = f(R)\,R$. Therefore

$$R\left[\frac{f(R+\Delta) - f(R)}{\Delta}\right] + f(R+\Delta) = 0$$

Letting Δ go to zero, we obtain $R\,f'(R) + f(R) = 0$. The equality $f'(R)/f(R) = -1/R$ can be rewritten as $(\log f(R))' = -(\log R)'$. Integrating both sides, we obtain

$$\log(f(R)) = -\log(R) + c = \log\left(\frac{1}{R}\right) + c = \log\left(\frac{e^{c}}{R}\right)$$

where c is a constant. Letting $\xi = e^{c}$, we obtain $f(R) = \xi/R$. From Equations (7) and (11), ξ can be calculated: $\xi = 1/(R_{\max} - R_{\min})$, and thus

$$f(r) = \frac{1}{(R_{\max} - R_{\min})\,r}$$

Therefore, p(r, α) can be rewritten as

$$p(r,\alpha) = \frac{1}{(R_{\max} - R_{\min})\,r}\times g(\alpha) \qquad (13)$$

If we use a Gaussian function for g(α) with mean β and standard deviation σ, the PDF p(r, α) becomes Equation (14); Figure 12 shows this PDF.

$$p(r,\alpha) = \frac{1}{(R_{\max} - R_{\min})\,r}\times\frac{1}{\sigma\sqrt{2\pi}}\exp\left[-\frac{(\alpha-\beta)^2}{2\sigma^2}\right] \qquad (14)$$

Figure 12. The PDF of the landmark position following Equation (14)

5.2 Utilization of the PDF for bearing-only SLAM

We illustrate the application of the PDF in our bearing-only SLAM system. Section 5.2-A describes how the PDF can be updated with a second observation. In Section 5.2-B, we present experimental results on a real robot.
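As a quick numerical check of Equation (14), the sketch below (our illustration, not the authors' code) evaluates the PDF on a polar grid and verifies that the probability mass over the vision cone is approximately 1, as required by Equation (7). The vision range matches the experiment in Section 5.2-B (3 to 120 centimetres); the bearing β and the standard deviation σ are arbitrary illustrative values.

```python
import numpy as np

def landmark_pdf(r, alpha, r_min, r_max, beta, sigma):
    """Equation (14): p(r, alpha) = f(r) * g(alpha), with f(r) = 1 / ((r_max - r_min) r)
    and g a Gaussian over the bearing with mean beta and std sigma."""
    f_r = 1.0 / ((r_max - r_min) * r)
    g_a = np.exp(-(alpha - beta) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))
    return f_r * g_a

r = np.linspace(3.0, 120.0, 400)        # vision range [R_min, R_max], cm
alpha = np.linspace(0.3, 0.7, 400)      # bearings around beta = 0.5 rad (illustrative)
R, A = np.meshgrid(r, alpha)
P = landmark_pdf(R, A, r_min=3.0, r_max=120.0, beta=0.5, sigma=0.02)

# Riemann-sum version of Equation (7): mass over the cone should be ~1.
mass = (P * R).sum() * (r[1] - r[0]) * (alpha[1] - alpha[0])
print(f"probability mass over the cone: {mass:.3f}")
```

Note the factor r inside the integral: it is the polar-coordinate area element derived for the hashed region of Figure 11, not part of the PDF itself.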
A. Updating the PDF with a second observation

When a second observation is made after a linear motion, the landmark position falls in the uncertainty region formed by the intersection of the two vision cones rooted at the first and second observation points O1 and O2. We denote by p1 and p2 the PDFs of the landmark position computed from Equation (14) with respect to O1 and O2 respectively. Let p denote the PDF of the landmark position after fusing the sensory readings from O1 and O2. From the work of Stroupe et al. (2000), we have

$$p = \frac{p_1\,p_2}{\int p_1\,p_2}$$

We want to approximate p with a Gaussian distribution q. To compute the parameters of q, we generate a set S of samples according to the PDF p by the Rejection Method (Leydold, 1998). We determine the maximum probability density $p_{\max}$ of p by evaluating $p_1 p_2$ at the intersection of the two bearing rays. The sampling process selects uniformly a sample point s and a random number ν ∈ [0, 1]. If $p(s)/p_{\max} < \nu$, then s is rejected; otherwise s is accepted and added to S. The sampling process is repeated until enough points have been accepted. Figure 13 shows the generated samples in the uncertainty regions of four landmarks. The mean $\bar{x}$ and the covariance matrix C of q are obtained by computing the mean and the covariance matrix of S, as previously done by Smith & Cheeseman (1986) and Stroupe et al. (2000). In Figure 13, the contour plots present the PDFs of the landmark positions.

The estimated PDFs in Figure 13 are expressed in the robot-based frame F_R. Since the robot is moving over time, its frame changes too. Therefore, it is necessary to change coordinate systems so that all the estimates are expressed in the global frame F_L. We use the method introduced in Section 4 to transfer the samples in S from F_R to F_L. After the change of frames, the uncertainties of L1 and L2 are transferred to the other objects. The samples of the other objects are used to approximate the PDFs of the object positions in F_L. Figure 14 shows the distribution of the samples after the change of frames. The contour plots present the PDFs of the object positions in the global frame F_L associated with the points (L1, L2).

Figure 13. The PDFs and the contour plots of four landmarks in the robot-based frame F_R; in this example, the uncertainty region of each landmark is a bounded polygon. The generated samples are distributed in the uncertainty regions

Figure 14. After the change of frames from F_R to F_L, the PDFs and the contour plots of O1, O2 and L3, L4 are presented in the global frame F_L

B. Experimental Results

Our method was evaluated using a Khepera robot equipped with a colour camera (176 × 255 resolution). The Khepera robot has a diameter of 6 centimetres. A KheperaSot robot soccer playing field, 105 × 68 centimetres, was used as the experimental arena (see Figure 15). There were four artificial landmarks in the playing field. The first and second landmarks were placed at the posts of a goal, 30 centimetres apart. The objective of the experiment was to evaluate the accuracy of the method by estimating the positions of the third and fourth landmarks. At each iteration, the robot was placed randomly in the field; it took a panoramic image, moved in a straight line, and then captured a second panoramic image. Landmark bearings were extracted from the panoramic images using colour thresholding.
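The rejection loop of Section 5.2-A can be written compactly; the sketch below is our illustration, not the chapter's code. Here p1 and p2 are assumed to be callables evaluating the two cone PDFs of Equation (14) at a Cartesian point, bounds is an axis-aligned box enclosing the intersection of the two vision cones, and p_max is the density $p_1 p_2$ at the intersection of the two bearing rays, as described above.

```python
import numpy as np

def sample_fused_pdf(p1, p2, bounds, p_max, n_samples=500, rng=None):
    """Rejection sampling of p ~ p1 * p2 (Section 5.2-A), followed by the
    Gaussian approximation q = N(mean, cov) fitted to the accepted set S."""
    if rng is None:
        rng = np.random.default_rng(0)
    (x_lo, x_hi), (y_lo, y_hi) = bounds
    accepted = []
    while len(accepted) < n_samples:
        s = np.array([rng.uniform(x_lo, x_hi), rng.uniform(y_lo, y_hi)])
        nu = rng.uniform(0.0, 1.0)
        # Reject s when p(s) / p_max < nu, exactly as in the text.
        if p1(s) * p2(s) / p_max >= nu:
            accepted.append(s)
    S = np.array(accepted)
    return S.mean(axis=0), np.cov(S.T), S
```

Because candidates are drawn only from a box around the uncertainty region and the density is normalised by p_max, the acceptance rate stays reasonable even when the fused PDF is sharply peaked.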
A total of 40 random movements were performed in this experiment. R_min was set to 3 centimetres, R_max to 120 centimetres, and the vision error ε was ±3 degrees. Figure 16(b) shows the errors of the estimated landmark positions. For L3, the estimated error was reduced from approximately 9 centimetres at the first iteration to less than 1 centimetre at the last iteration. For L4, the error was reduced from 14 centimetres to 2.77 centimetres. The experiment shows that the estimated error of a landmark position is sensitive to its relative distance with respect to L1 and L2.

Figure 15. Experimental setup; the landmarks are the vertical tubes

Figure 16. (a) Diagram of the experimental setup. (b) The errors of the estimated landmark positions

We performed another experiment to test the sensitivity of the landmark position errors to the direction of the robot's moving trajectory. We let the robot move in four different directions with respect to three landmarks. In Figure 17(a), stars denote the landmark positions and arrows denote the moving trajectories. The robot repeated 10 iterations for each trajectory. The errors on L3 for the four trajectories after the tenth iteration were 2.12, 1.17, 1.51, and 13.99 centimetres respectively. The error for the fourth trajectory is large because the robot moves along a line that nearly passes through L3; the vision cones at the first and second observations are therefore almost identical. The estimation of the landmark position is more accurate when the intersection of the two vision cones is small. This is the case for the second trajectory, whose intersection is the smallest among all trajectories.

Figure 17. (a) Trajectories of the robot for the experiment studying the relationship between moving directions and estimated errors of landmark positions. (b) The errors on L3 at each iteration

Although the intersecting areas of L3 for the first and third trajectories are the same, the intersecting areas of L1 and L2 for the first trajectory are much bigger than those for the third trajectory. This is why the estimated error for the third trajectory is smaller than that for the first trajectory.

6. Conclusion

In this chapter, we proposed a vision-based approach to bearing-only SLAM in a 2-dimensional space. We assumed the environment contains several visually distinguishable landmarks. This approach is inspired by techniques used in stereo vision and Structure From Motion. Our landmark initialization method relies solely on the bearing measurements from a single camera; it does not require information from an odometer or a range sensor. All the object positions can be estimated in a landmark-based frame. The trade-off is that the method requires the robot to be able to move in a straight line for a short while to initialize the landmarks. The proposed method is particularly accurate and useful when the robot can guide itself in a straight line by visually locking on static objects. Since the method does not rely on odometry or range information, the induced map is defined up to a scale factor only. In our method, the distance ||L1 − L2|| between two landmarks is taken as the measurement unit of the map. The selection of L1 and L2 is critical for the accuracy of the map.
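As an aside, the convention that L1 and L2 fix the origin, the orientation, and the unit of length of the map can be expressed as a single similarity transform. The sketch below is our own illustration of that convention (the chapter's actual change-of-frames procedure is the one given in Section 4):

```python
import numpy as np

def to_landmark_frame(points, l1, l2):
    """Map 2-D points from the robot frame F_R into the landmark frame F_L,
    where L1 becomes the origin, L2 lies at (1, 0), and ||L1 - L2|| is the
    unit of length (the map is only defined up to this scale)."""
    d = np.asarray(l2, float) - np.asarray(l1, float)
    scale = np.hypot(*d)                  # ||L1 - L2||
    theta = np.arctan2(d[1], d[0])        # orientation of the L1 -> L2 axis
    c, s = np.cos(-theta), np.sin(-theta)
    R = np.array([[c, -s], [s, c]])       # rotation by -theta
    return (np.asarray(points, float) - l1) @ R.T / scale

# L1 maps to (0, 0) and L2 to (1, 0) by construction.
print(to_landmark_frame([[2.0, 1.0], [5.0, 5.0]], l1=[2.0, 1.0], l2=[5.0, 5.0]))
```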
In Section 3.1, the mathematical derivation shows that the estimated error of a landmark position is proportional to the range of the landmark and to the inverse of the relative change in landmark bearings. Choosing L1 and L2 with a larger change in bearings therefore produces a more accurate map of the environment.

In the sensitivity analysis, we showed how the uncertainties of the objects' positions are affected by a change of frames. We determined how an observer attached to a landmark-based frame F_L can deduce the uncertainties in F_L from the uncertainties transmitted by an observer attached to the robot-based frame F_R. Each estimate of the landmark uncertainties requires a pair of observations taken along a straight movement. The simulation in Section 4.3 shows how the uncertainties of the landmark positions are refined when the robot moves along a polygonal line.

With dead reckoning, the error of the estimated robot location increases with time because of accumulated odometric errors. In our method, we set O1 and O2 (the pair of observation points of a straight movement) at [0, 0]' and [1, 0]' in F_R, so there is no dead-reckoning error on O1 and O2 by construction. In practice, the robot's movement may not be perfectly straight; however, the non-straight nature of the trajectory can be compensated for by increasing the size of the confidence interval of the bearing.

The induced map created by our method can be refined with EKF or PF algorithms. With an EKF, the uncertainty region computed from the geometric method can be translated into a Gaussian PDF. With a PF, the weights of the samples can be computed with the formulas derived in Section 5.1; since the samples are drawn from the uncertainty region, the number of samples is minimized. The accuracy of our method was evaluated with simulations and with experiments on a real robot. Experimental results demonstrate the usefulness of this approach for a bearing-only SLAM system. We are currently working on unknown data association when all landmarks are visually identical. In future work, we will deal with the problems of object occlusion and non-planar environments; that is, the system will be extended from a 2-dimensional to a 3-dimensional space.

7. References

Bailey, T. (2003). Constrained Initialisation for Bearing-Only SLAM, Proceedings of IEEE International Conference on Robotics and Automation, pp. 1966-1971, ISSN: 1050-4729
Bailey, T. & Durrant-Whyte, H. (2006). Simultaneous Localization and Mapping (SLAM): Part II, IEEE Robotics and Automation Magazine, pp. 108-117, ISSN: 1070-9932
Costa, A.; Kantor, G. & Choset, H. (2004). Bearing-only Landmark Initialization with Unknown Data Association, Proceedings of IEEE International Conference on Robotics and Automation, pp. 1764-1770
Davison, A. (2003). Real-time simultaneous localization and mapping with a single camera, Proceedings of International Conference on Computer Vision, pp. 1403-1410, Nice, October
Davison, A.; Cid, Y. & Kita, N. (2004). Real-time 3D SLAM with wide-angle vision, Proceedings of IAV2004 - 5th IFAC/EURON Symposium on Intelligent Autonomous Vehicles, Lisboa, Portugal, July
Fox, D.; Burgard, W.; Dellaert, F. & Thrun, S. (1999). Monte Carlo localization: Efficient position estimation for mobile robots, Proceedings of National Conference on Artificial Intelligence, pp. 343-349
Goncalves, L.; Bernardo, E. d.; Benson, D.; Svedman, M.; Karlsson, N.; Ostrovski, J. & Pirjanian, P. (2005). A visual front-end for simultaneous localization and mapping, Proceedings of IEEE International Conference on Robotics and Automation, pp. 44-49
Huang, H.; Maire, F. & Keeratipranon, N. (2005a). A Direct Localization Method Using only the Bearings Extracted from Two Panoramic Views Along a Linear Trajectory, Proceedings of Autonomous Minirobots for Research and Edutainment (AMiRE), pp. 201-206, Fukui, Japan
Huang, H.; Maire, F. & Keeratipranon, N. (2005b). Uncertainty Analysis of a Landmark Initialization Method for Simultaneous Localization and Mapping, Proceedings of Australian Conference on Robotics and Automation, Sydney, Australia
Jensfelt, P.; Kragic, D.; Folkesson, J. & Bjorkman, M. (2006). A Framework for Vision Based Bearing Only 3D SLAM, Proceedings of IEEE International Conference on Robotics and Automation, pp. 1944-1950, ISBN: 0-7803-9505-0, Orlando, FL
Kwok, N. M. & Dissanayake, G. (2004). An Efficient Multiple Hypothesis Filter for Bearing-Only SLAM, Proceedings of IEEE International Conference on Intelligent Robots and Systems, pp. 736-741, ISBN: 0-7803-8463-6
Lemaire, T.; Lacroix, S. & Sola, J. (2005). A practical 3D Bearing-Only SLAM algorithm, Proceedings of IEEE International Conference on Intelligent Robots and Systems, pp. 2757-2762
Leonard, J. J. & Durrant-Whyte, H. F. (1991). Simultaneous localization for an autonomous mobile robot, Proceedings of IEEE Intelligent Robots and Systems, pp. 1442-1447, Osaka, Japan
Levitt, T. S. & Lawton, D. M. (1990). Qualitative navigation for mobile robots, Artificial Intelligence, Vol. 44, No. 3, pp. 305-360
Leydold, J. (1998). A Rejection Technique for Sampling from log-Concave Multivariate Distributions, Modelling and Computer Simulation, Vol. 8, No. 3, pp. 254-280
Menegatti, E.; Zoccarato, M.; Pagello, E. & Ishiguro, H. (2004). Image-based Monte Carlo localisation with omnidirectional images, Robotics and Autonomous Systems, Vol. 48, No. 1, pp. 17-30
Montemerlo, M.; Thrun, S.; Koller, D. & Wegbreit, B. (2003). FastSLAM 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges, Proceedings of International Joint Conferences on Artificial Intelligence, pp. 1151-1156, Morgan Kaufmann, IJCAI
Mouragnon, E.; Lhuillier, M.; Dhome, M.; Dekeyser, F. & Sayd, P. (2006). 3D Reconstruction of complex structures with bundle adjustment: an incremental approach, Proceedings of IEEE International Conference on Robotics and Automation, pp. 3055-3061, Orlando, USA
Murray, D. & Jennings, C. (1998). Stereo vision based mapping and navigation for mobile robots, Proceedings of IEEE International Conference on Robotics and Automation, pp. 1694-1699, New Mexico
Nister, D.; Naroditsky, O. & Bergen, J. (2004). Visual Odometry, Proceedings of IEEE Computer Society Conference, pp. I-652 - I-659, Princeton
Rizzi, A. & Cassinis, R. (2001). A robot self-localization system based on omnidirectional color images, Robotics and Autonomous Systems, Vol. 34, No. 1, pp. 23-38
Sabe, K.; Fukuchi, M.; Gutmann, J.-S.; Ohashi, T.; Kawamoto, K. & Yoshigahara, T. (2004). Obstacle Avoidance and Path Planning for Humanoid Robots using Stereo Vision, Proceedings of International Conference on Robotics and Automation, pp. 592-597, New Orleans
Se, S.; Lowe, D. & Little, J. J. (2002). Mobile Robot Localization and Mapping with Uncertainty Using Scale-Invariant Visual Landmarks, International Journal of Robotics Research, Vol. 21, No. 8, pp. 735-758
Siegwart, R. & Nourbakhsh, I. R. (2004). Introduction to Autonomous Mobile Robots, The MIT Press, Cambridge, Massachusetts
Sim, R.; Elinas, P.; Griffin, M. & Little, J. J. (2005). Vision-based SLAM using the Rao-Blackwellised particle filter, Proceedings of IJCAI Workshop on Reasoning with Uncertainty in Robotics, Edinburgh, Scotland
Smith, R. & Cheeseman, P. (1986). On the Representation and Estimation of Spatial Uncertainty, The International Journal of Robotics Research, Vol. 5, No. 4, pp. 56-68
Sola, J.; Monin, A.; Devy, M. & Lemaire, T. (2005). Undelayed Initialization in Bearing Only SLAM, Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2751-2756
Spero, D. J. (2005). Simultaneous Localisation and Map Building: The Kidnapped Way, Intelligent Robotics Research Centre, Monash University, Melbourne, Australia
Stroupe, A. W.; Martin, M. C. & Balch, T. (2000). Merging Probabilistic Observations for Mobile Distributed Sensing, Robotics Institute, Carnegie Mellon University, Pittsburgh
Usher, K.; Ridley, P. & Corke, P. (2003). Visual Servoing of a Car-like Vehicle - An Application of Omnidirectional Vision, Proceedings of IEEE International Conference on Robotics and Automation, pp. 4288-4293, ISBN: 0-7803-7736-2
Wooden, D. (2006). A Guide to Vision-Based Map Building, IEEE Robotics and Automation Magazine, June 2006, pp. 94-98
Zunino, G. & Christensen, H. I. (2001). Simultaneous localization and mapping in domestic environments, Proceedings of International Conference on Multisensor Fusion and Integration for Intelligent Systems, pp. 67-72

[...] is the NI-PXI-1000B.

Table 1. Characteristics of the NI-PXI-1407:
  Description: Standard monochromatic
  Bus: PXI/CompactPCI
  Video Inputs: 1
  Spatial Resolution: 640 × 480 (RS-170); 768 × 576 (CCIR)
  Pixel depth: 8 bits
  Video input Standard: RS-170, CCIR
  Digital I/O: 1

All the LabVIEW DLLs and Matlab applications run on the NI-PXI-1000B [...]

[...] diagonals

4.2.3 Particle Analysis Technique
A particle is a group of pixels with non-zero values in an image, usually a binary image. Particles can be characterized by measurements related to their attributes, such as position, area and shape (National, 2004). Particle detection consists in applying an erosion process to the original image so that small particles are removed [...]
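The excerpt stops short of the details; a minimal sketch of such erosion-based particle detection, written with SciPy and with an illustrative erosion depth and area threshold (both assumptions, not values from the text), could look like this:

```python
import numpy as np
from scipy import ndimage

def detect_particles(binary_img, erosion_iters=1, min_area=20):
    """Remove small particles by erosion, then label the survivors and
    measure their attributes (position, area), in the spirit of Section 4.2.3."""
    eroded = ndimage.binary_erosion(binary_img, iterations=erosion_iters)
    labels, n = ndimage.label(eroded)
    particles = []
    for i in range(1, n + 1):
        mask = labels == i
        area = int(mask.sum())
        if area >= min_area:
            cy, cx = ndimage.center_of_mass(mask)
            particles.append({"centroid": (cy, cx), "area": area})
    return particles
```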
[...] '05, pp. 145-150, ISBN: 83-7143-266-6, Poznan, Poland, June 2005
Dudek, G. & Jenkin, M. (2000). Computational Principles of Mobile Robotics, Cambridge University Press, ISBN: 0-521-56876-5, U.S.A.
Kelly, R.; Moreno, J. & Campa, R. (2004). Visual servoing of planar robots via velocity fields, Proceedings of the IEEE 43rd Conference on Decision and Control, pp. 4028-4033, ISBN: 0-7803-8682-5, Atlantis, Paradise Island [...] 1427, ISSN: 0020-7179
Moreno, J. & Kelly, R. (2003c). Hierarchical velocity field control for robot manipulators, Proceedings of the IEEE International Conference on Robotics and Automation ICRA'03, Vol. 3, pp. 4374-4379, ISBN: 0-7803-7736-2, Taipei, Taiwan, September 2003
National Instruments (2004). IMAQ Vision Concepts Manual, National Instruments
Santos-Victor, J. & Sandini, G. [...] Computer Vision and Image Understanding: CVIU, Vol. 67, No. 3, (September 1997) 223-238, ISSN: 1077-3142
Seelinger, M.; Yoder, J.-D.; Baumgartner, E. T. & Skaar, S. B. (2002). High-precision visual control of mobile manipulators, IEEE Transactions on Robotics and Automation, Vol. 18, No. 6, (December 2002) 957-965, ISSN: 1042-296X
Skaar, S. B.; Yalda-Mooshabad, I. & Brockman, W. H. (1992). Nonholonomic camera-space manipulation [...] American Control Conference, Vol. 2, pp. 774-779, ISBN: 0-7803-4990-3, San Diego, U.S.A., June 1999
Li, P. Y. & Horowitz, R. (1999). Passive velocity field control of mechanical manipulators, IEEE Transactions on Robotics and Automation, Vol. 15, No. 4, (August 1999) 751-763, ISSN: 1042-296X
Li, P. Y. & Horowitz, R. (2001). Passive velocity field control (PVFC): Part I - Geometry and Robustness, IEEE Transactions on Automatic Control, Vol. 46, No. 9, (September 2001) 1346-1359, ISSN: 0018-9286
Li, P. Y. & Horowitz, R. (2001). Passive velocity field control (PVFC): Part II - Application to contour following, IEEE Transactions on Automatic Control, Vol. 46, No. 9, (September 2001) 1360-1371, ISSN: 0018-9286
Mitchell, T. (1997). Machine Learning, McGraw-Hill Science/Engineering/Math, ISBN: 0-07-042807-7, U.S.A.
Moreno, J. & Kelly, R. (2003a). On manipulator [...] and Soft Computing, ICAISC 2006, Lecture Notes in Computer Science, pp. 722-731, ISBN: 3-540-35748-3, Zakopane, Poland, June 2006
Cervantes, I.; Kelly, R.; Alvarez, J. & Moreno, J. (2002). A robust velocity field control, IEEE Transactions on Control Systems Technology, Vol. 10, No. 6, (November 2002) 888-894, ISSN: 1063-6536
Dixon, W. E.; Galluzo, T.; Hu, G. & Crane, C. (2005). Adaptive velocity field control [...]

[...] these variables was divided into the following membership functions:
• Angle Error (degrees): Small Positive (0-90), Big Positive (90-180), Small Negative (180-270) and Big Negative (270-360)
• Distance to the destination: Far (150 cm - 800 cm), Close (20 cm - 150 cm) and Very Close (0 cm - 40 cm)
• Velocities (left and right wheels): Fast (8.5 cm/s), Medium (5 cm/s), Slow (3.5 cm/s) and Very Slow (0.8 cm/s)

[...] 2004):
• Pseudo-random image sub-sampling: this method yields a more uniform sampling distribution across the image without using a predefined grid. With a uniform grid, information such as the presence of horizontal and vertical borders could be lost; on the other hand, completely random sampling can produce large areas that are poorly sampled or over-sampled [...]
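The excerpt does not spell out the exact sampling scheme; a jittered grid, with one sample drawn at a random offset inside each grid cell, is a common way to realize this compromise between uniform-grid and fully random sampling. A small sketch under that assumption (the cell size is an illustrative choice):

```python
import numpy as np

def jittered_subsample(height, width, cell=16, rng=None):
    """Pseudo-random sub-sampling: one sample per grid cell at a random
    offset inside the cell. Coverage stays roughly uniform (unlike pure
    random sampling) without the aliasing of a fixed grid."""
    if rng is None:
        rng = np.random.default_rng()
    points = []
    for y0 in range(0, height - cell + 1, cell):
        for x0 in range(0, width - cell + 1, cell):
            points.append((y0 + rng.integers(cell), x0 + rng.integers(cell)))
    return np.array(points)

# Example: sample a 480 x 640 image on a 16-pixel jittered grid.
pts = jittered_subsample(480, 640, cell=16)
print(pts.shape)  # (30 * 40, 2) = (1200, 2)
```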
