
Robot Localization and Map Building, Part 11


RobotLocalizationandMapBuilding344 Fig. 18. Differential motion model, the ICC is constrained to be along one axis By using the differential motion model, the localization algorithm assumes perfect behaviour by the locomotive components, as well as the lack of external forces that may force the robot to an unnatural motion. Although the effects of these types of motion will be small, it is worth while to consider an enhancement on the approach to handle the special cases. The core of the hybrid motion model is the differential motion model, which is able to perform reasonably well on its own. The case which trips the differential motion model is when the motion vector of the camera does not fall within the allowed range, where the range is determined by the velocity of the two motions and expected motion in the perpendicular direction, as shown in Fig. 19. Using this plot, it is possible to determine the expected perpendicular motion for a pair of motion vectors. Although it is possible to derive this dynamically, a look-up table can also be used, since the number of entries is limited by the maximum magnitude of the motion vectors. When checking to see if the motion is within the bounds of the constrained motion or not, a small tolerance amount must be included to account for the sub-pixel motions and the latency introduced by the smoothing algorithm. Fig. 19. Wheel motion against lateral motion, lateral motion is maximised when the robot spins at the center. If the motion falls outside of the expected range, an alternate algorithm must be used to calculate the robot. The first approach which has been tested uses the exact motion model, while the second approach assumes that the error was caused by a bump of some sort and models the motion as a pure translation based on the average of the two motion vectors. The result showed that using the exact algorithm produced an error value of 2.98% and 16.42% for translation and rotation respectively, while treating the motion as translation produced a result of 2.44% and 5.32% error for translation and rotation respectively. A final comparison is made in table 7 with the three motion models using the windowed smoothing algorithm with a quadratic weight function of size 4 and factor of 4. All of the motion models used the two cameras on the side configuration, which provided the best performance out of the other arrangements. The results show a significant improvement over the naïve approaches and appear to be a promising localization algorithm. It is worth noting that the difference between the forward and reverse measurements is mostly due to incorrect scales being used, thus can be reduced by the calibration process in determining the height of the camera and the viewing angle. Motion model Translation (%) Rotation (%) Forward Backward Clockwise Anti-clockwise Exact 2.88 1.21 88.12 9.2 Differential 2.88 0.78 9.72 0.97 Hybrid motion 1.95 0.41 2.32 0.53 Table 7. Performance of hybrid motion model, the optimal algorithms for the other components are used for this test 6. Practical considerations As briefly mentioned earlier, it is important to provide a precise calibration information in terms of the camera configuration, such that any scaling errors can be eliminated. 
6. Practical considerations

As briefly mentioned earlier, it is important to provide precise calibration information for the camera configuration, such that any scaling errors can be eliminated. This becomes a crucial issue when the robot is required to interact more with the environment, as the coordinate systems used to represent the poses of objects within the environment determine the actions the robot must perform. Another practical consideration is the type of environment the robot will be operating in. The current system is designed to operate only indoors, due to the physical configuration of the wheels, so the performance of the proposed localization algorithm was tested on a wider range of indoor surfaces. A collection of surface types was found and tested on, as summarised in Table 8. The results are quite promising, except for some of the very dark and glossy surfaces, whose similarity in appearance makes it difficult for the localization algorithm to correctly track the features, introducing errors. The final test examined the effect of error accumulation over a longer period. Although the error rates show reasonable performance, the actual motion, the proposed algorithm, and the standard wheel-encoder-based dead reckoning approach are compared for a traversal of the lab environment, as shown in Fig. 20. Since the algorithm derives the pose changes from actual observations of movement, it provides a far better model of the robot motion, as it bypasses many inaccuracies in modelling the locomotive characteristics. One major advantage of this approach is the level of accuracy maintained after a change in orientation, which is the primary cause of long-term misalignment; the sketch below shows how such per-frame pose changes accumulate into a global pose.
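The long-traversal comparison rests on integrating per-frame motion observations into a global pose. A minimal sketch of such an integration step, assuming metre/radian units and hypothetical increment values, might look as follows.

import math

def integrate_pose(pose, forward, dtheta):
    """Fold one frame's observed motion into a global (x, y, heading)
    pose. Applying the translation at the mid-point heading keeps the
    straight-segment approximation of the arc reasonable. Sketch only."""
    x, y, theta = pose
    mid = theta + dtheta / 2.0
    return (x + forward * math.cos(mid),
            y + forward * math.sin(mid),
            theta + dtheta)

# Hypothetical per-frame increments from the tracker:
pose = (0.0, 0.0, 0.0)
for forward, dtheta in [(0.020, 0.000), (0.020, 0.010), (0.019, 0.012)]:
    pose = integrate_pose(pose, forward, dtheta)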
Surface      Translation (%)         Rotation (%)
             Forward    Backward     Clockwise    Anti-clockwise
Vinyl        1.87       0.29         2.83         0.58
Table        2.37       0.52         3.08         0.92
Timber       8.21       1.07         7.18         3.84
Rubber       17.3       0.28         18.71        6.17
Tile         1.94       0.2          2.96         0.86
Brick        1.7        0.42         3.21         1.6
Concrete     2.44       0.27         2.69         0.76

Table 8. Performance on different surfaces; all but the dark textures performed well. (The original table also shows a sample image of each surface, not reproduced here.)

Fig. 20. Long traversal; the path was 26.5 m in length.
7. Summary

A local localization algorithm for mobile robots has been proposed, based on the idea of using multiple off-the-shelf webcams to perform ground texture tracking. The localization module was developed on a custom-built robot and tested in real indoor environments, with dramatic improvement over encoder-based dead reckoning approaches. To take advantage of the constraints provided by the system and the type of environment the robot is exposed to, various characteristics of the camera were configured and adjusted to reduce the complexity of the tracking task. Two constraints are required for the proposed approach to work:
- the elevation of the camera above the ground remains constant, and
- the features being tracked can only translate, not rotate, between frames.

Due to the processing requirements, only two filters are actively used: the lens warp removal filter and the block removal filter. After exploring several scoring algorithms for selecting the feature, a simple algorithm based on the standard deviation of a 16 by 16 pixel square was adopted. To improve the processing time for finding the feature, a prediction is made of where the feature is located, followed by a spiral search sequence to quickly find the best candidate, which has led to an approximately 30% speed-up. By accounting for some of the sub-pixel motion through interpolation around the best candidate, the precision of the tracking increased roughly six-fold. To distinguish between translation and rotation of the robot, a second tracker was introduced to form the two-cameras-on-the-side configuration. The two motion vectors were smoothed using a sliding window of size 4 with a quadratic weight decay function to better synchronise the two data sources. A hybrid motion model was introduced to handle two types of motion: regular motion based on the locomotive constraints, and irregular motion caused by bumps and sudden slippage. By switching between the two, the performance of the algorithm showed some improvement, even though erroneous tracking was already quite infrequent. The proposed localization algorithm has been tested on various surface types commonly found in indoor environments, with less than 1% error on both translation and rotation. It was found that the algorithm did not operate as well on very dark surfaces with highly repetitive or indistinguishable texture patterns. As long as the constraints can be maintained, the approach allows for immediate and precise localization with low-cost hardware at a reasonably small processing cost. The two sketches below illustrate the smoothing and spiral search steps just summarised.
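First, a minimal sketch of the windowed smoothing, assuming a quadratic weight that grows towards the newest sample; the excerpt does not define the exact weight form or the "factor of 4", so this is one plausible reading rather than the authors' definition.

from collections import deque

class WindowedSmoother:
    """Sliding-window smoother with quadratic weight decay, window
    size 4 as in the chapter; the weighting below is an assumption."""

    def __init__(self, size=4):
        self.window = deque(maxlen=size)
        # Quadratic weights; the newest sample receives the largest one.
        self.weights = [(i + 1) ** 2 for i in range(size)]

    def update(self, value):
        self.window.append(value)
        w = self.weights[-len(self.window):]
        return sum(wi * vi for wi, vi in zip(w, self.window)) / sum(w)

# One smoother per camera helps synchronise the two motion streams:
left_smooth, right_smooth = WindowedSmoother(), WindowedSmoother()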
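Second, a sketch of the predicted-position spiral search using the standard-deviation patch score; the search radius, the early-exit threshold, and the use of the same score for candidate checking are hypothetical, since the excerpt gives no numbers for these.

import numpy as np

def patch_std(img, x, y, size=16):
    """Texture score of a size-by-size patch: plain standard deviation,
    the measure the chapter settles on (bounds checks omitted)."""
    return float(img[y:y + size, x:x + size].std())

def spiral_offsets(radius):
    """Yield (dx, dy) offsets in an outward square spiral, so the
    search can stop as soon as a good enough candidate is found."""
    yield (0, 0)
    for r in range(1, radius + 1):
        for dx in range(-r, r + 1):          # top and bottom rows
            for dy in (-r, r):
                yield (dx, dy)
        for dy in range(-r + 1, r):          # left and right columns
            for dx in (-r, r):
                yield (dx, dy)

def find_feature(img, predicted, radius=8, good_enough=25.0):
    """Spiral search around the motion-predicted location; sketch only."""
    px, py = predicted
    best_score, best_xy = -1.0, predicted
    for dx, dy in spiral_offsets(radius):
        score = patch_std(img, px + dx, py + dy)
        if score > best_score:
            best_score, best_xy = score, (px + dx, py + dy)
        if score >= good_enough:
            break
    return best_xy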
Omni-directional vision sensor for mobile robot navigation based on particle filter

Zuoliang Cao, Yanbin Li and Shenghua Ye
Tianjin University of Technology, P.R. China

1. Introduction

Automatic Guided Vehicles (AGVs) provide automated material movement for a variety of industries, including the automobile, chemicals/plastics, hospital, newspaper, commercial print, paper, food and beverage, pharmaceutical, warehouse and distribution centre, and manufacturing industries. They can reduce labour and material costs while improving safety and reducing product and equipment damage. An AGV consists of one or more computer-controlled, wheel-based load carriers (normally battery powered) that run on the plant floor (or, if outdoors, on a paved area) without the need for an onboard operator or driver. AGVs have defined paths or areas within which or over which they can go. Navigation is achieved by any one of several means, including following a path defined by buried inductive wires or surface-mounted magnetic or optical strips, or alternatively by way of visual guidance (Crisan & Doucet, 2002). This chapter describes a total navigation solution for mobile robots. It enables a mobile robot to efficiently localize itself and navigate in a large man-made environment, which can be indoor, outdoor or a combination of both; for instance, the inside of a house, an entire university campus or even a small city lie within the possibilities. Traditionally, sensors other than cameras have been used for robot navigation, such as GPS and laser scanners. Because GPS needs a direct line of sight to the satellites, it cannot be used indoors or in narrow city-centre streets, i.e. the very conditions we foresee in our application. Time-of-flight laser scanners are widely applicable, but are expensive and voluminous, even when the scanning field is restricted to a horizontal plane; the latter only yields a poor world representation, with the risk of not detecting essential obstacles such as table tops (Belviken & Acklam, 2001). That is why we aim at a vision-only solution to navigation.
Vision is, in comparison with these other sensors, much more informative. Moreover, cameras are quite compact and increasingly cheap. We observe also that many biological species, in particular migratory birds, use mainly their visual sensors for navigation. We chose an omni-directional camera as the visual sensor because of its wide field of view and thus the rich content of the images acquired with it. Besides, we added a few artificial markers to the environment for navigation. In a word, we present a novel visual navigation system for the AGV. With an omni-directional camera as sensor and a DSP as processor, this system is able to remove the distortion introduced across the whole image by the fish-eye lens. It can recognize and track man-made landmarks robustly in a complex natural environment, and then localize itself using the landmarks at each moment. The localization information is sent to a computer inside the robot and enables fast and simple path planning towards a specified goal. We developed a real-time visual servo technique to steer the system along the computed path.

The whole chapter falls into seven sections. Section I introduces the various navigation strategies widely used for mobile robots and discusses their advantages and disadvantages; it then proposes a vision-only solution to navigation, with an omni-directional camera as the visual sensor. This visual navigation strategy can guide the mobile robot along a planned path in a structured environment. Section II illuminates the framework of the whole system, including the targeted application of this research, an automated guided vehicle (AGV). The two-colour sequence landmarks were devised to locate and navigate the AGV. Besides, a visual image processing system based on a DSP has been developed; all of the related distortion correction, tracking and localization algorithms run in this built-in image processing system. This compact on-board unit can be easily integrated into a variety of mobile devices and offers low power consumption, good modularity and mobility. Section III lists various fish-eye lens distortion models and sets forth a method of distortion correction. The fish-eye lens involved satisfies the isometric projection model, and the optical imaging centre and a distortion parameter of the visual sensor need to be determined in order to realize distortion correction. Thus the classical calibration for common lens parameters is applied to the fish-eye lens, and those parameters are worked out. Section IV discusses the novel real-time visual tracking with a particle filter, which yields an efficient localization of the robot, with focus on man-made landmarks; it is the key technology that makes the whole system work. The tracking algorithm uses the result of object recognition to validate the output of the particle filter, which improves the robustness of tracking in complex environments. Section V puts stress on the localization and navigation algorithm, using the coordinates of the landmarks provided by the particle filter. The location and the orientation of the AGV are worked out by coordinate transformation in the three-dimensional environment reconstructed from the two-colour sequence landmarks. Moreover, a PID control strategy runs in the built-in computer of the AGV for navigation. Section VI presents the actual effect of the mobile robot navigation: the experimental environment and the experimental steps are introduced in detail, and six pictures of experimental results are shown and discussed. Section VII summarizes the work done in this research. A beacon tracker based on a particle filter is implemented in the built-in image processing system; real-time distortion correction and tracking algorithms are performed in the system, and the AGV is located and navigated using the tracking results of the landmarks from the beacon tracker.
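This excerpt does not include Section IV's implementation details. As a rough illustration of the kind of beacon tracker the outline describes, a particle filter whose output is validated by object recognition, here is a minimal bootstrap-filter step in Python; the likelihood function, noise level and resampling threshold are all assumptions, not the authors' algorithm.

import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, likelihood, motion_std=5.0):
    """One bootstrap-filter iteration for tracking a beacon's image
    position. particles is an (N, 2) array of candidate (x, y) pixel
    locations; likelihood(p) scores how beacon-like the image looks at
    p (e.g. a colour-histogram match)."""
    n = len(particles)
    # 1. Predict: diffuse the particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # 2. Update: reweight by the measurement likelihood and normalise.
    weights = weights * np.array([likelihood(p) for p in particles])
    weights = weights / (weights.sum() + 1e-12)
    # 3. Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    # Weighted mean as the beacon estimate; the chapter additionally
    # validates this output with object recognition and re-initialises
    # the tracker when validation fails.
    estimate = weights @ particles
    return particles, weights, estimate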
2. System framework

Our mobile robot platform is shown in Fig. 1. With our method, the only additional hardware required is a fish-eye lens camera and an embedded hardware module. The fish-eye lens is fixed on the top of the vehicle to obtain omni-directional vision, and the embedded system based on a DSP takes charge of distortion rectification, target recognition, target tracking and localization.

Fig. 1. Mobile robot platform

2.1 Mobile robot

Usually, a mobile robot is composed of the body, the battery and charging system, drives, steering, a precision parking device, motion controllers, communication devices, a transfer system, a visual navigation subsystem and so on. The body includes the frame and the corresponding mechanical structures such as the reduction gearbox, motors and wheels; it is the fundamental part of the AGV. There are three working modes for an AGV: (1) Automatic mode: when the AGV is set to automatic operation, the operator enters the appropriate command according to the planned path, and the AGV starts to work unmanned. (2) Semi-automatic mode: the operator can directly assist the AGV to complete its work through the buttons on the AGV. (3) Manual mode: the operator can also use a remote control to move the AGV to the desired location manually. The mobile robot platform of this chapter is a tracked AGV, and it consists of an embedded Industrial Personal Computer (IPC), a motion control system, multiple infrared and ultrasound sensors, a network communication system and so on. The IPC uses an industrial-grade embedded motherboard, including a low-power, high-performance Pentium-M 1.8 GHz CPU, a 160 GB SATA HDD, 2 GB of DDR400 memory, six independent RS232 serial ports and eight separate USB 2.0 interfaces. Moreover, a four-channel real-time image acquisition card can be fitted on this motherboard. The specific hardware modules of the mobile robot are shown in Fig. 2.
Fig. 2. Hardware modules of the mobile robot platform

2.2 Fish-eye lens

The project requires high-quality visual image information, so the camera features colour, a planar array, large resolution and a CCD light-sensitive device. We adopt the imported Japanese ultra-wide-angle fish-eye lens Fujinon FE185C046HA-1 together with the analog colour CCD camera Watec221S, as shown in Fig. 3.

Fig. 3. Fish-eye lens camera

The performance parameters of the fish-eye lens are shown in Table 1. The CCD size is 1/2 inch, PAL standard, and the resolution (horizontal) is 50 lines (Y/C 480 lines). Its effective pixel count is 440K, the minimum illumination is 0.1 lux, and the lens mount method is CS installation, with an operating voltage of DC +12 V.

Focus (mm)                     1.4
Aperture range                 F1.4-F16
CCD size                       1/2"
Minimum object distance (m)    0.1
BFL (mm)                       9.70
Interface                      C
Weight (g)                     150

Table 1. The performance parameters of the fish-eye lens

2.3 Embedded hardware platform

The vast majority of image processing systems currently used by AGVs are based on the traditional PC or a high-performance IPC. Although the PC-based architecture is simple, technically mature and widely applied, these image processing systems are redundant in terms of both resource allocation and volume; besides, they have poor flexibility, heavy weight and high power consumption, which is not suitable for a mobile vehicle system.

Fig. 4. System diagram of the hardware platform

The embedded system, by contrast, as a highly integrated application platform, offers great practicality, low cost, small size, easy expansion and low power consumption. Therefore, drawing on the current successful application of DSP and FPGA chips in multimedia processing, and considering the characteristics of the vehicle image processing system, such as heavy computation, hard real-time requirements and limited resources, we propose building an embedded hardware image processor based on an FPGA + DSP architecture. Target recognition, tracking and localization are achieved on this hardware platform. The system diagram of the hardware platform is shown in Fig. 4. We use Altera's Cyclone series FPGA EP2C20 and TI's DaVinci DSP TMS320DM642 to build the embedded hardware image processor. Firstly, the input analog video signal goes [...]
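The preview breaks off mid-sentence here. As a closing illustration of the distortion correction outlined for Section III, here is a hedged sketch of undistorting a single image point under the isometric (equidistant) projection model the chapter attributes to the lens; the function and parameter names are assumptions, not the chapter's code.

import math

def undistort_point(xd, yd, cx, cy, f):
    """Map a fish-eye image point to the position a pinhole camera
    would have produced, assuming the equidistant model r = f * theta.
    (cx, cy) is the calibrated optical centre and f the distortion
    parameter, both obtained by the calibration step of Section III.
    Valid only for theta < pi/2, so the outermost rim of a 185-degree
    lens cannot be remapped this way. A real implementation would
    resample the whole image, not single points."""
    dx, dy = xd - cx, yd - cy
    rd = math.hypot(dx, dy)
    if rd < 1e-9:
        return (xd, yd)
    theta = rd / f             # incident angle recovered from the radius
    ru = f * math.tan(theta)   # radius under ideal perspective projection
    return (cx + dx * ru / rd, cy + dy * ru / rd)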
