Computer Vision
Edited by Xiong Zhihui

Published by In-Teh
In-Teh is the Croatian branch of I-Tech Education and Publishing KG, Vienna, Austria.

Abstracting and non-profit use of the material is permitted with credit to the source. Statements and opinions expressed in the chapters are those of the individual contributors and not necessarily those of the editors or publisher. No responsibility is accepted for the accuracy of information contained in the published articles. The publisher assumes no responsibility or liability for any damage or injury to persons or property arising out of the use of any materials, instructions, methods or ideas contained in the book. After this work has been published by In-Teh, authors have the right to republish it, in whole or in part, in any publication of which they are an author or editor, and to make other personal use of the work.

© 2008 In-Teh
www.in-teh.org
Additional copies can be obtained from: publication@ars-journal.com

First published November 2008
Printed in Croatia

A catalogue record for this book is available from the University Library Rijeka under no. 120110068.

Computer Vision, Edited by Xiong Zhihui
p. cm.
ISBN 978-953-7619-21-3
1. Computer Vision, Xiong Zhihui

Preface

Computer vision uses digital computer techniques to extract, characterize, and interpret information in visual images of a three-dimensional world. The primary goal of computer vision is to enable engineering systems to model and manipulate their environment by using visual sensing.

The field of computer vision can be characterized as immature and diverse. Although earlier work exists, it was not until the late 1970s, when computers could manage the processing of large data sets such as images, that a more focused study of the field began. There are numerous applications of computer vision, including robotic systems that sense their environment, people detection in surveillance systems, object inspection on assembly lines, image database organization, and the analysis of medical scans. Applications of computer vision in robotics attempt to identify objects represented in digitized images provided by video cameras, thus enabling robots to "see". Much work has been done on stereo vision as an aid to object identification and location within a three-dimensional field of view. Recognition of objects in real time, as would be needed for active robots in complex environments, usually requires computing power beyond the capabilities of present-day technology.

This book presents some research trends in computer vision, especially its application to robotics and advanced approaches such as omnidirectional vision. Among them, research on RFID technology integrated with stereo vision to localize an indoor mobile robot is included. The book also contains much research on omnidirectional vision and on its combination with robotics. It features representative work on computer vision, with particular focus on robot vision and omnidirectional vision.

The intended audience is anyone who wishes to become familiar with the latest research work on computer vision, especially its applications to robots. The contents of this book allow the reader to learn more about the technical aspects and applications of computer vision. Researchers and instructors will benefit from this book.

Editor
Xiong Zhihui
College of Information System and Management, National University of Defense Technology, P.R. China

Contents

Preface
1. Behavior Fusion for Visually-Guided Service Robots
   Mohamed Abdellatif
2. Dynamic Omnidirectional Vision Localization Using a Beacon Tracker Based on Particle Filter
   Zuoliang Cao, Xianqiu Meng and Shiyu Liu
3. Paracatadioptric Geometry using Conformal Geometric Algebra
   Carlos López-Franco
4. Treating Image Loss by using the Vision/Motion Link: A Generic Framework
   David Folio and Viviane Cadenat
5. Nonlinear Stable Formation Control using Omnidirectional Images
   Christiano Couto Gava, Raquel Frizera Vassallo, Flavio Roberti and Ricardo Carelli
6. Dealing with Data Association in Visual SLAM
   Arturo Gil, Óscar Reinoso, Mónica Ballesta and David Úbeda
7. Precise and Robust Large-Shape Formation using Uncalibrated Vision for a Virtual Mold
   Biao Zhang, Emilio J. Gonzalez-Galvan, Jesse Batsche, Steven B. Skaar, Luis A. Raygoza and Ambrocio Loredo
8. Humanoid with Interaction Ability Using Vision and Speech Information
   Junichi Ido, Ryuichi Nisimura, Yoshio Matsumoto and Tsukasa Ogasawara
9. Development of Localization Method of Mobile Robot with RFID Technology and Stereo Vision
   Songmin Jia, Jinbuo Sheng and Kunikatsu Takase
10. An Implementation of Humanoid Vision - Analysis of Eye Movement and Implementation to Robot
    Kunihito Kato, Masayuki Shamoto and Kazuhiko Yamamoto
11. Methods for Postprocessing in Single-Step Diffuse Optical Tomography
    Alexander B. Konovalov, Vitaly V. Vlasov, Dmitry V. Mogilenskikh, Olga V. Kravtsenyuk and Vladimir V. Lyubimov
12. Towards High-Speed Vision for Attention and Navigation of Autonomous City Explorer (ACE)
    Tingting Xu, Tianguang Zhang, Kolja Kühnlenz and Martin Buss
13. New Hierarchical Approaches in Real-Time Robust Image Feature Detection and Matching
    M. Langer and K. D. Kuhnert
14. Image Acquisition Rate Control Based on Object State Information in Physical and Image Coordinates
    Feng-Li Lian and Shih-Yuan Peng
15. Active Tracking System with Rapid Eye Movement Involving Simultaneous Top-down and Bottom-up Attention Control
    Masakazu Matsugu, Kan Torii and Yoshinori Ito
16. Parallel Processing System for Sensory Information Controlled by Mathematical Activation-Input-Modulation Model
    Masahiko Mikawa, Takeshi Tsujimura and Kazuyo Tanaka
17. Development of Pilot Assistance System with Stereo Vision for Robot Manipulation
    Takeshi Nishida, Shuichi Kurogi, Koichi Yamanaka, Wataru Kogushi and Yuichi Arimura
18. Camera Modelling and Calibration - with Applications
    Anders Ryberg, Anna-Karin Christiansson, Bengt Lennartson and Kenneth Eriksson
19. Algorithms of Digital Processing and the Analysis of Underwater Sonar Images
    S.V. Sai, A.G. Shoberg and L.A. Naumov
20. Indoor Mobile Robot Navigation by Center Following based on Monocular Vision
    Takeshi Saitoh, Naoya Tada and Ryosuke Konishi
21. Temporal Coordination among Two Vision-Guided Vehicles: A Nonlinear Dynamical Systems Approach
    Cristina P. Santos and Manuel João Ferreira
22. Machine Vision: Approaches and Limitations
    Moisés Rivas López, Oleg Sergiyenko and Vera Tyrsa
23. Image Processing for Next-Generation Robots
    Gabor Sziebig, Bjørn Solvang and Peter Korondi
24. Projective Reconstruction and Its Application in Object Recognition for Robot Vision System
    Ferenc Tél and Béla Lantos
25. Vision-based Augmented Reality Applications
    Yuko Uematsu and Hideo Saito
26. Catadioptric Omni-directional Stereo Vision and Its Applications in Moving Objects Detection
    Xiong Zhihui, Chen Wang and Zhang Maojun
27. Person Following Robot with Vision-based and Sensor Fusion Tracking Algorithm
    Takafumi Sonoura, Takashi Yoshimi, Manabu Nishiyama, Hideichi Nakamoto, Seiji Tokura and Nobuto Matsuhira

[...]

... vehicle is determined by the main microprocessor with inputs from the different components. All programs are implemented in C++, and several video and data processing libraries are used, including the Matrox Imaging Library (MIL) and OpenCV.

Fig. 6. Photograph of the mobile service robot.

The robot sensors and actuators communicate with the host computer via wired connections. The DC motor is controlled ...

... region is also computed and forwarded as input to the controller, as shown schematically in Fig. 2.

Fig. 2. Schematic representation of target measurement in the gray image, showing the extracted target region.

3. Design of controller

The goal of the controller is to enable the mobile robot to satisfy two objectives simultaneously, namely target following and obstacle avoidance. The objectives are ...

... Table 1. The fuzzy rule matrix for the target-following FLC. (The columns show states of the target horizontal velocity, while the rows show states of the target horizontal displacement.)

The motion decision for the tracking behavior is calculated through the fusion of the image displacement and the image velocity in the fuzzy logic inference matrix. The values of the matrix entries ...

... Matrox Foursight module to process the image as a dedicated vision processor. The images received from the cameras are digitized via a Meteor II frame grabber and stored in the memory of the Foursight computer for online processing by specially designed software. We implemented algorithms that grab and calibrate the color image to eliminate the camera offset. The target color is identified to the system ...

... active at the same time. In the cooperative approach, all behaviors contribute to the output, rather than a single behavior dominating after passing an objective criterion. An example of the cooperative approach is proposed by (Khatib, 1985), using artificial potential fields to fuse control decisions from several behaviors. The potential field method suffers from being amenable to local ...

... extraction of the target area is shown in Fig. 7. The left image shows the original captured image, and the extracted target area is shown in the right image.

Fig. 7. The segmented image showing the detected target area.

A computer program was devised to construct the Hue-Saturation (H-S) histogram shown in Fig. 8. The advantage of this representation is that it enables better extraction of the target when it had been well ...

... adjusted at first to view the target inside the color image.

Fig. 8. The Hue-Saturation diagram, showing regions of darker intensity as those corresponding to higher voting of the target object pixels.

The robot starts to move as shown in the robot track, Fig. 9, and keeps moving forward.

[Fig. 9. Robot track plot; the axes give the horizontal and vertical coordinates of the robot workspace in mm, with the target position marked.]
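The target-extraction step described in the excerpts above (Figs. 7 and 8) maps naturally onto OpenCV's histogram back-projection functions. The following is only a minimal sketch under assumed conditions, not the authors' software: it assumes OpenCV 4.x, hypothetical file names (target_patch.png, frame.png), and illustrative bin counts and thresholds. It builds a Hue-Saturation histogram of a sample target patch, back-projects it onto a frame to segment the target region, and reports the horizontal offset of the detected region from the image centre, the kind of measurement that would be forwarded to the tracking controller.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    // Hypothetical input images: a patch cropped around the target colour
    // and a frame captured by the robot camera.
    cv::Mat patchBgr = cv::imread("target_patch.png");
    cv::Mat frameBgr = cv::imread("frame.png");
    if (patchBgr.empty() || frameBgr.empty()) {
        std::cerr << "could not load images\n";
        return 1;
    }

    cv::Mat patchHsv, frameHsv;
    cv::cvtColor(patchBgr, patchHsv, cv::COLOR_BGR2HSV);
    cv::cvtColor(frameBgr, frameHsv, cv::COLOR_BGR2HSV);

    // 2-D Hue-Saturation histogram of the target patch (cf. Fig. 8).
    int histSize[] = {30, 32};                          // hue bins, saturation bins
    float hueRange[] = {0, 180}, satRange[] = {0, 256};
    const float* ranges[] = {hueRange, satRange};
    int channels[] = {0, 1};                            // H and S channels
    cv::Mat hist;
    cv::calcHist(&patchHsv, 1, channels, cv::Mat(), hist, 2, histSize, ranges);
    cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);

    // Back-project the histogram onto the frame and threshold it, so that
    // pixels whose colour votes strongly for the target form a binary mask.
    cv::Mat backProj, mask;
    cv::calcBackProject(&frameHsv, 1, channels, hist, backProj, ranges);
    cv::threshold(backProj, mask, 50, 255, cv::THRESH_BINARY);
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN,
                     cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5)));

    // Keep the largest blob as the extracted target region (cf. Fig. 7) and
    // report its horizontal offset from the image centre.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (!contours.empty()) {
        auto largest = std::max_element(contours.begin(), contours.end(),
            [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
                return cv::contourArea(a) < cv::contourArea(b);
            });
        cv::Rect box = cv::boundingRect(*largest);
        double offsetX = (box.x + box.width / 2.0) - frameBgr.cols / 2.0;
        std::cout << "target box: " << box
                  << ", horizontal offset: " << offsetX << " px\n";
    }
    return 0;
}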
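Similarly, the fusion of image displacement and image velocity in the fuzzy logic inference matrix (Section 3 and Table 1 in the excerpts) can be illustrated with a compact Mamdani-style inference loop. This is a sketch with invented numbers, not the chapter's rule base: the membership breakpoints, the three linguistic labels per input and the crisp output values in the rule matrix are assumptions, chosen only to show how rule firing strengths are combined by min and defuzzified by a weighted average.

#include <algorithm>
#include <array>
#include <iostream>

// Triangular membership function with feet at a and c and peak at b.
static double tri(double x, double a, double b, double c)
{
    if (x <= a || x >= c) return 0.0;
    return (x < b) ? (x - a) / (b - a) : (c - x) / (c - b);
}

int main()
{
    // Example crisp inputs: target centroid offset from the image centre
    // (pixels) and its horizontal image velocity (pixels per frame).
    double disp = -60.0;   // negative = target left of centre
    double vel  = 4.0;     // positive = target drifting right

    // Degrees of membership for displacement {Left, Middle, Right} and
    // velocity {Negative, Zero, Positive}; the ranges are assumptions.
    std::array<double, 3> muD = { tri(disp, -240, -140, 0),
                                  tri(disp, -140, 0, 140),
                                  tri(disp, 0, 140, 240) };
    std::array<double, 3> muV = { tri(vel, -20, -10, 0),
                                  tri(vel, -10, 0, 10),
                                  tri(vel, 0, 10, 20) };

    // Rule matrix: rows = displacement state, columns = velocity state.
    // Each entry is a crisp steering value (deg/s), positive = turn right.
    const double rule[3][3] = { { -30, -20, -10 },    // target Left
                                { -10,   0,  10 },    // target Middle
                                {  10,  20,  30 } };  // target Right

    // Mamdani-style min for rule firing strength, then a weighted average
    // (height defuzzification) over all fired rules.
    double num = 0.0, den = 0.0;
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j) {
            double w = std::min(muD[i], muV[j]);
            num += w * rule[i][j];
            den += w;
        }
    }
    double steer = (den > 0.0) ? num / den : 0.0;
    std::cout << "steering command: " << steer << " deg/s\n";
    return 0;
}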
... imaging principle of the fisheye lens. First, a method for calibrating the omni-vision system is proposed. The method relies on a cylinder whose inner wall contains several straight lines, which are used to calibrate the center, radius and gradient of the fisheye lens. These calibration parameters can then be used to correct the distortions. Several imaging rules are conceived ...

... of fisheye distortion, the distances between two consecutive intersection points are not equal in the image, but the corresponding coordinates of the intersection points in the fisheye image are obtained.

Fig. 2. Calibration for the omnidirectional vision system.

Then we use a support vector machine (SVM) to regress the intersection points in order to get the mapping between the fisheye image coordinates ...
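The SVM regression step can be sketched with OpenCV's ml module. This is not the chapter's implementation: the excerpt is cut off before it states exactly which quantities are regressed, so the sketch assumes a one-dimensional mapping from the distorted radial distance r_d of an intersection point to a corrected radial distance r_u, and the sample values and hyper-parameters below are synthetic.

#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Hypothetical calibration samples: (distorted radius in pixels,
    // corrected radius in pixels) measured at intersection points of the
    // straight lines on the cylinder's inner wall.
    std::vector<float> rDistorted = { 50, 100, 150, 200, 250, 300, 350, 400 };
    std::vector<float> rCorrected = { 51, 104, 161, 224, 295, 378, 476, 595 };

    cv::Mat samples(static_cast<int>(rDistorted.size()), 1, CV_32F, rDistorted.data());
    cv::Mat responses(static_cast<int>(rCorrected.size()), 1, CV_32F, rCorrected.data());

    // Epsilon-SVR with an RBF kernel; hyper-parameters are illustrative only.
    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
    svm->setType(cv::ml::SVM::EPS_SVR);
    svm->setKernel(cv::ml::SVM::RBF);
    svm->setC(100.0);
    svm->setGamma(1e-4);
    svm->setP(0.5);   // epsilon tube width
    svm->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS,
                                          10000, 1e-6));
    svm->train(samples, cv::ml::ROW_SAMPLE, responses);

    // Query the learned mapping for an arbitrary distorted radius.
    cv::Mat query = (cv::Mat_<float>(1, 1) << 275.0f);
    float rU = svm->predict(query);
    std::cout << "r_d = 275 px  ->  estimated r_u = " << rU << " px\n";
    return 0;
}

A denser set of intersection points, covering the full radial range of the fisheye image, would be needed in practice for the regression to generalize well.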