
Combining Stereo Vision and Fuzzy Image Based Visual Servoing for Autonomous Object Grasping Using a 6-DOF Manipulator





Le Duc Hanh (D9703808)

ABSTRACT

This dissertation presents a new grasping method in which a 6-DOF industrial robot autonomously grasps a stationary, randomly positioned rectangular object using a combination of stereo vision and image-based visual servoing with a fuzzy controller (IBVSFC). First, OpenCV software and a color-filter algorithm are used to extract the specific color features of the object. Then, the 3D coordinates of the object to be grasped are derived by the stereo vision algorithm, and these coordinates are used to guide the robotic arm to the approximate location of the object using inverse kinematics. Finally, the IBVSFC precisely adjusts the pose of the end-effector to coincide with that of the object to make a successful grasp. The accuracy and robustness of the system and the algorithm were tested and proven to be effective in real scenarios involving a 6-DOF industrial robot. Although the application in this dissertation is limited to grasping a simple cubic object, the same methodology can easily be applied to objects with other geometric shapes.
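To make this pipeline concrete, the sketch below illustrates the first two steps, color filtering and stereo depth recovery, in Python with OpenCV. It is only a minimal sketch: the HSV thresholds, helper names and camera parameters are illustrative assumptions, not values taken from the dissertation.

    import cv2
    import numpy as np

    # Assumed HSV range for the target color; the dissertation's actual
    # thresholds are not given in this excerpt.
    LOWER = np.array([100, 120, 70])
    UPPER = np.array([130, 255, 255])

    def blob_centroid(img_bgr):
        # Color filter: keep pixels inside the HSV range, clean the mask,
        # then return the pixel centroid of the largest remaining blob.
        hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m['m00'] == 0:
            return None
        return m['m10'] / m['m00'], m['m01'] / m['m00']

    def depth_from_disparity(x_left, x_right, f_px, baseline_m):
        # For a rectified stereo pair: Z = f * b / d, where f is the focal
        # length in pixels, b the baseline in metres and d the disparity.
        d = x_left - x_right
        return f_px * baseline_m / d if d > 0 else None

Locating the same centroid in the left and right images gives the disparity, and the resulting 3D estimate is what guides the arm to the approximate grasping location via inverse kinematics before the IBVSFC takes over.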
ACKNOWLEDGEMENTS

I would like to thank all of the people who encouraged and supported me as I was undertaking this dissertation. First, I would like to express my gratitude to my supervisors, Prof. Dr. Lin Chyi-Yeu and Prof. Dr. Lin Chi-Ying, for their invaluable guidance, advice and encouragement throughout the course of this work as well as the writing of this dissertation. None of the theoretical design of the control systems could have been completed without their help. I would also like to express my sincere appreciation to all members of the Advanced Intelligent Robotics lab for their help during my stay here. Finally, my greatest thanks go to my family, my father, my mother and my younger brother, who have unconditionally supported me during my stay here.

TABLE OF CONTENTS

Abstract
Acknowledgements
Table of contents
List of Figures
List of Tables
Chapter 1 INTRODUCTION
  1.1 Overview
  1.2 Motivation
  1.3 Contribution
  1.4 Dissertation structure
Chapter 2 SYSTEM CONFIGURATION
  2.1 Denso Robot
  2.2 Vision system
Chapter 3 IMAGE PROCESSING
  3.1 Color filter
  3.2 Stereo vision
Chapter 4 FORWARD AND INVERSE KINEMATICS
  4.1 Direct kinematics
  4.2 Inverse kinematics
Chapter 5 CONTROL ALGORITHM
  5.1 Position based control
  5.2 Image based visual servoing
    5.2.1 Classical image based visual control
    5.2.2 PID control
    5.2.3 Fuzzy image based visual control
Chapter 6 EXPERIMENT AND CONCLUSION
  6.1 Experiments
  6.2 Stacking Cubes
Chapter 7 CONCLUSION AND FUTURE WORK
  7.1 Conclusion
  7.2 Contributions
  7.3 Future works
References
Appendix A
Appendix B

LIST OF FIGURES

Fig. 1 VS-6556G Denso robot arm [13]
Fig. 2 VS-6556G robot with the attached grasping component
Fig. 3 The 3rd camera attached to the end-effector
Fig. 4 The stereo vision system
Fig. 5 Color feature filter
Fig. 6 Volume filter
Fig. 7 MATLAB toolbox for camera calibration
Fig. 8 3D geometrical model [16]
Fig. 9 Camera and system calibration
Fig. 10 Link coordinate system and its parameters
Fig. 11 Diagram of the VS-6556G used in the dissertation
Fig. 12 Procedure to solve the inverse kinematics for the VS-6556G robot used in the dissertation
Fig. 13 (from left to right): VM1 monocular eye-in-hand, VM2 monocular stand-alone, VM3 binocular eye-in-hand, VM4 binocular stand-alone and VM5 redundant camera system [6]
Fig. 14 The control system using the stereo vision
Fig. 15 Orientation of a rectangular cube in 2D
Fig. 16 Orientation of a rectangular cube viewed in the 3rd camera
Fig. 17 Image based visual control scheme
Fig. 18 The membership function of the position error
Fig. 19 The membership function of the change rate of the position error
Fig. 20 The membership function of the output signal for the fuzzy controller
Fig. 21 Control block diagram of the fuzzy controller combined with tuning mechanism
Fig. 22 Experiment apparatus
Fig. 23 Multitask working flow chart
Fig. 24 The grasping task performed by the Denso Robot: (Phase 1) initial position, (Phase 2) approaching the target using stereo vision, (Phase 3) image-based visual servoing, (Phase 4) grasping the object
Fig. 25 The overall performance result of the gripper in the X coordinate
Fig. 26 The overall error result of the gripper and rectangular cube in the X coordinate
Fig. 27 The overall performance result of the gripper in the Y coordinate
Fig. 28 The overall error result of the gripper and rectangular cube in the Y coordinate
Fig. 29 The overall performance result of the gripper in the Z coordinate
Fig. 30 The overall error result of the gripper and rectangular cube in the Z coordinate
Fig. 31 The error angle result between the camera and the rectangular cube
Fig. 32 The error result between the center of the camera and the center of the rectangular cube in the X direction
Fig. 33 The error result between the center of the camera and the center of the rectangular cube in the Y direction
Fig. 34 Experimental setup for stacking three cubes
Fig. 35 The grasping task performed to grasp the blue cube by the Denso Robot
Fig. 36 The stacking task performed to stack the blue cube on the orange one by the Denso Robot
Fig. 37 The grasping task performed to grasp the pink cube by the Denso Robot
Fig. 38 The stacking task performed to stack the pink cube on the blue one by the Denso Robot

6.2 Stacking Cubes

The procedure of the grasping task using the stereo-vision guidance and the image-based visual servoing is the same as that presented in the previous chapters (Chapter 3 to Chapter 5). Using the tracking flow-chart algorithm shown in Fig. 23, after the task is executed it can be seen that each grasping task is again divided into four phases, as shown in Fig. 35 to Fig. 38.

Fig. 35 The grasping task performed to grasp the blue cube by the Denso Robot: (Phase 1) initial position, (Phase 2) approaching the target using stereo vision, (Phase 3) image-based visual servoing, (Phase 4) grasping the object

Fig. 36 The stacking task performed to stack the blue cube on the orange one by the Denso Robot: (Phase 1) initial position, (Phase 2) approaching the target using stereo vision, (Phase 3) image-based visual servoing, (Phase 4) grasping the object

Fig. 37 The grasping task performed to grasp the pink cube by the Denso Robot: (Phase 1) initial position, (Phase 2) approaching the target using stereo vision, (Phase 3) image-based visual servoing, (Phase 4) grasping the object

Fig. 38 The stacking task performed to stack the pink cube on the blue one by the Denso Robot: (Phase 1) initial position, (Phase 2) approaching the target using stereo vision, (Phase 3) image-based visual servoing, (Phase 4) grasping the object
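The four-phase sequence repeated in the captions above can be read as a small state machine. The following Python sketch is illustrative only: the robot methods (move_home, locate_with_stereo, move_to, servo_step, close_gripper) are hypothetical stand-ins for the stereo-vision, inverse-kinematics and IBVSFC routines of Chapters 3 to 5.

    from enum import Enum, auto

    class Phase(Enum):
        INIT = auto()      # Phase 1: initial position
        APPROACH = auto()  # Phase 2: approach the target using stereo vision
        SERVO = auto()     # Phase 3: image-based visual servoing (IBVSFC)
        GRASP = auto()     # Phase 4: grasp (or stack) the object

    def grasp_cycle(robot, target_color):
        phase = Phase.INIT
        while True:
            if phase is Phase.INIT:
                robot.move_home()                             # hypothetical API
                phase = Phase.APPROACH
            elif phase is Phase.APPROACH:
                xyz = robot.locate_with_stereo(target_color)  # coarse 3D position
                robot.move_to(xyz)                            # inverse kinematics
                phase = Phase.SERVO
            elif phase is Phase.SERVO:
                if robot.servo_step():                        # True once aligned
                    phase = Phase.GRASP
            else:
                robot.close_gripper()
                return

    # Stacking the three cubes then amounts to repeating the cycle per color:
    # for color in ('blue', 'orange', 'pink'): grasp_cycle(robot, color)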
From the recorded video it can be seen that, using the hybrid visual servoing algorithm of this dissertation, the positions of the three cubes are recognized sequentially and the cubes are then autonomously stacked by the robot arm. Once again, the stereo vision proves effective in its role as the pointing device that guides the robot to the target even in the presence of computational error, and the visual servoing algorithm proves effective in refining the precision of the system. Although the application in this dissertation is only demonstrated on grasping a simple cubic object, the same methodology can easily be extended to grasping objects with other geometric shapes.

CHAPTER 7 CONCLUSION AND FUTURE WORK

7.1 Conclusion

To overcome the disadvantages of stereo vision and image-based visual servoing applied to the grasping of an object, this dissertation presents a new grasping method that allows a robot to constantly observe the object to be grasped and thus facilitates fast convergence. This method uses a combination of stereo vision and image-based visual servoing with a fuzzy controller. The experimental data shown here demonstrate that the accuracy of the system can be effectively improved using the IBVSFC; the remaining error of the system is small, on the order of millimetres. Using the eye-in-hand structure and the IBVSFC helps alleviate the burden of the high computational cost associated with system calibration, while providing a high success rate and high accuracy when working under weak calibration. Moreover, the structure of the system is not complicated: it is easy to set up in complex environments, and it has a lower cost compared with other systems on the market.

7.2 Contributions

As a result of this study, the contributions can be summarized as follows:

- An intelligent image-based visual servoing scheme (image-based visual servoing with a fuzzy controller) has been designed for a 6-DOF manipulator performing the grasping task. The controller uses less feature information about the object than the classical controller does, while still making the system converge quickly and stably.
- The control scheme is stable under changing environments (set up in different real, complex environments). With the same experimental setup, the grasping technique can easily be duplicated in different environments despite weak re-calibration or calibration errors of the vision system. This advantage saves time and computational cost in calibration when the method is applied in various environments.
- The same methodology can easily be extended to grasping objects with other geometric shapes.

7.3 Future works

From the results reported in this dissertation, the following aspects are suggested for further investigation:

- To develop control algorithms for grasping more objects in one task.
- To design a force controller for the gripper.
- To reduce the time of a task.
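Section 7.1 credits the IBVSFC for the accuracy of the system. As a rough illustration of what such a controller computes at each servoing step, the sketch below implements a generic Mamdani-type fuzzy step with triangular membership functions and centroid defuzzification. The membership shapes, rule table and normalized universes are assumptions made for illustration; the dissertation's actual membership functions are those of Figs. 18 to 20.

    import numpy as np

    def tri(x, a, b, c):
        # Triangular membership function rising from a to its peak at b
        # and falling back to zero at c.
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    U = np.linspace(-1.0, 1.0, 201)  # normalized output universe

    # Three labels per variable: Negative, Zero, Positive (assumed shapes).
    SETS = {'N': (-2.0, -1.0, 0.0), 'Z': (-1.0, 0.0, 1.0), 'P': (0.0, 1.0, 2.0)}

    # Assumed rule table mapping (error, error-rate) labels to an output label.
    RULES = {('N', 'N'): 'N', ('N', 'Z'): 'N', ('N', 'P'): 'Z',
             ('Z', 'N'): 'N', ('Z', 'Z'): 'Z', ('Z', 'P'): 'P',
             ('P', 'N'): 'Z', ('P', 'Z'): 'P', ('P', 'P'): 'P'}

    def fuzzy_step(e, de):
        # Mamdani min-max inference followed by centroid defuzzification;
        # e and de are the normalized image-feature error and its change rate.
        agg = np.zeros_like(U)
        for (le, lde), lout in RULES.items():
            w = min(tri(e, *SETS[le]), tri(de, *SETS[lde]))  # firing strength
            agg = np.maximum(agg, np.minimum(w, tri(U, *SETS[lout])))
        return 0.0 if agg.sum() == 0.0 else float((U * agg).sum() / agg.sum())

The scaled output of such a step would serve as the camera-frame velocity command for the corresponding axis, which is the role the fuzzy controller plays inside the image-based visual servoing loop of Section 5.2.3.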
REFERENCES

[1] Tarca, R., Pasc, I., Tarca, N. and Popentiu-Vladicescu, F., "Remote robot control via Internet using augmented reality," The 18th DAAAM International Symposium on Intelligent Manufacturing Automation, pp. 739-740, 2007.
[2] Prem, K.P. and Behera, L., "Visual servoing of redundant manipulator with Jacobian matrix estimation using self-organizing map," Robotics and Autonomous Systems, Vol. 58(11), pp. 978-990, 2010.
[3] Nasrabadi, N.M., "A stereo vision technique using curve-segments and relaxation matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14(5), pp. 566-572, 1992.
[4] Bohg, J. and Kragic, D., "Grasping familiar objects using shape context," International Conference on Advanced Robotics, pp. 1-6, 2009.
[5] Hutchinson, S., Hager, G.D. and Corke, P.I., "A tutorial on visual servo control," IEEE Transactions on Robotics and Automation, Vol. 12(5), pp. 651-670, 1996.
[6] Kragic, D. and Christensen, H.I., "Survey on visual servoing for manipulation," Technical Report ISRN KTH/NA/P--02/01--SE, Computational Vision and Active Perception Laboratory (CVAP), CVAP259, 2002. http://www.nada.kth.se/cvap/cvaplop/lop-cvap.html
[7] Shirai, Y. and Inoue, H., "Guiding a robot by visual feedback in assembling tasks," Pattern Recognition, Vol. 5(2), pp. 99-108, 1973.
[8] Corke, P.I. and Good, M.C., "Dynamic effects in visual closed-loop systems," IEEE Transactions on Robotics and Automation, Vol. 12(5), pp. 671-683, 1996.
[9] Wilson, W.J., Hulls, C.C.W. and Bell, G.S., "Relative end-effector control using Cartesian position based visual servoing," IEEE Transactions on Robotics and Automation, Vol. 12(5), pp. 684-696, 1996.
[10] Kosmopoulos, D.I., "Robust Jacobian matrix estimation for image-based visual servoing," Robotics and Computer-Integrated Manufacturing, Vol. 27(1), pp. 82-87, 2011.
[11] Chesi, G., Hashimoto, K., Prattichizzo, D. and Vicino, A., "A switching control law for keeping features in the field of view in eye-in-hand visual servoing," IEEE International Conference on Robotics and Automation, pp. 3929-3934, 2003.
[12] Sebastian, J.M., Pari, L., Angel, L. and Traslosheros, A., "Uncalibrated visual servoing using the fundamental matrix," Robotics and Autonomous Systems, Vol. 57(1), pp. 1-10, 2009.
[13] VS Series Data Sheet, http://www.densorobotics.com/content/salessheets/DENSORobotcs_datasheet_VS_650-850.pdf
[14] Iraji, S.M. and Yavari, A., "Skin color segmentation in fuzzy YCbCr color space with the Mamdani inference," American Journal of Scientific Research, pp. 131-137, 2011.
[15] Camera Calibration Toolbox for MATLAB, http://www.vision.caltech.edu/
[16] Enescu, V., Cubber, D.G., Cauwerts, K., Sahli, H., Demeester, E., Vanhooydonck, D. and Nuttin, M., "Active stereo vision-based mobile robot navigation for person tracking," Integrated Computer-Aided Engineering, Vol. 13(3), pp. 203-222, 2006.
[17] Mustafa, N., Ugur, Y. and Murat, S.B.C., "Fuzzy neural network based intelligent controller for 3-DOF robot manipulators," The 5th International Symposium on Intelligent Manufacturing Systems, pp. 884-895, 2006.
[18] Tsai, K.Y., "Admissible motions in manipulator's work space," Ph.D. thesis, University of Wisconsin-Milwaukee, 1990.
[19] Siradjuddin, I., Behera, L., McGinnity, T.M. and Coleman, S., "Image based visual servoing of a 7-DOF robot manipulator using a distributed fuzzy proportional controller," 2010 IEEE International Conference on Fuzzy Systems, pp. 1-8, 2010.
[20] Chaumette, F. and Hutchinson, S., "Visual servo control, Part I: Basic approaches," IEEE Robotics & Automation Magazine, Vol. 13(4), pp. 82-90, 2006.
[21] Kandel, A., Luo, Y. and Zhang, Y.Q., "Stability analysis of fuzzy control systems," Fuzzy Sets and Systems, Vol. 105(1), pp. 33-48, 1999.
[22] Hans, B. and Silvio, H., "A fuzzy approach to stability of fuzzy controllers," Fuzzy Sets and Systems, Vol. 96(2), pp. 161-172, 1998.
[23] Lo, J.C. and Lin, M.L., "Fuzzy stability analysis using interlacing condition," The 10th IEEE International Conference on Fuzzy Systems, pp. 634-637, 2001.

APPENDIX A

APPENDIX B


