Design and implementation of marker-based visual localization robot


MINISTRY OF EDUCATION AND TRAINING
HO CHI MINH CITY UNIVERSITY OF TECHNOLOGY AND EDUCATION
FACULTY FOR HIGH QUALITY TRAINING

GRADUATION PROJECT
COMPUTER ENGINEERING TECHNOLOGY

DESIGN AND IMPLEMENTATION OF MARKER-BASED VISUAL LOCALIZATION ROBOT

LECTURER: LÊ MINH THÀNH
STUDENTS: ĐẶNG ĐÌNH GIA BẢO, Student ID: 18119005
          TẠ QUỐC THỊNH, Student ID: 18119043
Major: COMPUTER ENGINEERING TECHNOLOGY

Ho Chi Minh City, December 2022

THE SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness

Ho Chi Minh City, December 24, 2022

GRADUATION PROJECT ASSIGNMENT

Student name: Đặng Đình Gia Bảo    Student ID: 18119005
Student name: Tạ Quốc Thịnh        Student ID: 18119043
Major: Computer Engineering Technology    Class: 18119CLA
Advisor: Lê Minh Thành    Phone number: 0917677930
Date of assignment: 1/10/2022    Date of submission: 25/12/2022

Project title: Design and implementation of a marker-based visual localization robot

Initial materials provided by the advisor: documents and papers related to visual localization and ArUco marker detection.

Content of the project:
• Research the working flow of ArUco marker detection
• Survey the ability of ArUco detection in different environments
• Create a user-interface web application
• Design and implement the marker-based autonomous robot

Final product: the model of an autonomous robot that can travel to a specific location and return to the entrance. The robot's working field represents the working environment of a warehouse. The final product is tested in many scenarios through a web interface.

CHAIR OF THE PROGRAM          ADVISOR
(Sign with full name)         (Sign with full name)

THE SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness

Ho Chi Minh City, December 24, 2022

ADVISOR'S EVALUATION SHEET

Student name: Đặng Đình Gia Bảo    Student ID: 18119005
Student name: Tạ Quốc Thịnh        Student ID: 18119043
Major: Computer Engineering Technology    Class: 18119CLA
Project title: Design and implementation of a marker-based visual localization robot
Advisor: Lê Minh Thành

EVALUATION
Content of the project:
Strengths:
Weaknesses:
Approval for oral defense?
(Approved or denied)
Overall evaluation: (Excellent, Good, Fair, Poor)
Mark: ……………… (in words: )

Ho Chi Minh City,
ADVISOR
(Sign with full name)

HO CHI MINH CITY UNIVERSITY OF TECHNOLOGY AND EDUCATION
FACULTY FOR HIGH QUALITY TRAINING
SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness

Ho Chi Minh City, January 6th, 2023

MODIFYING EXPLANATION OF THE GRADUATION PROJECT
MAJOR: COMPUTER ENGINEERING TECHNOLOGY

Project title: Marker-based visual localization robot
Student: Đặng Đình Gia Bảo    Student ID: 18119005
Student: Tạ Quốc Thịnh        Student ID: 18119043
Advisor: Lê Minh Thành
Defending Council: Council 2, Room A3-404, 3rd January 2023

Modifying explanation of the graduation project:
1. Council comment: Provide requirements and specifications in detail.
   Editing result: Specific specification requirements are provided in Chapter 3, section 3.1.
2. Council comment: All references must be rewritten according to the writing guidelines.
   Editing result: All references are rewritten.
3. Council comment: Create a comparison table between the proposed work and the other related works.
   Editing result: The comparison table of the proposed method with other methods is updated in Table 4.3.

Head of Department         Advisor                  Students
(Sign with full name)      (Sign with full name)    (Sign with full name)

HO CHI MINH CITY UNIVERSITY OF TECHNOLOGY AND EDUCATION
FACULTY FOR HIGH QUALITY TRAINING
THE SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness

PRE-DEFENSE EVALUATION SHEET

Student name: Đặng Đình Gia Bảo    Student ID: 18119005
Student name: Tạ Quốc Thịnh        Student ID: 18119043
Project title: Marker-based visual localization robot
Name of Reviewer: PHAN VĂN CA

EVALUATION
Content and workload of the project: design and implementation of a self-driving car prototype using a marker-based visual localization algorithm.
Strengths: The thesis is well written, organized, and illustrated. The prototype is tested and evaluated.
Weaknesses: Lack of requirements specifications.
Approval for oral defense?
(Approved or denied) Approved
Comments and suggestions:
- Provide requirements and specifications in detail.

Ho Chi Minh City,
REVIEWER
(Sign with full name)
Phan Văn Ca

THE SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness

EVALUATION SHEET OF DEFENSE COMMITTEE MEMBER

Student name:                Student ID:
Student name:                Student ID:
Major:
Project title:
Name of Defense Committee Member:

EVALUATION
Content and workload of the project:
Strengths:
Weaknesses:
Overall evaluation: (Excellent, Good, Fair, Poor)
Mark: ……………… (in words: )

Ho Chi Minh City,
COMMITTEE MEMBER
(Sign with full name)

DISCLAIMER

We hereby formally declare that this thesis is the product of our own research and implementation. No published article has been reproduced in our report without a source citation. We take full responsibility for any violations that may have occurred.

Students
Dang Dinh Gia Bao
Ta Quoc Thinh

4.2 WORKING RESULTS

4.2.1 Moving to a specific location from the entrance

In this scenario, the selected destination is entered into our web UI as a number from 1 to 12, matching the numbers displayed on the cells of the working area. The robot then proceeds to the cell whose number matches the user's input: it moves to the destination after detecting the marker ID that corresponds to the input number. Whenever the user submits a new location, a notification is displayed.

Figure 4.6: Sending data to Google Firebase

The detected marker ID and the distance measurement are displayed on the Raspbian OS terminal. Destinations "8" and "10" are submitted to demonstrate the operation of auto mode; in other words, the robot must travel from the entrance to the exact cell with the corresponding number. Figures 4.7 and 4.8 illustrate the robot's movement from the entrance, its right turn, and its arrival at the destination cell. In the first test scenario, the input is "8", which corresponds to the markers with IDs "1" and "3": the robot travels to the row containing the destination based on ID 1, then turns and stops at the cell marked "8" based on ID 3. The robot's current status is immediately updated on the user website.

Figure 4.7: Robot moves forward from the entrance
Figure 4.8: Robot turns right to reach the destination

Finally, we test with input number "10". Figure 4.9 demonstrates the robot's behavior when moving to location 10.

Figure 4.9: Robot travels to location 10

The robot first rotates the servo until it detects the marker with ID "1", at which point it moves forward. However, when the distance value displayed on the terminal reaches 20, the robot stops but neither turns right nor rotates the servo. This occurred because, as the robot approached the marker, it lost sight of it and had to locate the next marker ID to confirm that it was on the correct path to the destination. We adjusted the marker so that the robot could detect it again. As the robot approached the destination, a notification was sent to the user.
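This excerpt does not reproduce the detection code itself; the following is a minimal sketch of the underlying step — reading marker IDs and distances with OpenCV's aruco module, which the robot relies on. It uses the classic opencv-contrib-python API (before OpenCV 4.7); the dictionary choice, marker size, and calibration numbers are placeholder assumptions, not values from the report.

```python
import cv2
import numpy as np

# Placeholder intrinsics; a real run would load the matrix and distortion
# coefficients produced by a cv2.calibrateCamera() calibration session.
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)      # assume negligible lens distortion
MARKER_SIDE_CM = 10.0          # assumed physical marker side length

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

cap = cv2.VideoCapture(0)      # Raspberry Pi camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict, parameters=params)
    if ids is not None:
        # tvec is the marker position in camera coordinates, in the same
        # unit as MARKER_SIDE_CM, so its norm is the marker distance.
        _, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, MARKER_SIDE_CM, camera_matrix, dist_coeffs)
        for marker_id, tvec in zip(ids.flatten(), tvecs):
            print(f"marker {marker_id}: {np.linalg.norm(tvec):.1f} cm")
```

This also matches the failure mode described above: detectMarkers() needs all four corners of a marker in the frame, so a marker partially outside the field of view is simply not returned.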
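The status updates shown in Figure 4.6 go to a Google Firebase Realtime Database. The report does not show that code either; one plausible way is the database's REST interface, sketched below. The `status` node and its fields are hypothetical, and a production database would also require authentication.

```python
import requests

# Realtime Database URL from the thesis appendix.
DB_URL = "https://arucorobot-default-rtdb.asia-southeast1.firebasedatabase.app"

def report_status(destination: int, state: str) -> None:
    """Overwrite the robot's (hypothetical) status node for the web UI."""
    payload = {"destination": destination, "state": state}
    requests.put(f"{DB_URL}/status.json", json=payload, timeout=5).raise_for_status()

report_status(8, "moving")   # e.g. right after the user submits cell 8
```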
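On the input side, the destination number arrives through the web UI. The server code is not shown in this excerpt; the sketch below uses the Bottle micro-framework as one plausible choice for a Raspberry Pi web app, and every route and field name is an assumption.

```python
from bottle import Bottle, request, run

app = Bottle()
robot = {"mode": "manual", "destination": None}   # in-memory robot state

@app.post("/destination")
def set_destination():
    # Cell numbers 1-12 match the labels on the working area.
    robot["destination"] = int(request.forms.get("cell"))
    return {"destination": robot["destination"]}  # Bottle serialises dicts to JSON

@app.post("/mode")
def set_mode():
    # Switch instantly between "manual" and "auto", as the UI allows.
    robot["mode"] = request.forms.get("mode", "manual")
    return {"mode": robot["mode"]}

# Reachable from any device on the same network as the robot.
run(app, host="0.0.0.0", port=8080)
```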
4.2.2 Testing accuracy and comparison

Table 4.2 illustrates the precision of the robot's operation, based on the number of trials versus the number of successful moves to each chosen destination. We chose locations 1, 4, 5, 8, 10, and 12, ran twenty trials per test case, and measured the average time taken per trial from the entrance to the destination. The number of tests is how many times we ran each case, the number of successes is how many times the robot stopped exactly in the destination cell, and the accuracy is the number of successes divided by the number of tests. This test demonstrates the robot's ability to reach locations at the rear and at the center of the working area, since the cells marked 1, 4, 10, and 12 are at the rear and the cells numbered 5 and 8 are at the center.

In the test case for location 1, the robot successfully stops in cell "1" eighteen times out of 20 (90% accuracy); location 4 behaves similarly. These two cases have the highest accuracy because the robot can quickly identify the marker with ID "1" within the camera's range and stop at the precise spot. Next, we test position 5, the cell labeled "5". Here the accuracy drops to 14 out of 20 trials, or 70%. The decrease occurred because the robot occasionally turned at an incorrect angle, causing its course to deviate from the intended direction; the same happened in the test scenario for location 8. The test case for position 12 has the lowest accuracy, succeeding only 12 times out of 20 (60%). When the robot approaches a marker too closely, the marker's edges fall outside the camera's field of view, making the marker impossible to identify with the OpenCV module and preventing the robot from stopping precisely at spot 12; the turning-angle error adds to this. Location 10 scores slightly higher than location 12 because the turning-angle error does not arise on its path.

The time taken for each case is the average over its trials. The fastest cases are locations 1 and 4, because the robot only needs to go straight ahead. The slowest case is location 12, because it is the farthest location on the working field and the robot needs time to detect its location after turning. Across the cases, a trip takes roughly 20 seconds to 2 minutes. In conclusion, averaged over all test cases, the robot's accuracy is 75.8% and its travel time is 1.115 minutes.

Table 4.2: Travelling accuracy of the robot

No. | Destination | Number of tests | Number of successes | Accuracy (%) | Time per test (min)
 1  | Location 1  |       20        |         18          |      90      |        0.361
 2  | Location 4  |       20        |         17          |      85      |        0.416
 3  | Location 5  |       20        |         14          |      70      |        1.212
 4  | Location 8  |       20        |         15          |      75      |        1.561
 5  | Location 10 |       20        |         15          |      75      |        1.157
 6  | Location 12 |       20        |         12          |      60      |        1.983
    | Average     |                 |                     |     75.8     |        1.115
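As a quick check of the accuracy column, the per-case figures can be recomputed directly from the counts in Table 4.2:

```python
# Accuracy per test case: successes divided by tests (counts from Table 4.2).
results = {1: (20, 18), 4: (20, 17), 5: (20, 14),
           8: (20, 15), 10: (20, 15), 12: (20, 12)}

for location, (tests, successes) in results.items():
    print(f"location {location:2d}: {100 * successes / tests:.0f}%")
# -> 90, 85, 70, 75, 75, 60; mean 75.8%
```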
Table 4.3 compares three types of autonomous robots in terms of path defining, stability, and scalability: line follower robots, our marker-based visual localization robot, and GNSS-based robots. The line follower robot is highly accurate, but it must follow a pre-drawn path, and to scale the system up, the user must redraw the paths. The paths of the marker-based visual localization robot and the GNSS-based robot are defined by algorithms, so they can reach any location in the working area; however, the GNSS-based robot cannot work indoors. Because of the camera's limited working range, the accuracy of our marker-based robot is low when the target location is at the rear of the working area, and because a marker's edge is hidden when the robot gets close, the robot cannot detect it. In terms of scalability, the GNSS-based robot's working range can be increased by modifying the path-finding algorithm, but it cannot be extended to areas the GPS signal cannot reach. Our marker-based robot's working range can likewise be increased by adjusting its path-finding algorithm, but it is limited by the camera's working range, since distant markers shrink in the image. In conclusion, the line follower robot has high stability but is inflexible in path defining and scalability; the GNSS-based robot and our marker-based robot are both flexible in these respects, but the former is limited by GPS signal strength and the latter by the camera's working range.

Table 4.3: Comparison of marker-based visual localization with other navigation methods

Path defining:
- Line following robot: must have a specific line to follow; paths are drawn on the field.
- Marker-based visual localization robot: paths are defined by the algorithm as desired; can reach any location in the working area.
- GNSS-based robot: paths are defined by the algorithm as desired; can reach any location in the working field.

Stability:
- Line following robot: high accuracy on any path, but cannot reach a location where there is no line to follow.
- Marker-based visual localization robot: accuracy at the rear of the working field is low; affected by lighting conditions.
- GNSS-based robot: cannot work indoors.

Scalability:
- Line following robot: redraw the paths on the field and adjust the algorithm in the firmware.
- Marker-based visual localization robot: adjust the locations of the reference markers and the algorithm in the firmware; limited by the camera's working range.
- GNSS-based robot: adjust the algorithm in the firmware; can be extended anywhere the robot can receive a GPS signal.

CHAPTER 5: CONCLUSION AND FUTURE WORK

5.1 CONCLUSION

The design of the "Marker-based visual localization robot" has been completed and shown to achieve the desired results. The project's working behaviors are demonstrated through scenarios, and any location on the working field can be reached with fairly high accuracy. In terms of hardware performance, the final product works well over continuous hours of operation, and all visual input is processed quickly enough by the Raspberry Pi embedded computer to steer the car to its destination. As befits a warehouse application, the robot always returns to the entrance after completing a delivery. Furthermore, the robot's user interface supports both manual and auto modes, which the user can switch between instantly. Users can also access the robot from multiple smart devices, as long as the browser on each device shares the same internet connection as the robot.

Our robot, however, still has some flaws. First, it does not automatically correct its position when it leaves the working range or strays from the intended direction: travelling shock disturbs the camera's view on the way to the desired position, causing the robot to miscalculate and deviate from the destination. Second, because our working area resembles a warehouse environment with fixed paths, the robot can only travel along a predetermined path from the entrance to the destination. Finally, if the robot gets too close to a marker, it fails to detect it because one of the marker's edges is hidden from the camera.

5.2 FUTURE WORK

As the previous section indicates, this thesis has some limitations. To increase the robot's precision, control algorithms such as PID could help it reach the desired area with high accuracy. Compared to real warehouse robot applications, our robot still lacks functionality such as obstacle recognition and the delivery of large objects, owing to time and technology limitations. Porting the robot's foundations to more powerful hardware, on the other hand, would cure its design defects and provide the resources to develop more advanced features that meet practical demands.
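PID control is only named above as future work; as a concrete illustration, here is a textbook discrete PID loop that could trim the wheel speeds to keep a detected marker centred in the image. The gains, the error signal, and the whole setup are illustrative assumptions, not part of the report.

```python
class PID:
    """Textbook discrete PID controller; the gains used below are not tuned."""
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float, dt: float) -> float:
        # Accumulate the integral term and differentiate the error numerically.
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical use: error = marker_center_x - frame_width / 2; the output
# would be added to one wheel speed and subtracted from the other.
heading_pid = PID(kp=0.8, ki=0.05, kd=0.2)
correction = heading_pid.update(error=35.0, dt=0.05)
```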
Appendix

Source code: https://drive.google.com/drive/folders/1VwtdlSqnlUkoHEk0DrXwjKVBp0nG50K?usp=sharing
Firebase Database: https://arucorobot-default-rtdb.asia-southeast1.firebasedatabase.app/
