
MINISTRY OF EDUCATION AND TRAINING
HO CHI MINH CITY UNIVERSITY OF TECHNOLOGY AND EDUCATION
FACULTY OF MECHANICAL ENGINEERING
DEPARTMENT OF MECHATRONICS

GRADUATION PROJECT
ROBOTICS AND ARTIFICIAL INTELLIGENCE

DEVELOPMENT OF AUTONOMOUS DELIVERY VEHICLE IN UNIVERSITY CAMPUS

Advisor: M.E. Nguyen Minh Triet
Students: Chu Nhat Tan (ID: 19134082)
          Van Dinh Quang Thai (ID: 19134083)
          Nguyen Chi Thien (ID: 19134085)
Class: 19134
SKL010840

Ho Chi Minh City, July 2023

GRADUATION PROJECT ASSIGNMENT (NHIỆM VỤ ĐỒ ÁN TỐT NGHIỆP)
Semester II, academic year 2022-2023

HCMC UNIVERSITY OF TECHNOLOGY AND EDUCATION, FACULTY OF MECHANICAL ENGINEERING
SOCIALIST REPUBLIC OF VIETNAM: Independence - Freedom - Happiness

Advisor: Nguyen Minh Triet
Student: Chu Nhat Tan, ID: 19134082, phone: 0333025233
Student: Van Dinh Quang Thai, ID: 19134083, phone: 0816337939
Student: Nguyen Chi Thien, ID: 19134085, phone: 0383045380

Graduation project:
- Project code: 22223DT144
- Project title: Development of Autonomous Delivery Vehicle in University Campus

Initial data and documents:
• Study brushless DC motors and their driver circuits
• Study sensors for the autonomous vehicle: distance sensors, IMU
• Study GPS and how to apply it
• Study cameras and image processing

Project content:
• Mechanical and electrical design
• Motor control
• Image processing to recognize people and lane lines
• Localization of the vehicle within the university campus

Expected products: a demo model of a delivery robot operating in the university campus

Date of project assignment: 15/03/2023
Date of project submission: 18/07/2023
Presentation language: report in English or Vietnamese; defense in English or Vietnamese

HEAD OF DEPARTMENT (Sign with full name)
ADVISOR (Sign with full name)
Approved for defense: ...................... (Advisor signs with full name)

DISCLAIMER

Project title: Development of Autonomous Delivery Vehicle in University Campus
Student name: Chu Nhat Tan, Student ID: 19134082
Student name: Van Dinh Quang Thai, Student ID: 19134083
Student name: Nguyen Chi Thien, Student ID: 19134085
Class: 19134
Date of submission: 18/07/2023

We declare that this is our own research project, conducted under the guidance of M.E. Nguyen Minh Triet. The research content and the results presented in this project are honest. We have not copied from any published article without citing the source. The figures used for analysis, evaluation, and comments were collected by the authors from the different sources specified in the references. If any violation is found, we will take full responsibility.

Ho Chi Minh City, July 18, 2023
SIGNATURE (Sign with full name)

ACKNOWLEDGEMENTS

We deeply express our sincere thanks to M.E. Nguyen Minh Triet for all his help and support in completing our project "Development of Autonomous Delivery Vehicle in University Campus". Without his assistance and suggestions, the project could not have been finished. Furthermore, we are also grateful to Assoc. Prof. Nguyen Truong Thinh, our former head
of the Faculty of Mechanical Engineering, who gave us the opportunity to study and do research in this major. In addition, the guidance and knowledge we received from all the teachers who taught us during four years of studying at the university are precious tools for us to utilize while building this project. We are grateful for the support provided by our university, Ho Chi Minh City University of Technology and Education, and especially the Faculty of Mechanical Engineering. Their provision of resources, facilities, and financial assistance has been essential in carrying out this project and enhancing its outcomes. We want to convey our most heartfelt gratitude to our families and friends for their constant love and support during these difficult and challenging years of college. Their belief in us has been a source of strength and motivation. We would like to acknowledge the researchers, authors, and scholars whose work has influenced and inspired this project. Their contributions to the field have been invaluable. In the process of working on this capstone project, with limited information and limited research time, mistakes are inevitable. Therefore, we look forward to receiving suggestions from everyone interested in this field of research, so that our project can be improved further. In conclusion, we are indebted to all those involved, as well as others who have contributed in different ways. This project stands as a testament to the collaborative efforts of a supportive network. Thank you all for your invaluable support.

Sincerely,
Chu Nhat Tan
Van Dinh Quang Thai
Nguyen Chi Thien

ABSTRACT

Inspired by the rapidly developing technology of autonomous vehicles, this research focuses on studying and developing a self-driving delivery system for a university campus. The developed system consists of three main sub-systems: a mobile robot, a website application, and a server. The mobile robot is equipped with a navigation system, an obstacle detection system, and a real-time monitoring system so that it can maneuver autonomously. Photoelectric sensors and real-time object detection with a camera are used for obstacle detection, coupled with a Global Positioning System (GPS) for navigation. A camera is also attached to the robot for real-time monitoring. A website application with a customized map based on Google Maps was created to provide the user with a convenient interface. A server acts as the intermediary for data exchange between the mobile robot and the website application. This thesis discusses the three main sub-systems, and the performance of each feature of the mobile robot is evaluated separately. Within the scope of this project, the vehicle can deliver packages to the customer. The expected outcome of this research is a simplified self-driving system capable of autonomously navigating to designated locations within the school premises. The self-driving vehicle can identify obstacles and humans during operation to ensure safe maneuvering. The GPS positioning system and the lane recognition camera play vital roles in accurately determining the vehicle's position and providing appropriate control signals based on its current location. The vehicle's position can be tracked continuously and shown on the website. The use of autonomous vehicles for delivery services is a potential goldmine. Furthermore, since the e-commerce and delivery industries are growing at a rapid rate, it is recommended to implement a system that can handle the
high-volume traffic and also serve as a new customer attraction.

TÓM TẮT ĐỒ ÁN (PROJECT SUMMARY)

DEVELOPMENT OF AUTONOMOUS DELIVERY VEHICLE IN UNIVERSITY CAMPUS

Inspired by the recently booming technology of autonomous vehicles, this project focuses on researching and developing an autonomous vehicle system for delivery within a university campus. The project aims to address the challenges involved in delivering goods in a school environment. Through developing the autonomous vehicle system, the project explores the structure and essential features of a self-driving vehicle, including sensors, control systems, and the human-machine interface. In addition, the project focuses on the safe interaction of the autonomous vehicle with the school environment. Integrating machine learning and object recognition technologies helps the vehicle detect and avoid obstacles, people, and other vehicles while moving. Besides, a positioning system, typically the Global Positioning System (GPS), is an indispensable part of the vehicle's operation. The expected result of the project is a simple autonomous vehicle system capable of automatically traveling to designated locations within the university campus. The vehicle can recognize obstacles and people during operation and handle them safely. The GPS positioning system and the lane recognition camera are essential features that allow the vehicle to localize itself and issue control signals appropriate to its position.

TABLE OF CONTENTS

NHIỆM VỤ ĐỒ ÁN TỐT NGHIỆP
DISCLAIMER
ACKNOWLEDGEMENTS
ABSTRACT
TÓM TẮT ĐỒ ÁN
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES
ABBREVIATION LIST
CHAPTER 1: AN OVERALL PERSPECTIVE ON THE TOPIC
  1.1 Introduction
    1.1.1 Autonomous vehicles, a world trend
    1.1.2 The urgency of the topic
    1.1.3 Types of autonomous delivery vehicles
  1.2 Scientific and practical meanings
  1.3 Objectives of the thesis
  1.4 Research scope
  1.5 Research method
  1.6 The content of the thesis
CHAPTER 2: THEORETICAL BASIS
  2.1 GPS (Global Positioning System)
    2.1.1 How GPS works
    2.1.2 GPS signal
  2.2 Bearing angle
    2.2.1 Formula to find the bearing between two given points
    2.2.2 An example of bearing angle calculation
  2.3 PID controller
    2.3.1 The feedback principle
    2.3.2 PID control
  2.4 Theoretical foundations of images and image processing
    2.4.1 Pixels (picture elements)
    2.4.2 Image resolution
    2.4.3 Gray level of the image
    2.4.4 Color space
    2.4.5 Black-and-white photograph and digital image
    2.4.6 Color photo
  2.5 Hough Transform
    2.5.1 Overview
    2.5.2 Algorithm theory
  2.6 Artificial Intelligence
    2.6.1 Machine Learning
    2.6.2 Deep Learning
  2.7 Convolutional Neural Network (CNN)
    2.7.1 Computer vision
    2.7.2 Convolutional Neural Networks (CNN)
    2.7.3 Block convolution
    2.7.4 Single-layer CNN network
    2.7.5 Simple CNN network
    2.7.6 Pooling layer
    2.7.7 A CNN recognizes lanes
    2.7.8 Advantages of CNN
CHAPTER 3: MECHANICAL SYSTEM AND HARDWARE CONFIGURATION
  3.1 Chapter introduction
  3.2 Technical requirements
  3.3 Vehicle mechanism structure
  3.4 Wheels and motors
    3.4.1 Types of DC motor
    3.4.2 Brushless DC (BLDC) motors
    3.4.3 Calculation for choosing driving wheels
    3.4.4 Choosing an omnidirectional wheel
  3.5 Hardware configurations
    3.5.1 Raspberry Pi model B
    3.5.2 Arduino Mega 2560
    3.5.3 Raspberry Pi Camera Module V2
    3.5.4 Brushless DC motor controller driver board
    3.5.5 Photoelectric switch E18-D80NK
    3.5.6 Module IMU MPU-9250
    3.5.7 TEL0138 USB GPS receiver
    3.5.8 Webcam Rapoo C200
  3.6 Schematic design
    3.6.1 General connection diagram of the autonomous vehicle
    3.6.2 Driver-motor wiring diagram
  3.7 General stand model design
CHAPTER 4: THE VEHICLE CONTROL ALGORITHM
  4.1 Flow of controlling the autonomous vehicle
    4.1.1 Locating on the map
    4.1.2 Wheel speed controller
    4.1.3 Orientation control
    4.1.4 Moving control
  4.2 Image processing
    4.2.1 Distance estimation
    4.2.2 Lane detection
CHAPTER 5: THE EXPERIMENT RESULT AND ANALYSIS
  5.1 Practical model
  5.2 Wheels speed control

CHAPTER 5: THE EXPERIMENT RESULT AND ANALYSIS

5.4 Transmission result

Figure 5.6 Real-time database

Figure 5.6 shows the Firebase real-time database used as the intermediary between the web application and the autonomous vehicle. The current position of the autonomous vehicle is represented by latitude and longitude. The destination is represented by "0".

Figure 5.7 The location of the autonomous vehicle

Each feature was coded, tested, and debugged until its functionality was satisfactory, except for GPS navigation. During code integration, every compiler error was debugged until the whole code was error-free, and modifications were made to meet this project's requirements. During the testing of the mobile robot sub-system, some errors were detected due to environmental factors, especially those related to GPS navigation: the signal received from the GPS is not very accurate. The localization of the autonomous vehicle is successfully updated to the cloud server every 10 seconds, and real-time monitoring of the navigation process can be viewed remotely, as shown in Figure 5.7.
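As a rough illustration of this exchange, the sketch below publishes a GPS fix to a Firebase Realtime Database over its public REST API and polls the destination field. The database URL, the node names (vehicle/latitude, vehicle/longitude, vehicle/destination), and the open access rules are hypothetical assumptions for illustration; only fragments of the actual schema are visible in Figure 5.6.

```python
# Minimal sketch: periodically push the vehicle's GPS fix to a Firebase
# Realtime Database via its REST API, assuming unauthenticated access.
# The URL and node names are hypothetical placeholders.
import time

import requests

FIREBASE_URL = "https://example-project-default-rtdb.firebaseio.com"  # hypothetical

def publish_position(lat: float, lon: float) -> None:
    """PATCH the current fix into the 'vehicle' node."""
    resp = requests.patch(f"{FIREBASE_URL}/vehicle.json",
                          json={"latitude": lat, "longitude": lon}, timeout=5)
    resp.raise_for_status()

def read_destination():
    """Read the destination field; the report uses '0' to mean 'none set'."""
    resp = requests.get(f"{FIREBASE_URL}/vehicle/destination.json", timeout=5)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    while True:
        lat, lon = 10.8506, 106.7719  # placeholder fix; real values come from the GPS receiver
        publish_position(lat, lon)
        print("destination:", read_destination())
        time.sleep(10)  # the report states a 10-second update period
```

A web application can read the same nodes to plot the vehicle on the map, which matches the intermediary role described for the server above.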
5.5 Image processing and tracking results

5.5.1 Distance estimation

To identify people, we use the Pi camera and photoelectric sensors. The detection range of the photoelectric sensor extends up to 50 cm. The horizontal field of view of the camera is about 62.2 degrees (Figure 5.8). If the camera detects a person in this area, the autonomous vehicle stops. How a person is identified, and how the person's distance to the vehicle is estimated, is described in section 4.2.1.

Figure 5.8 Viewpoint of the camera in the Oxy plane

In order to capture a whole person in the frame from a distance of 40 cm, we experimented and selected an appropriate height and inclination for the camera. The camera is placed at a height of 0.5 m and inclined at an angle to the x-axis (Figure 5.9).

Figure 5.9 Camera tilt relative to the x-axis

Figure 5.10 shows a picture obtained from the Raspberry Pi camera. We use YOLOv4-tiny to detect people and return their bounding boxes. In this frame, one person is estimated to be 94 cm from the camera and the other 194 cm.

Figure 5.10 Image captured by the Raspberry Pi camera

The distance obtained from the algorithm is quite accurate, deviating from reality by about ±10 cm. If a person is within the range of the camera, the robot stops, as shown in Figure 5.11.

Figure 5.11 Camera detects people in range
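The detailed distance computation lives in section 4.2.1, which is not part of this extract. A minimal sketch of the common triangle-similarity approach for monocular distance estimation from a bounding box is given below; the calibration constants (assumed person height and focal length in pixels) are illustrative assumptions, not values from the thesis.

```python
# Sketch: estimate the distance to a detected person from the pixel height
# of a YOLO bounding box using triangle similarity. Both constants below
# are hypothetical calibration values chosen for illustration.
KNOWN_HEIGHT_CM = 170.0   # assumed average standing height of a person
FOCAL_LENGTH_PX = 510.0   # from calibration: F = (P_ref * D_ref) / KNOWN_HEIGHT_CM

def estimate_distance_cm(box_height_px: float) -> float:
    # An object of height H appears P pixels tall at distance D when
    # F = P * D / H, so the distance is D = H * F / P.
    return KNOWN_HEIGHT_CM * FOCAL_LENGTH_PX / box_height_px

# Example: a bounding box 460 px tall maps to roughly 188 cm.
print(f"{estimate_distance_cm(460.0):.0f} cm")
```

In practice the focal length would be calibrated once by photographing a person of known height at a known distance with the same camera mounting described above.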
5.5.2 Tracking lane line

In this section we present the result of each step of the image processing pipeline:

- Apply a mask to the image: Figure 5.12 Image with mask
- Convert to a grayscale image: Figure 5.13 Gray image
- Blur the image: Figure 5.14 Blurred image
- Extract the edges: Figure 5.15 Canny edges
- Select the region of interest: Figure 5.16 Region of interest
- Detect lines with the Hough transform: Figure 5.17 Image with detected lines
- Calculate the average line: Figure 5.18 Result image

The results obtained from other frames are shown in Figure 5.19.

Figure 5.19 Lane detection
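A compact sketch of this pipeline with OpenCV, in the order of Figures 5.12 to 5.18, is shown below. All numeric parameters (blur kernel, Canny thresholds, region-of-interest shape, Hough settings) are illustrative guesses rather than the thesis's tuned values, and the initial image mask of Figure 5.12 is folded into the region-of-interest step; as section 5.6.3 discusses, these parameters must be tuned to the road pattern.

```python
# Sketch of the lane-line pipeline of Figures 5.12-5.18:
# gray -> blur -> Canny -> region of interest -> Hough -> average line.
import cv2
import numpy as np

def detect_lane(frame: np.ndarray) -> np.ndarray:
    h, w = frame.shape[:2]

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blur, 50, 150)

    # Keep only a trapezoid over the lower part of the image (hypothetical shape).
    roi = np.zeros_like(edges)
    polygon = np.array([[(0, h), (w, h), (int(0.6 * w), int(0.55 * h)),
                         (int(0.4 * w), int(0.55 * h))]], dtype=np.int32)
    cv2.fillPoly(roi, polygon, 255)
    masked = cv2.bitwise_and(edges, roi)

    # Probabilistic Hough transform returns candidate line segments.
    lines = cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=100)

    out = frame.copy()
    if lines is not None:
        # Group segments by slope sign, then average each group into one line.
        left, right = [], []
        for x1, y1, x2, y2 in lines[:, 0]:
            if x2 == x1:
                continue  # skip vertical segments to avoid division by zero
            slope = (y2 - y1) / (x2 - x1)
            (left if slope < 0 else right).append((x1, y1, x2, y2))
        for side in (left, right):
            if side:
                x1, y1, x2, y2 = np.mean(side, axis=0).astype(int)
                cv2.line(out, (x1, y1), (x2, y2), (0, 255, 0), 5)
    return out
```

Averaging the Hough segments per side corresponds to the "calculate the average line" step that produces Figure 5.18.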
5.6 Result analysis

5.6.1 Overall practical model drawbacks

Within the scope of this project, the model of the autonomous vehicle is simple to build and usable, but it has several drawbacks that affect its moving behavior and the operation of the detection algorithms. One big downside is that the vehicle has no suspension fork, so shocks and vibrations occur frequently when moving on the road, which significantly affects real-time detection by the camera and webcam. Another noticeable limitation is the imbalance of the vehicle as built, which makes it hard to tune the PID controller for moving straight, even in a flat environment such as an office floor. Overall, the practical model can be further improved by adding a suspension fork to the mechanical structure and by redesigning the vehicle frame for more balance and stability; moreover, the hardware configuration could be upgraded with higher-end components if possible.

5.6.2 Human detection and distance estimation with YOLOv4-tiny

On the detection performance side:
• Accuracy: YOLOv4-tiny sacrifices some accuracy compared to the full YOLOv4 model in order to achieve faster inference and lower computational requirements.
• Object detection: YOLOv4-tiny performs real-time object detection, providing bounding box coordinates and class labels for the objects detected in an input image or video frame.
• Speed: YOLOv4-tiny is optimized for fast inference on resource-constrained devices, making it suitable for real-time applications with limited computational power.
• Localization accuracy: how precisely the locations of detected objects are identified is a critical factor in evaluating the performance of YOLOv4-tiny.

In summary, the performance and the computational requirements of this algorithm fit the needs and scope of the project well.

5.6.3 Lane line tracking algorithms

With this algorithm, the results change significantly depending on the input image. Therefore, we need to adjust the parameters of the algorithm to achieve the best performance. Here are the key factors we found when analyzing the accuracy of lane detection using the Hough transform:
• Environment: the environment greatly affects the algorithm; examples include brightness, complex road layouts, faded or broken lane lines, and occlusion by other vehicles or objects.
• Parameter tuning: the accuracy of the Hough transform relies heavily on parameter selection. Parameters such as the minimum number of votes required for a line to be considered, the threshold for line detection, and the resolution of the parameter space all affect the accuracy of the detected lanes. Finding suitable parameter values often involves a trial-and-error process and fine-tuning for the specific dataset and application.
• Postprocessing and filtering of the lines returned by the Hough transform: postprocessing steps are typically applied to refine the detected lane lines. Filtering techniques can remove outlier lines and smooth the detected lanes. These postprocessing steps play a crucial role in improving the accuracy of the lane detection results.

It is essential to assess the accuracy of a lane detection system using appropriate metrics such as Intersection over Union (IoU) or the proportion of correctly recognized lanes. These metrics help evaluate how effectively the Hough-transform-based lane recognition system works compared to ground truth or reference data (a minimal IoU computation is sketched at the end of this section). By understanding these factors and employing appropriate strategies, it is possible to improve the accuracy of lane detection systems based on the Hough transform. This image-processing-based algorithm is usable, but it has many downsides: when the pattern of the road changes dramatically, the lane line output is not what we expect. Therefore, we can improve the accuracy of the detection system by tuning the parameters for each road pattern, and then use the output as a dataset for deep learning algorithms such as U-Net or ENet for further development of lane line detection.
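As a concrete illustration of the IoU metric recommended above, the sketch below scores a predicted lane region against a ground-truth region, both given as binary masks. This is the generic definition of IoU, not code from the thesis.

```python
# Sketch: Intersection over Union (IoU) between a predicted lane mask and
# a ground-truth mask, both binary images of the same shape.
import numpy as np

def lane_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(pred, gt).sum() / union

# Example: two 20-row bands of width 80 that overlap over 15 rows
a = np.zeros((100, 100))
a[40:60, 10:90] = 1
b = np.zeros((100, 100))
b[45:65, 10:90] = 1
print(f"IoU = {lane_iou(a, b):.2f}")  # intersection 15 rows / union 25 rows -> 0.60
```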
CHAPTER 6: CONCLUSION

In conclusion, our university capstone project focused on creating an autonomous delivery vehicle tailored for use within our campus environment. Throughout this journey, we successfully designed, implemented, and tested a solution that addresses the unique delivery needs of our campus. This project gave us invaluable insights into the complexities and challenges involved in developing an autonomous system. We acquired and honed technical skills in diverse areas such as computer vision, sensor integration, navigation algorithms, and system integration. Moreover, this project was a tremendous learning experience in project management, teamwork, and creative problem-solving. The close collaboration among team members, effective communication, and our ability to adapt to unforeseen obstacles were integral to the successful completion of this endeavor. The level of completion of the project can be demonstrated as follows:

- First and foremost, we put considerable effort into the design and planning phase. This involved carefully conceptualizing the project, analyzing requirements, and crafting a well-defined system architecture. By setting clear goals, objectives, and project scope, we ensured that the project was on the right track from the start.
- Moving on to the development and implementation stage, we dedicated ourselves to creating the necessary software, hardware, and algorithms for the autonomous delivery vehicle. We integrated various components, including sensors, actuators, and control systems, to bring our prototype to life.
- To ensure the reliability and performance of the vehicle, we conducted thorough testing and validation. We rigorously evaluated its navigation capabilities, detection algorithms, and overall system functionality. Furthermore, testing the vehicle on our university campus provided valuable insights that helped us fine-tune the design.
- Looking ahead, we provided recommendations for future enhancements based on our findings and experience. We identified areas such as advanced navigation algorithms, improved sensor integration, and scalability to larger campuses. These suggestions will serve as valuable guidance for future iterations and potential implementation.

APPENDIX

• Stand design drawings: vehicle stand; jig
• Package position: we intend to put the package of goods at the back of the vehicle. The figure below shows the placement of the goods.

Model with the intended placement of the goods
