
Design and implementation of a self-driving car for following lane




DOCUMENT INFORMATION

Basic information

Title: Design And Implementation Of A Self-Driving Car For Following Lane
Authors: Lê Thị Kiều Giang, Nguyễn Hưng Thịnh
Advisor: Assoc. Prof. Trương Ngọc Sơn
University: Ho Chi Minh City University of Technology and Education
Major: Computer Engineering Technology
Document type: Graduation Project
Year: 2023
City: Ho Chi Minh City
Format
Pages: 89
Size: 9.12 MB

Content

MINISTRY OF EDUCATION AND TRAINING
HO CHI MINH CITY UNIVERSITY OF TECHNOLOGY AND EDUCATION
FACULTY FOR HIGH QUALITY TRAINING

GRADUATION THESIS
COMPUTER ENGINEERING TECHNOLOGY

DESIGN AND IMPLEMENTATION OF A SELF-DRIVING CAR FOR FOLLOWING LANE

ADVISOR: ASSOC. PROF. TRUONG NGOC SON
STUDENTS: LE THI KIEU GIANG, NGUYEN HUNG THINH
SKL011174

Ho Chi Minh City, June 2023

HO CHI MINH CITY UNIVERSITY OF TECHNOLOGY AND EDUCATION
FACULTY FOR HIGH QUALITY TRAINING

GRADUATION PROJECT
DESIGN AND IMPLEMENTATION OF A SELF-DRIVING CAR FOR FOLLOWING LANE

LÊ THỊ KIỀU GIANG, Student ID: 19119001
NGUYỄN HƯNG THỊNH, Student ID: 19119056
Major: COMPUTER ENGINEERING TECHNOLOGY
Advisor: TRƯƠNG NGỌC SƠN, Assoc. Prof.

Ho Chi Minh City, June 2023

THE SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness

Ho Chi Minh City, June 26, 2023

GRADUATION PROJECT ASSIGNMENT

Student name: Lê Thị Kiều Giang, Student ID: 19119001
Student name: Nguyễn Hưng Thịnh, Student ID: 19119056
Major: Computer Engineering Technology, Class: 19119CLA
Advisor: Assoc. Prof. Trương Ngọc Sơn, Phone number: 0931085929
Date of assignment: 25 February, 2023
Date of submission: 26 June, 2023

Project title: Design and implementation of a self-driving car for following lane

Initial materials provided by the advisor: documents such as papers and books related to AI and control hardware devices.

Content of the project:
- Analyze the challenges of the project
- Study the technical specifications, guiding ideas, and theoretical basis of the hardware components
- Propose the model and summarize the overall system; design the block diagram and schematic diagram
- Pre-process the data (clean data, generate object detection data)
- Configure the system and design the hardware
- Test run, check, evaluate, and adjust
- Write the report

Final product: a car model that moves along the lane and can recognize traffic signs, a final report, a video.

CHAIR OF THE PROGRAM (Sign with full name) / ADVISOR (Sign with full name)

THE SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness

Ho Chi Minh City, June 26, 2023

ADVISOR'S EVALUATION SHEET

Student name: Lê Thị Kiều Giang, Student ID: 19119001
Student name: Nguyễn Hưng Thịnh, Student ID: 19119056
Major: Computer Engineering Technology
Project title: Design and implementation of a self-driving car for following lane
Advisor: Assoc. Prof. Trương Ngọc Sơn

EVALUATION
Content of the project:
Strengths:
Weaknesses:
Approval for oral defense? (Approved or denied)
Overall evaluation: (Excellent, Good, Fair, Poor)
Mark: ………… (in words: …)

Ho Chi Minh City, June …, 2023
ADVISOR (Sign with full name)

HO CHI MINH CITY UNIVERSITY OF TECHNOLOGY AND EDUCATION
FACULTY FOR HIGH QUALITY TRAINING
THE SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness

Ho Chi Minh City, July 15, 2023

EXPLANATION OF GRADUATION PROJECT REVISIONS
MAJOR: COMPUTER ENGINEERING TECHNOLOGY

Project title: Design and implementation of a self-driving car for following lane
Student name: Lê Thị Kiều Giang, Student ID: 19119001
Student name: Nguyễn Hưng Thịnh, Student ID: 19119056
Advisor: Assoc. Prof. Dr. Trương Ngọc Sơn
Defense committee: Committee 1, Room A4-401, July 7, 2023

Explanation of revisions to the graduation project report:

1. Committee comment: The authors have mentioned that "AI system will transmit signals to the control system, including angle and speed data." Which function calculates these parameters in this system?
   Revision: Added "AI system will transmit signals to the control system, including angle and speed data. The angle data will be calculated through the PID method and the speed data will be obtained from the Linear function." in Section 4.2, page 50.

2. Committee comment: The authors should replace "the group" and "our team" with "we".
   Revision: Checked and revised; the results are presented on pages 1, 2, 3, 19, and 65.

3. Committee comment: The authors should shorten the thesis objectives and focus on the features or implementation methods of the "self-driving car for following lane" instead of researching theory or kits.
   Revision: Checked and revised; the results are presented in Sections 1.2 and 1.4.1.

4. Committee comment: Chapter 3, Design and Implementation: the authors should not introduce too much about board or kit specifications; instead, focus on the requirements of the blocks and the "methods" to process or "designs" of the blocks to meet these requirements.
   Revision: Checked and revised; the results are presented in Sections 3.3.1.1 and 3.3.2.1.

5. Committee comment: Figure 3.1: the names of the blocks should be "DC motor control" and "Servo motor control" instead of "Control motor DC" and "Control Servo".
   Revision: Checked and revised; the result is presented in Figure 3.1, page 19.

6. Committee comment: Flowcharts: the "begin" and "end" steps should be represented with terminator shapes instead of ellipses.
   Revision: Checked and revised in Figure 3.2, Figure 3.6, Figure 3.7, Figure 3.16, Figure 3.17, Figure 3.18, Figure 3.36, and Figure 3.37.

Confirmation of the Program Chair (Sign with full name) / Confirmation of the Advisor (Sign with full name) / Report team (Sign with full name)

DECLARATION

We hereby declare that this final report, "Design and implementation of a self-driving car for following lane", is our own work. The simulations and study findings are accurate and were carried out entirely under the direction of the advisor, Assoc. Prof. Truong Ngoc Son. The report does not duplicate any other sources, and it includes a variety of cited and carefully labeled reference materials. We take full responsibility for this declaration before the department, faculty, and school.

Students
LE THI KIEU GIANG
NGUYEN HUNG THINH

ACKNOWLEDGEMENTS

First, we would like to express our deepest gratitude to the School Board of the Ho Chi Minh City University of Technology and Education, as well as the Faculty for High Quality Training, for establishing wonderful conditions for us to pursue our project. In addition, we would like to express our sincere thanks to the Head of the Department, Assoc. Prof. Truong Ngoc Son, who always closely follows the learning situation and encourages and creates development opportunities for each generation of students. Last but not least, we cannot avoid mistakes due to a lack of knowledge and implementation time. We welcome your feedback and suggestions to help us improve this topic. Many thanks for all your help and regards.

Students
LE THI KIEU GIANG
NGUYEN HUNG THINH

Table 4.2: Results of deploying the lane detection model

DEPLOY DETECTION OF LANE
Model   Format   CUDA        CPU         Model size
U-net   Onnx     40-50 fps   30-40 fps   0.422M

The best results were obtained after testing the ".h5" models produced by training. By converting the file from ".h5" to ".onnx", the model size decreased from 1.4M to 0.422M, making the model light and suitable for deployment and operation.
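For illustration, this conversion step can be sketched with the tf2onnx package as below. The file names and the opset value are assumptions for the example, not values taken from the project code.

```python
# Hypothetical sketch: convert a trained Keras ".h5" lane-detection model
# to ONNX. File names and the opset value are placeholder assumptions.
import tensorflow as tf
import tf2onnx

model = tf.keras.models.load_model("lane_unet.h5")   # trained U-net checkpoint
spec = (tf.TensorSpec(model.inputs[0].shape, tf.float32, name="input"),)

# Serialize the graph to ONNX; opset 13 is widely supported by ONNX Runtime.
tf2onnx.convert.from_keras(model, input_signature=spec,
                           opset=13, output_path="lane_unet.onnx")
```

Besides the smaller file, the ONNX graph freezes the Keras model into a single static graph, which is what allows a lightweight runtime on the Jetson Nano to execute it without loading TensorFlow.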
4.4.2 Evaluation of the Yolo model

Figure 4.18 shows the result after training:

Figure 4.18: Traffic sign recognition results using the Yolo model

Table 4.3: Results of training the Yolo versions

DETECTION OF THE TRAFFIC SIGN
Model         Precision   Recall   F1_score   mAP@50   mAP@50-95
Yolov4-tiny   0.920       0.980    0.950      0.977    –
Yolov5n       0.994       0.993    0.990      0.994    0.898
Yolov7-tiny   0.987       0.992    0.990      0.993    0.876
Yolov8n       0.994       0.996    0.990      0.995    0.934

From Table 4.3, we see that Yolov8n gives high results and outperforms the previous-generation Yolo models in metrics such as Precision, Recall, F1_score, and mAP. In contrast, Yolov4-tiny gives the lowest results, but it still exceeds the safe threshold, with every index greater than 90%.

Table 4.4: Results of deploying the Yolo versions

DEPLOY DETECTION OF TRAFFIC SIGN
Model         Format   CUDA         CPU   Model size
Yolov4-tiny   Engine   21-22 fps    X     32.3M
Yolov5n       Onnx     6-7 fps      X     7.7M
Yolov5n       Engine   10-11 fps    X     7.1M
Yolov7-tiny   Onnx     3.5-4 fps    X     23.8M
Yolov7-tiny   Engine   7-8 fps      X     20M
Yolov8n       Onnx     4-4.5 fps    X     11.6M

During deployment, we used the ONNX Runtime source code to load the ".onnx" models of Yolov5 through Yolov8 onto the Jetson Nano. However, the FPS obtained this way is still quite low for use with an autonomous vehicle while it is moving. The TensorRT source code gives better results and can increase the detection performance of the Yolo versions. Yolov4-tiny obtained the highest fps and Yolov8 gave the lowest. Although Yolov4-tiny gives the lowest accuracy among the Yolo versions, the fps obtained is quite high and stable, from 24 to 25 fps.
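A minimal sketch of the ONNX Runtime loading described above is given below, assuming a 640x640 detector input; the file name and the preprocessing are illustrative assumptions rather than the project's exact code.

```python
# Hypothetical sketch: run a ".onnx" detector with ONNX Runtime on Jetson Nano.
# File name, input resolution, and preprocessing are placeholder assumptions.
import cv2
import numpy as np
import onnxruntime as ort

# Prefer the CUDA provider when available; fall back to CPU otherwise.
session = ort.InferenceSession(
    "yolov5n.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

frame = cv2.imread("frame.jpg")
blob = cv2.resize(frame, (640, 640)).astype(np.float32) / 255.0
blob = np.transpose(blob, (2, 0, 1))[np.newaxis]    # HWC -> NCHW

outputs = session.run(None, {input_name: blob})     # raw predictions
print([o.shape for o in outputs])
```

The provider list is the key design point on the Jetson Nano: the same script runs on CUDA when the GPU build of onnxruntime is installed and silently degrades to CPU otherwise, which matches the CUDA/CPU columns reported in the tables.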
4.4.3 CNN model

The result of training the CNN model:

Figure 4.19: Loss and Accuracy graph of training with the CNN model

After training for 20 epochs, the accuracy and validation accuracy values stay in the 0.97-0.99 range. Furthermore, the Loss and Validation Loss values are greatly improved: the two curves diverge only slightly and both converge toward zero, ending at values between 0.08 and 0.05.

Table 4.5: Results of classification

CLASSIFICATION
Model   Epochs   Dataset   Accuracy   Val_Accuracy   Loss    Val_Loss   Model size   Format
CNN     20       2000      0.993      0.995          0.037   0.025      5.4M         h5

From Table 4.5, we see that the CNN model gives a significantly high result, going up to 0.993 accuracy, while the Loss converges toward zero.

4.5 FPS expansion and improvement

4.5.1 Running Yolov4 and Yolov5 models by converting them to TensorRT engines on the Jetson Nano

During inference, TensorRT-based applications can outperform CPU-only systems by up to 36 times. TensorRT has a response time of less than 7 ms and can optimize for specific targets. As a result, developers may speed up inference for neural network models trained in all main frameworks, including PyTorch, TensorFlow, ONNX, and MATLAB. If you run Yolo models on the Jetson Nano without conversion or optimization, the frame rate will be very low, which is why it is always a good idea to convert or build your Yolov5 model with the TensorRT engine: once you have the TensorRT engine built from the Yolo model, you get much better performance on the Jetson Nano. Before starting the conversion of a Yolo model to a TensorRT engine, we have to install several libraries: Python modules, PyCuda, Seaborn, Torch, and Torchvision.

We start by converting the Yolo models into the TensorRT engine:

Figure 4.20: The process of converting the Yolo models into the TensorRT engine

The basic steps are: first convert the ".pt" file into a ".wts" file; then use this ".wts" file to build the engine file; finally, based on the engine file, run inference over the images.
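A minimal sketch of the first step, dumping the ".pt" checkpoint to the plain-text ".wts" weight format, is shown below. It follows the common tensorrtx gen_wts convention as we understand it; the file names, and the assumption that the Yolov5 repository is importable so the checkpoint can be unpickled, are ours.

```python
# Hypothetical sketch: dump a YOLOv5 ".pt" checkpoint to the ".wts" text
# format (one "<name> <count> <hex float32...>" line per tensor) used by
# TensorRT sample code. Assumes the YOLOv5 repo is on PYTHONPATH so the
# checkpoint unpickles; file names are placeholders.
import struct
import torch

ckpt = torch.load("yolov5n.pt", map_location="cpu")
state = ckpt["model"].float().state_dict()

with open("yolov5n.wts", "w") as f:
    f.write(f"{len(state)}\n")                 # header: number of tensors
    for name, tensor in state.items():
        values = tensor.reshape(-1).cpu().numpy()
        f.write(f"{name} {len(values)}")
        for v in values:                        # big-endian float32 as hex
            f.write(" " + struct.pack(">f", float(v)).hex())
        f.write("\n")
```

The point of the intermediate text format is that the C++ engine builder can reconstruct the network layer by layer without depending on PyTorch at all.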
Our aim in using Yolov5n, the small version of the Yolov5 model, is to avoid the large and very large Yolov5 models: they have a very large number of parameters, the complexity increases, converting them to a TensorRT engine is costly, and the resulting model is very heavy and runs very slowly. So we use the Yolov5n version to optimize fps and reduce parameters to make the model lighter.

Make a build directory within yolov5. Copy the generated ".wts" file into the build directory, then run the command $ cmake

Figure 4.21: Result of running the command $ cmake

After the cmake process completes, we run the $ make command. The result is shown in Figure 4.22:

Figure 4.22: Result of running the command $ make

To build the engine file, we use the command: $ ./yolov5_det -s yolov5n.wts yolov5n.engine n

Figure 4.23: Result of exporting to yolov5n.engine

Building this engine file can take 10 to 15 minutes to complete.

Deploying the engine file on the Jetson Nano with TensorRT: previously, we used onnx or tflite files to identify traffic signs. Because files in those formats are large, the model is heavy and the fps drops. To improve that, we use the engine file to make the model lighter and raise the fps as much as possible.

Figure 4.24: The flowchart of deploying the engine file on the Jetson Nano with TensorRT

After converting the Yolo models into TensorRT engines, we load the ".engine" file onto the Jetson Nano. While the vehicle is moving, the system recognizes any valid sign appearing in the frame, extracts its bounding box, and determines the name of the previously labeled class. It then displays the detection results and the achieved fps, and the process ends.
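As an illustration of this deployment step, the sketch below deserializes an engine and runs one inference with the TensorRT 8.x Python API and PyCUDA. The file name, the input shape, and the assumption that binding 0 is the input and the last binding is the output are ours, not the project's exact code.

```python
# Hypothetical sketch: deserialize a ".engine" file and run one synchronous
# inference (TensorRT 8.x style). Assumes binding 0 is the input and the
# last binding is the output; file name and shapes are placeholders.
import numpy as np
import pycuda.autoinit          # creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("yolov5n.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate pinned host memory and device memory for every binding.
buffers, bindings = [], []
for i in range(engine.num_bindings):
    size = trt.volume(engine.get_binding_shape(i))
    host = cuda.pagelocked_empty(size, np.float32)
    dev = cuda.mem_alloc(host.nbytes)
    buffers.append((host, dev))
    bindings.append(int(dev))

image = np.random.rand(1, 3, 640, 640).astype(np.float32)   # placeholder frame
np.copyto(buffers[0][0], image.ravel())
cuda.memcpy_htod(buffers[0][1], buffers[0][0])       # host -> device
context.execute_v2(bindings)                         # synchronous inference
cuda.memcpy_dtoh(buffers[-1][0], buffers[-1][1])     # device -> host
print(buffers[-1][0][:10])                           # raw detection output
```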
4.5.2 Improving the detection accuracy of Yolo

To improve accuracy when deploying Yolo on the Jetson Nano, many methods have been suggested by researchers. After reading and studying some of those improvement methods, we propose using a Classification stage to improve detection accuracy. Figure 4.25 shows the process of combining Yolo with Classification:

Figure 4.25: The flowchart of combining Yolo with Classification

We still use Yolo to detect the object and obtain the bounding box coordinates. The image is then cropped with those coordinates, and the Classification model is used to classify the cropped image. The result of the detection process is the class of the detected object together with its accuracy.

For comparison, the detection process using only Yolo is presented in Figure 4.26:

Figure 4.26: The flowchart of Yolo without Classification

Unlike the method combining Yolo with Classification, after drawing the bounding box this process directly detects and displays the class of the detected object and its accuracy.
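The combination step can be sketched as below. The helper functions run_yolo and cnn_classify and the 32x32 classifier input are hypothetical placeholders, not names from the project code.

```python
# Hypothetical sketch of the Yolo + Classification combination: crop each
# Yolo bounding box and re-classify it with the small CNN. `run_yolo` and
# `cnn_classify` are placeholder callables, not project functions.
import cv2
import numpy as np

def refine_detections(frame, run_yolo, cnn_classify, input_size=32):
    """Return (class_name, confidence, box) tuples refined by the CNN."""
    results = []
    for (x1, y1, x2, y2), _yolo_cls, _yolo_conf in run_yolo(frame):
        crop = frame[y1:y2, x1:x2]                   # cut out the sign
        if crop.size == 0:                           # skip degenerate boxes
            continue
        crop = cv2.resize(crop, (input_size, input_size))
        crop = crop.astype(np.float32)[np.newaxis] / 255.0
        cls_name, cls_conf = cnn_classify(crop)      # CNN has the final say
        results.append((cls_name, cls_conf, (x1, y1, x2, y2)))
    return results
```

The design trade-off is visible in the tables that follow: the extra CNN pass buys classification accuracy at the cost of a few fps per frame.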
Table 4.6: Results of deploying lane and traffic sign detection

DEPLOY DETECTION LANE AND TRAFFIC SIGNS
Model                       CUDA         CPU         Format
U-net                       40-50 fps    30-40 fps   Onnx
CNN                         40-50 fps    30-40 fps   Onnx
Yolov4-tiny                 21-22 fps    X           Engine
Yolov5n                     10-11 fps    X           Engine
CNN + Yolov4-tiny           21-22 fps    X           Onnx + Engine
U-net + Yolov4-tiny         21-22 fps    X           Onnx + Engine
U-net + CNN + Yolov4-tiny   16-17 fps    X           Onnx + Onnx + Engine
CNN + Yolov5n               9.5-10 fps   X           Onnx + Engine
U-net + Yolov5n             9.5-10 fps   X           Onnx + Engine
U-net + CNN + Yolov5n       9.5-10 fps   X           Onnx + Onnx + Engine

From the evaluation results of the previous Yolo versions, we considered Yolov4 and Yolov5 for the project and for running in practice. Both the U-net and CNN models give quite high and stable fps on both GPU (CUDA) and CPU, because both are very light. However, combining the models at the same time reduces the fps from 50 to 19 with Yolov4-tiny, and from 50 to 9.5 with Yolov5n. Based on all the collected data, we decided to apply the Yolov4-tiny, U-net, and CNN models, because together they give quite high fps, around 19-20 fps, on a stable power source.

After improving the model, we collected the results of running Yolo recognition combined with classification and segmentation to identify the steering angle, shown in Figure 4.27:

Figure 4.27: Results of the combination of Yolo, classification, and segmentation

The status now shows 18 to 19 fps, which is very high compared to the original of just under 10 fps. Accuracy is above the threshold of 0.9, and the class names are displayed mostly correctly. The applied PID method is expressed through its output parameters: a speed of 60 and a steering angle of 90, respectively. The wheel steering angle is set to range from 50 to 120 degrees, where 90 degrees keeps the car running in a straight line, an angle below 90 steers the car to the left, and a larger angle steers it to the right.
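A minimal sketch of this control scheme, a PID steering controller plus a linear speed function, is given below. The gains, the sample time, and the slope of the linear mapping are illustrative assumptions, not the tuned values of the project; only the 50-120 degree range, the 90-degree neutral angle, and the base speed of 60 come from the text above.

```python
# Hypothetical sketch of the control step: a PID controller converts the
# lane-center offset into a steering angle (clamped to 50-120 degrees,
# 90 = straight), and a linear function derives speed from the deviation.
# All gains and constants here are illustrative assumptions.

class PID:
    def __init__(self, kp=0.35, ki=0.0, kd=0.12, dt=1 / 20):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def steering_angle(pid, lane_offset_px):
    """Map the lane-center offset (pixels) to a servo angle in [50, 120]."""
    angle = 90 + pid.update(lane_offset_px)    # 90 degrees = straight ahead
    return max(50, min(120, angle))


def speed_from_angle(angle, base_speed=60, gain=0.5):
    """Linear speed function: slow down as the car steers harder."""
    return max(0, base_speed - gain * abs(angle - 90))


pid = PID()
ang = steering_angle(pid, lane_offset_px=24)   # lane center 24 px to the right
print(ang, speed_from_angle(ang))
```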
Furthermore, the deployment figures above were obtained with the models running on a stable power source and in a stable environment. In actual motion, the fps ranges from 16-17 fps instead of 18-19 fps, because of problems such as camera shake, noise, glare, false recognition, and the complexity of the control program, all of which make the fps drop.

CHAPTER 5: CONCLUSION AND FUTURE WORKS

5.1 Conclusion

Despite numerous challenges in studying, familiarizing ourselves with, and understanding the issue, we can see that the project is feasible and can be accomplished within the time and conditions for project approval, and its objectives were met. The report accomplished the following tasks:
- Learned and applied the control algorithm to set the steering angle and speed of the vehicle
- Used cameras in the mapping process during movement and navigated the vehicle by following the signs
- Integrated an embedded computer into the vehicle to reduce its size and increase its flexibility
- Applied the knowledge learned in the construction process and hardware design

In addition, there are tasks that the report has not completed well:
- Accuracy in sharp or sudden turns is not stable
- FPS has not been improved when deploying Yolov5 through Yolov8 on the Jetson Nano

5.2 Future works

The current research serves as the foundation and premise for effectively building and developing the topic at a higher level: improving the system in general and making it more stable in a variety of scenarios. To improve the stability of the self-driving car's positioning, we can train the models further so that the output quality is higher, allowing us to compute a more precise position for the moving automobile.

APPENDIX

Code for the system: https://github.com/nghungthinh/Self_driving_car

