MINISTRY OF EDUCATION AND TRAINING
HO CHI MINH CITY UNIVERSITY OF TECHNOLOGY AND EDUCATION

GRADUATION PROJECT
AUTOMOTIVE ENGINEERING

DESIGN START STOP CIRCUIT THROUGH TRAFFIC DETECTION

LECTURER: Assoc. Prof. DO VAN DUNG
STUDENTS: LE CHAN PHAM, VO HUY VU
SKL 010586

Ho Chi Minh City, December 2022

HO CHI MINH CITY UNIVERSITY OF TECHNOLOGY AND EDUCATION
FACULTY FOR HIGH QUALITY TRAINING

GRADUATION PROJECT
DESIGN START STOP CIRCUIT THROUGH TRAFFIC DETECTION

Students: Le Chan Pham (ID: 18145045), Vo Huy Vu (ID: 18145080)
Major: AUTOMOTIVE ENGINEERING
Advisor: Assoc. Prof. Do Van Dung

Ho Chi Minh City, 24 December 2022

THE SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness

Ho Chi Minh City, December 2022

GRADUATION PROJECT ASSIGNMENT

Student name: LE CHAN PHAM, Student ID: 18145018
Student name: VO HUY VU, Student ID: 18145030
Major: Automotive engineering technology
Advisor: Assoc. Prof. DO VAN DUNG, Phone number: 0966879932
Date of assignment: October 2022
Date of submission: December 2022
Project title: Design Start Stop circuit through object detection
Equipment: Laptop with GPU, HD camera, Arduino UNO
Content of the project: Research convolutional neural networks and the YOLO algorithm, train a YOLO model, evaluate the results, and use the output to control an Arduino.
Final product: A traffic light detection system that works on a webcam, videos and images.

CHAIR OF THE PROGRAM (Sign with full name)
ADVISOR (Sign with full name)

THE SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness

Ho Chi Minh City, December 2022

ADVISOR'S EVALUATION SHEET

Student name: LE CHAN PHAM, Student ID: 18145045
Student name: VO HUY VU, Student ID: 18145080
Major: Automotive engineering technology
Project title: Design Start Stop circuit through object detection

EVALUATION
Content of the project:
Strengths:
Weaknesses:
Approval for oral defense? (Approved or denied)
Overall evaluation: (Excellent, Good, Fair, Poor)
Mark: ……… (In words: ………)

Ho Chi Minh City, … month … day … year …
ADVISOR (Sign with full name)

THE SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness

Ho Chi Minh City, December 2022

PRE-DEFENSE EVALUATION SHEET

Student name: LE CHAN PHAM, Student ID: 18145045
Student name: VO HUY VU, Student ID: 18145080
Major: Automotive engineering technology
Project title: Design Start Stop circuit through object detection
Name of Reviewer: ………

EVALUATION
Content of the project:
Strengths:
Weaknesses:
Approval for oral defense? (Approved or denied)
Overall evaluation: (Excellent, Good, Fair, Poor)
Mark: ……… (In words: ………)

Ho Chi Minh City, … month … day … year …
REVIEWER (Sign with full name)

THE SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness

Ho Chi Minh City, December 2022

EVALUATION SHEET OF DEFENSE COMMITTEE MEMBER

Student name: LE CHAN PHAM, Student ID: 18145045
Student name: VO HUY VU, Student ID: 18145080
Major: Automotive engineering technology
Project title: Design Start Stop circuit through object detection
Name of Defense Committee Member: ………

EVALUATION
Content of the project:
Strengths:
Weaknesses:
Approval for oral defense? (Approved or denied)
Overall evaluation: (Excellent, Good, Fair, Poor)
Mark: ……… (In words: ………)

Ho Chi Minh City, … month … day … year …
COMMITTEE MEMBER (Sign with full name)

DISCLAIMER

The authors, Le Chan Pham and Vo Huy Vu, confirm that the work presented in this thesis is our own. All data and statistics in the thesis are reliable and have not been published in any previous study or research. Where information has been derived from other sources, we confirm that this is indicated in the thesis.

ACKNOWLEDGEMENTS

Throughout our studies and the graduation process, our team was cared for, guided and assisted by the teachers of the Faculty for High Quality Training, and supported by friends and colleagues.

First and foremost, we want to thank the Board of Directors of Ho Chi Minh City University of Technology and Education for providing good facilities, modern equipment and a library with a wide variety of documents, which makes it convenient for students to research information.

We would like to express our gratitude to our advisor, Assoc. Prof. Do Van Dung, for assisting and guiding us in completing this project.

Because of the team's limited experience, this study will contain errors. We look forward to feedback and advice from the professors to help us improve our report.

Sincerely, thank you!

Ho Chi Minh City, 24 December 2022

CONTENTS

DISCLAIMER
ACKNOWLEDGEMENTS
CONTENTS
ABSTRACT
CHAPTER 1: INTRODUCTION
1.1 Reasons for choosing the topic
1.2 Scope of research
1.3 Project structure
1.4 Limitations of the thesis
CHAPTER 2: FUNDAMENTALS
2.1 Overview of the traffic light system
2.2 Overview of the engine Start-Stop system
2.2.1 How does the engine Start-Stop system work?
2.2.2 What are the benefits of Stop-Start?
2.2.3 What are the downsides of Stop-Start?
2.3 Introduction to Deep Learning
2.3.1 What is Deep Learning?
2.3.2 The difference between Machine Learning and Deep Learning
2.3.3 Some neural networks in Deep Learning
2.4 Overview of Convolutional Neural Networks in image classification
2.4.1 What is a Convolutional Neural Network?
2.4.2 Convolutional Neural Network architecture
2.5 ResNet
2.6 How to detect an object
2.7 Introduction to some object detection algorithms
2.7.1 R-CNN (Region-based Convolutional Neural Networks)
2.7.2 Fast R-CNN
2.7.3 Faster R-CNN
2.7.4 SSD (Single Shot MultiBox Detector)
CHAPTER 3: YOLO ALGORITHM MODEL
3.1 What is YOLO?

Figure 5.4. A label file of an image

In this format, each line describes one bounding box: the first value is the class index, and the four remaining values are the center coordinates and the width and height of the bounding box, normalized by dividing by the width and height of the image, so they always lie in the range [0, 1]. If an image contains several bounding boxes, the annotation file contains several lines, one line per bounding box.

5.1.2 Training YOLO on Google Colab:

After creating a Colab notebook, we mount Google Drive to the notebook and upload the training data prepared in the previous step to Google Drive. We then run the following scripts step by step on Colab:

Figure 5.5. Cloning YOLOv7 from GitHub
Figure 5.6. Installing the necessary libraries
Figure 5.7. Trying detection with the pretrained weights
Figure 5.8. Unzipping the training data from Drive
Figure 5.9. Reorganizing the training data folder
Figure 5.10. Starting to train YOLO

This step takes a long time to run. After 200 epochs the mAP value was about 0.95, so we decided to stop training and try detection on an example.

Figure 5.11. Trying detection after training and printing out the result

5.1.3 Operating YOLO on Windows:

The next step is to download the trained YOLO files and create an environment for running them. We then open the YOLO path and run the following scripts in the Anaconda prompt.

Figure 5.12. Opening the YOLO path and importing libraries
Figure 5.13. Starting detection with the trained model by running the detect.py file
Figure 5.14. Real-time detection with an image on a mobile phone
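The label format described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the project: the example label line and the 640x480 image size are made up, but the parsing follows the format as described (class index first, then four values normalized to [0, 1]).

```python
def parse_yolo_label(line, img_w, img_h):
    """Parse one line of a YOLO annotation file.

    Format: "<class> <x_center> <y_center> <width> <height>",
    where the four box values are normalized to [0, 1].
    Returns the class index and the box in pixel coordinates
    (x_min, y_min, x_max, y_max).
    """
    parts = line.split()
    cls = int(parts[0])
    xc, yc, w, h = (float(v) for v in parts[1:5])
    # Undo the normalization by the image width and height.
    x_min = (xc - w / 2) * img_w
    y_min = (yc - h / 2) * img_h
    x_max = (xc + w / 2) * img_w
    y_max = (yc + h / 2) * img_h
    return cls, (x_min, y_min, x_max, y_max)

# Example: one hypothetical label line for a 640x480 image.
cls, box = parse_yolo_label("1 0.5 0.5 0.25 0.5", 640, 480)
print(cls, box)  # class 1, box (240.0, 120.0, 400.0, 360.0)
```

A file with several lines would simply be parsed line by line, one bounding box per line, as described above.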
5.1.4 Connecting Arduino to Python:

The next step is to connect the Arduino so that it can be driven from the Python program. We have to upload the StandardFirmata example sketch to the Arduino.

Figure 5.15. Uploading the example sketch to the Arduino
Figure 5.16. Connecting the Arduino to Python
Figure 5.17. Logic code for controlling the Arduino

5.2 Final results:

The results after training are very positive: we have tried our model under many conditions, and it works well with high efficiency. After training, we first tested images from the internet, and the model detected them accurately.

Figure 5.18. Detecting an image from the internet
Figure 5.19. Detecting an image from the internet

Our model can detect objects even when they are small and far away, in dark conditions or with lens flare.

Figure 5.20. Detecting small and distant objects
Figure 5.21. Detection in low-light conditions
Figure 5.22. Detection with lens flare
Figure 5.23. Detection on a video
Figure 5.24. Detection on a video with obstacles
Figure 5.25. Detection in real time

5.3 Conclusion

5.3.1 Strengths:
⦁ Makes good predictions in real time thanks to a high-speed detection model.
⦁ Can detect objects at a high frame rate, up to 60 fps.
⦁ The predicted bounding boxes locate objects accurately.
⦁ YOLO predictions on 832x832 inputs are more accurate than on 320x320 and 416x416 inputs, especially for small objects far away from the camera.
⦁ YOLO detects correctly in low-light, overly bright and backlit environments, and when objects are partly covered.
⦁ Achieves high speed and accuracy when detecting while moving on the road.

5.3.2 Weaknesses:
⦁ Training takes many hours for 200 epochs.
⦁ The model is large and heavy, so our laptop runs hot and slow; a computer with a GPU and strong hardware is needed to run it.
⦁ The system is not portable: the model runs on a Windows laptop, which is inconvenient to carry on the road, and including a connected device in the electrical system of a complete vehicle imposes many additional requirements.
⦁ The model cannot detect reliably when the vehicle moves too fast, because the camera has no anti-vibration function.
⦁ The model cannot yet detect traffic jams, so it cannot decide to control the engine in that situation.

5.4 Future development

Based on the traffic light detection model, we can combine it with other systems, such as traffic sign detection, and use the output to notify the driver through the vehicle's audio system. Besides, if we train a model on another dataset of vehicles, it will be able to distinguish between a traffic jam and waiting at a red light.

5.4.1 Hardware:

If all requirements are met, we will be able to connect the model to the electrical system of a complete vehicle. First of all, the size of the laptop is a problem. A laptop running a heavy model needs a lot of energy, so it cannot be used in the long term. The model has to run on a computer that is as small as possible and consumes little power, yet is powerful enough, with enough GPU memory, for an object detection model. The solution is to use a small but powerful computer such as a Jetson Nano or a Raspberry Pi instead.

Secondly, vibration when the vehicle runs at high speed prevents the model from making good predictions. This is the most difficult problem, because vibration is affected by many different factors. One way to reduce it is to use a sports camera with an anti-vibration function.

In terms of compatibility, the electronic devices on a vehicle operate under a variety of conditions, especially dust, moisture and fog, so the camera must be mounted in a position that gives the best view of the scene used as input. When connecting a new model into the system of a real vehicle, it is therefore necessary to consider the factors that may affect this device. The equipment must be fully compatible with the other systems on the vehicle, with reasonable design changes made to ensure stable operation of the device while the vehicle is running.
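Any such on-vehicle integration would still use the detector-to-Arduino link from section 5.1.4. The sketch below illustrates one way that link could be organized; it is an illustration under assumptions, not the project's exact code: pyFirmata drives the board (as with the StandardFirmata setup in 5.1.4), digital pin 7 is a hypothetical output for a start-stop relay, and the class name "red_light" and the stop rule are ours.

```python
# Decision logic: map the detector's output classes to an engine command.
# The class name "red_light" and the rule are illustrative assumptions,
# not the exact labels used by the trained model.
def engine_should_stop(detected_classes):
    """Return True if the start-stop system should cut the engine."""
    return "red_light" in detected_classes

def run_control_loop(board, detections_stream, pin=7):
    # 'board' is a pyfirmata.Arduino instance; pin 7 is a hypothetical
    # digital output assumed to drive the start-stop relay.
    for detected_classes in detections_stream:
        board.digital[pin].write(1 if engine_should_stop(detected_classes) else 0)

# Hardware setup (requires StandardFirmata on the Arduino, as in 5.1.4):
#   from pyfirmata import Arduino
#   board = Arduino("COM3")  # the serial port name is machine-specific
#   run_control_loop(board, yolo_detection_stream)
```

Keeping the decision rule separate from the board wiring would also make it easy to extend the rule later, for example once the model can distinguish a traffic jam from waiting at a red light.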
5.4.2 Software:

Thanks to its advantages over other models, the YOLO family of object detection algorithms is one of the preferred choices in many major areas where performance, accuracy and precision are required. The development of object detection algorithms, along with the simplification of the prediction process, has therefore been invested in for many years.