
Design of an automated plant disease detection and diagnosis system


MINISTRY OF EDUCATION AND TRAINING
HO CHI MINH CITY UNIVERSITY OF TECHNOLOGY AND EDUCATION

GRADUATION THESIS
AUTOMATION AND CONTROL ENGINEERING TECHNOLOGY

DESIGN OF AN AUTOMATED PLANT DISEASE DETECTION AND DIAGNOSIS SYSTEM

LECTURER: TRAN VU HOANG, Ph.D.
STUDENTS: NGUYEN HUU THANG, TRAN BINH TRONG
SKL 011308
Ho Chi Minh City, July 2023

HO CHI MINH CITY UNIVERSITY OF TECHNOLOGY AND EDUCATION
FACULTY FOR HIGH QUALITY TRAINING

GRADUATION PROJECT
DESIGN OF AN AUTOMATED PLANT DISEASE DETECTION AND DIAGNOSIS SYSTEM

Student 1: NGUYEN HUU THANG (Student ID: 19151080)
Student 2: TRAN BINH TRONG (Student ID: 19151090)
Major: AUTOMATIC AND CONTROL ENGINEERING
Advisor: TRAN VU HOANG, Ph.D.
Ho Chi Minh City, July 2023

ACKNOWLEDGEMENTS

First, we would like to express our gratitude to our families for giving us the opportunity to study at HCMC University of Technology and Education. Next, we would like to thank the professors at HCMC University of Technology and Education for their invaluable guidance and for imparting both academic and social knowledge. Their unwavering support has been instrumental in overcoming challenges throughout our educational journey and in expanding our understanding. We extend our heartfelt gratitude to our classmates, both inside and outside the classroom, as well as to the senior students who generously provided assistance and support during this time. The solid foundation of our knowledge was established through the guidance of our teaching faculty, friends, and seniors, which greatly facilitated the development of a successful graduation project under the supervision of our advisor. We would like to express our sincere gratitude to Dr. Tran Vu Hoang, a lecturer at the Department of Electrical - Electronics Engineering, HCMC University of Technology and Education. Dr. Hoang provided invaluable guidance throughout this graduation project, offering insightful suggestions and imparting previously unknown knowledge, which greatly contributed to the
successful completion of the project. Additionally, we extend our thanks to the members of the UTE-AI Laboratory for their wholehearted support during the finalization of this project. The process of researching and developing this project is not immune to mistakes, so we sincerely hope to receive valuable feedback from esteemed faculty members, the thesis defense committee, and our peers. This feedback will enable us to further improve the project and apply the findings in practical scenarios, with the ultimate aim of alleviating the hardships faced by farmers. We express our heartfelt appreciation to everyone!

ABSTRACT

The greenhouse tomato farming model was developed to address low crop yields caused by the external environmental impacts of climate change. However, the model still faces several challenges, such as the need for a large horticultural workforce and the suboptimal use of pesticides. Using manual labor to tend each plant area in the garden is not optimal, because diseases on tomato plants, especially brown spot disease, spread very rapidly. To address this labor problem, researchers have proposed solutions such as real-time disease recognition algorithms integrated into drones. However, the practical implementation of these solutions on tomato plants has so far not proven effective. According to the research team's survey, previous real-time disease identification projects on tomato plants mainly used different CNN architectures for research and development purposes. Disease recognition with those network structures is still limited in speed, and no specific practical model has been proposed for an application. To tackle the challenges of recognition speed, accuracy, and practicality in a greenhouse environment, our research team presents a novel system that leverages mechanically moving axes, similar to a CNC machine. These axes are seamlessly
integrated with a webcam, enabling flexible movement and monitoring across multiple locations for automated real-time disease identification and warnings. The system is specifically designed to accurately pinpoint the affected areas of plants and display the corresponding disease types. Notably, one of the primary advantages of this system is its capability to operate continuously and at high speed over prolonged durations. The system employs a YOLOv8n-based detection model trained on a dataset of 10 common tomato plant diseases from Kaggle, supplemented with manually collected data. The achieved results demonstrate high accuracy in detecting tomato leaf diseases, with an average precision of 92.6% and a speed of 18-24 FPS on the Jetson Nano Developer Kit (with a 128-core NVIDIA Maxwell architecture-based GPU).

CONTENT PAGE

ACKNOWLEDGEMENTS
ABSTRACT
CONTENT PAGE
LIST OF ABBREVIATIONS
LIST OF TABLES
LIST OF FIGURES, GRAPHS
Chapter 1. SUMMARY
1.1 Introduction
1.2 The Objective of the Research Project
1.3 Research and Contents
1.4 Limitation of Project
1.5 Subject and Scope of Research
1.6 Research Methodology
1.7 Structure of Project
Chapter 2. OVERALL RESEARCHING
2.1 Growing conditions of tomato plants in a greenhouse model
2.1.1 Soil Quality and Composition
2.1.2 Temperature and climate
2.1.3 Water
2.1.4 Nutrient management
2.1.5 Pest and Disease Control
2.2 Introduction to Object Detection
2.2.1 Overview of YOLO (You Only Look Once)
2.2.2 YOLOv1
2.2.3 YOLOv2
2.2.4 YOLOv3
2.2.5 YOLOv4
2.2.6 YOLOv5
2.2.7 YOLOv6
2.2.8 YOLOv7
2.2.9 YOLOv8
2.3 Mechanical Structure and Controller Methods
Chapter 3. SYSTEM DESIGN
3.1 The requirements imposed on the system
3.1.1 Block diagram of the system and explanation
3.1.2 Functions and Hardware used in CAMERA block
3.1.3 Main Processor and Communication Unit
3.1.4 Detection Software and Hardware used
3.1.5 Actuators
3.1.6 HMI Block and Hardware
3.2 Overall Operation of System
3.3 Electrical Cabinet
Chapter 4. EXPERIMENTAL RESULTS
4.1 Evaluation Criteria
4.2 Testing Environment
4.3 Evaluation Methodology
4.3.1 Evaluating on Kaggle Dataset and Fine-Tune Dataset
4.3.2 Assessing the Autonomous Operation of the Mechanical Model without Object Detection
4.3.3 Assessing the Autonomous Operation of the Mechanical Model with Object Detection
4.4 Experimental Procedure
4.4.1 Evaluate the YOLOv8n Object Detection Model on Two Datasets
4.4.2 Evaluation of the Response of the Mechanical System
4.4.3 HMI Design
4.4.4 Evaluating All Functions of the System during Operation
Chapter 5. CONCLUSION AND DEVELOPMENT
5.1 Conclusion
5.2 Development
REFERENCES

Evaluation of the training dataset:

Precision and Recall: precision measures the ratio of accurate predicted bounding boxes to the total number of predicted bounding boxes, as shown in Formula 4.1. Recall measures the ratio of true positive bounding boxes found to the total number of actual bounding boxes in the dataset, as shown in Formula 4.2.

Precision = TP / (TP + FP)   (4.1)
Recall = TP / (TP + FN)   (4.2)

In (4.1) and (4.2), if we consider a disease A on tomato leaves, true positive (TP) is the number of leaves with disease A that the system detects correctly; false positive (FP) is the number of leaves that do not have disease A but that the system recognizes as having it; and false negative (FN) is the number of leaves with disease A that the system fails to detect.

Evaluation of the test dataset:

Intersection over Union (IoU) score: measures the overlap ratio between the predicted bounding box and the ground truth bounding box, calculated as the intersection area divided by the union area of the two boxes, as shown in Formula 4.3.

IoU = Area of Overlap / Area of Union   (4.3)

In (4.3):
- Area of Overlap is the area shared by the predicted bounding box and the ground truth
bounding box. In object detection tasks, the predicted bounding box represents the region proposed by the algorithm as containing the object of interest, while the ground truth bounding box represents the actual location of the object in the image. The Area of Overlap is the region where these two boxes intersect.
- Area of Union represents the total area covered by both the predicted bounding box and the ground truth bounding box. It is the combined area of the two boxes, counting their overlapping region only once.

Average Precision (AP) is calculated by Formula 4.4, which measures the accuracy of the model at different IoU thresholds:

AP = Σ_n [Recall(n) - Recall(n-1)] × Precision(n)   (4.4)

where N is the number of IoU thresholds and n = 1, 2, ..., N. Recall(n) and Precision(n) are the recall and precision calculated at the nth IoU threshold (a detection is considered true when its IoU score is greater than the given threshold, and false otherwise).

Mean Average Precision (mAP) is the overall accuracy across all classes in the test dataset, as calculated in Formula 4.5:

mAP = (1/C) × Σ_i AP_i   (4.5)

where:
- C is the number of classes
- AP_i is the Average Precision of the ith class

4.3.2 Assessing the Autonomous Operation of the Mechanical Model without Object Detection

To evaluate the mechanical structure during actual operation, the following criteria are considered:
- Response time to commands issued by the operator in Auto/Manual mode.
- Response time to position changes when the model is running in real time, as displayed on the control screen.

4.3.3 Assessing the Autonomous Operation of the Mechanical Model with Object Detection

The evaluation team employed the following combined criteria:
- The number of trees detected within a frame, along with the accuracy of the recognition model.
- Feedback control during the experimental operation of the mechanical structure.

4.4 Experimental Procedure

In this part, the research team will
evaluate all the criteria mentioned in Section 4.3 for the whole operating model.

4.4.1 Evaluate the YOLOv8n Object Detection Model on Two Datasets

After training the tomato leaf disease detection and classification model using YOLOv8n, training and prediction were performed on the test split of the Kaggle dataset; the results are shown in Table 4.1. We can observe that the YOLOv8 model exhibits better overall performance than the other two models. Despite a precision 0.011 lower than YOLOv7's, it compensates with a significantly higher speed (39 FPS versus 31 FPS).

Table 4.1: Performance on the Kaggle dataset (all classes)

Model     Precision   Recall   mAP (IoU=0.5)   mAP (IoU=0.5-0.95)   FPS (average)
YOLOv5    0.892       0.901    0.928           0.658                28
YOLOv7    0.964       0.931    0.956           0.661                31
YOLOv8    0.953       0.960    0.985           0.699                39

The training outcomes of the YOLOv8 model on the dataset fine-tuned with the real-world data collected as described in Section 4.2 are presented in Table 4.2. In this table, the precision of YOLOv8 is still lower than that of YOLOv7, but its speed is higher. Collecting data from real plants poses challenges in capturing the various types of diseases, so the research team also used image data obtained from different sources on the Internet for fine-tuning.

Table 4.2: Performance on the fine-tune dataset (all classes)

Model     Precision   Recall   mAP (IoU=0.5)   mAP (IoU=0.5-0.95)   FPS (average)
YOLOv5    0.831       0.729    0.83            0.66                 28
YOLOv7    0.964       0.931    0.92            0.658                30
YOLOv8    0.953       0.920    0.901           0.682                39

4.4.2 Evaluation of the Response of the Mechanical System

The team ran the model and recorded the response time, the position accuracy, and the delay between the actual position and the position transmitted and displayed on the control screen, as shown in Table 4.3.

Table 4.3: Mechanical System Evaluation

Evaluation Criteria   Auto      Manual
Response Time         0.8 s     0.8 s
Position Accuracy     ±2.8 mm   ±2.8 mm

The system has a
delay of approximately 0.8 seconds whenever a new task is issued, primarily due to communication latency. The serial exchange is effectively half-duplex: only one device transmits at a time. For example, whenever a journey requested by the Jetson Nano is completed, the Arduino MEGA2560 sends a feedback signal back to the Jetson Nano before the next command. When the Jetson Nano receives the signal indicating completion of the journey, it enters its serial interrupt state and immediately resumes operation. Similarly, the Arduino MEGA must return to a state ready to receive new requested positions. Even after receiving the signal, it still takes some time (approximately 0.6 seconds) to parse the string data and extract the position requirements for each axis and the camera's viewing direction; only then can the command be executed.

The model as a whole is shown in Figure 4.3. The travel of the X-axis and Y-axis is 82.7 cm and 54.2 cm, respectively, allowing two parallel rows of plants with two pots per row, which is suitable for the experimental process. The electrical cabinet is built as illustrated in Figures 4.4 and 4.5, showing the exterior and interior components. The external control section provides basic functions: a green button to activate the entire system, a red button to stop it, and an emergency button for emergency stoppage and power cutoff. Additionally, three status indicator lights were added: red indicates the system is connected to the 220 V supply, yellow indicates the MCB is closed, and green indicates the system is operating.

Figure 4.3: Practice Model
Figure 4.4: Surface of Electrical Cabinet
Figure 4.5: Inside of Electrical Cabinet

4.4.3 HMI Design

In this part, the team designed a user control interface to communicate between
the model and the manager, with the following features:

Startup interface (Figure 4.6): offers the operator two independent functions, Checking All and Setpoint. Checking All checks the entire system with feedback to ensure stability before running the leaf disease detection model. Setpoint returns the mechanism to the starting point in its original state, skipping the system check step; when using this function, the user must ensure the mechanism has no problems.

Main console (Figure 4.7): after completing the system check (Checking All) and returning to the initial point (Setpoint) without any problems, the system allows control to begin. The main interface provides two modes, Auto and Manual:
- In Auto mode (Figure 4.8), the system automatically follows a pre-set routine according to the tomato bed's layout and scans all views for warnings. After completing the journey, the webcam returns to its original position.
- In Manual mode (Figure 4.9), the system displays function keys for user control: slide bars to set the X- and Y-axis positions, a control to raise or lower the webcam, and a selector for the webcam's viewing zones.

Figure 4.6: Start Screen
Figure 4.7: Main Control Screen
Figure 4.8: Auto Mode
Figure 4.9: Manual Mode

4.4.4 Evaluating All Functions of the System during Operation

In this section, the team assesses the entire system using the criteria combined from Sections 4.4.1 and 4.4.2. The evaluation criteria for the recognition model are applied, the tomato plants in the pots are categorized into five distinct states, as illustrated in the confusion matrix in Figure 4.10, and the mechanical structure is evaluated against the criteria in Table 4.4. The results after training the
model (Figure 4.10) show that the model's prediction performance is consistent; some confusion between classes remains, at most 3% and at least 1%. The actual running results, however, show that this is not entirely satisfactory: the system only recognizes medium and large leaves, while small leaves are ignored, as shown in Figure 4.11. Another drawback is occasional bounding box errors, where the Healthy state is wrongly placed outside the leaf; in Auto mode this occurs when the moving webcam shakes slightly.

Figure 4.10: Confusion Matrix on Fine-Tune Dataset
Figure 4.11: Operating System on HMI

Table 4.4: Mechanical Evaluation during System Running

Evaluation Criteria   Auto      Manual
Response Time         0.8 s     0.8 s
Position Accuracy     ±2.8 mm   ±2.8 mm

Based on the results in Figure 4.10 and Table 4.4, the team can draw several conclusions. Regarding the YOLOv8n detection model, it operates significantly faster than previous versions. However, the Kaggle dataset and the dataset collected by the group differ in leaf shape and color across leaf types. In the Healthy state, most plants achieved good results because they were predominantly healthy. Additionally, two diseases, Bacterial and Leaf Mold, were also detected.

Chapter 5. CONCLUSION AND DEVELOPMENT

In this chapter, the group presents conclusions upon completing the model and discusses future developments for the entire system.

5.1 Conclusion

The system has been constructed and meets the set criteria. Based on the evaluation results in Chapter 4, the group has identified the following advantages and disadvantages.

Advantages:
- The system can detect unhealthy plants and provide disease warnings based on trained data.
- The mechanical structure operates correctly along the predefined path in Auto mode, with negligible positional errors in Manual mode.
- The system has a control interface with basic features and displays the
garden's status for the manager to monitor.

Disadvantages:
- The system is still simple and lacks comprehensive data on all tomato plant diseases.
- The control interface is rudimentary and lacks advanced features for the operator, such as administering spray treatment directly to an affected area.
- The bounding box for the Healthy state sometimes mistakenly covers the area outside the leaf when the system runs in Auto mode and vibrates slightly.
- The mechanical structure does not reach positions with 100% accuracy and lacks a self-treatment mechanism when diseases are detected.

5.2 Development

The group proposes the following development directions based on the ideas currently being implemented:
- Make the mechanical structure more precise and capable of long-term operation while optimizing costs.
- Develop automatic disease treatment features that do not require user intervention; for example, if the system detects that an area is likely infected, it automatically sprays it.
- Enhance model accuracy by adding a feature to confirm detections: if the model makes a false detection, the user can mark the object as false and store it for the next round of training.
- Develop a more intuitive control interface that allows intervention in more of the model's operating parameters, such as speed.

REFERENCES

[1] R. Lindsey and L. Dahlman, "Climate Change: Global Temperature," National Centers for Environmental Information (accessed July 5, 2023).
[2] R. Bhandari, N. Neupane, and D. P. Adhikari, "Climatic change and its impact on tomato (Lycopersicum esculentum L.) production in plain area of Nepal," ENVC, August 2021, doi: 10.1016/j.envc.2021.100129.
[3] F. Branthôme, "Consumption: 2021 in the wake of 2020," Tomato News. https://www.tomatonews.com/en/consumption-2021-in-the-wake-of2020_2_1618.html (accessed July 5, 2023).
[4] M. J. Buschermohle and G. F. Grandle, Controlling the Environment in Greenhouses Used for Tomato
Production, The University of Tennessee, September 2018.
[5] "Wikipedia is a free online encyclopedia, created and edited by volunteers around the world and hosted by the Wikimedia Foundation."
[6] Ch. Usha Kumari, S. Jeevan Prasad, and G. Mounika, "Leaf Disease Detection: Feature Extraction with K-means clustering and Classification with ANN," ICCMC, August 2019, doi: 10.1109/ICCMC.2019.8819750.
[7] M. Lechner, R. Hasani, A. Amini, T. A. Henzinger, D. Rus, and R. Grosu, "Deep Computer Vision," arXiv, January 20, 2021.
[8] R. Girshick, "Fast R-CNN," ICCV, September 27, 2015, doi: 10.48550/arXiv.1504.08083.
[9] K. He, X. Zhang, S. Ren, and J. Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," extended tech report, January 2016, doi: 10.48550/arXiv.1506.01497.
[10] T. AI, "How did Binary Cross-Entropy Loss Come into Existence?," 2023.
[11] M. A. Islam, S. Naha, M. Rochan, N. Bruce, and Y. Wang, "Label Refinement Network for Coarse-to-Fine Semantic Segmentation," arXiv, March 2017, doi: 10.48550/arXiv.1703.00551.
[12] Chien-Yao Wang, "Designing Network Design Strategies Through Gradient Path Analysis," arXiv, November 2022, doi: 10.48550/arXiv.2211.04800.
[13] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," CVPR, December 2016, doi: 10.1109/CVPR.2016.90.
[14] G. Huang and Z. Liu, "Densely Connected Convolutional Networks," CVPR, November 2017, doi: 10.1109/CVPR.2017.243.
[15] Kaustubh B., "Tomato leaf disease detection," 2020.
[16] Qengineering, "YoloV7 Raspberry Pi 4." [Online]. Available: https://github.com/Qengineering/YoloV7-ncnn-Raspberry-Pi-4, April 2022.
[17] "Serial communication," Wikipedia.
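As a supplementary illustration of the evaluation metrics defined in Formulas 4.1-4.3, the following is a minimal Python sketch; the function names and the (x1, y1, x2, y2) corner-coordinate box format are our own assumptions, not code from the thesis.

```python
# Hypothetical sketch of Formulas 4.1-4.3; names and box format are
# illustrative assumptions, not the thesis's actual implementation.

def precision(tp: int, fp: int) -> float:
    """Formula 4.1: fraction of predicted boxes that are correct."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Formula 4.2: fraction of ground-truth boxes that were found."""
    return tp / (tp + fn)

def iou(box_a: tuple, box_b: tuple) -> float:
    """Formula 4.3: boxes given as (x1, y1, x2, y2) corner coordinates."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to zero if disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    overlap = iw * ih
    # Union counts the shared region only once.
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - overlap
    return overlap / union
```

For example, two 2x2 boxes offset by one unit overlap in a 1x1 square, giving IoU = 1/7, which would count as a true detection only at IoU thresholds below about 0.143.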

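The Jetson Nano to Arduino MEGA2560 exchange described in Section 4.4.2 (a requested position sent as a string, then parsed per axis plus the camera's viewing direction) could be framed as below. The "X..,Y..,C.." field format is entirely hypothetical, since the thesis does not specify its actual protocol.

```python
# Hypothetical framing of the position-command string from Section 4.4.2.
# The field layout (X, Y, camera direction) is our assumption.

def encode_command(x_mm: float, y_mm: float, cam_dir: int) -> str:
    """Pack one move request into a single newline-terminated string,
    as the Jetson Nano side might send it over serial."""
    return f"X{x_mm:.1f},Y{y_mm:.1f},C{cam_dir}\n"

def decode_command(line: str) -> dict:
    """Reverse of encode_command: split the comma-separated fields and
    strip the one-letter prefixes, as the Arduino firmware must do
    before it can execute the move (the ~0.6 s parsing step)."""
    fields = line.strip().split(",")
    return {
        "x": float(fields[0][1:]),
        "y": float(fields[1][1:]),
        "cam": int(fields[2][1:]),
    }
```

Keeping the frame a single delimited line with a terminator makes the half-duplex handshake simple: each side reads until the newline, acts, then replies, which matches the request/acknowledge cycle the thesis describes.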