Design of the Automatic Attendance Register and Student Concentration Level Monitoring System (HCMUTE Graduation Thesis)

MINISTRY OF EDUCATION AND TRAINING
HO CHI MINH CITY UNIVERSITY OF TECHNOLOGY AND EDUCATION
FACULTY FOR HIGH QUALITY TRAINING

GRADUATION THESIS
MAJOR: ELECTRONICS AND COMMUNICATIONS ENGINEERING TECHNOLOGY

DESIGN OF AUTOMATIC ATTENDANCE REGISTER AND STUDENT CONCENTRATION LEVEL MONITORING SYSTEM

ADVISOR: Assoc. Prof. TRUONG NGOC SON
STUDENTS: LE XUAN TUAN DAT – 18161056
          TRUONG KHAC HUY – 18161016
SKL 009328

HO CHI MINH CITY – 08/2022

THE SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness
Ho Chi Minh City, August 7th, 2022

GRADUATION THESIS ASSIGNMENT

Student name: Le Xuan Tuan Dat    Student ID: 18161056
Student name: Truong Khac Huy     Student ID: 18161016
Major: Electronics and Communications Engineering Technology    Class: 18161CLA
Advisor: Assoc. Prof. Truong Ngoc Son
Date of assignment:    Date of submission:
Thesis title: Design of automatic attendance register and student concentration level monitoring system
Initial materials provided by the advisor.
Content of the thesis:
- Review the literature, read and summarize it to find the development direction of the thesis.
- Design the flow charts and describe them.
- Run and test the system until completion.
- Write a report and prepare slides.
Final product: a GUI that uses two different models to take student attendance by face recognition and to track or monitor students' pupils in the classroom.

CHAIR OF THE PROGRAM        ADVISOR
(Sign with full name)       (Sign with full name)

THE SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness
Ho Chi Minh City, August 7th, 2022

ADVISOR'S EVALUATION SHEET

Student name: Le Xuan Tuan Dat    Student ID: 18161056
Student name: Truong Khac Huy     Student ID: 18161016
Major: Electronics and Communications Engineering Technology
Thesis title: Design of automatic attendance register and student concentration level monitoring system
Advisor: Assoc. Prof. Truong Ngoc Son

EVALUATION
Content of the thesis:
- The thesis has five chapters over 54 pages.
- Design of a student attendance system using face recognition together with a concentration tracking system.
- The GUI model was completed successfully, following the objectives in the proposal.
Strengths:
Weaknesses:
Approval for oral defense? (Approved or denied)
Overall evaluation: (Excellent, Good, Fair, Poor)
Mark: ……………… (in words: )

Ho Chi Minh City, August 7th, 2022
ADVISOR
(Sign with full name)

THE SOCIALIST REPUBLIC OF VIETNAM
Independence – Freedom – Happiness
Ho Chi Minh City, August 7th, 2022

PRE-DEFENSE EVALUATION SHEET

Student name: Le Xuan Tuan Dat    Student ID: 18161056
Student name: Truong Khac Huy     Student ID: 18161016
Major: Electronics and Communications Engineering Technology
Thesis title: Design of automatic attendance register and student concentration level monitoring system
Name of reviewer: Ph.D. Do Duy Tan
Content and workload of the thesis:
Strengths:
Weaknesses:
Approval for oral defense?
(Approved or denied)
Overall evaluation: (Excellent, Good, Fair, Poor)
Mark: ………… (in words: …)

REVIEWER

ACKNOWLEDGMENTS

The most crucial phase of every student's life is the process of finishing their graduation thesis. The purpose of our graduation thesis is to equip us with essential knowledge and research skills in preparation for our careers.

First of all, we would like to thank Ho Chi Minh City University of Technology and Education. In particular, during our time in the lecture hall, the professors of the Faculty for High-Quality Training passionately instructed us and provided the vital knowledge that laid the groundwork for our ability to finish this thesis.

We want to express our gratitude to Assoc. Prof. Truong Ngoc Son for his passionate assistance and guidance in terms of scientific thinking and effort. His ideas were genuinely helpful, both for the creation of this thesis and as a stepping stone for our future studies and careers. Our team hopes that Assoc. Prof. Truong Ngoc Son and the other teachers will be understanding and offer advice on how to improve anything that is missing from the project.

HO CHI MINH CITY – 08/2022

APPROVAL

Thesis title: Design of automatic attendance register and student concentration level monitoring system
Advisor: TRUONG NGOC SON, Assoc. Prof.
Student 1: LE XUAN TUAN DAT    Student ID: 18161056    Class: 18161CLA1    Email: 18161056@student.hcmute.edu.vn
Student 2: TRUONG KHAC HUY     Student ID: 18161016    Class: 18161CLA2    Email: 18161016@student.hcmute.edu.vn

"We commit not to copy or reuse the results of other people's work as part of this graduation project. A complete list of references has been provided."

Ho Chi Minh City, August 7th, 2022
GROUP MEMBERS
(Sign with full name)
Le Xuan Tuan Dat    Truong Khac Huy

ABSTRACT

In a scientific and technological world that continues to grow and improve, areas such as artificial intelligence (AI) deserve our attention. In recent years, artificial intelligence has been increasingly integrated into our everyday lives. Traditional education has been impacted by the ongoing pandemic, causing a variety of challenges for students and instructors. Since then, online learning has grown in popularity, solving the problems posed by the epidemic but creating an entirely new set of challenges: studying online is generally not an active pursuit for students, who instead often turn to other activities, such as using their phones or napping.

For these reasons, we have designed an online student monitoring system that uses the MediaPipe library and image processing technology to recognize and track students' eyes in real time. Because of its many outstanding features, such as face detection, iris detection, and the ability to identify quickly and accurately, our team chose this library to support the product in the most optimal way. We accurately record the student's gaze in real time with the help of artificial intelligence. Finally, our group has designed a model that helps teachers manage and monitor the students in their classroom in the most convenient way possible.

TABLE OF CONTENTS

CHAPTER 1: INTRODUCTION
  1.1 Introduction
  1.2 Objective
  1.3 Methodology
  1.4 Thesis summary
CHAPTER 2: LITERATURE REVIEW
  2.1 Introduction of MediaPipe
  2.2 Architecture of MediaPipe
  2.3 Applications in MediaPipe
    2.3.1 Face Detection
    2.3.2 Face Mesh
    2.3.3 Hands Detection
    2.3.4 Human Pose Estimation
    2.3.5 Iris Detection
CHAPTER 3: A SYSTEM OF STUDENT ATTENDANCE AND CONCENTRATION TRACKING DESIGN
  3.1 System overview
  3.2 Software design
    3.2.1 Login Page
    3.2.2 Data Collection Page
    3.2.3 Classroom Page
CHAPTER 4: RESULT AND DISCUSSION
  4.1 Result
  4.2 Discussion
CHAPTER 5: CONCLUSION AND FURTHER WORK
  5.1 Conclusion
  5.2 Further work

Figure 3: The get data page (1)

On the page where the data is obtained, there is a camera frame and, next to it, a frame that shows the image most recently captured from the camera. When users log into the classroom for the first time, they are sent to this page to provide data.
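The capture-and-confirm flow of the data collection page can be sketched with OpenCV. This is a minimal sketch rather than the thesis's actual code: the directory layout, file naming, and function names are our own assumptions.

```python
import os

def reference_path(data_dir, student_id):
    # One JPEG per student ID -- this naming scheme is an assumption,
    # not taken from the thesis.
    return os.path.join(data_dir, f"{student_id}.jpg")

def capture_reference(student_id, data_dir="student_data", cam_index=0):
    """Grab one frame from the webcam ("Get Data") and store it as the
    student's reference image once the user presses "Confirm"."""
    import cv2  # pip install opencv-python; imported lazily so the
                # path helper above also works without OpenCV installed
    cap = cv2.VideoCapture(cam_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read a frame from the camera")
    os.makedirs(data_dir, exist_ok=True)
    path = reference_path(data_dir, student_id)
    cv2.imwrite(path, frame)  # this image becomes the face-recognition input
    return path
```

In the GUI, pressing "Get Data" again would simply discard the frame and read a new one; `capture_reference` would run when the "Confirm" button is clicked.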
The database stores this image and uses it as the input for facial recognition. The image is also added to the student's personal information.

Figure 4: The get data page (2)

The image most recently captured with the "Get Data" button is replayed in the next frame, as previously mentioned. If the image is unclear or has other issues, pressing the "Get Data" button again repeats the process and lets the user capture a new photo. Once satisfied, the user clicks the "Confirm" button to add the image to the student's data and to save it as input for the facial recognition feature.

Figure 5: The personal information page

Information about the user, such as the student's name, student number, class code, and photo, is shown on the student's information page. At the first login there is no image of the student on the information page; instead, there is only a temporary blank box with a "Capture" button. Students can use this button to reach the data collection page and take a photo there. After it has been taken, the photograph is saved and added to the personal information page as the student's representative image.

Figure 6: The check-in section of the classroom page

The classroom page serves the two major purposes of our project, one of which is the check-in step before students enter the class. Here, the check-in component is in charge of determining whether the face in front of the camera belongs to this account. In other words, when a user hits the "Check In" button, the model temporarily snaps a picture and compares it with the picture previously captured on the data collection page. If the identification matches, the class page shows a panel with the user's personal information and a confirmation button. Basic user data, such as full name, student number, and, most importantly, the day and time the student joined the class, is shown on this information board.
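The comparison behind the "Check In" button can be sketched as follows. The thesis does not name its face matcher, so the embedding distance and the 0.6 threshold here (the default tolerance of the popular `face_recognition` library) are assumptions; in the real system the two encodings would come from the stored reference image and the freshly captured frame.

```python
from datetime import datetime

def verify_face(known_encoding, probe_encoding, threshold=0.6):
    """Accept the probe face when its embedding lies within `threshold`
    Euclidean distance of the stored reference embedding."""
    dist = sum((a - b) ** 2 for a, b in zip(known_encoding, probe_encoding)) ** 0.5
    return dist <= threshold

def check_in(student, known_encoding, probe_encoding):
    """Return the record shown on the confirmation panel (and later
    written to the Excel sheet), or None when the face is wrong and
    no attendance may be recorded."""
    if not verify_face(known_encoding, probe_encoding):
        return None  # "wrong face" notice: student stays unchecked-in
    return {
        "name": student["name"],
        "student_id": student["student_id"],
        "checked_in_at": datetime.now().isoformat(timespec="seconds"),
    }
```

A `None` result corresponds to the wrong-face message board: the student is not checked in until the model recognizes the face registered to the account.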
To make it simple for teachers to oversee how much time students spend in class, these data are saved in an Excel file. On the other hand, if the face at check-in does not match the previously acquired data, a message board reports a wrong face, and that student is not checked in to the class (nor is any attendance recorded in the Excel file) until the model correctly recognizes the face registered to that account.

Figure 7: Monitoring and assessing students' attention on the class page (student looks to the left and right)

This is the second key purpose of the online classroom model developed by our team, as previously mentioned. Once students have checked in, the model keeps track of their presence in the class by gathering real-time images to identify and track the user's pupils. Whether or not the learner is gazing at the laptop screen, the model always tracks the pupils and determines the student's field of vision. From there, based on the amount of time the user spends not looking at the classroom screen, a result is produced to objectively evaluate the student's focus throughout class participation.

Figure 8: Monitoring and assessing students' attention on the class page (notice board for distracted students)

The model keeps an eye on the students' pupils throughout class to see whether they are looking at the screen. A counter starts counting whenever a student's glance to the left or right is detected. When the timer reaches the predetermined time threshold, the screen displays the message "You are not focused". To turn off that notice, the user must sit up straight and fix their gaze on the screen, after which the notification panel is switched off. Additionally, the amount of time that students are not paying attention is noted in the Excel file to create the data necessary to assess the students' level of attention throughout that lesson.
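The pupil tracking described in Figures 7 and 8 can be sketched as below. With MediaPipe Face Mesh and `refine_landmarks=True`, iris-centre landmarks are produced alongside the eye corners; the specific landmark indices, the 0.35/0.65 gaze thresholds, and the 5-second limit are illustrative assumptions on our part, not values stated in the thesis.

```python
import time

# MediaPipe Face Mesh (refine_landmarks=True) landmark indices -- our
# reading of the mesh topology, treat as assumptions:
#   468: left-eye iris centre, 33 / 133: left-eye outer / inner corners.
LEFT_IRIS, LEFT_EYE_OUTER, LEFT_EYE_INNER = 468, 33, 133

def gaze_direction(iris_x, corner_left_x, corner_right_x, lo=0.35, hi=0.65):
    """Classify horizontal gaze from the iris centre's relative position
    between the two eye corners (thresholds are illustrative)."""
    ratio = (iris_x - corner_left_x) / (corner_right_x - corner_left_x)
    if ratio < lo:
        return "left"
    if ratio > hi:
        return "right"
    return "center"

class DistractionMonitor:
    """Counts continuous seconds of off-screen gaze and raises the
    'You are not focused' notice once `limit` seconds have elapsed."""
    def __init__(self, limit=5.0, clock=time.monotonic):
        self.limit, self.clock = limit, clock
        self.started = None
        self.total_away = 0.0  # written to the Excel file at checkout

    def update(self, direction):
        now = self.clock()
        if direction == "center":
            if self.started is not None:   # gaze returned to the screen:
                self.total_away += now - self.started
                self.started = None        # notice board switches off
            return None
        if self.started is None:           # glance left/right: start timer
            self.started = now
        if now - self.started >= self.limit:
            return "You are not focused"
        return None
```

Each camera frame would be run through Face Mesh, its landmarks fed to `gaze_direction`, and the returned direction to `DistractionMonitor.update`; the accumulated `total_away` is the distraction time the Excel sheet records.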
Figure 9: Information about the student's lessons recorded in an Excel file

This Excel document contains all pertinent time-related data as well as an overall evaluation of the quality of the student's lesson. It records significant personal data, including the student's name, student ID number, and the class they attended that day. The check-in time and check-out time are the next recorded details. The following statistics show how much time each student spent participating in class and how much time they spent looking away from the classroom screen. Based on these two time parameters, the model calculates the percentage of time that the student was not focused on the classroom screen. From there, it makes an objective comment according to the calculated value, as follows: Excellent (0% - 20%), Good (21% - 50%), Not Good (51% - 80%), Bad (> 80%).

4.2 Discussion

That is all the functionality and the way our team's model operates, from the time the user checks in until the conclusion of the class. Next, we present the model's performance. In general, the model performs admirably for quick facial recognition, and its accuracy is considerably above 90%. Detecting and recognizing in real time with an accuracy of over 90%, the pupil monitoring feature is also extremely reliable.

We tested the system on a variety of users under a variety of conditions in order to evaluate its effectiveness. The dataset covers a variety of circumstances, such as luminance and expression, in order to support the system and for research purposes. Besides analyzing real-time applications, we also examined the system as a whole. In previous research reviews, researchers concluded that maintaining system efficiency usually involves measuring recognition accuracy.

Table: Face detection and recognition accuracy testing

Scenario     No. of test images   Detection rate   Recognition rate
Sun light    100                  89.3 %           86.3 %
Room light   100                  91.2 %           92.8 %
Dim light    100                  82.7 %           84.1 %
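The per-lesson evaluation written to the Excel file (Figure 9) reduces to a small calculation. The grade boundaries come straight from the thesis (Excellent 0-20%, Good 21-50%, Not Good 51-80%, Bad > 80%); the column order and function names are our own assumptions, and the actual workbook would be written with a library such as openpyxl.

```python
def focus_grade(class_seconds, away_seconds):
    """Map the share of class time spent looking away from the screen
    to the thesis's comment scale."""
    if class_seconds <= 0:
        raise ValueError("class duration must be positive")
    pct = 100.0 * away_seconds / class_seconds
    if pct <= 20:
        return "Excellent"
    if pct <= 50:
        return "Good"
    if pct <= 80:
        return "Not Good"
    return "Bad"

def lesson_row(name, student_id, check_in, check_out, class_seconds, away_seconds):
    """One row of the attendance workbook: identity, recorded times,
    distraction percentage, and the derived comment (layout assumed)."""
    pct = round(100.0 * away_seconds / class_seconds, 1)
    return [name, student_id, check_in, check_out,
            class_seconds, away_seconds, pct,
            focus_grade(class_seconds, away_seconds)]
```

For a 90-minute lesson with 9 minutes of off-screen gaze, the distracted share is 10%, which lands in the Excellent band.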
Based on the test parameters, it is evident that the model handles both strong sunlight and dim lighting quite well, which demonstrates that the algorithm can be used in practice. However, to achieve such a high level of performance, students must join online classes in an area with good lighting conditions and sit no more than about one meter from the computer. The pupil detection and tracking feature performs at its best only in well-lit areas: in the absence of light, the model cannot detect the pupil, so its operation becomes inaccurate or the function may fail entirely. Additionally, the pupil identification and tracking feature developed by our team is currently insufficient for users who wear glasses, and identification is especially difficult when they wear sunglasses, which makes it impossible to keep track of those students' focus. These are the limitations of the model our group proposed.

CHAPTER 5: CONCLUSION AND FURTHER WORK

5.1 Conclusion

During the implementation of the project, our group built an application model that lets teachers and students teach and learn online effectively and optimally. The product is exhibited in the form of a GUI model that registers students via facial recognition and, with the assistance of the OpenCV and MediaPipe open-source image processing libraries, observes and assesses the students' attention. This model satisfies the criteria for replicating the process of attending a class with two functions: facial recognition to check in before entering the class, and detection and monitoring of the pupils, where the length of time a student spends looking at the classroom screen is used to determine their level of concentration. Based on the model's output, face recognition technology was successfully used
in that class to check students in, keep an eye on them, and gauge their level of attentiveness (in an environment with good lighting conditions, as well as satisfying the conditions set out earlier).

5.2 Further work

- Research and develop more efficient, accurate, and ideal detection and identification algorithms.
- Build on the GUI approach to create a useful website that instructors and students can engage with and use.
- Create a system with a wider range of monitoring and warning features. For instance, throughout the lesson, the teacher might check whether the student is still the same person or whether someone else has taken their seat, using the system's monitoring capabilities to keep an eye on students' online test-taking behaviour and prevent cheating.
