MINISTRY OF EDUCATION AND TRAINING
HO CHI MINH CITY UNIVERSITY OF TECHNOLOGY AND EDUCATION
FACULTY FOR HIGH QUALITY TRAINING

CAPSTONE PROJECT
ELECTRONICS AND COMMUNICATIONS ENGINEERING TECHNOLOGY

REAL-TIME ATTENDANCE MANAGEMENT SYSTEM BASED ON FACE RECOGNITION WITH FACIAL FEATURES TECHNIQUES

Student 1: NGUYEN DAC DUONG (Student ID: 18161006)
Student 2: NGUYEN QUOC HUNG (Student ID: 18161018)
Major: Electronics and Communications Engineering Technology
Advisor: Dr. Pham Ngoc Son

Ho Chi Minh City, August 2022

THE SOCIALIST REPUBLIC OF VIETNAM
Independence - Freedom - Happiness

Ho Chi Minh City, July 27, 2022

CAPSTONE PROJECT ASSIGNMENT

Student name: NGUYEN DAC DUONG (Student ID: 18161006)
Student name: NGUYEN QUOC HUNG (Student ID: 18161018)
Major: Electronics and Communications Engineering Technology
Class: 18161CLA
Advisor: Dr. Pham Ngoc Son (Phone number: 0966609555)
Date of assignment: 06/03/2022
Date of submission: 20/07/2022

Project title: Attendance system using face recognition and face mask detection.
Initial materials provided by the advisor: papers, books, and regulations on ethics in scientific research.
Content of the project: develop a system using OpenCV and machine learning that automatically takes attendance in class using facial recognition and exports the data to an Excel file for time attendance.
Final product: software.

CHAIR OF THE PROGRAM (Sign with full name)          ADVISOR (Sign with full name)

THE SOCIALIST REPUBLIC OF VIETNAM
Independence - Freedom - Happiness

Ho Chi Minh City, …, 2022

PRE-DEFENSE EVALUATION SHEET

Student name: NGUYEN DAC DUONG (Student ID: 18161006)
Student name: NGUYEN QUOC HUNG (Student ID: 18161018)
Major: Electronics and Communications Engineering Technology
Project title: Attendance system using face recognition and face mask detection
Name of Reviewer: ………………

EVALUATION
Content and workload of the project:
Strengths:
Weaknesses:
Approval for oral defense? (Approved or denied)
Overall evaluation: (Excellent, Good, Fair, Poor)
Mark: ……………… (in words: ………………)

Ho Chi Minh City, … month … day, …… year
REVIEWER (Sign with full name)
THE SOCIALIST REPUBLIC OF VIETNAM
Independence - Freedom - Happiness

Ho Chi Minh City, …, 2022

EVALUATION SHEET OF DEFENSE COMMITTEE MEMBER

Student name: NGUYEN DAC DUONG (Student ID: 18161006)
Student name: NGUYEN QUOC HUNG (Student ID: 18161018)
Major: Electronics and Communications Engineering Technology
Project title: Attendance system using face recognition and face mask detection
Name of Defense Committee Member: ………………

EVALUATION
Content and workload of the project:
Strengths:
Weaknesses:
Overall evaluation: (Excellent, Good, Fair, Poor)
Mark: ……………… (in words: ………………)

Ho Chi Minh City, … month … day, …… year
COMMITTEE MEMBER (Sign with full name)

ACKNOWLEDGMENTS

The success and outcome of this project were made possible by the guidance and support of many people, and our team is privileged to have received such support throughout the project. It required a great deal of effort from each person involved, and we would like to thank them all: no success is entirely self-made, because the best outcomes depend on many factors.

Firstly, our team would like to thank the School Board of the Ho Chi Minh City University of Technology and Education and the Faculty for High-Quality Training for creating excellent conditions for us to complete our project.

Secondly, our team would like to express our deep and sincere gratitude to our research supervisor and advisor, Dr. Pham Ngoc Son, for giving us the opportunity to do this research and for providing invaluable guidance throughout the project. His dynamism, vision, sincerity, and motivation have deeply inspired us. He taught us the methodology for carrying out the project and for presenting the work as clearly as possible. It was a great privilege and honor to work and study under the guidance of Dr. Pham Ngoc Son, and once again our team would like to thank him.

Thirdly, we are grateful to all of our friends in class 18161CLA who helped us with counsel and kind assistance whenever we needed it.

Finally, we thank all the people who supported us, directly or indirectly, in completing this research.

Table of Contents

ACKNOWLEDGMENTS
ABBREVIATIONS
ABSTRACT
CHAPTER 1: OVERVIEW
  1.1 Introduction
  1.2 Literature Review
  1.3 Project Objectives and Scope
  1.4 Methodology
  1.5 Thesis Outline
CHAPTER 2: THEORETICAL FRAMEWORK
  2.1 Deep Learning
    2.1.1 Introduction to deep learning
    2.1.2 Deep learning approaches
    2.1.3 Deep learning frameworks
  2.2 Convolutional Neural Network (CNN)
    2.2.1 Overview of the convolutional neural network
    2.2.2 The structure of the CNN
  2.3 Facial Recognition
  2.4 Classifiers
    2.4.1 K-Nearest Neighbors (KNN)
    2.4.2 Haar Cascade
  2.5 Libraries and Implementations
    2.5.1 TensorFlow
    2.5.2 Keras
    2.5.3 OpenCV
CHAPTER 3: SYSTEM DESIGN
  3.1 Block Diagram
  3.2 Detailed System Design
    3.2.1 Image Acquisition for Liveness and Face Mask Detection
    3.2.2 Training Model for Face Mask and Liveness Detection
  3.3 Attendance System with Graphical User Interface (GUI)
    3.3.1 Image Acquisition for Face Recognition
    3.3.2 Facial Recognition System
    3.3.3 Face Mask Recognition and Time Attendance
CHAPTER 4: RESULTS AND DISCUSSION
  4.1 Result
  4.2 Discussion
CHAPTER 5: CONCLUSION AND FUTURE WORK
REFERENCES
List of Figures

Figure 2.1 The architecture of a Convolutional Neural Network [18]
Figure 2.2 The most frequently used activation functions [19]
Figure 2.3 Visual illustration of the KNN algorithm [21]
Figure 2.4 Sample of Haar features
Figure 2.5 Types of Haar features
Figure 2.6 Illustration of how an integral image works [22]
Figure 2.7 TensorFlow example
Figure 3.1 Block diagram of the attendance system using face recognition and face mask detection
Figure 3.2 Face mask detection dataset
Figure 3.3 Liveness detection dataset
Figure 3.4 Flowchart of image acquisition for liveness and face mask detection
Figure 3.5 Block diagram of face mask detection [25]
Figure 3.6 Block diagram of face liveness detection
Figure 3.7 Source code of the face mask and liveness training models
Figure 3.8 Deep learning architecture for LivenessNet [26]
Figure 3.9 Main menu GUI window
Figure 3.10 Flowchart for the Graphical User Interface
Figure 3.11 Facial recognition dataset (Duong and Hung)
Figure 3.12 Flowchart of image acquisition for face recognition
Figure 3.13 The whole process of face recognition
Figure 3.14 Facial feature extraction
Figure 3.15 Train directory for the KNN algorithm
Figure 3.16 Flowchart for training the face recognition model
Figure 3.17 Source code of the facial recognition training
Figure 3.18 Verification in face recognition
Figure 3.19 Source code of the main detector function
Figure 3.20 Flowchart for the main detection function
Figure 3.21 Flowchart for reading frames from the camera (thread 1)
Figure 3.22 Flowchart for predicting name ID at check-in (thread 2)
Figure 3.23 Flowchart for predicting face mask and liveness (thread 3)
Figure 3.24 Flowchart for predicting name ID at check-out (thread 4)
Figure 4.1 Face mask recognition GUI function
Figure 4.2 Register new user GUI window
Figure 4.3 Results of testing real-time face mask recognition, with the appropriate message displayed (automated)
Figure 4.4 Identification results printed in the command window
Figure 4.5 Attendance information saved in an Excel sheet
Figure 4.6 Project folder management
Figure 4.7 Training results of the face mask detection model
Figure 4.8 Training results of the liveness detection model

CHAPTER 4: RESULTS AND DISCUSSION

4.1 Result

The entire program was tested under a range of conditions. Some of the tested conditions, as detected by our system, are shown in Figure 4.3:

- Test case (Figure 4.3a): all real facial features (mouth, nose, and chin) are visible, so the system indicates that the user has not worn a mask and the face is valid; the label "Name; Real Face; Please wear your mask" is displayed on the screen.
- Test case (Figure 4.3b): all fake facial features are visible, so the system indicates that the user has not worn a mask and the face is invalid; the label "Name; Invalid; Fake Face" is displayed on the screen.
- Test case (Figure 4.3c): the facial features are hidden by a mask on a real face, so the system indicates a proper mask; the label "Name; Valid; Real Face" is displayed on the screen.
- Test case (Figure 4.3d): the facial features are hidden by a mask on a fake face, so the system indicates a proper mask but an invalid face; the label "Name; Invalid; Fake Face" is displayed on the screen.

Figure 4.3 Results of testing real-time face mask recognition, with the appropriate message displayed (automated): (a) real face, no mask; (b) fake face, no mask; (c) real face, wearing a mask; (d) fake face, wearing a mask; (e) identification camera at the exit; (f) a case of incorrect identification
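The thesis's actual detector source appears in Figure 3.19; purely as an illustration of the labeling logic described above, the following Python/OpenCV sketch combines the three prediction outputs into the on-screen status. The function names and the fixed label ordering are assumptions, not the authors' code:

```python
import cv2

def compose_label(name, is_real, has_mask):
    """Combine the liveness and mask predictions into the on-screen status."""
    if not is_real:
        return f"{name}; Invalid; Fake Face"
    if has_mask:
        return f"{name}; Valid; Real Face"
    return f"{name}; Real Face; Please wear your mask"

def draw_label(frame, box, name, is_real, has_mask):
    """Draw the face box and status label on the current video frame."""
    x, y, w, h = box
    valid = is_real and has_mask
    color = (0, 255, 0) if valid else (0, 0, 255)  # green for valid, red otherwise
    cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
    cv2.putText(frame, compose_label(name, is_real, has_mask), (x, y - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
```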
Figure 3.10 demonstrates how the procedure is automated after the start button is pressed: the face image is captured from the video frame and face recognition runs automatically. Figure 4.4 shows the current and historical identification results of successful check-ins and check-outs printed in the command window.

One thing to keep in mind at this stage is that the frontal face must be clearly visible and the photo must be taken in ambient lighting. For best results, there should also be a little variation in the user's posture or facial expression in each recorded photograph. Faces are captured accurately under typical lighting conditions and with a correctly positioned posture, so the surrounding lighting needs to be maintained well. A person's picture is simply disregarded if the database holds no entry for them, and optimal illumination is important to avoid false detections. The attendance system can recognize images in a variety of lighting and angle situations; unknown faces are those not included in our training dataset.

Real-time attendance tracking is performed for users with identifiable photos that satisfy the following conditions: the check-in image at the entrance camera must carry a real-face label, and the check-out image must come from the exit camera (Figure 4.3e). The system then automatically saves the record and imports it into an Excel sheet (Figure 4.5): it generates a file with the relevant name, date, and time, and calculates the engaged attendance time.

Figure 4.4 Identification results printed in the command window

Figure 4.5 Attendance information saved in an Excel sheet
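As a minimal sketch of this check-in/check-out bookkeeping, assuming pandas and a hypothetical file path and column layout (the thesis's exact format is not shown in this preview), the engaged time can be computed as follows:

```python
import pandas as pd
from datetime import datetime

ATTENDANCE_FILE = "Attendance_Result/attendance.csv"  # hypothetical path

def log_attendance(name, event):
    """Append a check-in row, or complete the open row on check-out and
    compute the engaged time between the two events."""
    now = datetime.now().replace(microsecond=0)
    try:
        # keep_default_na=False keeps empty cells as "" instead of NaN
        df = pd.read_csv(ATTENDANCE_FILE, keep_default_na=False)
    except FileNotFoundError:
        df = pd.DataFrame(columns=["Name", "Date", "Time In", "Time Out", "Time Engaged"])

    today = now.strftime("%Y-%m-%d")
    if event == "in":
        df.loc[len(df)] = [name, today, now.strftime("%H:%M:%S"), "", ""]
    else:  # event == "out"
        open_rows = (df["Name"] == name) & (df["Date"] == today) & (df["Time Out"] == "")
        if open_rows.any():
            i = df.index[open_rows][-1]
            df.loc[i, "Time Out"] = now.strftime("%H:%M:%S")
            time_in = datetime.strptime(f"{today} {df.loc[i, 'Time In']}",
                                        "%Y-%m-%d %H:%M:%S")
            df.loc[i, "Time Engaged"] = str(now - time_in)
    df.to_csv(ATTENDANCE_FILE, index=False)
```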
It is not easy to keep track of all the data used for models and tests; there is more involved than merely organizing and tracking files, and it takes a lot of time. To keep everyone on the most recent version, all members must follow updates concurrently and stay on the same page. Figure 4.6 describes in detail how the project folders are managed. Folder management is very important, so we divided the project folder into six main categories: FunctionContribution, GUI, Models, Dataset, Attendance_Result, and temp. The FunctionContribution folder holds the programs for the main functions of the project, such as creating user datasets, training newcomers, and the detectors. The GUI folder contains the items needed to create a user-friendly interface. The Models folder stores all the models trained by the system on the available datasets. The Dataset folder stores the users' datasets used for facial recognition training. The Attendance_Result folder stores the data of the attendance process, including the pictures taken at check-in and check-out as well as the CSV file that summarizes the information. Finally, the temp folder stores temporary data files created while the program runs.

Figure 4.6 Project folder management

4.2 Discussion

The outcomes of the study employing the face identification system based on K-Nearest Neighbors (KNN) are separated into two parts: training and testing. In this study, the system is tested with the parameter k = n neighbors, where n is the number of people previously enrolled in the system. A predetermined number of classes, k, is provided in the partitioning procedure. Training data are divided into k classes according to how far apart they are from one another, and the grouping process is repeated until a specified partitioning criterion is satisfied, maximizing the homogeneity of each cluster. The two simplest methods for calculating cluster distance are the minimum-distance and maximum-distance representations: the separation of the closest elements from different classes serves as a proxy for the distance between clusters, which is also known as single linkage.

For face identification, the feature vectors inside a cluster are supposed to represent encoded face photos of the same person with comparable facial expressions, head poses, and lighting conditions. When a live face is provided, it is compared with every existing cluster to determine the appropriate match group. The approach requires significant computational work, despite being exact: the cluster distances associated with a cluster must be recalculated whenever a new observation is added as an element. Unfortunately, the similarity between faces of the same person in different viewpoints is frequently greater than that between faces of different people in identical poses and expressions. The main difficulty with this strategy is determining how many clusters there are and where they are. The likelihood of cluster overlap in the real world is unavoidably high, and when incorrectly clustered elements are present, the partition algorithm may no longer converge; in other words, the overlapping boundary may gradually widen until two clusters are erroneously joined. Compensating for this requires extra work, which is a challenging endeavor. It therefore appears impossible to cluster the training sequence sets using the minimum/maximum distance approaches alone.
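Figures 3.15-3.17 show the thesis's train directory and training script. As a hedged reconstruction of that step, assuming the face_recognition library and scikit-learn with one subfolder of images per enrolled person, training the KNN classifier on 128-dimensional face encodings might look like this:

```python
import os
import face_recognition
from sklearn.neighbors import KNeighborsClassifier

def train_knn(train_dir, n_neighbors=None):
    """Encode every face image under train_dir/<person>/ and fit a KNN model
    on the 128-dimensional encodings."""
    encodings, labels = [], []
    for person in os.listdir(train_dir):
        person_dir = os.path.join(train_dir, person)
        if not os.path.isdir(person_dir):
            continue
        for fname in os.listdir(person_dir):
            image = face_recognition.load_image_file(os.path.join(person_dir, fname))
            boxes = face_recognition.face_locations(image)
            if len(boxes) != 1:
                continue  # skip images that do not contain exactly one face
            encoding = face_recognition.face_encodings(image, known_face_locations=boxes)[0]
            encodings.append(encoding)
            labels.append(person)
    if n_neighbors is None:
        # k = number of enrolled people, mirroring the thesis's k = n setting
        n_neighbors = len(set(labels))
    knn = KNeighborsClassifier(n_neighbors=n_neighbors, weights="distance")
    knn.fit(encodings, labels)
    return knn

# Usage sketch: the distance to the nearest neighbor can additionally be
# checked against a threshold to reject unknown faces.
# knn = train_knn("Dataset/train")
# name = knn.predict([new_encoding])[0]
```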
The training process is also known as backpropagation. The model must go through a training phase to identify the weights that most accurately reflect the input data and map it to the correct output class; the weights are continually adjusted and shifted toward the ideal output class. The CNN models in this study were trained for face mask detection and liveness detection. The training data were split into batches of 1024 for face mask detection and 2048 for liveness detection. Batching divides the entire dataset into a series of smaller pieces of data fed into the model one at a time; it makes training faster and adjusts the precision of the gradient estimate. The step size toward reducing the loss function was set in a similar manner, using a learning rate of 0.0001. The optimization algorithm, forward pass, loss function, backward pass, and weight updates are used to train the model on the labeled data.

During training, the Adam optimizer was applied to the CNN models. Adam is an adaptive learning-rate optimization technique designed specifically for training deep neural networks; it computes a learning rate for each model parameter, using estimates of the first and second moments of the gradients to adjust the learning rate of each neural network weight, which made it very helpful for this methodology.

The MSE loss function was employed in this approach to give the most accurate results. MSE (Mean Squared Error) is the mean of the squared differences between the target variable and the predicted values. The MSE loss is given by Equation 4.1, where n is the number of data points, Y_true,i is the actual value for data point i, and Y_pred,i is the model's prediction:

\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( Y_{\mathrm{true},i} - Y_{\mathrm{pred},i} \right)^2    (Eq. 4.1)

Each full pass the model makes through the whole training dataset is referred to as an epoch. Training the neural network for more than one epoch should improve its predictions on unseen data, i.e. the test data. For face mask detection, 100 epochs were used to predict the test data with the highest degree of accuracy. The effect of the number of epochs on the loss is displayed in Figure 4.7.

Figure 4.7 Training results of the face mask detection model

Figure 4.8 Training results of the liveness detection model

In the liveness model (Figure 4.8), it is clear that the model underfits in the early epoch intervals: the training loss is smaller than the validation loss, which may mean that the model is not fitting the data well. Underfitting occurs when a model is unable to effectively represent the training data, producing large errors. The results also indicate that additional training is required to reduce the training loss.

The validation loss diminishes but then starts to climb again: between the 0th and 13th epochs for the face mask model, and from the 50th epoch onward for the liveness model. Typically, this means that the model is overfitting and unable to generalize to fresh data; more specifically, the model performs admirably on the training data but poorly on new data in the validation set. One major cause of this phenomenon is that the model may be too sophisticated for the data, or it may have been trained for too long. In this situation, early stopping can be used to cease training while the loss is low and stable.

Early stopping is one of various methods employed to prevent overfitting: a regularization technique that monitors the validation loss and ends training if the loss stops decreasing for several epochs in a row, saving a file that contains the best weights. The patience parameter indicates the number of epochs to wait before stopping. Because the validation loss stopped decreasing after the 80th epoch for the face mask model and the 50th epoch for the liveness model, early stopping should end training after exactly 80 epochs for the face mask detection model and 50 epochs for the liveness detection model.
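As an illustration of how such early stopping can be wired into a Keras training run, the sketch below assumes a compiled Keras `model` and training/validation arrays already prepared by the pipeline; the patience value and checkpoint filename are illustrative, not taken from the thesis:

```python
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Watch the validation loss; stop once it has not improved for `patience`
# consecutive epochs and roll the model back to its best weights.
callbacks = [
    EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True),
    # Also keep the best-performing weights on disk as training progresses.
    ModelCheckpoint("mask_detector_best.h5", monitor="val_loss", save_best_only=True),
]

history = model.fit(
    train_images, train_labels,
    validation_data=(val_images, val_labels),
    batch_size=1024,   # batch size used for the face mask model in this study
    epochs=100,        # upper bound; early stopping usually ends training sooner
    callbacks=callbacks,
)
```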
The test results for each module and for the integrated modules are presented in Table 4.1. The system is complete and covers most of the requirements set before completion.

Table 4.1 System test list report

| No | Action | Inputs | Expected Output | Actual Output | Test Result |
|----|--------|--------|-----------------|---------------|-------------|
| 1 | Capture images | A person's face | Images (in/out) are captured and stored | Images (in/out) are captured and stored | Pass |
| 2 | Train the image dataset | Stored images of a face | Feature extraction and encoding | Feature vectors are created and encoded values are stored | Pass |
| 3 | Face recognition | A live stream of a person's face | The name of the detected person is displayed on the screen | The name of the detected person is displayed on the screen | Pass |
| 4 | Liveness face detection | A live stream of a person's face | Label (real/fake face) of the detected person is displayed on the screen | Label (real/fake face) of the detected person is displayed on the screen | Pass |
| 5 | Face mask detection | A live stream of a person's face | Label (mask/without mask) of the detected person is displayed on the screen | Label (mask/without mask) of the detected person is displayed on the screen | Pass |
| 6 | Update time attendance to CSV file | Time in/out after successful ID recognition | Save information (name, time in/out, time engaged) to the CSV file | Information (name, time in/out, time engaged) is saved to the CSV file | Pass |
| 7 | Detect multiple faces and update time attendance to CSV file | Multiple faces from a live video stream | Update time attendance for all detected faces | Attendance is updated only for a single face | Fail |

The prediction values using a matching threshold of 0.5, with the recorded True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN) for face mask and face liveness detection in various scenarios, are shown in Tables 4.2, 4.3, and 4.4, respectively. To evaluate performance, five additional volunteers were recruited whose faces were not stored in the database. Accordingly, the testing datasets include two scenarios, indoor and outdoor, with varying levels of brightness. Each subject was required to be present for the testing three times: the frontal face, the right face (at an angle of ±30 degrees), and the left face (at an angle of ±30 degrees) were all tested at a distance of 50 to 100 cm.

To assess the effectiveness of the system, various datasets are employed. A dataset containing varying conditions, such as illumination and expression, supports the system and the research purposes, and our own database is used to analyze the system in real-time applications. According to previous researchers' literature reviews, measuring recognition accuracy is the usual way to demonstrate the effectiveness of a system. The accuracy (recognition rate), precision, recall, and F1 score are defined as follows:

\mathrm{Accuracy} = \frac{\text{total matched images}}{\text{total tested images}} \times 100\% = \frac{TP + TN}{TP + TN + FP + FN} \times 100\%    (Eq. 4.2)

\mathrm{Precision} = \frac{TP}{TP + FP} \times 100\%    (Eq. 4.3)

\mathrm{Recall} = \frac{TP}{TP + FN} \times 100\%    (Eq. 4.4)

F_1 = 2 \times \frac{\mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}    (Eq. 4.5)

The experimental tests cover many different cases: the number of input images varies from 200 to 1000, divided equally between the labels, and each image-count configuration is retried in each scenario, as the tables below show. According to the findings, the suggested attendance system using deep learning models achieved fairly accurate predictions. The tables below show the total performance using the confusion matrix. The suggested method obtained an overall accuracy of about 95.3% for the face mask detection model and 75.1% for the face liveness detection model (Equation 4.2), a precision of 96.4% for face mask detection and 76.5% for fake face detection (Equation 4.3), and a recall of 94% and about 74.3%, respectively (Equation 4.4).
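These definitions translate directly into code. The following minimal Python function reproduces Equations 4.2 through 4.5 from raw confusion-matrix counts; the example values are taken from the first indoor row of Table 4.2:

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 (Eq. 4.2-4.5), all in percent."""
    accuracy = (tp + tn) / (tp + tn + fp + fn) * 100
    precision = tp / (tp + fp) * 100
    recall = tp / (tp + fn) * 100
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# First indoor row of Table 4.2: TP=92, TN=94, FP=6, FN=8 on 200 images
acc, prec, rec, f1 = metrics(tp=92, tn=94, fp=6, fn=8)
print(f"{acc:.1f}% {prec:.1f}% {rec:.1f}% {f1:.1f}%")  # 93.0% 93.9% 92.0% 92.9%
```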
Table 4.2 Testing actual accuracy for face mask detection (Mask is the positive class)

| Scenario | Test No | Images | TP (Mask) | FN (Mask) | TN (No Mask) | FP (No Mask) | Precision (Mask) | Recall (Mask) | F1 (Mask) | Accuracy |
|----------|---------|--------|-----------|-----------|--------------|--------------|------------------|---------------|-----------|----------|
| Indoor | 1 | 200 | 92 | 8 | 94 | 6 | 93.9% | 92.0% | 92.9% | 93.0% |
| Indoor | 2 | 200 | 91 | 9 | 94 | 6 | 93.8% | 91.0% | 92.4% | 92.5% |
| Indoor | 3 | 600 | 278 | 22 | 288 | 12 | 95.9% | 92.7% | 94.2% | 94.3% |
| Indoor | 4 | 600 | 269 | 31 | 293 | 7 | 97.5% | 89.7% | 93.4% | 93.7% |
| Indoor | 5 | 1000 | 471 | 29 | 488 | 12 | 97.5% | 94.2% | 95.8% | 95.9% |
| Indoor | 6 | 1000 | 466 | 34 | 491 | 9 | 98.1% | 93.2% | 95.6% | 95.7% |
| Outdoor | 1 | 200 | 95 | 5 | 96 | 4 | 96.0% | 95.0% | 95.5% | 95.5% |
| Outdoor | 2 | 200 | 96 | 4 | 97 | 3 | 97.0% | 96.0% | 96.5% | 96.5% |
| Outdoor | 3 | 600 | 291 | 9 | 290 | 10 | 96.7% | 97.0% | 96.8% | 96.8% |
| Outdoor | 4 | 600 | 289 | 11 | 287 | 13 | 95.7% | 96.3% | 96.0% | 96.0% |
| Outdoor | 5 | 1000 | 478 | 22 | 488 | 12 | 97.6% | 95.6% | 96.6% | 96.6% |
| Outdoor | 6 | 1000 | 476 | 24 | 489 | 11 | 97.7% | 95.2% | 96.5% | 96.5% |

Table 4.3 Testing actual accuracy for liveness face detection (Fake is the positive class)

| Scenario | Test No | Images | TP (Fake) | FN (Fake) | TN (Real) | FP (Real) | Precision (Fake) | Recall (Fake) | F1 (Fake) | Accuracy |
|----------|---------|--------|-----------|-----------|-----------|-----------|------------------|---------------|-----------|----------|
| Indoor | 1 | 200 | 71 | 31 | 69 | 29 | 71.0% | 69.6% | 70.3% | 70.0% |
| Indoor | 2 | 200 | 72 | 29 | 71 | 28 | 72.0% | 71.3% | 71.6% | 71.5% |
| Indoor | 3 | 600 | 226 | 74 | 226 | 74 | 75.3% | 75.3% | 75.3% | 75.3% |
| Indoor | 4 | 600 | 231 | 81 | 219 | 69 | 77.0% | 74.0% | 75.5% | 75.0% |
| Indoor | 5 | 1000 | 431 | 99 | 401 | 69 | 86.2% | 81.3% | 83.7% | 83.2% |
| Indoor | 6 | 1000 | 417 | 97 | 403 | 83 | 83.4% | 81.1% | 82.2% | 82.0% |
| Outdoor | 1 | 200 | 73 | 32 | 68 | 27 | 73.0% | 69.5% | 71.2% | 70.5% |
| Outdoor | 2 | 200 | 71 | 30 | 70 | 29 | 71.0% | 70.3% | 70.6% | 70.5% |
| Outdoor | 3 | 600 | 212 | 98 | 202 | 88 | 70.7% | 68.4% | 69.5% | 69.0% |
| Outdoor | 4 | 600 | 229 | 84 | 216 | 71 | 76.3% | 73.2% | 74.7% | 74.2% |
| Outdoor | 5 | 1000 | 402 | 103 | 397 | 98 | 80.4% | 79.6% | 80.0% | 79.9% |
| Outdoor | 6 | 1000 | 411 | 114 | 386 | 89 | 82.2% | 78.3% | 80.2% | 79.7% |

Table 4.4 Testing actual accuracy for face and face mask recognition

| Scenario | Test No | Images | Positive | Negative | Accuracy |
|----------|---------|--------|----------|----------|----------|
| Indoor | 1 | 100 | 82 | 18 | 82.0% |
| Indoor | 2 | 100 | 86 | 14 | 86.0% |
| Indoor | 3 | 300 | 263 | 37 | 87.7% |
| Indoor | 4 | 300 | 274 | 26 | 91.3% |
| Indoor | 5 | 500 | 456 | 44 | 91.2% |
| Indoor | 6 | 500 | 444 | 56 | 88.8% |
| Outdoor | 1 | 100 | 84 | 16 | 84.0% |
| Outdoor | 2 | 100 | 86 | 14 | 86.0% |
| Outdoor | 3 | 300 | 277 | 23 | 92.3% |
| Outdoor | 4 | 300 | 281 | 19 | 93.7% |
| Outdoor | 5 | 500 | 455 | 45 | 91.0% |
| Outdoor | 6 | 500 | 468 | 32 | 93.6% |

The strict matching value employed, as shown in the tables above, results in a good accuracy rate: because there are fewer false negative instances, the likelihood of false positive cases is reduced. However, the testing results can vary if tested with different thresholds, at different distances from the camera, or with a new camera. Since each test subject is tested at a distance of 50 to 100 cm, the resolution of the facial region is relatively high, and the feature vectors acquired during the test are more accurate and contain more information. The testing, however, is carried out remotely using a separate camera and computer; due to these considerations, performance testing may experience minor problems. Performance testing must be carried out using various matching criteria in order to establish the desired threshold for future work.
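Since the reported numbers depend on the 0.5 matching threshold, a simple sweep makes the trade-off concrete. The sketch below is an assumption about how such a sweep could be run with the face_recognition library; all variable names are placeholders, and this is not the thesis's actual test harness:

```python
import numpy as np
import face_recognition

def match_accuracy(known_encodings, known_labels, test_encodings, test_labels,
                   threshold=0.5):
    """Fraction of test faces matched to the correct enrolled identity.
    Distances are Euclidean distances between 128-d encodings
    (smaller means more similar)."""
    correct = 0
    for encoding, true_label in zip(test_encodings, test_labels):
        distances = face_recognition.face_distance(known_encodings, encoding)
        best = int(np.argmin(distances))
        predicted = known_labels[best] if distances[best] <= threshold else "Unknown"
        correct += (predicted == true_label)
    return correct / len(test_labels)

# A stricter threshold trades false positives for false negatives.
for t in (0.4, 0.5, 0.6):
    print(t, match_accuracy(known_encodings, known_labels,
                            test_encodings, test_labels, t))
```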
CHAPTER 5: CONCLUSION AND FUTURE WORK

Before a project like this was developed, the traditional technique of taking attendance had many flaws, and most institutions had many problems with it. Therefore, facial recognition technology integrated into the attendance monitoring system not only guarantees correct attendance taking but also removes the shortcomings of the prior system. Technology can eliminate these flaws, which not only saves money but also minimizes the need for human interaction by delegating the difficult tasks to machines.

This thesis addressed research on face mask detection and facial recognition using deep learning methods by creating a model with machine learning. Even though half of the face was hidden by a face mask, the technique produced accurate and fast results for facial recognition security systems. The test findings reveal a high degree of accuracy in determining whether a person is wearing a face mask, not wearing one, or wearing one incorrectly. A notable accomplishment was the model's ability to reach a performance accuracy of around 85%. The study also provided a valuable method for preventing the COVID-19 disease by allowing everyone to wear a face mask while completing biometric verification.

However, the proposed approach has several restrictions. First, the input image must be frontal, and only one face can be recognized initially. Second, accuracy may suffer under poor lighting. Third, incorrect recognition may occur if the captured image is blurry, among other reasons.

Additionally, this idea could help the public healthcare system: wearing a mask properly can considerably limit coronavirus transmission during the pandemic. A fingerprint biometric system requires placing a finger on the surface of the attendance device, and the coronavirus is transmitted through contact with infected surfaces. This project may be used at colleges, universities, or workplaces with automatic doors to determine whether a student or staff member is wearing a mask and to record his or her attendance. Future scaling of this system is possible in accordance with changing conditions and settings; for instance, the image processing may be scaled to recognize several people in a crowd who are not wearing masks on city streets. Finally, the system not only resolves problems that existed in the previous model but also gives the user easy access to the collected information, refining the presence of technology to aid human needs.

REFERENCES

[1] WHO, "Weekly epidemiological update on COVID-19 - 24 April 2022", Emergency Situational Updates, Edition 63, World Health Organization, 26 October 2021.
[2] V. Wati, K. Kusrini, H. Al Fatta, and N. Kapoor, "Security of facial biometric authentication for attendance system", Multimedia Tools and Applications, vol. 80, pp. 23625-23646, 2021.
[3] S. B. Ahmed, S. F. Ali, J. Ahmad, M. Adnan, and M. M. Fraz, "On the frontiers of pose invariant face recognition: a review", Artificial Intelligence Review, vol. 53, pp. 2571-2634, 2020.
[4] Y. Sun, X. Wang, and X. Tang, "Deeply learned face representations are sparse, selective, and robust", CoRR, 2014.
[5] G. K. Jakir Hussain, R. Priya, S. Rajarajeswari, P. Prasanth, and N. Niyazuddeen, "The Face Mask Detection Technology for Image Analysis in the Covid-19 Surveillance System", IOP Publishing Ltd, 2021. https://doi.org/10.1088/1742-6596/1916/1/012084
[6] A. K. Bhadani and A. Sinha, "A Facemask Detector Using Machine Learning and Image Processing Techniques", Amity University Jharkhand, Ranchi, Jharkhand, 2021.
[7] J. Joseph and K. P. Zacharia, "Automatic attendance management system using face recognition", International Journal of Science and Research (IJSR), vol. 2, no. 11, pp. 327-330, 2013.
[8] A. Patil and M. Shukla, "Implementation of classroom attendance system based on face recognition in class", International Journal of Advances in Engineering & Technology, vol. 7, no. 3, p. 974, 2014.
[9] J. Kanti and S. Sharma, "Automated Attendance using Face Recognition based on PCA with Artificial Neural Network", International Journal of Science and Research (IJSR), 2012.
[10] K. MuthuKalyani and A. VeeraMuthu, "Smart application for AMS using face recognition", Computer Science & Engineering, vol. 3, no. 5, 2013.
[11] B. J. Deshmukh and S. M. Kharad, "Efficient Attendance Management: A Face Recognition Approach", International Journal of Computer Science Issues, 2014.
[12] R. Draelos, "The History of Convolutional Neural Networks", Glass Box, April 13, 2019. https://glassboxmedicine.com/2019/04/13/a-short-history-of-convolutional-neural-networks/
[13] L. Deng and D. Yu, "Deep learning: methods and applications", Foundations and Trends in Signal Processing, vol. 7, Now Publishers Inc., 2014.
[14] B. Scholkopf and A. J. Smola, "Learning with kernels: support vector machines, regularization, optimization, and beyond", MIT Press, 2001.
[15] A. K. Jain, "Data clustering: 50 years beyond K-means", Pattern Recognition Letters, vol. 31, no. 8, ISSN 0167-8655, 2010.
[16] V. Mnih, A. Puigdomenech Badia, M. Mirza, and A. Graves, "Asynchronous Methods for Deep Reinforcement Learning", International Conference on Machine Learning, 2016.
[17] Y. Bengio, "Learning deep architectures for AI", Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1-127, 15 Nov. 2009. http://dx.doi.org/10.1561/2200000006
[18] A. Kumar, "Different Types of CNN Architectures Explained: Examples", Data Analytics, 2022.
[19] J. K. Rahul Jayawardana and T. Sameera Bandaranayake, "Analysis of optimizing neural networks and artificial intelligent models for guidance, control, and navigation systems", International Research Journal of Modernization in Engineering Technology and Science, 2021.
[20] W. Liu et al., "SSD: Single Shot MultiBox Detector", ECCV 2016, Computer Vision and Pattern Recognition, 2015. https://doi.org/10.48550/arXiv.1512.02325
[21] Srishilesh P. S., "An Introduction to KNN Algorithm", Slack community, 2021.
[22] Z. Zhang, "A Survey of Recent Advances in Face Detection", Microsoft, 2010.
[23] M. Xin and Y. Wang, "Research on image classification model based on deep convolutional neural network", vol. 8, 2019.
[24] OpenCV team, "OpenCV", Opencv.org. [Online]. Available: https://opencv.org/, 2020.
[25] A. Rosebrock, "COVID-19: Face Mask Detector with OpenCV, Keras/TensorFlow, and Deep Learning", PyImageSearch, May 4, 2020.
[26] A. Rosebrock, "Liveness Detection with OpenCV", PyImageSearch, March 11, 2019.
