
UNDERGRADUATE THESIS
Topic: PERSONAL AUTHENTICATION BY SINGLE-CHANNEL ECG SIGNALS

Structure

  • CHAPTER 1: FUNDAMENTAL BACKGROUND INFORMATION
    • 1.1. Biometrics authentication
      • 1.1.1. Types of biometrics
      • 1.1.2. Basic components of a biometric system
      • 1.1.3. Some criteria of biometrics
    • 1.2. Electrocardiography (ECG/EKG)
      • 1.2.1. Definition
      • 1.2.2. ECG waveform
      • 1.2.3. Different noise in ECG signals
    • 1.3. Wavelet Transform (WT) and Discrete Wavelet Transform (DWT)
      • 1.3.1. Fundamental Concepts and Overview of Wavelet Transform
      • 1.3.2. Multi-resolution Analysis and Continuous Wavelet Transform
      • 1.3.3. Multi-resolution Analysis: Discrete Wavelet Transform
      • 1.3.4. Wavelet families
    • 1.4. Statistical data
    • 1.5. Machine learning (ML)
      • 1.5.1. Types of machine learning
      • 1.5.2. Support Vector Machine (SVM)
      • 1.5.3. K-nearest neighbor (KNN)
  • CHAPTER 2: SIGNAL PREPARATION
    • 2.1. ECG Acquisition
      • 2.1.1. ECG recording equipment: Kardia Mobile
      • 2.1.2. Web plot digitizer
    • 2.2. Experiment setup
  • CHAPTER 3: DESIGNING ECG BASED PERSONAL AUTHENTICATION SYSTEM
    • 3.1. Block diagram
      • 3.1.1. Pre-processing
      • 3.1.2. Feature extraction algorithm from DWT using the Daubechies wavelet
      • 3.1.3. Classification
    • 3.2. Results and Discussion

Content

FUNDAMENTAL BACKGROUND INFORMATION

Biometrics authentication

Biometrics is a technology used to identify, analyze, and measure an individual’s physical and behavioral characteristics. A biometric system takes an individual’s physiological or behavioral traits, analyzes them, and identifies the individual as a genuine or malicious user. Biometric identification consists of determining the identity of a person. The aim is to capture an item of biometric data from this person: a photo of their face, a record of their voice, or an image of their fingerprint. This data is then compared to the biometric data of several other persons kept in a database. In this mode, the question is a simple one: "Who are you?" Biometric authentication is the process of comparing data for a person's characteristics to that person's biometric "template" in order to determine resemblance. The reference model is first stored in a database or on a secure portable element like a smart card. The stored data is then compared to the biometric data of the person to be authenticated; here it is the person's identity that is being verified. In this mode, the question being asked is: "Are you indeed Mr. or Mrs. X?" [1]

Personal biometric authentication: in this problem, only one person is enrolled in the system, so the task is simply to decide whether a presented signal belongs to that person or not.

Overall biometric authentication: this problem allows the many people enrolled in the system's database to enter it.

In my project, I solve the personal biometric authentication problem: I determine whether a signal belongs to person A or not.

There are many types of biometric devices, but five types of biometric security are most commonly used. Biometrics is basically the recognition of human traits that are unique to each person, including facial recognition, fingerprints, voice recognition, retina scans, palm prints, and more, as shown in Figure 1.1. Biometric technology is used to keep devices safe and to ensure that unauthorized people stay away from valuable assets and information; any one of these five biometric security methods is an effective way to keep things safe. [2]

Figure 1.1 Types of biometrics

o Retina scanner

Retina scanning is a biometric verification technology that uses an image of an individual’s retinal blood-vessel pattern as a unique identifying trait for access to secure installations. The human retina is a thin tissue composed of neural cells that is located within the posterior part of the eye, as shown in Figure 1.2.

Due to the complex shape of the capillaries that supply the retina with blood, every person's retina is unique. The network of blood vessels within the retina is so complicated that even identical twins do not share a similar pattern. Although retinal patterns can be altered in cases of diabetes, glaucoma, or retinal degenerative disorders, the retina typically remains unchanged from birth until death. Due to its unique and unchanging nature, the retina appears to be the most precise and dependable biometric [3].

o Iris scanning

Iris recognition uses digital camera technology, with subtle infrared illumination reducing specular reflection from the convex cornea, to create images of the detail-rich, intricate structures of the iris, as shown in Figure 1.3. Converted into digital templates, these images provide mathematical representations of the iris that yield unambiguous positive identification of an individual. Iris recognition efficiency is rarely impeded by glasses or contact lenses. Iris technology has the smallest outlier group (people who cannot use or enroll) of all biometric technologies.

[4] Moreover, it has a small template size that allows speedy comparisons, making iris recognition technology particularly well suited for one-to-many identification. Even genetically identical individuals have distinct iris textures, which further confirms that it is a highly accurate and reliable technique. Because of its speed of comparison, iris recognition is the only biometric technology well suited for one-to-many identification. Another advantage of iris recognition is its stability, or template longevity: a single enrollment can last an entire lifetime. There are further benefits of using the iris for biometric identification: it is an internal organ that is well protected against damage and wear by a highly transparent and sensitive membrane (the cornea).

A new technology requires substantial investment and hence may not be suitable for small organizations. It is quite difficult to perform iris recognition from a distance larger than a few meters, and the subject to be identified needs to be cooperative: the subject should hold his or her head still and look into the camera. Iris recognition is also susceptible to poor image quality and the associated failure-to-enroll rates.

When it comes to biometrics, the iris has several major advantages compared to a fingerprint:

- You do not spread the information around every time you touch something.

- The iris stays virtually unchanged throughout a person’s life. A fingerprint, on the other hand, can be dirtied, scarred, or eroded.

- You cannot use a fingerprint with dirty or sweaty hands. Irises, however, have no such problem. [6]

However, both retina scanners and iris scanners have proven to be easy to trick simply by using a high-quality photograph of the subject’s eyes or face.

Figure 1.3 Iris sample

o Fingerprint scanner

Fingerprints are the graphical flow-like ridges present on human fingers. Finger ridge configurations do not change during the life of a person except through accidents such as bruises and cuts on the fingertips. This property makes fingerprints a very attractive biometric identifier. Fingerprint-based personal identification (Figure 1.4) has been used for a very long time. As far as cost goes, fingerprint scanning is at the lower end of the scale. The cheapest fingerprint scanners only scan the actual print, while the more expensive ones also detect the presence of blood in the finger, the size and shape of the thumb, and many other features, as shown in Figure 1.5. These costlier systems actually capture a 3D image of the fingerprint, thereby making it much harder for the fingerprint to be counterfeited.

Tests conducted by the International Biometric Group on fingerprint systems of participating vendors found that the false acceptance rate (FAR) of these systems ranged from 0% to 5%. On the same day as enrollment, tests were conducted for the false rejection rate (FRR), which ranged from 0% to 35%; however, when the tests were conducted six weeks later, the FRRs ranged from 0% to 66%. Systems from some vendors worked very well while others had accuracy problems. When vendor-independent tests were conducted by FVC2006, the FAR was held constant at a rate of 0.01%. This rate is considered sufficient for most authentication scenarios.

o Facial biometrics

Each individual has a distinctly unique face, even twins whose faces cannot be differentiated by the human eye. There are certain markers that enable biometric recognition scanners to instantly recognize the uniqueness of every individual by examining their facial features. A face recognition system (Figure 1.6) measures and matches these unique characteristics for the purposes of identification or authentication. Often leveraging a digital or connected camera, facial recognition software can detect faces in images, quantify their features, and then match them against stored templates in a database.

Figure 1.6 Automatic face recognition system.

Several factors can affect the accuracy of facial biometrics. It might not work well under poor lighting conditions, in the presence of sunglasses or other objects that partially cover the subject’s face, or with low-resolution images. The face of a person also changes over time. The accuracy of some facial recognition systems is affected by variation in facial expressions as well; for this reason, most countries allow only neutral facial expressions in passport photos.

A real-world facial biometrics accuracy test verified passport photos of passengers against a lower-quality live scan photo taken at the border control gates. The top-performing vendor in the National Institute of Standards and Technology (NIST) test achieved an FRR of 1.1%. Thus, face recognition systems provide better accuracy than live guards performing a manual comparison of passport photos with the passport holders.

o Voice recognition

Voice or speaker recognition is the ability of a machine or program to receive and interpret dictation or to understand and carry out spoken commands. Voice recognition has gained prominence and use with the rise of AI and intelligent assistants, such as Amazon's Alexa, Apple's Siri, and Microsoft's Cortana. Each person in the world has a unique voice pattern, as shown in Figure 1.7, even though the differences are slight and hardly noticeable to the human ear. With voice recognition software, however, these minute differences in each individual's voice can be detected and validated, granting access only to the individual whose pitch and voice level match the enrolled pattern.

Figure 1.7 Sample voice clip as shown in sound editor

Voice recognition systems enable consumers to interact with technology simply by speaking to it, enabling hands-free requests, reminders, and other simple tasks. It can be surprisingly effective at differentiating two people who have almost identical voice patterns. Voice recognition software on computers requires that analog audio be converted into digital signals, known as analog-to-digital conversion. For a computer to decipher a signal, it must have a digital database, or vocabulary, of words or syllables, as well as a fast means of comparing this data to incoming signals. The speech patterns are stored on the hard drive and loaded into memory when the program is run. A comparator checks these stored patterns against the output of the A/D converter, an action called pattern recognition.

Electrocardiography (ECG/ EKG)

The electrocardiogram (ECG), resulting from the electrical conduction through the heart needed for its contraction, is one of the most recent traits to be explored for biometric purposes [13, 14]. Despite being far less developed or widespread than face or fingerprint biometrics, the ECG offers unique advantages in terms of universality, uniqueness, permanence, and liveness assurance that attest to its potential for the recognition of individuals [15, 16].

An EKG, also called an ECG or electrocardiogram, is a recording of the heart's electrical activity. It is a quick and painless procedure. An EKG captures a tracing of the cardiac electrical impulse as it moves from the atria to the ventricles. These electrical impulses cause the heart to contract and pump blood [17].

The leads are placed at specific locations on the body to record the ECG, either on graph paper or on monitors. The human heart contains four chambers: the right atrium, left atrium, right ventricle, and left ventricle. The upper chambers are the two atria and the lower chambers are the two ventricles. Under healthy conditions, the heartbeat begins at the sinoatrial (SA) node in the right atrium, where a special group of cells sends electrical signals across the heart. This signal travels from the atria to the atrioventricular (AV) node. The AV node connects to a group of fibers in the ventricles that conducts the electrical signal and transmits the impulse to all parts of the lower chambers, the ventricles. To ensure that the heart is functioning properly, this path of propagation must be traced accurately [18]. The basic structure of the heart is depicted in Figure 1.10.

Figure 1.10 Schematic anatomy of the human heart

To determine the potential use of ECG as a biometric, it is necessary to evaluate how ECG satisfies the requirements for biometric characteristics.

A "perfect" biometric characteristic should be: o Universal, i.e., each individual possesses this characteristic. o Easily measured, i.e., it is quite easy technically and convenient for an individual to obtain the characteristic. o Unique, i.e., there are no two individuals with identical characteristics. o Permanent, i.e., the characteristic does not change over time.

"Good" biometric characteristics can to a greater or lesser extent satisfy these requirements, depending on the purpose and application of biometric system.

Each displayed heartbeat is a sequence of electrical waves characterized by peaks and valleys. An ECG mainly provides two kinds of information. One is the duration of the electrical wave passing through the heart, which determines whether the electrical activity is normal, slow, or irregular. The other is the amount of electrical activity passing through the heart muscle, which helps to find out whether parts of the heart are too large or overworked. The frequency range of an ECG signal is 0.05–100 Hz and its dynamic range is 1–10 mV. The ECG signal is characterized by five peaks and valleys represented by the letters P, Q, R, S, and T; sometimes a U wave is also present. The performance of ECG analysis is based on the accurate and reliable detection of the QRS complex as well as the T and P waves [19] [20].

An ideal ECG wave is shown in Figure 1.11:

Figure 1.11 Typical cardiac waveform

o P waves are the wave of atrial depolarization (amplitude A < 0.25 mV, time

ξi > 1, e.g., x1 and x3. If yi = ±1 is the label of xi in the unsafe regions, then ξi = |wT xi + b − yi|. With the soft-margin SVM, the objective function gains an additional term to keep this sacrifice small. From there we have the objective function:

min over (w, b, ξ): (1/2)||w||^2 + C * sum_{n=1}^{N} ξn

where C is a positive constant and ξ = [ξ1, ξ2, ..., ξN].

The constant C adjusts the relative importance of the margin versus the sacrifice. It is predetermined by the programmer or can be determined by cross-validation. The optimization problem in standard form for soft-margin SVM:

min over (w, b, ξ): (1/2)||w||^2 + C * sum_{n=1}^{N} ξn
subject to: yn(wT xn + b) ≥ 1 − ξn and ξn ≥ 0, for n = 1, ..., N
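The soft-margin objective above can also be written in unconstrained hinge-loss form, min (1/2)||w||^2 + C Σ max(0, 1 − y(wᵀx + b)), and minimized directly by sub-gradient descent. The sketch below is illustrative only: plain Python with hypothetical toy data, not the MATLAB toolbox used in the thesis.

```python
def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=200):
    """Sub-gradient descent on the soft-margin objective
    0.5*||w||^2 + C * sum(max(0, 1 - y_i*(w.x_i + b)))."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:   # point inside margin: hinge loss is active
                w = [wj - lr * (wj - C * yi * xj) for wj, xj in zip(w, xi)]
                b += lr * C * yi
            else:            # safely classified: only the regularizer acts
                w = [wj - lr * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy linearly separable data (hypothetical, for illustration only)
X = [[2.0, 2.0], [1.5, 2.5], [-2.0, -1.5], [-1.0, -2.0]]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)
```

Increasing C penalizes margin violations more heavily, trading a wider margin for fewer misclassified training points, which is exactly the role the text assigns to the cross-validated constant.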

SIGNAL PREPARATION

ECG Acquisition

2.1.1 ECG Recording equipment: Kardia mobile

In a 12-lead ECG, the recordings are collected by placing electrodes (small, sticky patches) at fixed positions on the skin. Each electrode is connected to one input of a differential amplifier. Functionally, these amplifiers amplify the voltage between the active electrode and the reference. In a 12-lead ECG, there are 12 leads calculated from 10 electrodes: six are placed on the chest and the rest on the limbs. The 12-lead ECG is the gold standard for ECG diagnosis and is used for both resting and stress ECGs; thus, it can diagnose almost all cardiac diseases. In addition, 1-lead, 3-lead, and 6-lead ECGs can also record the ECG with different reading times. Although the 1-lead ECG has limitations in disease recognition, it captures enough features of the ECG signal to recognize individuals. Nowadays, there are many small, portable devices on the market for recording ECG signals, and all of them use a 1-lead ECG.

In this research, I used the Kardia Mobile device with two metal electrodes, designed by AliveCor, to acquire ECG signals. This device is an FDA-cleared, clinical-grade personal EKG monitor: Kardia captures a medical-grade EKG anywhere, anytime. Furthermore, I used MATLAB (R2017b) to further analyze these signals.

The AliveCor KardiaMobile ECG is a single-channel cardiac event recorder. It consists of a device and an app that enable you to record and review electrocardiograms (ECGs) anywhere, anytime. The device (Figure 2.1) attaches to the back of most iOS and Android devices and communicates wirelessly with the free Kardia app, providing powerful display, analysis, and communication capabilities. It simply rests on your fingers or chest to record an ECG. AliveCor’s proprietary technology converts electrical impulses from the user’s fingertips into ultrasound signals transmitted to the mobile device’s microphone. Fast, efficient signal transmission results in minimal battery drain [35].

The manufacturer provides some specifications, listed below.

- Frequency response: 0.5 Hz to 40 Hz

- Battery life: 12 months of typical use

The free software can be downloaded from Google Play or the App Store (Figure 2.2). This software combines several different applications, such as EKG, blood pressure, weight, and heart rate; however, the sensor above is used only for the EKG and heart rate applications. The acquired signals are saved as PDF files. The records from the application conform to standard ECG record paper. After recording, the app allows the user to fill in patient information and save the files on the phone or send them by email.

The scale is printed with the signal, so the time and amplitude can be determined.

Figure 2.2 Kardia software

o Setup

Firstly, download the Kardia app from Google Play or the App Store. Next, set up your account with information such as name, age, height, weight, and location. The default measurement duration is 30 seconds; however, it can be extended up to 60 seconds.

o Accuracy

The problem with Kardia Mobile is that it has trouble measuring near an electrical source; two meters is the minimum distance for a normal measurement. Moreover, the signal sometimes flickers, which makes measurement more difficult.

Because the obtained signals are saved as PDF files, I need software to digitize the signals into numeric format and then save them as Excel files. After trying several similar programs, I found that WebPlotDigitizer is the best software, with high accuracy. This software digitizes graphs from their images by filtering the colors that distinguish the graph from the background.

This software can be downloaded from https://automeris.io/WebPlotDigitizer/; extract the file and use it directly.

The steps to digitize a signal are:

- Open the signal file and zoom to 162.67% to get the best resolution.

- Because the saved signal is cut into rows, as in Figure 1.47, use a snipping tool to cut and save each row as an image, so that there are 8 images in total.

- Open WebPlotDigitizer and load an image (note the order of the images):
o Load one image, choose 2D (XY) plot, and set the calibration points x1, x2, y1, y2 according to the scale of the ECG paper.
o Use the ‘Pen’ to cover the signal.
o Set the parameters as in the right-hand box in Figure 1.48, then run.
o Click ‘View data’ and adjust the number formatting: 5 digits, fixed notation, and column separator by space.
o Click ‘Copy to clipboard’ to copy the data into an Excel file.
o Repeat for the rest of the images and save the data sequentially.

In this software, I digitize signals at a sample rate of 350 Hz, which is higher than Kardia's; this reduces the error when reconstructing the signal.
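Digitized points from a plot digitizer are generally not uniformly spaced in time, so resampling onto a uniform 350 Hz grid is implied. A minimal linear-interpolation sketch, with hypothetical sample points:

```python
def resample_uniform(times, values, fs=350.0):
    """Linearly interpolate irregularly spaced digitized (time, value)
    points onto a uniform grid at sampling rate fs (Hz)."""
    t0, t1 = times[0], times[-1]
    n = int((t1 - t0) * fs) + 1
    out_t = [t0 + k / fs for k in range(n)]
    out_v = []
    j = 0  # index of the left neighbor in the irregular grid
    for t in out_t:
        while j + 1 < len(times) - 1 and times[j + 1] <= t:
            j += 1
        dt = times[j + 1] - times[j]
        frac = (t - times[j]) / dt if dt > 0 else 0.0
        out_v.append(values[j] + frac * (values[j + 1] - values[j]))
    return out_t, out_v

# Hypothetical digitizer output: non-uniformly spaced sample times (s)
times = [0.000, 0.004, 0.011, 0.015, 0.020]
values = [0.0, 0.1, 0.8, 0.2, 0.0]
t, v = resample_uniform(times, values, fs=350.0)
```

A real pipeline would read the clipboard/Excel columns produced by WebPlotDigitizer into `times` and `values` before this step.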

Figure 2.3 Obtained ECG from Kardia

Experiment setup

Unlike many studies, which have datasets readily available for analysis and characterization, in this thesis I set up my own process for ECG signal acquisition via Kardia, thus building a database from a set of people regardless of age.

The database contains about 120 records of ECG signals belonging to 12 different healthy persons aged 16 to 45, in five different states: after getting up, after eating, after taking a shower, after running, and before going to bed. Some necessary information about the recording environment must be noted, such as the experiment's period, room temperature, equipment, location, and room conditions.

Some notes for participants before experiments:

- Do not use Kardia Mobile while charging your phone.

- Do not take a recording while driving or during physical activity.

- Do not take a recording if the electrodes are dirty; clean them first.

- When recording, relax your arms and hands to reduce muscle noise. Rest the forearms and hands on a flat surface.

- Sit down and keep the arms fixed to avoid shaking.

- Stay at least 2 meters away from any electrical source.

- The surrounding environment should be reasonably quiet.

The subject sits on a chair and puts both hands on the table; the device is in front of the subject, next to the phone.

The total experiment time is 70 seconds. In the first 8 s, participants record a draft ECG to evaluate whether the signal is good or bad (clear peaks, stable). If it is good, they record the actual signal following the steps below. Otherwise, if the signal is bad, they repeat the test until the signal improves, or check the notes listed above.

The subject performs the following steps.

- Step 1: Press “Record your EKG” in the Kardia app.

- Step 2: Put your fingers on the device as in Figure 2.5 and adjust your posture until the signal is continuous, then start the recording. There are 2 seconds to stabilize the device and relax.

- Step 3: After that, keep the posture during the 1-minute recording.

- Step 4: When the recording finishes, fill in your personal information as instructed in the app.

Each person has 10 records, two for each status. The two records of the same status for each person must be made on different days. The total number of obtained signals is 120. I label them according to the person's name and an order number. Based on the defined order numbers, each day I follow a routine through the states (after getting up, after eating, after taking a shower, after running, before going to bed) and repeat it the next day. For the authenticated person, I recorded 75 times over 15 days.

For each record, I cut out the segment from 20 s to 50 s to ensure there is no noise from movement. At 350 Hz, this means keeping the 7000th to the 17500th sample.
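The seconds-to-samples bookkeeping for this cut is simple; a sketch at the 350 Hz digitizing rate (the placeholder signal values are hypothetical):

```python
def cut_segment(signal, fs, t_start, t_end):
    """Keep only the samples between t_start and t_end (seconds),
    discarding the noisier beginning and end of the recording."""
    i0 = int(t_start * fs)
    i1 = int(t_end * fs)
    return signal[i0:i1]

fs = 350                          # digitizing sample rate used in the thesis
signal = [0.0] * (70 * fs)        # a 70 s recording (placeholder values)
segment = cut_segment(signal, fs, 20, 50)
# 20 s * 350 Hz = sample 7000; 50 s * 350 Hz = sample 17500
```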

Figure 2.5 Putting fingers on device

DESIGNING ECG BASED PERSONAL AUTHENTICATION SYSTEM

Block diagram

After obtaining the digitized data (Figure 3.1), I performed pre-processing to remove the noise and keep the useful data. Then I selected the strongest features and passed them on for classification.

Figure 3.1 Raw signal after digitizer

[Block diagram: ECG recording → Preprocessing → Feature extraction → Classification]

First of all, each record is cut into a segment from 20 s to 50 s. After that, these segments are filtered. The purpose of signal pre-processing is to reduce noise and remove unnecessary samples. In [36], the low-frequency cutoff (high-pass filter) was set at 0.05 or 0.5 Hz, and the low-pass filter (high-frequency cutoff) at 40, 100, or 150 Hz; thus, there are several filter configurations: 0.05–40 Hz, 0.5–40 Hz, 0.05–100 Hz, 0.5–100 Hz, 0.05–150 Hz, and 0.5–150 Hz. However, according to the device manual, the manufacturer uses an enhanced filter with cutoff frequencies of 0.5 and 40 Hz; thus, I used the same filter configuration. Filtering removes baseline wander and power-line noise.
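A minimal sketch of such a 0.5–40 Hz band-pass stage, assuming a crude cascade of two first-order IIR sections (the manufacturer's actual filter design is not specified; a practical implementation would use e.g. a Butterworth design):

```python
import math

def smoothing_coeff(fc, fs):
    """Per-sample decay for a first-order IIR stage with cutoff fc (Hz)."""
    return math.exp(-2.0 * math.pi * fc / fs)

def bandpass(signal, fs, f_low=0.5, f_high=40.0):
    """Crude band-pass: a first-order low-pass at f_high cascaded with a
    first-order high-pass at f_low (input minus its slow baseline)."""
    a_lp = smoothing_coeff(f_high, fs)
    a_hp = smoothing_coeff(f_low, fs)
    lp, y = [], signal[0]
    for x in signal:              # low-pass stage: removes content above f_high
        y = (1 - a_lp) * x + a_lp * y
        lp.append(y)
    out, base = [], lp[0]
    for x in lp:                  # high-pass stage: removes baseline below f_low
        base = (1 - a_hp) * x + a_hp * base
        out.append(x - base)
    return out

fs = 350
# Hypothetical input: 1 mV DC baseline offset plus an in-band 10 Hz tone
sig = [1.0 + 0.5 * math.sin(2 * math.pi * 10 * n / fs) for n in range(fs * 10)]
filtered = bandpass(sig, fs)
```

The DC/baseline component is suppressed by the high-pass stage while the 10 Hz component, well inside the ECG band, passes almost unattenuated.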

Because the acquired signals are already pre-processed by the manufacturer, I compared histograms of the original signal and of the signal reconstructed using the DWT (Figure 3.2). It is very clear that the two histograms are nearly the same; thus, I do not reconstruct the signals to remove other noise.

Figure 3.2 Histogram of the original signal (above) and the synthesized signal (below)

3.1.2 Feature extraction Algorithm from DWT using Daubechies wavelet

The purpose of this section is to extract features that characterize the original ECG. Feature extraction is a dimensionality reduction process, where an initial set of raw variables is reduced to fewer features for processing, while still accurately and completely describing the original data set. In previous research, P. Sasikala et al. used Daubechies 4 to detect the QRS complex, T wave, and P wave [37]. The method of M. P. Nageswari et al. uses two types of feature extraction: morphological (R-R peak) and statistical (mean, kurtosis, skewness) [39]. Woo-Hyuk Jung et al. used a window removal method for ECG identification with high accuracy (95.23%) [42].

Biomedical signals typically consist of short-duration high-frequency components closely spaced in time, accompanied by long-duration low-frequency components closely spaced in frequency. Wavelets are considered suitable for analyzing such signals because they provide good frequency resolution along with finite time resolution. The choice of wavelet depends on the type of signal to be analyzed; a wavelet similar in shape to the signal is usually selected. Among the existing wavelet approaches (continuous, dyadic, orthogonal, biorthogonal), I use the real dyadic wavelet transform because of its good temporal localization properties and its fast computation [37]. Daubechies 4 (Db4) (Figure 3.3) and Daubechies 6 (Db6) (Figure 3.4) of the Daubechies family are similar in shape to the QRS complex, and their energy spectra are concentrated at low frequencies [37-41]. In this thesis, Db4 is chosen for extracting features because its shape is more similar to the QRS complex than Db6's.

The algorithm first decomposes the ECG signal into several sub-bands using Db4 at level 4, following the tree in Figure 3.5. The wavelet decomposition process downsamples the signal, which essentially means taking samples at a much lower rate than the original signal. Therefore, details are reduced and the QRS complex is preserved.

Figure 3.5 Frequency components of each decomposition level and the

The node ‘s’ is the original signal band. The yellow nodes are lowpass filters, and the blue nodes are highpass filters. At each level, the signal has fewer samples than the previous one due to downsampling by 2^n (where n is the level) relative to the original signal: the 2nd level has exactly half the number of samples of the 1st level, and the 3rd level has exactly half the number of samples of the 2nd level. The decomposed frequency bands are also shown in Figure 3.5.
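The halving of the sample count at each level can be demonstrated with a one-level analysis filter bank applied recursively. A Haar filter is used below purely for brevity (the thesis uses Db4; only the lengths matter for this illustration, and the placeholder signal is hypothetical):

```python
import math

def haar_dwt(signal):
    """One analysis level: low-pass (approximation) and high-pass (detail)
    outputs, each downsampled by 2. Haar stands in for Db4 here."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(signal[2*i] + signal[2*i+1]) * s for i in range(len(signal) // 2)]
    detail = [(signal[2*i] - signal[2*i+1]) * s for i in range(len(signal) // 2)]
    return approx, detail

signal = list(range(1024))        # placeholder signal, 1024 samples
lengths = [len(signal)]
a = signal
for level in range(4):            # 4-level decomposition tree, as in the thesis
    a, d = haar_dwt(a)
    lengths.append(len(a))
# lengths traces the per-level halving: 1024 -> 512 -> 256 -> 128 -> 64
```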

Detect R peaks in the downsampled signal

Once an R peak is detected in the 3rd-level reconstructed signal, it must be cross-validated in the actual signal.

First of all, find the values that are greater than 60% of the maximum value of the decomposed signal; invariably these are R peaks. As the decomposed signal is relatively noise-free, the R peaks are first detected in this noise-free signal. However, the ultimate goal is to detect the peaks in the original signal, whose sample values differ from the decomposed signal. Thus, my strategy is to first detect the R peaks in the downsampled signal and then cross-verify those points in the actual signal. Because of noise, higher-amplitude T waves can be falsely detected as R waves; to avoid this, a minimum interval between consecutive R waves is chosen, below which a spurious R wave is eliminated.

After that, detect the R peaks in the original signal: map each R location in the decomposed signal to an R location in the original signal by multiplying by 2^level (the downsampling factor). The R amplitudes are then read off at the corresponding R locations.
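The thresholding, minimum-interval, and index-mapping steps above can be sketched as follows. This is a pure-Python illustration: the impulse "signal" and the plain downsample standing in for the level-3 DWT approximation are both hypothetical.

```python
def detect_r_peaks(decomposed, level, original, fs_original=350):
    """Threshold at 60% of the decomposed signal's maximum, enforce a
    minimum R-R interval, then map each peak index back to the original
    signal by multiplying by 2**level and refining with a local max search."""
    threshold = 0.6 * max(decomposed)
    fs_dec = fs_original / 2 ** level
    min_gap = int(0.3 * fs_dec)        # assume R-R interval > 0.3 s (< 200 bpm)
    peaks_dec, last = [], -min_gap
    for i, v in enumerate(decomposed):
        if v >= threshold and i - last >= min_gap:
            peaks_dec.append(i)
            last = i
    peaks, half = [], 2 ** level       # refine within +/- one decimation step
    for i in peaks_dec:
        c = i * 2 ** level             # map back to the original index grid
        lo, hi = max(0, c - half), min(len(original), c + half + 1)
        peaks.append(lo + max(range(hi - lo), key=lambda k: original[lo + k]))
    return peaks

# Hypothetical demo: unit impulses stand in for R peaks, and a plain 8-fold
# downsample stands in for the level-3 DWT approximation
original = [0.0] * 1600
for idx in (344, 688, 1032, 1376):
    original[idx] = 1.0
decomposed = original[::8]             # 2**3 = 8-fold downsample
r_peaks = detect_r_peaks(decomposed, level=3, original=original)
```

The refinement window exists because, as the text notes, the sample values (and exact peak positions) in the original signal differ from those in the decomposed one.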

Detect the P, Q, S, T peaks with reference to the R peaks

From each R peak, search for minima and maxima: these are the Q and S peaks (minima) and the P and T peaks (maxima). So, loop over the R locations and search for the other peaks. Using the known intervals between the R peak and the other peaks, the minima and maxima are calculated: the points to the left are the P and Q peaks, and the points to the right are the S and T peaks.

Although the ECG segments have the same duration, the number of heartbeats in each record may differ. Thus, I calculate the mean amplitude of each peak type. Moreover, the mean, mean absolute deviation, standard deviation, and median are calculated over all segments to further increase accuracy. After that, the features are labeled into two classes: "not A" and "A", where A is the person to be authenticated.
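The statistics named above (plus the skewness and kurtosis listed later among the features) are standard; a self-contained sketch computing them over a set of hypothetical R-peak amplitudes:

```python
def feature_vector(amplitudes):
    """Statistical features used in the thesis: mean, median, mean absolute
    deviation (MAD), standard deviation (SD), skewness, and kurtosis."""
    n = len(amplitudes)
    mean = sum(amplitudes) / n
    srt = sorted(amplitudes)
    median = srt[n // 2] if n % 2 else (srt[n // 2 - 1] + srt[n // 2]) / 2
    mad = sum(abs(x - mean) for x in amplitudes) / n
    var = sum((x - mean) ** 2 for x in amplitudes) / n
    sd = var ** 0.5
    skew = sum((x - mean) ** 3 for x in amplitudes) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in amplitudes) / (n * sd ** 4)
    return {"mean": mean, "median": median, "MAD": mad,
            "SD": sd, "skewness": skew, "kurtosis": kurt}

# Hypothetical R-peak amplitudes from one segment (mV)
feats = feature_vector([1.02, 0.98, 1.05, 0.97, 1.01, 0.99])
```

In the full pipeline, one such set of statistics per peak type (and per segment) is concatenated into the feature vector that is labeled "A" or "not A".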

For the classification process, a support vector machine (SVM) is used because of its better generalization capability; it is a commonly used technique for ECG classification. In the nonlinear SVM model, input data are mapped into a high-dimensional feature space where they can be linearly separated. The SVM model finds the best hyperplane that separates all data points of one class from those of the other class. During training, the SVM simultaneously maximizes classification performance while minimizing the possibility of overfitting on the given data set.

In our classification problem, I trained the SVM classifier with the 120-record dataset described in the section above, choosing train/test ratios of 90/10, 80/20, 70/30, and 60/40. There are four steps in the classification, listed below.

- Choose the SVM type with the best accuracy by selecting 'All SVM'.

- Change the train/test ratio to find the most accurate result.

Results and Discussion

- Preprocessing: band-pass filter [0.5–50 Hz]

Figure 3.6 Signal and power spectrum before and after filtering

It is easy to see that the signal after filtering is smoother than the original signal; thus, the noise from the digitizing process has been removed.

- Features: I decompose the signals using Db4 at level 4. Figure 3.6 shows the coefficients of the signal at the 4 levels. The frequency bands are separated, and ca1, ca2, ca3, and ca4 are progressively cleaner signals; however, the number of samples is clearly halved at each level. The cd4 coefficients do not show the peaks clearly, which demonstrates that the original signal contains little noise.

My method is applied to the 120 training records from my experiments. The eleven features are meanR, meanP, meanQ, meanS, meanT, mean value, median value, MAD, skewness, kurtosis, and SD. Hence, the total number of feature values is 1320.

I divide the 120 records into two groups with train/test ratios of 90/10, 80/20, 70/30, and 60/40. After investigating these ratios and trying all SVM methods, I found that Medium Gaussian SVM and Weighted KNN give the best results. The accuracies, based on the MATLAB tool (which randomly selects the test samples for the model), are shown in the following table:

Table 3.1 Investigation of the accuracy with different test/train ratios

Table 3.1 indicates that the test/train ratio of 30/70 gives the best results; therefore, I used this ratio to classify the actual signals.

In order to verify the result, I carried out a further survey by testing 32 random new samples that did not belong to the trained model: 16 samples from subject A and the rest from others. This measures the accuracy, FRR, and FAR for the 30/70 test/train ratio, using the exported Medium Gaussian SVM and Weighted KNN models that had the highest accuracy; the results are shown in Tables 3.2 and 3.3.

Table 3.1 shows the training and testing results from the MATLAB tool.

Table 3.2 The actual results of the survey

Table 3.3 The actual accuracy, FAR, and FRR in the survey

From the results in Table 3.3, the FAR and FRR of the SVM are not close to each other; thus, the authentication results are not good. The accuracy is approximately 87.5%, which is not sufficient for a practical biometric system.
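For reference, FAR is the fraction of impostor samples that are wrongly accepted, and FRR is the fraction of genuine samples that are wrongly rejected. The sketch below uses illustrative error counts (two of each kind, chosen only so that the accuracy reproduces the 87.5% figure; they are not the thesis's actual per-class results).

```python
def far_frr(decisions, is_genuine):
    """decisions[i]: classifier accepted sample i; is_genuine[i]: ground truth."""
    fa = sum(d and not g for d, g in zip(decisions, is_genuine))  # false accepts
    fr = sum(g and not d for d, g in zip(decisions, is_genuine))  # false rejects
    n_impostor = sum(not g for g in is_genuine)
    n_genuine = sum(1 for g in is_genuine if g)
    far = fa / n_impostor
    frr = fr / n_genuine
    accuracy = 1.0 - (fa + fr) / len(decisions)
    return far, frr, accuracy

# 32 samples as in the survey: 16 genuine (subject A) + 16 impostors;
# 2 errors of each kind are assumed purely for illustration.
truth = [True] * 16 + [False] * 16
decided = [True] * 14 + [False] * 2 + [True] * 2 + [False] * 14
far, frr, acc = far_frr(decided, truth)
```

When FAR and FRR are balanced like this, the system errs equally in both directions; the thesis's observation that they are "not close" means one error type dominates, which is usually worse for authentication.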

Based on the results obtained, I can make some evaluations and comments. The accuracy varies between classifiers and is not very high. There are several reasons that cause this problem and affect the result.

Firstly, the data were not obtained directly from the device: I digitized them in software, so the data may be inaccurate even though the sampling rate is higher than that of the device. Because the signals were reconstructed, the intervals between waves may not be suitable for this data. Another reason is that the steps used in the method may not be optimal for this data.

Biometrics refers to metrics related to human characteristics. It is a realistic form of authentication used for identification and access control, and also to identify individuals in groups that are under surveillance. Biometric identifiers are measurable, distinctive characteristics used to label and describe individuals, and are commonly divided into behavioral and physiological characteristics; physiological characteristics are related to the shape of the body. By using biometrics, a person can be identified based on "who she/he is" instead of "what she/he has" (card, token, key) or "what she/he knows" (password, PIN).

In my thesis, I researched the parameters of ECG signals, focusing on their features, and processed these data for biometric authentication. With my method, which extracts features from the waves of the ECG signal, the accuracy is not very high (87.5%) compared to published papers (about 98–100%).

In future work, I will investigate feature-extraction methods better suited to this data and compare the results with the available data from PhysioNet.

[1] https://www.gemalto.com/govt/inspired/biometrics
[2] Srivastava HA (2013) Comparison Based Study on Biometrics for Human Recognition. IOSR Journal of Computer Engineering (IOSR-JCE) 15: 22-29.
[3] Duarte T (2016) Biometric access control systems: A review on technologies to improve their efficiency. Power Electronics and Motion.
[4] Bowyer KW, Hollingsworth KP, Flynn PJ (2016) A survey of iris biometrics research: 2008-2010. Handbook of Iris Recognition. Springer, London, 23-61.
[5] Surekha B, Jayant KN, ViswanadhaRaju S, Dey N (2017) Attendance Recognition Algorithm. Intelligent Techniques in Signal Processing for Multimedia Security.
[6] https://heimdalsecurity.com/blog/biometric-authentication/
[7] Kalyani CH (2017) Various Biometric Authentication Techniques: A Review. Journal of Biometrics & Biostatistics.
[8] Shradha T, Chourasia NI, Chourasia VS (2015) A review of advancements in biometric systems. International Journal of Innovative Research in Advanced Engineering 2: 187-204.
[9] https://www.bayometric.com/biometrics-face-finger-iris-palm-voice/
[10] Amrutha N, Arul VH, "A Review on Noises in EMG Signal and its Removal", International Journal of Scientific and Research Publications, Volume 7, Issue 5, May 2017, ISSN 2250-3153.
[11] Qunjian W, Ying Z, Chi Z, Li T, Bin Y, "An EEG-Based Person Authentication System with Open-Set Capability Combining Eye Blinking".
[12] https://www.tutorialspoint.com/biometrics/biometrics_overview.htm
[13] https://pdfs.semanticscholar.org/5fa5/93e6743ec0bc85254165ca19ab9dccbb6abf.pdf
[14] Abo-Zahhad M, Ahmed SM, Abbas SN (2014) Biometric authentication based on PCG and ECG signals: Present status and future directions. Signal Image Video Process 8: 739-751. doi: 10.1007/s11760-013-0593-4.
[15] Agrafioti F, Bui FM, Hatzinakos D. Secure Telemedicine: Biometrics for Remote and Continuous Patient Verification. J Comput Netw.
[16] Li M, Narayanan S (2010) Robust ECG Biometrics by Fusing Temporal and Cepstral Information. Proceedings of the 20th International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, 23-26 August 2010, pp 1326-1329.
[17] https://www.practicalclinicalskills.com/ekg
[18] Nagendra H, Mukherjee S, Vinod Kumar, "Application of Wavelet Techniques in ECG Signal Processing: An Overview", International Journal of Engineering Science and Technology (IJEST), October 2011, Vol. 3, No. 10, 7432-7443.
[19] Narayana KVL, Bhujanga Rao A, "Wavelet based QRS detection in ECG using MATLAB", Innovative Systems Design and Engineering, 2011, Vol.
[20] Anuradha B, Suresh Kumar K, Veera Reddy VC, "Classification of Cardiac signals using Time Domain Methods", ARPN Journal of Engineering and Applied Sciences, June 2008, Vol. 3, No. 3, 7-12.
[21] Saritha C, Sukanya V, Narsimha Murthy Y, "ECG Signal Analysis Using Wavelet Transforms", Bulg. J. Phys. 35, 2008, 68-77.
[22] Rajiv Ranjan, Giri VK, "A Unified Approach of ECG Signal Analysis", International Journal of Soft Computing and Engineering (IJSCE), July 2012, Volume 2, Issue 3, 5-10.
[23] https://www.physionet.org/pn3/ecgiddb/biometric.shtml
[24] http://www.iosrjournals.org/iosr-jece/papers/ICETEM/Vol.
[25] Snehal Thalkar, Dhananjay Upasani, "Various Techniques for Removal of Power Line Interference From ECG Signal", International Journal of Scientific & Engineering Research, Volume 4, Issue 12, December 2013, ISSN 2229-5518.
[26] Raphisak P, Schuckers SC, Curry AJ (2004) An algorithm for EMG noise detection in large ECG data. Comput Cardiol 31: 369-372.
[27] Manivel K, Samson RR, "Noise Removal for Baseline Wander and Power Line in Electrocardiograph Signals".
[28] Gary M. Friesen, Thomas C. Jannett, Manal Afify Jadallah, Standford L. Yates, Stephen R. Quint, H. Troy Nagle, "A Comparison of the Noise Sensitivity of Nine QRS Detection Algorithms", IEEE Transactions on Biomedical Engineering, Vol. 37, No. 1, March 1990.
[30] Rajaee T, Nourani V, Mohammad ZK, Kisi O (2011) "River suspended sediment load prediction: Application of ANN and wavelet conjunction model." J Hydrol Eng, 10.1061/(ASCE)HE.1943-5584.
[31] Addison PS, Murrary KB, Watson JN (2001) "Wavelet transform analysis of open channel wake flows." J Eng Mech, 58-70.
[32] Da-Chuan C, "Haar wavelet analysis" (2011).
[33] Ayush D, Bhawna G, Sunil A. Performance Comparison of Different Wavelet Families Based on Bone Vessel Fusion. Panjab University, India.
[34] https://www.analyticsvidhya.com/blog/2018/03/introduction-k-neighbours-algorithm-clustering/
[35] https://www.alivetec.com/pages/alivecor-heart-monitor
[36] Buendía-Fuentes F, et al (2012) "High-Bandpass Filters in Electrocardiography: Source of Error in the Interpretation of the ST Segment".
[37] Sarikala P, Wahidabanu RSD (2010) "Identification of Individuals using Electrocardiogram". International Journal of Computer Science and Network Security, Vol. 10, No. 12.
[38] Paul S. Addison, "The Illustrated Wavelet Transform Handbook" (IOP Pub., 2002).
[39] Nageswari MP, et al (2013) "Feature extraction of ECG using Daubechies wavelet and classification based on Fuzzy C-mean clustering technique".
[40] Daubechies I (1992) "Ten Lectures on Wavelets", CBMS-NSF Lecture Notes nr. 61, SIAM, Philadelphia.
[41] Athia B, Ganesan M, Sumesh EP, "Daubechies algorithm for highly accurate ECG feature extraction". Applsci, 2017.
[42] Wo-Huyk J, Sang-Goog L, "ECG Identification based on Non-Fiducial features extraction using window removal method".
[43] https://searchenterpriseai.techtarget.com/definition/machine-learning-ML
[44] Tiệp VH, "Machine Learning cơ bản", 22 April 2017. [Online]. Available: https://machinelearningcoban.com/2017/04/22/kernelsmv
[45] Witsarut S (2013) "Implementation of Real Time feature extraction of ECG".
