Development of a Wearable AI-based Device for Wrist Pulse Diagnosis

DOCUMENT INFORMATION

Basic information

Title: Development of a Wearable AI-based Device for Wrist Pulse Diagnosis
Authors: Dao Thanh Quan, Le Quoc Tuan
Supervisor: PhD. Bui Ha Duc
University: Ho Chi Minh City University of Technology and Education
Major: Mechanical Engineering
Document type: Graduation thesis
Year of publication: 2023
City: Ho Chi Minh City
Number of pages: 104
File size: 7.54 MB

Structure

  • CHAPTER 1. INTRODUCTION
    • 1.1. Motivation
    • 1.2. Scientific and practical significances
    • 1.3. Objectives
    • 1.4. Research methods
    • 1.5. Structure of the report
  • CHAPTER 2. LITERATURE REVIEW
    • 2.1. Introduction to Traditional Medicine
    • 2.2. Introduction to the use of sensors in collecting pulse wave signals
      • 2.2.1. Photoplethysmography (PPG) sensors
      • 2.2.2. Piezoresistive sensors
      • 2.2.3. Capacitive pressure sensors
      • 2.2.4. Piezoelectric sensors
    • 2.3. Research overview
    • 2.4. Introduction to noise-removing techniques used for pulse signals
      • 2.4.1. Finite Impulse Response (FIR) filter
      • 2.4.2. Infinite Impulse Response (IIR) filter
      • 2.4.3. Wavelet-based filter
    • 2.5. Introduction to metrics used in time series
      • 2.5.1. Introduction to signal-to-noise ratio
      • 2.5.2. Introduction to mean squared error
    • 2.6. Introduction to Machine Learning in Traditional Medicine
      • 2.6.1. Application of machine learning
      • 2.6.2. Introduction to statistical models used in time series forecasting
      • 2.6.3. Introduction to deep learning models used in time series forecasting
  • CHAPTER 3. DESIGN MECHANICAL SYSTEM
    • 3.1. Hardware objectives
    • 3.2. Technical requirements
    • 3.3. Design options
      • 3.3.1. Design the bracelet
      • 3.3.2. Design the electrical box
  • CHAPTER 4. DESIGN ELECTRONICS – CONTROL SYSTEM
    • 4.1. Electronics – control system’s objectives
    • 4.2. Technical requirements
    • 4.3. Design options
      • 4.3.1. Central control block
      • 4.3.2. Power supply block
      • 4.3.3. Sensor block
      • 4.3.4. Signal processing block
    • 4.4. Hardware experiment
      • 4.4.1. Experiment setup
      • 4.4.2. Experimental result
  • CHAPTER 5. DESIGN PULSE WAVE FORECASTING ALGORITHM
    • 5.1. Objectives
    • 5.2. Data preparation
      • 5.2.1. Equipment
      • 5.2.2. Patient recruitment and protocol
    • 5.3. Preprocessing algorithm
      • 5.3.1. Applying wavelet transform in filtering signal
      • 5.3.2. Baseline wander removal
    • 5.4. Time series forecasting model
      • 5.4.1. Time2Vec
      • 5.4.2. Gating mechanism
      • 5.4.3. Feature selection network
      • 5.4.4. Interpretable multi-head attention
      • 5.4.5. Locality enhancement
    • 5.5. Experiment
      • 5.5.1. Training procedure
      • 5.5.2. Experimental result
  • CHAPTER 6. BLOOD GLUCOSE MEASUREMENT
    • 6.1. Objectives
    • 6.2. Blood glucose measuring model
    • 6.3. Experiment
      • 6.3.1. Dataset preparation
      • 6.3.2. Training procedure: Blood glucose measuring model
      • 6.3.3. Experimental result: Blood glucose measuring model

Content

Motivation

Pulse diagnosis has been a vital component of traditional medicine for thousands of years, particularly in Eastern countries, where it serves as a noninvasive method for disease analysis. In this technique, physicians place their fingers near the radius bone on the patient's wrist, applying varying levels of pressure to detect the response from the blood vessels. Key properties assessed during this examination include pulse rate, strength, and width, which help doctors evaluate the patient's overall balance and formulate an accurate diagnosis.

Traditional medicine practitioners have historically spent years honing their skills, yet the lack of formal documentation for these ancient techniques leads to a lack of standardization in their knowledge. Consequently, diagnostic results can vary significantly between practitioners due to subjective interpretations based on individual experience. Additionally, a doctor's emotional state can influence their assessments, resulting in inconsistent outcomes even from the same practitioner. This reliance on personal experience also complicates the ability to clearly explain diagnoses to patients.

Pulse diagnosis is a quick process that can result in practitioners losing focus and overlooking crucial details, ultimately impacting the accuracy of diagnoses. Additionally, the inability to save pulse wave data hinders deeper analysis and prevents the creation of comprehensive medical records for patients.

Figure 1.1 Wrist pulse diagnosis in traditional medicine (Source: https://beta.parkwayshenton.com/healthplus/article/tcm-misconceptions)

Telemedicine is increasingly appealing globally due to technological advancements, offering time savings for both doctors and patients while enhancing accessibility, particularly for those in remote areas. It has the potential to lower hospital costs while maintaining high-quality medical services. However, the reliance on large and costly modern equipment for accurate clinical examinations poses a significant barrier to the growth of telemedicine. By integrating traditional medicine practices, telemedicine can overcome these challenges and continue to thrive.

In response to identified challenges and opportunities, we are developing a wearable device designed to record pulse waves from the wrist, integrating principles of traditional medicine with modern technology. This innovative device transforms the recorded signals into valuable health monitoring information for individuals.

Scientific and practical significances

This study highlights the significance of researching and developing artificial intelligence applications in healthcare to enhance quality of life. The self-collected dataset serves as a vital resource for future research on AI applications in diagnosing and alerting potential diabetes-related risks.

• Diagnosing and maintaining patient medical records.

Objectives

We have a strong desire to contribute to comprehensive research in Traditional Medicine as a whole, with a specific focus on its application in the management of diabetes.

• Develop a compact and user-friendly device capable of accurately capturing pulse wave signals

• Create a dataset consisting of pulse wave signals and blood glucose levels (BGL) to serve the project’s objectives and make it publicly available for future research

• Forecast future pulse signals to estimate future blood glucose levels and provide alerts for abnormal symptoms. Extract and visualize the factors influencing the prediction results.

Research methods

• Research and compile literature on Traditional Medicine techniques and the characteristics of diabetes

• Based on the principles of TM, explore suitable sensors and devices for designing the device according to the project's objectives

• Conduct a comprehensive study on digital signal processing techniques and apply appropriate methods to the wrist pulse signals in order to obtain a complete dataset

To select a deep learning model that meets project requirements, it is essential to study, analyze, and evaluate various models. Implement experimental methods for real-world testing, allowing for data-driven conclusions. This iterative process will enable the progressive implementation and refinement of the project, grounded in a well-researched theoretical foundation.

Structure of the report

Chapter 1. INTRODUCTION: Brief introduction to the study.

Chapter 2. LITERATURE REVIEW: Theoretical knowledge related to the study: Traditional Medicine, sensors, pulse wave analysis, deep learning models in Traditional Medicine.

Chapter 3. DESIGN MECHANICAL SYSTEM: Design and fabricate the mechanical system.

Chapter 4. DESIGN ELECTRONICS-CONTROL SYSTEM: Explain the block diagram of the electronics-control system.

Chapter 5. DESIGN AI ALGORITHM: Objectives of the AI algorithm, step-by-step data preprocessing, and building of the AI models.

The last chapter presents the conclusion and discussion.

LITERATURE REVIEW

Introduction to Traditional Medicine

Traditional Medicine employs four key diagnostic methods to assess human health: inspection, auscultation and olfaction, interrogation, and palpation. These techniques gather diverse information, including external conditions, family history, dietary habits, and living environments, to evaluate patients' overall wellness.

The inspection process in traditional medicine involves a thorough examination of the patient's physical appearance, including complexion, facial expressions, body movements, and posture. Practitioners focus on the tongue's color and shape, skin condition, and any abnormal characteristics to gain insights into the patient's overall health and organ function. The tongue serves as a reflective map of the body's torso, with different sections corresponding to various organ systems and emotions. For instance, the tip of the tongue relates to the heart, lungs, and emotional state, while the center is linked to the digestive system, providing crucial information about both physical health and emotional imbalances.

Analyzing the tongue's surface can provide valuable insights into the health of the stomach, spleen, and intestines, which is crucial for diagnosing digestive disorders. Additionally, the sides of the tongue are thought to indicate the status of the liver and gallbladder, with any changes or abnormalities serving as important diagnostic clues for potential liver dysfunction or gallbladder issues. Finally, the base of the tongue is linked to the kidneys, bladder, and intestines, making it an important area for assessing the health of these organs and detecting possible imbalances or diseases.

Figure 2.2: Organ system map on the tongue (Source: https://www.wellnessprinciple.com/blog/traditional-chinese-medicine-tongue-diagnosis)

Auscultation is the practice of listening to bodily sounds, offering crucial insights into organ health and aiding in disease diagnosis. In Traditional Medicine (TM), practitioners assess various sounds, including breath, voice, and abdominal noises, to evaluate internal organ function and overall body balance. For instance, breath quality can reveal respiratory issues, as abnormal sounds like wheezing may indicate lung imbalances. Changes in a patient's voice, such as tone or clarity, can signal conditions affecting the throat or vocal cords. Additionally, abdominal sounds during digestion, like rumbling, provide valuable information about the digestive system's state.

Auscultation in Traditional Medicine (TM) is conducted by skilled practitioners who can detect and interpret subtle sound variations in the stomach and intestines. This technique is typically combined with other diagnostic approaches, including inspection, inquiry, and palpation, to create a holistic view of the patient's health and inform effective treatment strategies.

Olfaction, the act of detecting odors from a patient, offers crucial insights into their internal health and aids in diagnosing diseases or imbalances. In Traditional Medicine (TM), practitioners focus on specific odors from a patient's body, breath, or excretions, which may indicate underlying health issues. For instance, foul breath odors like a "rotten egg" smell can suggest digestive problems, while sweet or fruity breath may signal blood sugar imbalances. Additionally, body odors can reveal issues related to sweat glands, skin conditions, or metabolic processes. Trained practitioners in TM utilize their sense of smell as a diagnostic tool to interpret subtle odor variations. Complementing olfaction, interrogation involves detailed questioning to gather comprehensive information about the patient's symptoms, medical history, lifestyle, diet, and emotional well-being, helping practitioners identify health patterns and imbalances.

This study emphasizes wrist pulse wave diagnosis, a key palpation method for evaluating overall health and identifying bodily imbalances. Traditional Medicine (TM) practitioners believe that wrist pulses reflect the flow of qi, the essential life force that permeates all existence. Qi circulates through the body and its organs via meridian channels, manifesting at various acupuncture points, including the cun, guan, and chi positions. Thus, wrist pulse diagnosis serves as a means to gather information about qi through these points. According to TM theory, the stability and balance of qi flow are crucial indicators of an individual's wellness. By analyzing the wrist pulse, TM practitioners can gain comprehensive insights into a person's internal organ health. Recent studies further support these findings.

Research indicates that during respiration, the heart produces oscillations that travel through the body's arterial system. These oscillations cause side-branch organs to respond by generating harmonic forces that resonate at their natural frequencies, which are then sent back into the arterial system. This "frequency match" theory suggests a correlation between the frequencies at various acupuncture points and the natural frequencies of several organs.

To assess the wrist pulse, Traditional Medicine (TM) practitioners use their index, middle, and ring fingers to palpate the radial artery, corresponding to the cun, guan, and chi positions that reflect the internal organ states. Specifically, the left hand's cun, guan, and chi relate to the heart, liver, and kidney, while the right hand corresponds to the lung, stomach, and kidney. By applying varying levels of pressure (light, medium, and heavy), practitioners evaluate pulse characteristics such as rate, strength, and width. This assessment enables experienced practitioners to identify imbalances and determine the overall health status of the individual.

Figure 2.3: The positions of Cun – Guan – Chi and their equivalent organs

Introduction to the use of sensors in collecting pulse wave signals

The integration of pulse wave sensors in modern healthcare has gained significant traction, as these non-invasive devices effectively measure and detect changes in arterial pulse signals.

Wearable devices and medical equipment enable continuous monitoring of pulse wave signals through various sensor technologies. These include photoplethysmography (PPG) sensors, laser Doppler sensors, piezoresistive sensors, and piezoelectric sensors, each playing a crucial role in capturing accurate pulse wave data.

PPG sensors are non-invasive devices that use light-emitting diodes (LEDs) and photodetectors to monitor changes in blood volume and flow. Commonly positioned on the skin at the fingertip or wrist, these sensors effectively capture pulse wave signals for health monitoring.

PPG sensors operate on the principle of optical absorption and reflection, where an LED emits light onto the skin, and a photodetector measures the intensity of light absorbed or reflected by blood vessels. The changes in light intensity, driven by variations in blood volume with each heartbeat, create a pulsatile waveform known as the PPG signal.
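The pulsatile PPG waveform described above directly encodes heart rate: counting its systolic peaks over a known duration gives beats per minute. The sketch below is a minimal illustration on a synthetic waveform (the sampling rate, waveform shape, and SciPy `find_peaks` parameters are illustrative assumptions, not the thesis's processing pipeline):

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100                      # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)  # 10 s synthetic recording
bpm = 72
# Synthetic PPG-like waveform: fundamental heartbeat component plus a small
# second harmonic standing in for the dicrotic part of the pulse.
ppg = np.sin(2 * np.pi * bpm / 60 * t) + 0.3 * np.sin(4 * np.pi * bpm / 60 * t - 0.5)

# One systolic peak per beat: require a minimum height and peak spacing.
peaks, _ = find_peaks(ppg, height=0.5, distance=int(0.5 * fs))
heart_rate = 60 * len(peaks) / t[-1]
print(f"estimated heart rate: {heart_rate:.0f} bpm")
```

On real PPG data, the motion artifacts and ambient-light effects listed below make peak detection considerably harder than on this clean synthetic signal.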

Figure 2.5: PPG reflection type working principle

- Advantages of PPG sensors:

• Non-invasive: PPG sensors are non-invasive, as they can be placed on the skin surface without the need for penetration or discomfort

• Continuous Monitoring: PPG sensors allow for continuous monitoring of pulse wave signals, providing real-time information about heart rate and other cardiovascular parameters

• Cost-Effective: PPG sensors are generally more cost-effective compared to invasive methods or more complex diagnostic equipment

• Portable and Wearable: PPG sensors can be integrated into wearable devices or mobile applications, enabling convenient and portable monitoring for personal health tracking

- Disadvantages of PPG sensors:

• Limited Depth of Measurement: PPG sensors primarily capture signals from superficial tissues and are not able to assess deep-seated vascular structures

• Sensitivity to Motion Artifacts: PPG signals can be affected by motion artifacts, such as movement of the body or the sensor itself, leading to inaccuracies in the measurements

• Subject to Environmental Factors: External factors such as ambient light, temperature, and skin pigmentation can influence the performance and accuracy of PPG sensors

Piezoresistive strain gauges are commonly utilized as effective pressure sensors, operating on the principle that a material's electrical resistance changes with deformation. These sensors feature a strain gauge made of conductive material, which alters its resistance when stretched. Typically, the gauge is attached to a diaphragm that deforms under applied pressure. The sensitivity of the material is measured by the gauge factor, defined as the ratio of relative resistance change to strain:

GF = (ΔR/R) / ε   (2.1)

where the strain ε is defined as the relative change in length: ε = ΔL/L.

Strain gauge elements are constructed from various materials, including metals like constantan, karma alloy, and nickel, as well as semiconductors such as silicon and germanium. While metallic strain gauges provide excellent mechanical stability, semiconductor options typically exhibit a higher gauge factor, resulting in greater sensitivity. However, semiconductor gauges are more susceptible to temperature fluctuations, necessitating precise temperature compensation. In practical applications, a Wheatstone bridge circuit is utilized to measure resistance changes in the strain gauge sensor, effectively converting minor resistance variations into a measurable output voltage.

Figure 2.6: Connected circuit between Piezoresistive sensor and Wheatstone Bridge (Source: https://www.avnet.com/wps/portal/abacus/solutions/technologies/sensors/pressure-sensors/core- technologies/piezoresistive-strain-gauge/)

The Wheatstone bridge circuit operates with an applied excitation voltage, producing a zero output voltage when all resistors are balanced and no strain is present. However, the application of pressure alters the resistance within the bridge, resulting in a measurable output voltage or current. The output can be calculated using the specified formula.
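The bridge output formula itself is not reproduced in this excerpt. As a hedged sketch (not necessarily the exact formula used in the thesis), the widely used linearized quarter-bridge approximation V_out ≈ V_ex · (ΔR/R) / 4, combined with the gauge-factor relation above, gives a feel for the signal magnitudes involved; all numeric values below are illustrative assumptions:

```python
# Hedged sketch: standard linearized quarter-bridge approximation, with
# illustrative values, not the thesis's actual circuit parameters.
GF = 2.0          # typical gauge factor for a metallic strain gauge (assumed)
strain = 500e-6   # 500 microstrain on the diaphragm (assumed)
V_ex = 5.0        # bridge excitation voltage in volts (assumed)

dR_over_R = GF * strain         # relative resistance change from eq. (2.1)
V_out = V_ex * dR_over_R / 4    # linearized quarter-bridge output
print(f"{V_out * 1e3:.2f} mV")  # -> 1.25 mV
```

The millivolt-level output explains why an amplification stage is normally required before analog-to-digital conversion.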

- Advantages of piezoresistive sensors:

• High Sensitivity: Piezoresistive sensors exhibit high sensitivity, allowing them to detect even small changes in pressure or strain

• Fast Response Time: Piezoresistive sensors have a fast response time, enabling real-time monitoring and measurement of dynamic pressure changes

• Cost-Effective: Piezoresistive sensors are relatively cost-effective compared to other pressure sensing technologies, making them accessible for a wide range of applications and industries

• Easy Integration: Piezoresistive sensors offer seamless compatibility with electronic circuits and microcontrollers, facilitating efficient signal processing and data acquisition. Their ability to integrate with digital interfaces enables straightforward incorporation into larger systems, enhancing overall functionality

- Disadvantages of piezoresistive sensors:

• Temperature Sensitivity: Piezoresistive sensors can be sensitive to temperature changes, which may affect their accuracy and require temperature compensation techniques to ensure reliable measurements

• Non-Linearity: The output response of piezoresistive sensors may exhibit non-linear behavior, especially at higher pressure ranges. Calibration may be necessary to obtain accurate and linear measurements

• Drift: Over time, piezoresistive sensors can experience drift in their output readings, requiring periodic recalibration to maintain accuracy

• Mechanical Fragility: Piezoresistive sensors can be mechanically fragile, being sensitive to stress and prone to damage when subjected to excessive force or shocks. To enhance their durability and ensure long-term functionality, it is essential to handle and protect these sensors properly

Capacitive pressure sensors measure pressure variations by detecting changes in capacitance between two conductive plates separated by a dielectric material. As pressure is applied, the capacitance value alters, providing an accurate measurement of pressure changes. The capacitance between the plates is given by C = ε_r ε_0 A / d, where:

• ε_r is the dielectric constant of the material between the plates (this is 1 for a vacuum)

• ε_0 is the electric constant (equal to 8.854×10⁻¹² F/m)

• A is the area of the plates

• d is the distance between the plates

Capacitive pressure sensors operate using a diaphragm that deforms under pressure, functioning as one electrode, while a fixed backplate serves as the other electrode. The gap between these electrodes is filled with a dielectric material, and when pressure is applied, the diaphragm's deformation alters the distance between the electrodes, resulting in a change in capacitance. This change is directly related to the effective area and separation of the electrodes; capacitance increases as spacing decreases. The variation in capacitance is measured through specialized electronics and signal processing, with the sensor connected to an external circuit that applies AC or DC voltage. Ultimately, this capacitance change is converted into an output signal, typically in the form of voltage or frequency, representing the applied pressure.
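The gap-to-capacitance relationship can be checked numerically with the parallel-plate formula; the plate area and gap values below are illustrative assumptions, not the dimensions of any sensor in this thesis:

```python
# Parallel-plate model C = eps_r * eps_0 * A / d; all geometry values are
# illustrative assumptions chosen to show the trend, not real sensor data.
eps_0 = 8.854e-12  # electric constant, F/m
eps_r = 1.0        # air/vacuum gap (assumed)
A = 1e-4           # plate area: 1 cm^2 (assumed)

for d in (100e-6, 95e-6):  # diaphragm deflects 5 um under pressure (assumed)
    C = eps_r * eps_0 * A / d
    print(f"gap {d * 1e6:.0f} um -> capacitance {C * 1e12:.3f} pF")
```

The run prints 8.854 pF and then 9.320 pF: a 5 % reduction in gap produces a proportional increase in capacitance, which the readout circuit converts into an output voltage or frequency.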

Figure 2.7: Capacitive pressure sensor structure (Source: https://www.avnet.com/wps/portal/abacus/solutions/technologies/sensors/pressure-sensors/core- technologies/capacitive/)

- Advantages of capacitive pressure sensors:

• High Sensitivity: Capacitive sensors can achieve high sensitivity, allowing them to detect small pressure changes accurately

• Wide Measurement Range: Capacitive pressure sensors can be designed to measure a wide range of pressures, from very low to high values

• Good Linearity: These sensors often exhibit good linearity, meaning the output response is proportional to the applied pressure, allowing for precise measurements

• Low Power Consumption: Capacitive pressure sensors typically consume low power, making them suitable for battery-operated devices and applications where power efficiency is essential

- Disadvantages of capacitive pressure sensors:

• Temperature Sensitivity: The capacitance of the sensor can be influenced by temperature variations, necessitating temperature compensation techniques for accurate measurements

• Limited Overload Capacity: Exceeding the maximum pressure limit can damage the diaphragm or affect the sensor's performance

• Susceptibility to Environmental Factors: External factors like humidity, moisture, and contaminants can affect the dielectric properties and introduce measurement errors

Piezoelectric sensors are innovative electronic devices that convert mechanical or thermal energy into electrical signals through electromechanical coupling. This process, known as piezoelectricity, occurs in specific materials that generate electrical voltage under mechanical stress and, conversely, create mechanical stress when an electrical voltage is applied. Due to their unique properties, piezoelectric sensors are highly effective for applications in Traditional Medicine (TM).

Piezoelectric sensors offer remarkable sensitivity, enabling them to detect minuscule pressure waves, including those generated by the heartbeat. This capability facilitates precise monitoring and analysis of physiological signals associated with the pulse. By effectively capturing and interpreting these subtle pressure variations, piezoelectric sensors provide valuable insights into an individual's health and well-being.

Piezoelectric sensors are known for their high sensitivity and exceptional response time, making them ideal for accurately capturing the rapid fluctuations of pulse waves. Their ability to provide real-time data is essential for evaluating pulse characteristics in TM, allowing practitioners to make informed diagnoses and customize treatments effectively.

Figure 2.8: Piezoelectric pressure sensor construction

A piezoelectric sensor is made using piezo film, typically composed of materials like zinc oxide or lead zirconate titanate (PZT), known for their significant piezoelectric effect. When pressure or acceleration is applied to the PZT material, it produces an electrical charge proportional to the applied force across its crystal surfaces. This generated electrical charge can be transmitted through a metalized plate to be read by an electrical device. To ensure durability, a protective coating covers the metallization and piezoelectric film inside the sensor.

- Advantages of piezoelectric sensors:

• High Sensitivity: Piezoelectric sensors exhibit high sensitivity, allowing them to detect even small changes in pressure, strain, or acceleration. They can provide precise and accurate measurements

• Wide Frequency Range: Piezoelectric sensors are capable of measuring dynamic events with high frequency response. They can capture fast-changing signals and vibrations accurately

• Broad Measurement Range: Piezoelectric sensors can be designed to measure a wide range of pressures, forces, and accelerations, making them versatile for different applications

• Rugged and Durable: Piezoelectric sensors are known for their robustness and durability. They can withstand harsh environments, high temperatures, and mechanical stress without significant loss in performance

• Fast Response Time: Piezoelectric sensors have a rapid response time, enabling real-time monitoring and measurement of dynamic events

• Wide Application Range: Piezoelectric sensors find applications in various fields, including automotive, aerospace, robotics, medical devices, structural analysis, and industrial monitoring

- Disadvantages of piezoelectric sensors:

• Limited Linearity: The output response of piezoelectric sensors may exhibit non-linear behavior, especially at higher input levels. Calibration or compensation techniques may be necessary to improve linearity

• Temperature Sensitivity: Piezoelectric sensors can be sensitive to temperature variations, which may affect their accuracy. Temperature compensation techniques may be required for precise measurements

Research overview

Traditional Medicine (TM) in Vietnam is widely trusted for its accuracy in medical examination and treatment; however, there is a significant lack of research on integrating technology to enhance its application in hospitals. Current practices in TM facilities predominantly rely on conventional methods and the expertise of medical professionals, leading to limited management and documentation of medical records.

The rapid advancements in science and technology have led to a surge in studies analyzing wrist pulse signals in healthcare. Notably, Chung et al. utilized three sensors to simultaneously capture wrist pulse data, employing Three-Dimension Pulse Mapping to effectively simulate real-time pulse information.

The device effectively captures the touch sensation and finger-reading skills of physicians, enabling the extraction of pulse data from the Three Positions Nine Indicators. It specifically focuses on Fu-Zhong-Chen displacements located at the Cun-Guan-Chi positions.

Figure 2.9: Bi-Sensing Pulse Diagnosis Instrument and holder of pulse-taking posture (Source: [13])

Chuangly Chen and colleagues developed a sensor array consisting of 12 MEMS sensors arranged in 3 rows and 4 columns to capture wrist pulse pressure, translating this data into a 3D format. This innovative approach allows for the extraction of significant features from the pulse, including depth, enhancing the understanding of pulse wave characteristics.

Figure 2.10: (a) Proposed system, (b) Real photograph of the system with real-time UI

and length. These devices open up the potential for the collection of medical data, thereby enabling deep learning models to be applied to the development of related applications.

Introduction to noise-removing techniques used for pulse signals

Pulse wave signals, which reflect heart activity, can be compromised by various types of noise that affect the quality and accuracy of the recorded response. These noises can stem from both physiological and non-physiological factors. Common types of noise include interference from external sources, movement artifacts, and electronic disturbances.

Baseline wander, also known as baseline drift, refers to low-frequency variations in a signal that can lead to distortion, reduced signal quality, and missed detections. This type of noise is typically caused by factors such as breathing, body movement, or inadequate electrode connections.

Power line interference is a prevalent source of noise in electrical devices, arising from the power supply or nearby power lines and electrical equipment. This interference appears as a 50 or 60 Hz sinusoidal waveform, varying by region, which can severely compromise recorded data quality. As a result, this contamination can obscure and distort the signals, making it challenging to accurately analyze and interpret physiological data.

Figure 2.11: Signal with power line interference

Figure 2.12: Signal with baseline wander

In order to remove these types of noises, many different approaches were proposed such as the finite impulse response filter, infinite impulse response filter, and the wavelet-based filter.

2.4.1 Finite Impulse Response (FIR) filter

A Finite Impulse Response (FIR) filter is characterized by an impulse response of finite duration, meaning it settles to zero within a limited timeframe. This type of filter is non-recursive, indicating that it operates without feedback in its mathematical formulation.

FIR filters are inherently stable due to their lack of feedback, preventing any unbounded or oscillatory output. Furthermore, they can be designed for a linear phase response by ensuring the coefficient sequence is symmetric, which results in a consistent delay across all frequencies.

For a causal discrete-time FIR filter of order N, the output is calculated by convolving its input with its impulse response. The operation is described by the following equation:

y[n] = b_0 x[n] + b_1 x[n − 1] + ⋯ + b_N x[n − N]

where x[n] is the input signal, y[n] is the output signal, N is the filter order, and b_i (for 0 ≤ i ≤ N) are the values of the impulse response at each instant of an N-th order FIR filter.

Figure 2.13: A direct form discrete-time FIR filter of order N (Source: Wikipedia)
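The convolution above can be sketched with SciPy; this is a minimal low-pass FIR example on a synthetic signal (the sampling rate, filter order, and cutoff are illustrative assumptions, not the design used later in the thesis):

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 200  # sampling rate in Hz (assumed)
N = 50    # filter order (assumed); N+1 symmetric taps give linear phase
b = firwin(N + 1, cutoff=10, fs=fs)  # low-pass FIR with 10 Hz cutoff

rng = np.random.default_rng(0)
t = np.arange(0, 5, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t)           # ~72 bpm pulse component
noisy = pulse + 0.2 * rng.normal(size=t.size)

# y[n] = sum_i b_i x[n-i]: convolution of the input with the impulse response
filtered = lfilter(b, 1.0, noisy)

# A linear-phase FIR delays every frequency by N/2 samples; align, then compare.
delay = N // 2
mse_before = np.mean((noisy - pulse) ** 2)
mse_after = np.mean((filtered[delay:] - pulse[: t.size - delay]) ** 2)
print(mse_before, mse_after)  # filtering should reduce the error
```

The constant N/2-sample group delay is exactly the "consistent delay across all frequencies" property of the symmetric-coefficient design mentioned above.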

2.4.2 Infinite Impulse Response (IIR) filter

An IIR filter, or Infinite Impulse Response filter, is a recursive filter that computes its output using current and previous inputs along with previous outputs through a feedback loop, allowing for efficient frequency response shaping. The filter's frequency response is determined by its transfer function, represented as a polynomial ratio in the complex variable z. The poles and zeros of this transfer function, which are the points where the denominator and numerator polynomials respectively equal zero on the z-plane, play a crucial role in defining the filter's characteristics. The positions of the poles influence the overall shape of the frequency response, while the zeros can enhance or attenuate specific frequencies. By strategically selecting the poles and zeros, IIR filters can be tailored to achieve various frequency response types, including low-pass, high-pass, band-pass, and band-stop.

The output of an IIR filter is given by:

y[n] = b_0 x[n] + b_1 x[n − 1] + ⋯ + b_P x[n − P] − a_1 y[n − 1] − a_2 y[n − 2] − ⋯ − a_Q y[n − Q]   (2.6)

Where:
o P is the feedforward filter order
o b_i are the feedforward filter coefficients
o Q is the feedback filter order
o a_i are the feedback filter coefficients
o x[n] is the input signal
o y[n] is the output signal
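An IIR design can remove both noise sources discussed earlier at once. The sketch below is a minimal illustration, assuming a Butterworth band-pass and zero-phase forward-backward filtering; the band edges and filter order are illustrative choices, not the thesis's design:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200  # sampling rate in Hz (assumed)
# Band-pass Butterworth keeping a 0.5-10 Hz pulse band (assumed design choice)
b, a = butter(4, [0.5, 10], btype="bandpass", fs=fs)

t = np.arange(0, 10, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t)         # ~72 bpm pulse component
drift = 0.8 * np.sin(2 * np.pi * 0.05 * t)  # baseline wander (below the band)
mains = 0.5 * np.sin(2 * np.pi * 50 * t)    # power line noise (above the band)

# filtfilt runs the IIR filter forward and backward, cancelling phase distortion
clean = filtfilt(b, a, pulse + drift + mains)
rms_error = np.sqrt(np.mean((clean - pulse) ** 2))
print(rms_error)  # small: both noise sources fall outside the passband
```

Compared with an FIR of similar selectivity, this IIR filter needs far fewer coefficients, at the cost of the nonlinear phase that the forward-backward pass compensates for.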

2.4.3 Wavelet-based filter

Wavelets are localized wave-like oscillations characterized by two key properties: scale and location. The scale property refers to the dilations of the wavelet, while the location property pertains to its translations. A fundamental component of wavelet analysis is the mother wavelet ψ(t), which is defined by two essential criteria: it must have a zero mean and unit energy:

∫ ψ(t) dt = 0,   ∫ |ψ(t)|² dt = 1   (2.8)

Based on the mother wavelet, the wavelet basis functions are derived as:

ψ_(a,b)(t) = (1/√a) ψ((t − b)/a)

Where:
o a is the dilation (scale) constant
o b is the translation constant

The wavelet transform, through adjustment of the translation and dilation constants, serves as an effective method for analyzing signals across various frequencies and resolutions. This technique is known as multiresolution analysis.

The Discrete Wavelet Transform (DWT), also known as the dyadic wavelet transform, is a widely used technique that employs a bank of two-channel filters at successive levels. By discretizing the scale and displacement of the continuous wavelet transform in powers of two, the DWT decomposes the original signal into approximation and detail coefficients. The approximation coefficients are the output of a low-pass filter, while the detail coefficients capture the high-frequency information. This process can be applied iteratively to the approximation coefficients to obtain lower-resolution components. Each decomposition level halves the frequency band of the signal relative to the previous level and down-samples the signal by a factor of two.
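As a concrete sketch of one decomposition level, the Haar wavelet (the simplest member of the Daubechies family) can be implemented in a few lines; full libraries such as PyWavelets provide the richer wavelet families and multi-level transforms. The signal values are invented for illustration:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation (low-pass) and detail
    (high-pass) coefficients, each down-sampled by a factor of two."""
    x = np.asarray(x, dtype=float)
    cA = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    cD = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return cA, cD

def haar_idwt(cA, cD):
    """Invert one level of the Haar DWT (perfect reconstruction)."""
    x = np.empty(2 * len(cA))
    x[0::2] = (cA + cD) / np.sqrt(2.0)
    x[1::2] = (cA - cD) / np.sqrt(2.0)
    return x

# Denoising sketch: hard-threshold the small detail coefficients
x = np.array([4.0, 4.1, 3.9, 4.0, 8.0, 8.1, 7.9, 8.0])
cA, cD = haar_dwt(x)
cD_denoised = np.where(np.abs(cD) > 0.5, cD, 0.0)
x_denoised = haar_idwt(cA, cD_denoised)   # small wiggles removed, step preserved
```

Zeroing small detail coefficients before reconstruction is exactly the denoising idea compared later in this chapter via SNR and MSE.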

Different types of mother wavelets can be chosen depending on the problem at hand. Common wavelet families used for biomedical signals such as EEG, ECG, and pulse waves include the Daubechies, Symlet, Coiflet, and Morlet wavelets.

Figure 2.14: Wavelet Families of discrete wavelets and continuous wavelets (Source: [4])

Normally, researchers conduct a comparison based on several metrics, such as signal-to-noise ratio or mean squared error, to choose the best wavelet family for the problem.

2.5 Introduction to metrics used in time series

When developing solutions, it is essential to quantitatively assess their performance using appropriate metrics. In the context of time series analysis, various metrics serve distinct purposes, including the signal-to-noise ratio and the mean squared error.

2.5.1 Introduction to signal-to-noise ratio

The signal-to-noise ratio (SNR) is a crucial metric in digital signal processing, computer vision, and artificial intelligence, measuring the ratio between the power of the desired signal and the power of the unwanted noise. It is an essential parameter for assessing the performance and quality of systems that process, transmit, or filter signals. While SNR does not describe specific filter characteristics, it effectively indicates the separation between signal and noise based on their relative power levels. SNR is computed with the following formula:

SNR_dB = 20 log10(V_Signal / V_Noise)

Where:
• SNR_dB is the signal-to-noise ratio in decibels
• V_Signal is the measured signal voltage
• V_Noise is the measured noise voltage
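The formula translates directly into code; the sine amplitudes below are illustrative:

```python
import numpy as np

def rms(x):
    """Root-mean-square value of a signal (its effective voltage)."""
    return np.sqrt(np.mean(np.square(x)))

def snr_db(v_signal, v_noise):
    """Signal-to-noise ratio in decibels: 20 * log10(V_signal / V_noise)."""
    return 20.0 * np.log10(v_signal / v_noise)

# Example: a 1 V RMS sine with 0.1 V RMS noise -> SNR = 20 dB
t = np.arange(0, 1.0, 1e-3)
signal = np.sqrt(2.0) * np.sin(2 * np.pi * 5 * t)        # RMS = 1 V
noise = 0.1 * np.sqrt(2.0) * np.sin(2 * np.pi * 50 * t)  # RMS = 0.1 V
ratio = snr_db(rms(signal), rms(noise))
```

Since power scales with the square of voltage, a factor-of-ten voltage ratio corresponds to 20 dB.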

2.5.2 Introduction to mean squared error

Mean squared error (MSE) is a widely used metric in regression analysis, quantifying the average squared deviation between the predicted and the actual values. A higher MSE indicates a greater disparity between the predicted and true values. MSE also serves as a prominent loss function in regression tasks, as it disproportionately penalizes larger errors due to the squaring of the differences.

To calculate MSE, we use the following formula:

MSE = (1/n) Σ_{i=1}^{n} (Y_i − Ŷ_i)²

Where:
- MSE is the mean squared error
- n is the number of data points
- Y_i are the observed values
- Ŷ_i are the predicted values
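The formula is a one-liner in NumPy; the observed and predicted values are made up for illustration:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: average of the squared deviations (Y_i - Yhat_i)^2."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

# Example: observations vs. predictions from a hypothetical forecaster
observed = [3.0, -0.5, 2.0, 7.0]
predicted = [2.5, 0.0, 2.0, 8.0]
error = mse(observed, predicted)   # (0.25 + 0.25 + 0 + 1) / 4 = 0.375
```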

2.6 Introduction to Machine Learning in Traditional Medicine

Time series datasets such as electronic health records (EHR), electrocardiograms (ECG), and electroencephalograms (EEG) are crucial sources of health information. EHRs provide a comprehensive view of a patient's lifelong care, documenting new morbidities, comorbidities, diagnoses, treatment regimens and their effectiveness, as well as genetic and lifestyle risks. In contrast, ECG and EEG data offer specific insights by capturing the electrical activity of the heart and brain. By applying machine learning to these datasets, we can gain a deeper understanding of individual health trajectories, enabling the development of adaptive patient management programs that address multiple risks and evolve with each patient's unique history. The growing interest in machine learning in healthcare stems from its diverse applications, making it an increasingly valuable tool in the industry.

- Dynamic forecasting: Cardiovascular disease, cancer, and diabetes are serious chronic diseases that progress gradually throughout a patient's lifetime. This progression can be segmented into several stages that manifest through clinical observations. Dynamic forecasting aims to build disease progression models from EHRs and other informative datasets that can issue personalized dynamic forecasts.

- Survival analysis: Survival analysis is a statistical method that examines the time elapsed between two or more events, often focusing on identifying risk factors that influence survival rates. It is commonly employed to compare the risks faced by different patients over a specified period and to determine the most cost-effective strategies for data collection.

- Screening and monitoring: Screening and monitoring in clinical examinations can be costly, making it challenging to decide which clinical information is needed, when to acquire it, and how frequently, for each individual. Machine learning can effectively optimize the trade-off between the cost of acquiring information and the value derived from it.

- Early diagnosis: Early diagnosis of serious diseases like cancer and cardiovascular conditions is crucial for effective treatment and can save lives. Identifying these diseases early is challenging because of the limited information available about a patient's current state and the unclear progression of the disease. However, by developing a machine learning model that analyzes disease trajectories, we can gain insights into the various stages of diseases and how individuals progress differently through them. This knowledge enables earlier prediction and diagnosis by monitoring changes in patient characteristics, symptoms, and comorbidities.

2.6.2 Introduction to statistical model used in time series forecasting

ARIMA, or Autoregressive Integrated Moving Average, is a widely used statistical model for time series forecasting. The method analyzes historical data to predict future values by combining autoregressive (AR), integrated (I), and moving average (MA) components, effectively capturing the essential patterns and trends in the dataset.

The ARIMA model is a combination of three components:

- Autoregressive AR(p): The autoregressive model uses lagged values of the dependent variable y, extending back to the p-th time period, as predictors. This AR component captures temporal dependencies and trends in the data:

y_t = c + φ_1 y_{t−1} + φ_2 y_{t−2} + ⋯ + φ_p y_{t−p} + ε_t

Here, p is the number of lagged observations in the model, ε_t is white noise at time t, c is a constant, and the φ_i are parameters.

- Integrated I(d): The difference is taken d times until the original series becomes stationary. A stationary time series is one whose properties do not depend on the time at which the series is observed. Differencing is written compactly with the backshift operator B, defined by:

B y_t = y_{t−1}

Thus, a first-order difference is written as:

y'_t = y_t − y_{t−1} = (1 − B) y_t

In general, a d-th-order difference can be written as:

(1 − B)^d y_t

The left graph illustrates Google's stock price over 200 consecutive days, a non-stationary time series. The right graph depicts the daily changes in Google's stock price over the same period, which is stationary, since its properties do not depend on the time of observation. This example shows that the order of differencing required to achieve stationarity is one, because the first-order differenced series is already stationary.

Figure 2.15: Non-stationary and stationary series example (Source: https://otexts.com/fpp2/arima.html)

- Moving average MA(q): A moving average model uses a regression-like model on past forecast errors:

y_t = c + ε_t + θ_1 ε_{t−1} + θ_2 ε_{t−2} + ⋯ + θ_q ε_{t−q}

Here, ε_t is white noise at time t, c is a constant, and the θ_i are parameters.

The ARIMA(p,d,q) model combines the three components to forecast future values of the d-times differenced series y'_t:

y'_t = c + φ_1 y'_{t−1} + φ_2 y'_{t−2} + ⋯ + φ_p y'_{t−p} + θ_1 ε_{t−1} + θ_2 ε_{t−2} + ⋯ + θ_q ε_{t−q} + ε_t

Exponential smoothing (ETS) is another effective time series forecasting method. It predicts future values by assigning exponentially decreasing weights to past observations, so the forecast is a weighted average of historical data that emphasizes the relevance of recent observations.
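Libraries such as statsmodels provide complete ARIMA estimation; the pure-NumPy sketch below only illustrates the two core ARIMA ingredients, first-order differencing (the I component) and least-squares fitting of the AR component, on a synthetic random walk with drift:

```python
import numpy as np

# Illustrative non-stationary series: a random walk with drift 0.5, seeded
rng = np.random.default_rng(0)
y = np.cumsum(0.5 + 0.2 * rng.standard_normal(200))

# I(d): first-order differencing y'_t = y_t - y_{t-1} makes it stationary
z = np.diff(y)

# AR(p): fit y'_t = c + phi_1 y'_{t-1} + ... + phi_p y'_{t-p} by least squares
p = 2
n = len(z)
X = np.column_stack([np.ones(n - p)] + [z[p - k : n - k] for k in range(1, p + 1)])
coef, *_ = np.linalg.lstsq(X, z[p:], rcond=None)
c, phi = coef[0], coef[1:]

# One-step-ahead forecast of the differenced series, then undo the differencing
z_next = c + phi @ z[: -p - 1 : -1]   # most recent lags first: z[-1], z[-2]
y_next = y[-1] + z_next
```

Because the simulated differences are essentially white noise around the drift, the fitted unconditional mean c / (1 − Σφ_i) recovers a value close to 0.5.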

Exponential smoothing generates a refined forecast by considering both recent and historical data patterns. Its simplicity and computational efficiency make it ideal for applications needing real-time or near real-time predictions. Among its various forms, single exponential smoothing is the most widely used, defined by the formula:

s_t = α y_t + (1 − α) s_{t−1}

Where:
• s_t and s_{t−1} are the smoothed statistics (forecast values) at times t and t − 1
• α is the smoothing factor, with 0 ≤ α ≤ 1

Exponential smoothing, commonly referred to as an exponentially weighted moving average (EWMA), can technically be classified as an ARIMA(0,1,1) model with no constant term.

To optimize the ETS model, the smoothing factor α is estimated by minimizing the sum of squared errors (SSE). The forecast errors are defined as e_t = y_t − ŷ_{t|t−1} for t = 1, …, T, i.e., the one-step-ahead within-sample forecast errors. The goal is to find the value of α that minimizes SSE = Σ_{t=1}^{T} e_t².
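The smoothing recursion and the SSE-based choice of α can be sketched as follows (the data values are invented for illustration):

```python
import numpy as np

def ses(y, alpha):
    """Single exponential smoothing: s_t = alpha*y_t + (1-alpha)*s_{t-1}.
    s_t also serves as the one-step-ahead forecast of y_{t+1}."""
    s = np.empty(len(y))
    s[0] = y[0]                      # a common initialization choice
    for t in range(1, len(y)):
        s[t] = alpha * y[t] + (1 - alpha) * s[t - 1]
    return s

def sse(y, alpha):
    """Sum of squared one-step-ahead forecast errors e_t = y_t - s_{t-1}."""
    s = ses(y, alpha)
    return float(np.sum((y[1:] - s[:-1]) ** 2))

# Grid search for the alpha that minimizes SSE on illustrative data
y = np.array([10.0, 12.0, 11.0, 13.0, 12.5, 14.0, 13.5, 15.0])
alphas = np.linspace(0.05, 0.95, 19)
best_alpha = min(alphas, key=lambda a: sse(y, a))
```

In practice, α is found by numerical optimization rather than a coarse grid, but the criterion is the same SSE defined above.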

2.6.3 Introduction to deep learning model used in time series forecasting

Recurrent Neural Networks (RNNs) are a specialized type of neural network that utilize the output from previous steps as input for the current step, making them particularly effective for processing sequential data.

Recurrent Neural Networks are increasingly popular in natural language processing and time series analysis because they capture crucial temporal information from prior time steps, enabling accurate future predictions. Mathematically, an RNN can be described by:

h(t) = f(U x(t) + W h(t−1) + b)
y(t) = g(V h(t) + c)

Where:
- x(t) is the input at time step t
- h(t) is the hidden state (memory) at time step t
- W, U, V are weight matrices
- b, c are bias vectors
- f and g are activation functions applied element-wise
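The two equations above map directly to a forward pass; the dimensions and random weights below are purely illustrative, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 3, 4, 2

U = rng.standard_normal((n_hidden, n_in)) * 0.1      # input -> hidden
W = rng.standard_normal((n_hidden, n_hidden)) * 0.1  # hidden -> hidden (recurrence)
V = rng.standard_normal((n_out, n_hidden)) * 0.1     # hidden -> output
b = np.zeros(n_hidden)
c = np.zeros(n_out)

def rnn_forward(xs):
    """h(t) = tanh(U x(t) + W h(t-1) + b); y(t) = V h(t) + c."""
    h = np.zeros(n_hidden)
    ys = []
    for x in xs:
        h = np.tanh(U @ x + W @ h + b)
        ys.append(V @ h + c)
    return np.array(ys), h

xs = rng.standard_normal((5, n_in))   # a length-5 input sequence
ys, h_last = rnn_forward(xs)
```

The same weight matrices are reused at every time step, which is how the network carries information forward through the hidden state.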

Long Short-Term Memory (LSTM) networks are a specialized type of Recurrent Neural Network designed to mitigate the vanishing and exploding gradient problems. They excel at capturing long-term dependencies in data sequences, making them ideal for tasks that require understanding context over extended periods. An LSTM cell comprises several essential components:

• Cell state (C_t): The cell state is a memory cell that retains information from earlier time steps. This information is selectively updated or forgotten.

• Input gate (i): The input gate regulates how much new information is stored in the cell state by processing the current input along with the previous hidden state, producing a value between 0 and 1 that represents the amount of information to be stored.

• Forget gate (f): In contrast to the input gate, the forget gate decides what information is removed from the cell state.

• Output gate (o): The output gate determines how much of the cell state is exposed as the output.
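Putting the gates together, one LSTM cell step can be sketched in NumPy (random toy weights, not a trained model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, C_prev, params):
    """One LSTM cell step with forget (f), input (i), and output (o) gates."""
    Wf, Wi, Wo, Wc, bf, bi, bo, bc = params
    z = np.concatenate([h_prev, x])      # combine past hidden state and input
    f = sigmoid(Wf @ z + bf)             # what to keep from the old cell state
    i = sigmoid(Wi @ z + bi)             # how much new information to store
    C_tilde = np.tanh(Wc @ z + bc)       # candidate cell state
    C = f * C_prev + i * C_tilde         # updated cell state (memory)
    o = sigmoid(Wo @ z + bo)             # how much of the cell state to expose
    h = o * np.tanh(C)                   # new hidden state / output
    return h, C

# Toy dimensions and random weights for illustration
n_in, n_hidden = 3, 4
rng = np.random.default_rng(2)
Ws = [rng.standard_normal((n_hidden, n_hidden + n_in)) * 0.1 for _ in range(4)]
bs = [np.zeros(n_hidden) for _ in range(4)]
params = (*Ws, *bs)

h, C = np.zeros(n_hidden), np.zeros(n_hidden)
for x in rng.standard_normal((5, n_in)):
    h, C = lstm_step(x, h, C, params)
```

The additive update of C (scaled by the forget gate rather than squashed through an activation at every step) is what lets gradients flow over long horizons.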

DESIGN MECHANICAL SYSTEM

Hardware objectives

- Develop a compact system that meets the technical requirements; to do so, it is necessary to simulate the process of measuring the wrist pulse wave as performed by traditional medicine doctors.

- The final system should be durable and robust, and easy to install, transport, maintain, and repair.

- Maximize cost-effectiveness in design and manufacture

Technical requirements

In this chapter, we focus on wrist pulse wave diagnosis, a technique within the palpation methods. Traditional medicine practitioners use their index, middle, and ring fingers to assess the patient's radial artery at specific points known as cun, guan, and chi, applying varying levels of pressure to detect different pulse qualities. However, the subtle nature of these vibrations makes individual differentiation difficult. Consequently, our device prioritizes the precise collection of pulse wave data to enhance diagnostic accuracy.

Figure 3.1: Wrist pulse wave diagnosis method

Simulating the wrist pulse wave assessment process presents challenges, including the need for sensors that are appropriately sized for fingers and can deliver varying readings under different force levels. Moreover, these sensors must maintain high sensitivity to even minor fluctuations so that no vital information is lost.

After assessing various sensor alternatives in section 2.2 and taking the specified requirements into account, we decided to use a piezoelectric pressure sensor. For an in-depth discussion of this choice, please refer to section 4.3.

Design options

The bare piezoelectric sensor requires a lead attachment technique for reliable wiring connections and output signal transmission. According to the technical manual of Measurement Specialties Inc., there are two methods: penetrative and non-penetrative. The penetrative method involves piercing the film and affixing rivets or eyelets to the conductive traces, ensuring a strong connection through mechanical pressure. The non-penetrative method uses tape to secure soldered wires to the film, which can be less stable. For our project, we prefer the penetrative method due to its long-term stability and ease of application.

Figure 3.2: (a) Penetrative method, (b) Non-penetrative method (Source: https://www.sparkfun.com/datasheets/Sensors/Flex/MSI-techman.pdf)

After completing the sensor selection and lead attachment, we test the sensor's response by simulating periodic vibrations using a CAM mechanism. For signal reading and interpretation, we employ the NI USB 6341 module from National Instruments Inc., which integrates seamlessly with LabVIEW and allows straightforward reading and visualization of analog signals without complex setups.

The piezoelectric sensor captures the signals in real time and detects even minimal fluctuations. This confirms that the sensor meets the requirements of the application.

Figure 3.3: CAM mechanism description and design
Figure 3.4: Visualization of the response signals on LabVIEW

Initially, we adopted a design that attached the sensor directly to the wrist to simplify data collection from arterial vibrations. However, experiments showed that the collected signals were consistently near zero, indicating that this approach could not capture the desired data.

Figure 3.5: The first design option

Through extensive experimentation, we discovered that the electrostatic charge of the human body interferes with signal accuracy. This led us to explore methods for detecting vibrations without direct contact between the pressure sensor and the skin. To address this challenge, we developed an isolator element that allows indirect measurement. The sensor tips mimic fingers touching the wrist, effectively transmitting vibrations to the piezo sensor while remaining free to vibrate thanks to the hollow space within the isolator. Additionally, we created a bracelet frame designed to fit the human wrist comfortably and accommodate the isolator.

Figure 3.6: The second design option

This version effectively addresses the electrostatic issues and facilitates data collection from the wrist's artery, yet it still faces challenges. The upper restrictor lacks adequate space for the sensor tip's movement, and the sensor's tail is not securely fixed, leading to unwanted oscillations and signal noise. To overcome these problems, we enhanced the isolator element by adding a fixing PCB and increasing the space within the upper restrictor.

Figure 3.7: The modified isolator element

This design tackles both the electrostatic interference and the unexpected sensor movement, ensuring accurate wrist pulse data retrieval. Acknowledging the various forces acting on the sensor, we integrate a rotary axis into the bracelet's frame, which enhances the flexibility of the isolator element. Furthermore, different forces can be applied to the isolator via an M6 bolt placed at the designated application point on the upper section of the bracelet. A detailed illustration of this design is included below.

Figure 3.8: (a) The bracelet’s frame, (b) Vertical half-view of the isolator element

Figure 3.9: The proposed assembly design of the system

Conventional machining techniques struggle to produce such complex block designs effectively, particularly given the high priority placed on the insulation requirements. To address these challenges, we opted for 3D printing, which handles intricate components efficiently while ensuring cost-effectiveness and high accuracy. We analyzed several popular 3D printer filaments, including PLA, ABS, and TPU, and compiled their properties in the following table.

Table 3.1: Common 3D printer filaments comparison

Bed temp (°C) Stiffness Printability Durability

After evaluating various 3D printer filaments, we focused on the insulation capability essential for our project. PLA, ABS, and TPU all demonstrate satisfactory insulation properties at reasonable prices, with ABS being the most affordable and easy to print, despite its high energy requirements during the printing process. TPU, while offering excellent flexibility, durability, and impact resistance, lacks the stiffness needed for our project, which prioritizes structural integrity and rigidity. We therefore require a material that can endure the expected loads and stresses without significant deformation or performance compromise.

Considering all of these factors, we selected PLA as the optimal filament for our project due to its excellent insulating properties and energy efficiency during printing. PLA also provides sufficient stiffness, which is essential for our application, and although its durability is only medium, it meets the project requirements.

By utilizing PLA, we strike a balance between insulation capacity, energy efficiency, stiffness, and cost-effectiveness, making it the optimal choice for our project

Finally, we used an Elegoo Neptune 3 Plus machine with the following parameters:

Figure 3.10: Printing process on the Elegoo Neptune 3 Plus 3D printer

The 3D printed bracelet is almost complete, requiring only the removal of the support material from its bottom layer. The bracelet features an adjustable strap, enabling it to fit a variety of wrist sizes comfortably.

The electrical box, designed as a compact and portable wearable system, measures 150 mm x 100 mm x 60 mm (length x width x height) to effectively house all essential electrical components.

The electrical box is designed with heat management in mind: a built-in cooling fan on one side ensures thermal regulation and prevents overheating during operation, while a heat outlet hole on the opposite side dissipates excess heat, maintaining the system's stability and performance over extended periods.

The box features two sets of GX connectors, chosen for their high durability and reduced interference compared to other pin input-output types. One connector links to the sensor's signal wire, while the other carries the UART wire that transmits signals to a computer for analysis and storage. The GX connectors also allow easy disassembly and replacement of damaged components without affecting the other parts of the device.

The final design of the electrical box in Inventor is shown in the figure below:

Figure 3.12: The proposed design of the electrical box

We again used the Elegoo Neptune 3 Plus machine with the following parameters:

Figure 3.13: The completed electrical box

ELECTRONICS – CONTROL SYSTEM

Electronics – control system’s objectives

- Design and complete electronics-control system to satisfy technical requirements

- The system has to ensure stable and continuous operation, free from electrical signal interference to prevent operational errors

- Prioritize user safety by eliminating any potential hazards

Technical requirements

The proposed wearable device is designed to balance compactness, ease of maintenance, real-time operation, and affordability by selecting small, low-maintenance electrical components. These components are optimized for real-time processing to ensure accurate and timely data readings. Key system functions include a central control unit, a sensor for collecting wrist pulse signals, and a user interface for signal collection and processing. Based on these requirements, the system's architecture is represented by the block diagram below.

Figure 4.1: Block diagram of electronics – control system


- Central control block (1): Receives denoised signals from the signal processing block for storage and implements the forecasting and measurement models. It also displays information on the user interface, facilitating interaction with users.

- Power supply block (2): Provides 220 VAC and 5 VDC for the other blocks in the system.

- Sensor block (3): Placed at the patient's wrist to retrieve pulse wave signals and transmit them to the signal processing block.

- Signal processing block (4): Receives the raw signal from the sensor block, applies digital signal processing techniques to preprocess the signal, and then sends the denoised data to the central control block.

Design options

The central control block consists of a computer or laptop that stores the collected data in a database and deploys the AI models for forecasting pulse wave signals and measuring blood glucose levels. For our project, we use a personal laptop equipped with an Intel Core i5 chip, 256 GB SSD storage, and 8 GB RAM. The laptop communicates with the signal processing block through the UART protocol.

The power supply block delivers energy to the entire system: 220 V AC for the PC or laptop, and 5 V DC for the Raspberry Pi Zero and the High-Precision AD HAT. Notably, the piezoelectric sensor in the sensor block operates without an external power source.

Signal processing serves as a crucial link between the sensor block and the central control block in our system, making its stable operation essential for overall functionality.

To avoid voltage drops and insufficient current for the peripherals, we use an AC-DC 5 V, 3 A adaptor with a micro USB type B output.

Figure 4.2: Power supply adaptor for Raspberry Pi (Source: Internet)

The sensor block, detailed in section 4.2, collects data from the patient's wrist pulse; doctors apply varying pressure on the artery to assess the pulse status and diagnose internal conditions. Given the small amplitude of pulse wave signals, the sensor must be highly sensitive to minor vibrations. Several sensor types were evaluated and found to have limitations: photoplethysmography (PPG) struggles with force variation, capacitive pressure sensors fail to capture small vibrations effectively, and piezoresistive sensors require complex external circuits. Ultimately, we opted for the piezoelectric sensor (ID: DT1-052K) from TE Connectivity Inc., which measures 16 mm x 41 mm, closely resembling the size of a human finger.

Figure 4.3: Piezoelectric sensor (ID: DT1-052K)

Outstanding properties of DT1-052K Piezoelectric sensor in capturing pulse wave signals:

- Suitable size: 16mm x 41mm (width x height) and the total thickness is 40 μm

- High sensitivity with small pressure wave

- Require no external power supply for operation

The signal processing block uses the ADS1263 analog-to-digital converter to read the analog signal from the piezoelectric sensor. The signal is converted into digital form and transmitted to a Raspberry Pi Zero embedded computer over the SPI protocol. The Raspberry Pi processes the raw signal to remove unwanted components and then sends the refined data to the central control block via the UART protocol.

An embedded computer is a specialized system integrated into a larger device to perform specific functions; such computers are commonly found in products like cell phones, home appliances, and medical equipment. They feature custom hardware and software tailored to the host system's needs, offering advantages such as compact size, low power consumption, and real-time operation, and they execute their tasks autonomously and reliably across various applications. For our project, we chose the Raspberry Pi Zero for its cost-efficiency and user-friendly operation.

The Raspberry Pi Zero is an affordable single-board computer featuring a 1 GHz single-core CPU and 512 MB of RAM. It includes a Mini HDMI® port, a Micro USB OTG port, a Micro USB power connector, and a HAT-compatible 40-pin header. These specifications provide sufficient real-time computational speed for digital signal processing applications.

Figure 4.4: Raspberry Pi Zero (source: internet)

The Raspberry Pi Zero supports the SPI protocol, enabling seamless communication with the ADS1263 analog-to-digital converter for data retrieval. Additionally, its UART port facilitates real-time data transmission to a PC or laptop without delay.

We utilize the High-Precision AD HAT from Waveshare Corporation, which features the ADS1263 chip and is designed for seamless integration with the Raspberry Pi. Detailed specifications are provided in the accompanying figures and tables.

Figure 4.5: Onboard High-Precision AD HAT (source: https://www.waveshare.com/18983.htm)

• (1) Raspberry Pi GPIO header: for connecting Raspberry Pi

• (2) AD input (screw terminal): general purpose

• (3) AD input (header): for connecting sensor modules, Waveshare standard compliant

• (4) Control input (header): allows to be controlled by other hosts

• (7) ADC ground reference configuration: COM is used as the negative input terminal in AD single-ended input mode; connect it to GND or an external reference voltage

• (12) ADS1263: 32-bit high precision ADC, 10-ch (5-channel differential input)

Table 4.1: High-Precision AD HAT specifications

BUS: SPI
STRUCTURE: Delta-Sigma
INPUT: Single-end
REFERENCE VOLTAGE: Internal
RANGE (MAX): 2.5V, 5V
INPUT VOLTAGE RANGE:

Table 4.2: High-Precision AD HAT Pinouts

PIN Raspberry Pi (BCM) Raspberry Pi

DRDY P17 P0 ADS1263 data output ready, low active

CS P22 P3 ADS1263 chip select, low active

The AD HAT uses an SPI interface; the working principle is shown in the figure below:

Figure 4.6: Serial Interface Timing Requirements (Source: https://www.ti.com/document- viewer/ads1263/datasheet )

- CS, which stands for Chip Select, is a signal that activates a chip when it is set to a low state

- SCLK is the clock pin used in Serial Peripheral Interface (SPI) communication It provides timing for data transfer

DIN, or MOSI (Master Output Slave Input), is the data input pin utilized in SPI communication for transmitting data from the master device to the slave device.

DOUT, or MISO (Master Input Slave Output), is the data output pin utilized in SPI communication to transmit data from the slave device to the master device.

The DRDY pin, or data-ready output pin, is essential for signaling when the data from an Analog-to-Digital Converter (ADC) is ready for output When the data is prepared, the DRDY pin transitions to a low state, indicating that the information from ADC1 can be accessed.

- SPI communication follows a specific timing sequence determined by two parameters: CPHA and CPOL

• CPOL controls the idle state of the serial synchronous clock. When CPOL is set to 0, the clock level is low during the idle state. CPOL has minimal impact on the actual data transmission.

• CPHA determines when the data is sampled during the clock cycle. When CPHA is set to 0, data is sampled at the first clock edge.

Figure 4.7: Serial Interface Switching Characteristics (Source: https://www.ti.com/document- viewer/ads1263/datasheet)

SPI communication consists of four modes; SPI0, with CPHA and CPOL both set to 0, is the most widely used. In SPI0 mode, data transmission starts on the first falling edge of SCLK, transferring 8 bits of data per clock cycle. The transmission occurs bit by bit, starting with the Most Significant Bit (MSB).

The ADS1263 offers several data retrieval modes, allowing continuous or pulsed conversions controlled by the START pin or serial commands. In our application, continuous reading of the pulse wave data is essential to prevent information loss; this is achieved by reading ADC1 directly, so that ADC1 conversion data are shifted out of the output shift register without an opcode. It is critical to avoid any serial activity from the moment the DRDY signal goes low until the readback is complete, as activity during this window can invalidate the data. The serial interface supports full duplex operation, enabling command decoding while data are being read; to prevent unintended command execution during readback, the DIN pin must remain low. Data readback should be completed at least 16 fCLK cycles before the next DRDY signal to prevent new data from overwriting old data, and proper synchronization with the DRDY signal ensures the data are captured before the next falling edge. If new ADC1 conversion data become available during a read/write operation, they are stored in the data holding register until a read command is issued. However, writing new data to the registers restarts the conversion cycle and clears the contents of the data-holding register, making the prior conversion data unavailable. It is therefore advisable to read the conversion data before executing any register write operations.

Figure 4.8: Data read directly by ADC1 (Source: https://www.ti.com/document-viewer/ads1263/datasheet)

As shown in the figure above, the length of the ADC1 data field can vary between 4, 5, or 6 bytes depending on the programming: the data field consists of an optional status byte, four bytes of conversion data, and an optional checksum byte. After reading all the bytes, the data-byte sequence can be repeated by continuing the SCLK signal, restarting from the first byte in the sequence.
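As a sketch of how such a frame might be decoded on the Raspberry Pi, the function below parses a 6-byte ADC1 read (status byte, four conversion bytes MSB first in two's complement, checksum byte). The 0x9B checksum offset and the 2.5 V internal reference reflect our reading of the datasheet defaults and should be verified against the actual register configuration:

```python
VREF = 2.5  # assumed internal reference voltage of the ADS1263, in volts

def parse_adc1_frame(frame):
    """Return (voltage, status) from a 6-byte ADC1 read; raise on bad checksum.
    Checksum byte = (sum of the four data bytes + 0x9B) mod 256 per datasheet."""
    status, b3, b2, b1, b0, checksum = frame
    if (b3 + b2 + b1 + b0 + 0x9B) & 0xFF != checksum:
        raise ValueError("ADC1 checksum mismatch")
    code = (b3 << 24) | (b2 << 16) | (b1 << 8) | b0
    if code & 0x80000000:            # negative value in two's complement
        code -= 1 << 32
    return code * VREF / (1 << 31), status

# Example frame: status 0x40, code 0x40000000 (= +Vref/2), valid checksum
frame = [0x40, 0x40, 0x00, 0x00, 0x00, (0x40 + 0x9B) & 0xFF]
voltage, status = parse_adc1_frame(frame)
```

In the real device this parsing sits behind the SPI transfer performed by the HAT driver; the function only illustrates the frame layout described above.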

Hardware experiment

To evaluate the accuracy of the completed system, we conducted experiments comparing its output signals with ECG signals, a reliable real-world reference. This comparison allows us to assess the system's precision and its ability to deliver measurements aligned with established standards. Although there is a slight physiological time difference between the wrist pulse peak and the corresponding ECG peak, we consider this discrepancy negligible for our initial analysis. Our evaluation focuses on the number of peaks in both signals and their temporal alignment, which indicates how well our device measures the wrist pulse.

In this experiment, we use an additional electrocardiogram sensor to evaluate the signal recorded by our device. Because both signals reflect the activity of the heart, our hypothesis is that the timings of the systolic peaks must align in both signals. Hence, we record the ECG signal simultaneously with our device to assess this alignment.

Figure 4.9: Electrocardiogram sensor SEN0213 (Source: digikey.com)

In this experiment, we utilized the SEN0213 3-wire ECG sensor from DFRobot, recording ECG signals independently on an Arduino Uno microcontroller to minimize alterations to our device. To ensure simultaneous operation of both sensors, we constructed an additional circuit connecting the Raspberry Pi and the Arduino Uno through an optocoupler. A button controls the recording time; when pressed, it triggers a signal from the Arduino to the Raspberry Pi via the optocoupler, initiating simultaneous recording on both devices.

Figure 4.10: Block diagram of an additional circuit for device validation

To facilitate evaluation, high-frequency components are removed from both signal types; the filtered data remains somewhat contaminated because the preprocessing is incomplete at this stage. However, the critical features, particularly the systolic peaks, remain discernible for hypothesis testing. Figures 4.11, 4.12, 4.13, and 4.14 illustrate 5 seconds of simultaneously recorded ECG and piezo signals from four different subjects, with red circles highlighting each systolic peak. The alignment in both the number and timing of the systolic peaks across the signals supports our hypothesis and validates the quality of our device.

Figure 4.11: 5-second filtered signal of ECG and the proposed device on subject 1

Figure 4.12: 5-second filtered signal of ECG and the proposed device on subject 2

Figure 4.13: 5-second filtered signal of ECG and the proposed device on subject 3

Figure 4.14: 5-second filtered signal of ECG and the proposed device on subject 4

PULSE WAVE FORECASTING ALGORITHM

Objectives

Time series forecasting is an effective technique for predicting future trends and patterns, enabling us to anticipate unexpected events and gain valuable insights. However, as discussed in chapter 2.4, the signals used for forecasting are often compromised by various types of noise, such as power line interference and baseline wander. To enhance the accuracy of forecasting models, it is crucial to eliminate these noises from the data.

Hence, the proposed algorithm has to fulfill the following tasks:

• Preprocessing wrist pulse signal: To enhance the performance of deep learning models applied to wrist pulse signals, an effective preprocessing algorithm is essential. This algorithm aims to mitigate power line interference and baseline wander while minimizing distortion. Specifically, a wavelet-based filter is applied to the raw signal, followed by a least squares regression algorithm that identifies the polynomial curve associated with baseline wander and removes it from the filtered signal.

• Forecasting wrist pulse signal: After achieving a noise-free signal, it is fed into a deep learning model to generate future data.

Data Preparation

The acquisition process utilizes our proposed device in conjunction with an invasive blood glucose measuring machine from the Sinocare brand to accurately record the subject's blood glucose levels following pulse wave data collection. This blood glucose information is integral to the application of our device, as detailed in chapter 6. By directly measuring blood glucose, we minimize output label delays, with measurements provided in mass concentration units of mg/dL.

A group of eight healthy individuals was recruited to begin data collection, during which they were comprehensively informed about the data acquisition process, including the finger-pricking method for blood extraction.

Each subject participates in four sessions a day over four days. At the start of each session, individuals must sit still for five minutes to stabilize their heart rate. Following this, a 10-minute acquisition period begins, and blood glucose is measured at the end of each session.

Throughout the day, various activities, particularly eating, significantly influence blood glucose levels, which can vary notably before and after meals. To investigate this, we conducted an experiment to record pulse wave data at four key times: before breakfast (BB), after breakfast (AB), before lunch (BL), and after lunch (AL). We extracted important features from the pulse wave signals, including diastolic peak height, systolic peak height, heart rate variability, and pulse wave period, as illustrated in figure 5.1. The analysis revealed distinct distributions of these features at different times of the day, as shown in figure 5.2. These findings indicate that the recorded data varies significantly across these time points, providing valuable information for our deep learning model. Consequently, we opted to conduct data acquisition at these specific times.

Figure 5.1: Important features inside 2 consecutive pulse wave

Figure 5.2: The boxplots showing the distributions of heart rate variability, pulse period, diastolic peak height, and systolic peak height at 4 different times: Before Breakfast (BB), After Breakfast (AB), Before Lunch (BL), and After Lunch (AL)

In the initial session of the day, participants are required to fast for 8 hours prior to data recording. Subsequent sessions are conducted after breakfast and before and after lunch.

Preprocessing algorithm

5.3.1 Applying wavelet transform in filtering signal

To eliminate signals at a specific frequency, the Fourier transform is a widely used method. Like the wavelet transform, it is a reversible process that allows us to extract the necessary information from the transformed signal. The mathematical expression of the Fourier transform is as follows:

𝑋(𝑓) = ∫₋∞^∞ 𝑥(𝑡) 𝑒^(−2𝜋𝑖𝑓𝑡) 𝑑𝑡 (5.1)

In the equation, the signal 𝑥(𝑡) is multiplied by an exponential term at a specific frequency 𝑓 and integrated over all time. This process highlights frequency components that align with 𝑓, yielding a higher value, while other components diminish towards zero. Consequently, Fourier analysis effectively captures the frequency information of a signal but fails to provide timing details for specific frequency components. While this limitation may not affect stationary signals with stable frequencies, it poses a significant challenge for non-stationary signals, where timing information is crucial.

To address the challenge of missing timing information, the Short-Time Fourier Transform (STFT) has been introduced as a modified version of the traditional Fourier transform. Instead of analyzing the entire signal at once, STFT segments the signal into smaller parts and applies the Fourier transform to each segment individually. This process begins with the selection of a fixed window, which is then shifted across the signal, repeating the transformation until the end of the signal is reached:

STFT(𝜏, 𝑓) = ∫ 𝑥(𝑡) 𝑤*(𝑡 − 𝜏) 𝑒^(−2𝜋𝑖𝑓𝑡) 𝑑𝑡 (5.2)

where 𝑥(𝑡) is the input signal, 𝑤(𝑡) is the window function, and * denotes the complex conjugate.

However, STFT requires a suitable window size to perform well. Because the window size is fixed, the tradeoff between time resolution and frequency resolution is inevitable.

Wavelet transform enables the simultaneous extraction of time and frequency information by decomposing signals into wavelets of varying frequencies at multiple resolutions. This capability enhances the effectiveness of our algorithm.

The proposed device records data at a sample rate of 512 samples per second (SPS). With discrete wavelet decomposition, the frequency band of the data is halved at each level of decomposition. To reach a frequency component of 1 Hz, we chose the level of decomposition to be 10 and the mother wavelet to be 'sym5'. Afterward, we decided to keep the signal content between 1 Hz and 16 Hz: coefficients corresponding to frequencies outside this range are set to zero, followed by an inverse wavelet transform to recover the filtered data. The outcome of this filtering process is illustrated in Figure 5.6.
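The filtering step above can be sketched with the PyWavelets library. This is an illustrative version, not our exact implementation; it assumes the usual idealized band bookkeeping in which the detail coefficients at level j cover roughly fs/2^(j+1) to fs/2^j Hz:

```python
import numpy as np
import pywt

def wavelet_bandpass(signal, fs=512, wavelet="sym5", level=10,
                     keep=(1.0, 16.0)):
    """Zero every DWT band outside `keep` (Hz) and reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # coeffs = [cA_level, cD_level, cD_level-1, ..., cD_1]
    out = [np.zeros_like(coeffs[0])]           # drop the approximation band
    for k, d in enumerate(coeffs[1:]):
        j = level - k                          # detail level of this band
        lo, hi = fs / 2 ** (j + 1), fs / 2 ** j
        inside = lo >= keep[0] and hi <= keep[1]
        out.append(d if inside else np.zeros_like(d))
    return pywt.waverec(out, wavelet)[: len(signal)]
```

At fs = 512 SPS this keeps the detail levels covering 1–2, 2–4, 4–8, and 8–16 Hz and zeroes everything else, including the low-frequency approximation band that carries most of the baseline wander.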

Figure 5.4: Frequency bands division of wavelet transform

Figure 5.5: ‘Sym5’ wavelet used in the proposed algorithm

Figure 5.6: The comparisons between raw and filtered data on 5 different subjects, with each row belonging to a subject

Despite the removal of low-frequency components from the data, baseline wander persists and cannot be completely eliminated. Therefore, an additional algorithm is employed to identify the baseline wander polynomial for effective removal.

A pulse wave consists of a rising and a falling period generated by heart activity, featuring key fiducial points such as the systolic peak and the pulse onset, as illustrated in figure 5.7. In this study, we utilize the pulse onset to estimate the baseline wander polynomial, beginning the detection process with systolic peak identification. This approach is effective because systolic peaks are prominent and easily distinguishable from other peaks.

To identify systolic peaks in the data, we first compute the differences between consecutive data points, recognizing that a peak occurs when the data transitions from an upward to a downward trend. We identify local peaks by locating points where these differences turn negative. To differentiate systolic peaks from other local peaks, we establish a threshold based on the 80th percentile of the amplitudes of the identified local peaks. Despite this threshold, some outliers may still exist among the systolic peaks, prompting us to implement a rule that prohibits the detection of any peaks within 200 ms following each identified peak. After identifying the systolic peaks, we analyze a 200 ms search-back window before each peak to determine the pulse onset by locating the minimum point within the window.
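A minimal NumPy sketch of this detection procedure (difference-based local maxima, 80th-percentile amplitude threshold, 200 ms refractory period, and a 200 ms search-back window for the onset); the function and parameter names are ours:

```python
import numpy as np

def detect_peaks_and_onsets(x, fs=512, refractory=0.2, search_back=0.2,
                            pct=80):
    """Return (systolic peak indices, pulse onset indices)."""
    x = np.asarray(x, dtype=float)
    # Local maxima: rising before the point, falling (or flat) after it.
    cand = np.where((x[1:-1] > x[:-2]) & (x[1:-1] >= x[2:]))[0] + 1
    if cand.size == 0:
        return np.array([], int), np.array([], int)
    # Keep only peaks at or above the 80th percentile of peak amplitudes.
    thr = np.percentile(x[cand], pct)
    cand = cand[x[cand] >= thr]
    # Refractory rule: no new peak within 200 ms of the last accepted one.
    gap = int(refractory * fs)
    peaks, last = [], -gap - 1
    for i in cand:
        if i - last > gap:
            peaks.append(i)
            last = i
    # Pulse onset: minimum inside the search-back window before each peak.
    w = int(search_back * fs)
    onsets = [max(0, p - w) + int(np.argmin(x[max(0, p - w):p + 1]))
              for p in peaks]
    return np.array(peaks), np.array(onsets)
```

The onsets returned here are the anchor points used by the baseline wander fit described next in the text.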

Figure 5.7: The positions of pulse’s onset and systolic peak inside pulse wave

Figure 5.8: Pulse’s onset detection algorithm

After identifying the pulse onsets, we apply least squares regression to determine the optimal polynomial curve that represents the baseline wander. Initially, we establish the polynomial equation as follows:

𝑦 = 𝑎₀ + 𝑎₁𝑥 + 𝑎₂𝑥² + ⋯ + 𝑎ₙ𝑥ⁿ + 𝜖 (5.4)

where 𝑎₀, 𝑎₁, 𝑎₂, …, 𝑎ₙ are the coefficients that we want to determine.

The least squares method aims to identify the coefficients that minimize the sum of squared residuals, i.e., the differences between the observed data points (𝑥ᵢ, 𝑦ᵢ) and the values 𝑦̂ᵢ predicted by the polynomial. To find these coefficients, one must solve a system of linear equations derived from the polynomial residual function.

The system of linear equations is rewritten in matrix notation as:

𝑦 = 𝑋𝑎 + 𝜖 (5.5)

Pre-multiplying the system by the transpose 𝑋ᵀ, we get:

𝑋ᵀ𝑋𝑎 = 𝑋ᵀ𝑦 (5.6)

We finally solve this equation to obtain the coefficients of the polynomial:

𝑎 = (𝑋ᵀ𝑋)⁻¹𝑋ᵀ𝑦 (5.7)

Figure 5.9: Baseline wander removal algorithm
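The baseline removal can be sketched in a few lines of NumPy, with `np.polyfit` standing in for the normal-equation solution derived above; the polynomial degree shown is illustrative:

```python
import numpy as np

def remove_baseline(signal, onsets, degree=3):
    """Fit a polynomial through the pulse-onset points (least squares)
    and subtract it from the whole signal to remove baseline wander."""
    x = np.asarray(onsets, dtype=float)
    y = np.asarray(signal)[onsets]
    coeffs = np.polyfit(x, y, degree)        # solves the normal equations
    baseline = np.polyval(coeffs, np.arange(len(signal)))
    return signal - baseline
```

Because the onsets sit at the bottom of each pulse, the curve through them tracks the slow drift rather than the pulse shape itself.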

To ensure the accuracy of deep learning models, it is crucial to scale datasets recorded from multiple subjects to a consistent range. Variations in value ranges can lead to discrepancies in how different datasets influence the model, with larger value ranges disproportionately impacting the results. Therefore, we transform all datasets to a uniform scale of 0 to 1, promoting balanced contributions from each subject's data.

To do this, we apply min-max scaling:

𝑥_scaled = (𝑥 − 𝑥_min) / (𝑥_max − 𝑥_min) (5.8)

where 𝑥_min and 𝑥_max are the minimum and maximum values of each subject's dataset.

Time series forecasting model

Time series forecasting algorithms are designed to predict future piezo signal values by analyzing historical data. In addition to the historical values, these algorithms incorporate additional variables, such as the time elapsed since the start of recording and the hour of the day, to enhance prediction accuracy.

In our analysis, we define the input time series as 𝑥ₜⁱ, where i indexes the variable in the dataset and t indicates the time step. For a piezo dataset of length T, represented as (𝑥₁ⁱ, …, 𝑥_Tⁱ), our goal is to predict the L future values (𝑥_{T+1}ⁱ, …, 𝑥_{T+L}ⁱ). To optimize our model, we categorize the additional variables, such as the hour of the day and the time from the start date, into two subsets, historical features and known future features, which we process separately to enhance model performance.

To effectively utilize the limited three types of inputs in our problem, we have developed a model that maximizes the available information. The architecture comprises several key components: Time2Vec for temporal representation, a gating mechanism for enhanced control, feature selection for input relevance, interpretable multi-head attention, and locality enhancement for better contextual awareness.

Figure 5.10: Block diagram of the forecasting model

The piezo signal and hour-of-day inputs are processed by two dense layers, while the time input passes through a Time2Vec layer, so that all inputs are converted into vector spaces of matching dimensions for the skip connections. These processed inputs are then directed to a feature selection network to evaluate their contributions and relationships to the targets. The additional variables are categorized into historical and known future features, which are processed separately by an LSTM encoder and an LSTM decoder. The LSTM encoder captures the temporal patterns, while the decoder focuses on the initial future information. The outputs of both are concatenated and passed through a gating layer and multi-head attention for further processing before forecasting begins.

Two skip connections are applied to improve the information flow inside the model.

Time series data, defined as observations recorded over time, highlights the significance of temporal information as a crucial feature. Analyzing this data allows for the identification of trends and patterns, offering valuable insights into various problems. Certain events occur only at specific times, making it essential to consider factors such as holidays when predicting daily sales, or seasonal variations in electricity prices, which tend to be higher in summer than in winter. To enhance the processing of time information, the Time2Vec mechanism transforms temporal data into a learnable vector representation that integrates seamlessly with the proposed model. Mathematically, Time2Vec can be expressed as follows:

t2v(𝜏)[𝑖] = 𝜔ᵢ𝜏 + 𝜑ᵢ, if 𝑖 = 0; ℱ(𝜔ᵢ𝜏 + 𝜑ᵢ), if 1 ≤ 𝑖 ≤ 𝑘 (5.9)

where ℱ is a periodic activation function, and 𝜔ᵢ, 𝜑ᵢ are learnable parameters.

In this study, we select the sine function as the activation function ℱ due to its periodic nature, which aligns well with our periodic data. Utilizing the sine function allows us to effectively capture periodic behaviors without the need for extensive feature engineering.
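Under these choices, the Time2Vec embedding of a scalar time value reduces to a few lines of NumPy (a functional sketch; in the actual model 𝜔 and 𝜑 are learned):

```python
import numpy as np

def time2vec(tau, omega, phi):
    """Time2Vec embedding of a scalar time value `tau`:
    index 0 is a linear term, indices 1..k use a sine activation."""
    linear = omega[0] * tau + phi[0]
    periodic = np.sin(omega[1:] * tau + phi[1:])
    return np.concatenate(([linear], periodic))
```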

The precise connection between exogenous inputs and targets is often uncertain, posing a challenge in identifying the relevant variables. Additionally, the complexity of the relationships between variables can vary significantly: some exhibit strong non-linear dependencies that necessitate advanced modeling techniques, while simpler approaches can be more effective, especially with smaller or noisier datasets. Because of this, the decision to use complex or simple modeling cannot be predetermined and requires careful evaluation of factors such as noise level and dataset size. To address this, the gating layer is introduced as a mechanism to regulate information flow within the network, providing the flexibility to apply non-linear processing only when necessary.

GRN_ω(𝑎, 𝑐) = LayerNorm(𝑎 + GLU_ω(𝜂₁)) (5.10)

𝜂₁ = 𝑊₁,ω 𝜂₂ + 𝑏₁,ω (5.11)

𝜂₂ = ELU(𝑊₂,ω 𝑎 + 𝑊₃,ω 𝑐 + 𝑏₂,ω) (5.12)

where ELU is the Exponential Linear Unit activation function [19], 𝜂₁ and 𝜂₂ are intermediate layers, LayerNorm is standard layer normalization, ω denotes weight sharing, and GLU is the gated linear unit.

Figure 5.11: Block diagram of the gated residual network

GLU is used to enhance the flexibility to suppress the unnecessary parts of the architecture. GLU and ELU can be described as follows:

GLU_ω(𝛾) = 𝜎(𝑊₄,ω 𝛾 + 𝑏₄,ω) ⊙ (𝑊₅,ω 𝛾 + 𝑏₅,ω) (5.13)

where 𝑊₅,ω 𝛾 + 𝑏₅,ω applies a linear transformation to the input, 𝜎(𝑊₄,ω 𝛾 + 𝑏₄,ω) computes the selection weights, and ⊙ is the element-wise product. GLU allows our model to control the contribution of the input.

Figure 5.12: Block diagram of gated linear unit
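A minimal NumPy sketch of the GLU computation above; the weight shapes are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def glu(gamma, W4, b4, W5, b5):
    """Gated linear unit: a sigmoid gate in (0, 1) element-wise scales
    a linear transform of the input, suppressing unneeded paths."""
    gate = sigmoid(gamma @ W4 + b4)    # selection weights
    value = gamma @ W5 + b5            # linear transform of the input
    return gate * value
```

With a zero input the gate sits at 0.5, so the output is simply half the bias of the value branch, illustrating how the gate interpolates between suppressing and passing the signal.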

In our study, we analyze three types of inputs: the piezo signal, the time from start, and the hour of the day, whose relevance to the output remains unknown. Each input's contribution varies and is not immediately apparent. To address this, we implement instance-wise variable selection using variable selection networks, which identify the significance of each input and suppress unnecessary noisy inputs that may hinder network performance. Our feature selection mechanism consists of multiple Gated Residual Networks (GRNs), with the quantity determined by the number of inputs.

Before entering the feature selection network, all inputs are converted into uniform dimensional vectors to align with the subsequent layers for skip connections. These vectors undergo non-linear processing through their corresponding Gated Residual Network (GRN). Concurrently, the transformed inputs are flattened and fed into a GRN, followed by a softmax layer. The softmax output, ranging from 0 to 1, provides the variable selection weights, determining the contribution of each variable to the network.

Let 𝜁ₜⁱ be the transformed input of the i-th variable at time t, and Ξₜ = [𝜁ₜ¹ᵀ, 𝜁ₜ²ᵀ, 𝜁ₜ³ᵀ]ᵀ be the flattened vector of the three aforementioned inputs at time t. The variable selection weights are calculated by passing Ξₜ through a GRN followed by a softmax layer:

𝑣ₜ = Softmax(GRN(Ξₜ)) (5.14)

Additionally, each 𝜁ₜⁱ is fed into its own GRN for further non-linear processing:

𝜁̃ₜⁱ = GRNᵢ(𝜁ₜⁱ) (5.15)

Finally, the processed inputs are multiplied by their corresponding variable selection weights and summed, determining the contribution of each variable to the model:

𝜁̃ₜ = Σᵢ 𝑣ₜⁱ 𝜁̃ₜⁱ (5.16)

Figure 5.13: Block diagram of feature selection network

Multi-head attention is a robust technique for time series forecasting, effectively capturing complex temporal dependencies and extracting relevant information from sequential data. By utilizing multiple attention heads, it enhances the modeling of temporal interactions, leading to improved accuracy and efficiency in forecasting models. Additionally, multi-head attention computes attention weights that signify the importance of each time step or element in the input sequence, highlighting their contributions to the output and enhancing the model's interpretability.

The attention mechanism used in this study is "Scaled Dot-Product Attention". In general, attention mechanisms calculate the relationship between keys 𝐾 ∈ ℝ^(𝑁×𝑑_attn) and queries 𝑄 ∈ ℝ^(𝑁×𝑑_attn), and then scale the values 𝑉 ∈ ℝ^(𝑁×𝑑_attn) based on this relationship:

Attention(𝑄, 𝐾, 𝑉) = 𝑆(𝑄, 𝐾)𝑉 (5.17)

where 𝑑_attn is the dimension of 𝑄, 𝐾, 𝑉, and 𝑆( ) is the normalization function, which is the scaled dot-product in this case:

𝑆(𝑄, 𝐾) = Softmax(𝑄𝐾ᵀ/√𝑑_attn) (5.18)

Multi-head attention, which employs multiple heads for different representation subspaces, is used to increase the learning capacity

MultiHead(𝑄, 𝐾, 𝑉) = [𝐻₁, …, 𝐻_{𝑚_𝐻}] 𝑊_𝐻 (5.19)

𝐻_ℎ = Attention(𝑄𝑊_𝑄⁽ʰ⁾, 𝐾𝑊_𝐾⁽ʰ⁾, 𝑉𝑊_𝑉⁽ʰ⁾) (5.20)

where 𝑊_𝑄⁽ʰ⁾ ∈ ℝ^(𝑑_model×𝑑_attn), 𝑊_𝐾⁽ʰ⁾ ∈ ℝ^(𝑑_model×𝑑_attn), and 𝑊_𝑉⁽ʰ⁾ ∈ ℝ^(𝑑_model×𝑑_V) are the query, key, and value weights of the h-th head, and 𝑊_𝐻 ∈ ℝ^((𝑚_𝐻·𝑑_V)×𝑑_model) linearly combines the outputs concatenated from all heads.
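The scaled dot-product step can be sketched in NumPy as follows (a single head, for clarity):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_attn)) V."""
    d_attn = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_attn))
    return weights @ V, weights
```

Each row of `weights` sums to 1, which is what makes the weights directly interpretable as the importance of each time step.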

Events such as vacations or severe weather can significantly influence time series patterns, making the context of data points essential for identifying anomalies, change points, or underlying patterns. Traditional Transformer models rely solely on the point-wise values of queries and keys to assess similarity, often neglecting local context and the shape of the data. To enhance the performance of the attention-based architecture, we employ an LSTM encoder-decoder that incorporates local context alongside point-wise values.

Experiment

To effectively analyze the data for each subject over time, we segmented it into three distinct parts: a training set for learning, a validation set for hyperparameter tuning, and a test set for final evaluation. This division follows an 8-1-1 ratio based on the chronological order of the data. For instance, if the original dataset spans 600 seconds, it would be allocated as follows.

In this study, the dataset is categorized into three segments: the training dataset, which includes recordings shorter than 480 seconds; the validation dataset, comprising recordings that range from 480 to 540 seconds; and the testing dataset, which consists of all remaining recordings.

Additionally, we conduct a random search to find optimal hyperparameters for our model. After searching for 60 iterations, we obtain the final hyperparameters, including:

Besides, several fixed parameters are chosen including:

• Loss function: Mean Square Error

To prepare for the training procedure, all three datasets are segmented into windows. For each iteration, we randomly select 300,000 windows from the training dataset and 30,000 from the validation dataset, with these quantities determined by our computer's processing capacity.

The forecasting model is trained on an NVIDIA RTX 3060 GPU over three days, while inference is conducted on an NVIDIA RTX 2060 GPU. The training duration could be reduced further through hardware-specific optimizations.

The effectiveness of the forecasting model is assessed using two key metrics, mean squared error (MSE) and mean absolute error (MAE), which quantify the model's forecasting accuracy:

MSE = (1/𝑁) Σᵢ₌₁ᴺ (𝑦̂ᵢ − 𝑦ᵢ)² (5.21)

MAE = (1/𝑁) Σᵢ₌₁ᴺ |𝑦̂ᵢ − 𝑦ᵢ| (5.22)

where N is the number of forecasted values, and 𝑦̂ᵢ and 𝑦ᵢ are the predicted and actual values, respectively.
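These metrics are straightforward to implement; a NumPy sketch:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error."""
    return float(np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(y_pred) - np.asarray(y_true))))
```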

To ensure a fair evaluation of our model's performance, we selected several popular models and algorithms commonly used in time series forecasting for comparison. All models, including ours, were trained and tested on the same datasets. The chosen models for this comparative analysis include:

• Long short-term memory networks

The evaluation results are summarized in the table below:

Table 5.1: The comparison between forecasting models

Our model outperforms currently popular algorithms and deep learning models, as demonstrated in Table 5.1. While it performs well, there is potential for further optimization and enhancement with additional data. We conducted an iterative inference experiment, starting from historical data to predict subsequent time steps and feeding the predictions back into the model. This approach accumulates error, so inference must be halted before the error reaches unacceptable levels. Ultimately, our model successfully forecasts up to 2 seconds into the future without distortion, as illustrated in Figure 5.14.
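The iterative (autoregressive) inference loop can be sketched as follows; `predict_one` stands in for the trained model, here replaced by a trivial persistence predictor purely for illustration:

```python
import numpy as np

def iterative_forecast(predict_one, history, n_steps, window):
    """Autoregressive rollout: predict one step from the last `window`
    values, append the prediction to the history, and repeat.
    Errors accumulate, so the rollout must stay short."""
    buf = list(history)
    out = []
    for _ in range(n_steps):
        nxt = predict_one(np.asarray(buf[-window:]))
        out.append(nxt)
        buf.append(nxt)                 # feed the prediction back in
    return np.array(out)

# Illustration only: a naive persistence "model" repeats the last value.
persistence = lambda w: float(w[-1])
```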

Figure 5.14: Results of iterative inference on 4 different subjects

GLUCOSE MEASUREMENT

Objectives

Diabetes is a chronic disease that poses significant global health challenges, leading to complications such as cardiovascular, respiratory, and kidney diseases that can severely impact a patient's internal organs and overall quality of life. The International Diabetes Federation reported approximately 424.9 million people living with diabetes in 2017, with projections suggesting this number could rise to 628.6 million by 2045. In 2017 alone, diabetes was responsible for around 4.0 million deaths and incurred a staggering global expenditure of USD 727 billion. The condition arises when the body either fails to produce sufficient insulin or cannot utilize it effectively, resulting in elevated blood glucose levels. Diabetic patients must adhere to strict dietary guidelines and consistently monitor their blood glucose to maintain acceptable levels, as fluctuations can lead to severe consequences. Symptoms of abnormal blood glucose levels include nausea, fatigue, headaches, confusion, and shakiness, underscoring the need for prompt management.

Figure 6.1: The global number of people with diabetes in 2017 and 2045 (Source: IDF Diabetes Atlas

Continuous glucose monitors (CGMs) are widely available today, enabling individuals to manage their blood glucose levels around the clock, including during sleep. However, CGMs do not measure blood glucose directly; instead, they sense glucose in the interstitial fluid, the fluid found between blood vessels and cells. Glucose must diffuse across the capillary walls into the interstitial space before it can be measured, which introduces a delay.

Because of this diffusion, CGM measurements lag blood glucose by roughly 5 to 10 minutes, which may lead to delayed responses when managing blood glucose levels.

Figure 6.2: Visualization of fluid compartments (Source: https://courses.lumenlearning.com/suny- ap2/chapter/body-fluids-and-fluid-compartments-no-content/)

We are developing a deep learning model to measure blood glucose levels using forecasted piezo signals. According to the International Organization for Standardization (ISO 15197:2013), the accuracy of such a system must be within ±15 mg/dL for concentrations under 100 mg/dL and within ±15% for concentrations of 100 mg/dL or higher. Therefore, it is essential that our blood glucose measurement model meets these accuracy standards.
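This acceptance criterion for a single reading can be expressed directly in code:

```python
def within_iso15197(predicted, reference):
    """ISO 15197:2013 accuracy criterion for one glucose reading:
    within ±15 mg/dL below 100 mg/dL, within ±15% at or above."""
    if reference < 100.0:
        return abs(predicted - reference) <= 15.0
    return abs(predicted - reference) <= 0.15 * reference
```

Applying this check to every test-set prediction gives the fraction of readings that meet the standard.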

Blood glucose measuring model

In this section, we aim to build a deep learning model that can measure the blood glucose level based on a sequence of pulse wave data.

In this task, we define the input as a sequence of length T of the piezo signal, (𝑥₁, …, 𝑥_T), from which the model outputs the predicted blood glucose level 𝑦̂.

Figure 6.3: Block diagram of blood glucose measuring model

A temporal convolutional network first processes the input sequence, a crucial operation in time series forecasting. By sliding a convolution kernel over the time series data, the model captures local patterns and dependencies, enhancing its predictive capabilities.

In other words, the temporal convolutional network acts as a sliding filter across the input sequence. This filter captures local patterns by performing element-wise multiplication and summing the results. Hence, the model can extract relevant features from the sequence and recognize patterns at different time scales.
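A minimal sketch of this sliding-filter operation in NumPy; the kernel is applied causally, to the current and past samples only, via left zero-padding (function name and kernel are illustrative):

```python
import numpy as np

def causal_conv1d(x, kernel):
    """Slide a kernel over the sequence using only past and current
    samples, so each output depends on history alone."""
    x = np.asarray(x, dtype=float)
    kernel = np.asarray(kernel, dtype=float)
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])
    # Element-wise multiply-and-sum at every position.
    return np.array([padded[i:i + k] @ kernel for i in range(len(x))])
```

For example, the kernel [1, 1] produces a running sum of each sample and its predecessor, a simple instance of the local-pattern extraction described above.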

The processed data is input into an LSTM-based encoder-decoder, where the encoder summarizes the information and the decoder generates the output. LSTM's capability to capture long-term dependencies and detect temporal patterns enables the model to handle long sequences of data effectively while mitigating issues such as exploding and vanishing gradients. A skip connection is implemented to combine the original input with the decoder's output, giving the model access to earlier time steps and improving its ability to capture long-term dependencies. This connection also facilitates better gradient flow, allowing more efficient information propagation and faster convergence. Following the skip connection, a dropout layer is applied to reduce overfitting.

Finally, we apply an additional non-linear processing layer to the output of the dropout layer, followed by a dense layer that produces the output.

Experiment

In this application, we utilize the dataset discussed in chapter 5.2, where the piezo signal serves as the model's input and the corresponding blood glucose level acts as the label. Acknowledging the limited size of our dataset, we incorporate an additional dataset, D1NAMO [2], to pre-train our model.

The D1NAMO dataset is an open multi-modal resource that includes ECG, breathing, and accelerometer signals, blood glucose measurements, and annotated food intake acquired under real-life conditions. It was collected from 20 healthy individuals and 9 patients with type-1 diabetes; acquisition was self-administered, with subjects activating the sensors upon waking and deactivating them before bedtime over a four-day period. Healthy participants recorded their blood glucose levels six times daily using the Bayer Contour XT glucose meter and Bayer Next strips, while the diabetes patients used the iPro Professional CGM sensor.

For pre-training purposes, we use only the ECG and blood glucose data from this dataset. The ECG, which captures heart activity, is somewhat correlated with wrist pulse wave signals. The data was collected using a 1-lead sensor equipped with two silver-coated nylon electrodes at a sampling rate of 250 Hz. However, since each subject acquired this data independently, issues such as poor electrode contact can compromise the recordings.

As a result, the data is highly noisy and disconnected, necessitating a manual review to eliminate erroneous entries. After filtering out the failures, we apply the wavelet filter, remove baseline wander, and scale the data. Finally, the cleaned data is resampled to match our desired sample rate.

Figure 6.4: Zephyr Bioharness 3 device used in D1NAMO project (Source: [2])

In this dataset, blood glucose levels are expressed in millimoles per liter (mmol/L), necessitating a conversion to match the units used by our device.
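A small helper for this conversion, using the molar mass of glucose (≈180.16 g/mol, so 1 mmol/L ≈ 18.016 mg/dL):

```python
GLUCOSE_MMOL_TO_MGDL = 18.016  # derived from glucose molar mass ~180.16 g/mol

def mmol_to_mgdl(mmol_per_l):
    """Convert blood glucose from mmol/L to mg/dL."""
    return mmol_per_l * GLUCOSE_MMOL_TO_MGDL
```

For example, a typical fasting value of 5.5 mmol/L corresponds to roughly 99 mg/dL.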

6.2.5 Training procedure: Blood glucose measuring model

We split the two prepared datasets into training, validation, and test sets using the same 8-1-1 ratio as in our forecasting task. To optimize model performance, we perform a random search for hyperparameters over 60 iterations. Each dataset is then divided into windows for training, mirroring the procedure used for the forecasting model; we randomly select 300,000 windows from the training dataset and 30,000 from the validation dataset during each training iteration. The following list details the optimal hyperparameters identified:

The fixed parameters for this model are listed below:

• Loss function: Mean Square Error

The training and inference procedures for this model are run on an NVIDIA T4 GPU on Google Colab. The training process takes a full day.

6.2.6 Experimental result: Blood glucose measuring model

The performance of the blood glucose measuring model is assessed using two key metrics: mean absolute error (MAE) and mean absolute percentage error (MAPE). The test dataset is divided into two subsets: one with blood glucose levels below 100 mg/dL, evaluated using MAE, and another with levels at or above 100 mg/dL, assessed with MAPE. The formula for MAPE is defined as follows:

MAPE = (100%/𝑁) Σᵢ₌₁ᴺ |𝑦̂ᵢ − 𝑦ᵢ| / 𝑦ᵢ (6.2)

where N is the number of blood glucose values, and 𝑦̂ᵢ and 𝑦ᵢ are the predicted and actual blood glucose levels, respectively.
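A NumPy sketch of this metric:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_pred - y_true) / np.abs(y_true)) * 100.0)
```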

The results of the evaluation process are as follows:

While our model does not yet reach state-of-the-art performance, it meets our proposed objectives, making its current performance satisfactory. Future enhancements are possible through the incorporation of additional data and optimization of both the architecture and the hyperparameters.

We successfully achieved all our proposed objectives by developing a wearable device that accurately records pulse signals from the wrist. The device's precision has been validated against a reliable ECG sensor. Additionally, we created a deep learning model capable of predicting pulse wave signals two seconds ahead, along with another model that utilizes this forecasted data to estimate blood glucose levels.

Our project faces limitations, particularly regarding our dataset, which lacks both size and diversity, especially data from diabetes patients due to recruitment challenges. Additionally, we currently lack the architecture needed for long-term forecasting. Moving forward, we aim to address these issues in our future work:

• Acquire a larger and more diverse dataset, including data from normal subjects and diabetes patients of different ages, as well as different complications of diabetes

• Increase the scale of the model so that it can forecast further into the future

• Leverage the interpretability of the attention mechanism in the forecasting model to enhance the understanding of the blood glucose measurement model, offering valuable insights into its decision-making process

Proceedings of 2023 International Conference on System Science and Engineering (ICSSE)

Development of a Wearable Human Pulse Measuring Device using Piezoelectric Sensor

Dao Thanh Quan, Department of Mechanical Engineering, HCMC University of Technology and Education, Ho Chi Minh City, quandaoforwork@gmail.com
Le Quoc Tuan, Department of Mechanical Engineering, HCMC University of Technology and Education, Ho Chi Minh City, lequoctuan.fwork@gmail.com
Bui Ha Duc, Department of Mechanical Engineering, HCMC University of Technology and Education, Ho Chi Minh City, ducbh@hcmute.edu.vn

Pulse diagnosis, a key technique in traditional Chinese medicine (TCM), offers a non-invasive approach to disease diagnosis through the assessment of wrist pulse signals. However, this traditional method relies heavily on the practitioner's experience and lacks the ability to store pulse records for future analysis. To address these limitations, a novel wearable device was developed, utilizing a highly sensitive piezoelectric sensor to accurately capture wrist pulse signals as digital data. To enhance the clarity of the pulse wave, a low-pass filter was implemented to eliminate high-frequency noise. The processed pulse data was then compared with simultaneous electrocardiogram (ECG) readings, revealing that the device's recordings align closely with the ECG data and demonstrating its potential for effective pulse diagnosis.

Keywords— piezoelectric sensor, traditional Chinese medicine, wearable device, pulse wave analysis, arterial pulse
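The low-pass filtering step mentioned in the abstract can be sketched as follows. The 20 Hz cutoff, filter order, and function name are illustrative assumptions rather than the exact parameters used in the device:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_pulse(raw, fs, cutoff=20.0, order=4):
    """Zero-phase Butterworth low-pass filter for a raw wrist pulse signal.

    The 20 Hz cutoff is an illustrative choice: the useful harmonics of the
    wrist pulse lie well below it, while sensor noise sits above it."""
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, raw)   # filtfilt filters forward and backward, avoiding phase distortion
```

Zero-phase filtering matters here because a phase-shifted pulse wave would no longer line up with the simultaneously recorded ECG during validation.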

For millennia, pulse diagnosis has been a crucial component of traditional Chinese medicine (TCM) for analyzing diseases. This widely recognized noninvasive assessment method is used not only in China but also in many other Eastern countries. Compared with ECG signals, which are essential for monitoring heart health, wrist pulse waves reveal the resonance oscillation between the heart and other internal organs through the arterial system, providing additional insight into overall health. In practice, doctors assess the pulse at three key wrist locations (cun, guan, and chi), evaluating characteristics such as rate, trend, strength, length, and width for diagnosis. However, the accuracy of these diagnoses can vary among Chinese medicine practitioners due to reliance on intuition and experience. Recently, advances in digital signal processing and biomedical technology have led to the development of TCM pulse measuring devices that can instantly visualize and analyze pulse profiles.
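Of the characteristics listed above, pulse rate is the most direct to recover from a digitized pulse wave. The sketch below estimates it with simple peak detection; it assumes a filtered, roughly zero-mean signal, and the threshold and refractory interval are illustrative choices:

```python
import numpy as np

def pulse_rate_bpm(signal, fs, min_interval_s=0.4):
    """Estimate pulse rate (beats per minute) from a filtered wrist pulse
    signal by detecting local maxima above the signal mean, enforcing a
    refractory interval so one beat cannot be counted twice."""
    refractory = int(min_interval_s * fs)
    thresh = signal.mean()
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > thresh and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]:
            if not peaks or i - peaks[-1] >= refractory:
                peaks.append(i)
    if len(peaks) < 2:
        return None                                  # not enough beats detected
    mean_interval = np.mean(np.diff(peaks)) / fs     # seconds per beat
    return 60.0 / mean_interval
```

Characteristics such as strength and width require amplitude and waveform-shape analysis on top of this, which is where a pressure-sensitive sensor offers more than a simple optical one.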

Various researchers have explored pulse capture techniques using different sensors. John N Lygouras et al., Kun-Chan Lan et al., and Leonid S Lovinsky employed photoplethysmography sensors to detect pulse signals at the fingertips and wrist. While their devices demonstrated effective results, they did not align with the principles of wrist pulse diagnosis in traditional Chinese medicine: photoplethysmography sensors, which measure volumetric changes in blood using infrared light, cannot capture pulse strength or the resonance oscillation between the heart and other organs.
