HANOI UNIVERSITY OF SCIENCE AND TECHNOLOGY

Master's Thesis in Data Science and Artificial Intelligence

An Efficient, On-demand Charging for WRSNs Using Fuzzy Logic and Q-Learning

La Van Quan
Quan.LV202335M@sis.hust.edu.vn

Supervisor: Dr. Nguyen Phi Le
Department: Department of Software Engineering
Institute: School of Information and Communication Technology

Hanoi, 2022

Declaration of Authorship and Topic Sentences

Personal information
Full name: La Van Quan
Phone number: 039 721 1659
Email: Quan.LV202335M@sis.hust.edu.vn
Major: Data Science and Artificial Intelligence

Topic
An Efficient, On-demand Charging for WRSNs Using Fuzzy Logic and Q-Learning

Contributions
• Propose a Fuzzy logic-based algorithm that determines the energy level to be charged to the sensors.
• Introduce a new method that determines the optimal charging time at each charging location to maximize the number of alive sensors.
• Propose Fuzzy Q-charging, which uses Q-learning in its charging scheme to guarantee target coverage and connectivity.

Declaration of Authorship
I hereby declare that my thesis, titled "An Efficient, On-demand Charging for WRSNs Using Fuzzy Logic and Q-Learning", is the work of myself and my supervisor Dr. Nguyen Phi Le. All papers, sources, tables, and so on used in this thesis have been thoroughly cited.

Supervisor confirmation
Hanoi, April 2022
Supervisor
Dr. Nguyen Phi Le

Acknowledgments

I would like to thank my supervisor, Dr. Nguyen Phi Le, for her continued support and guidance throughout the course of my Master's studies. She has been a great teacher and mentor for me since my undergraduate years, and I am proud to have completed this thesis under her supervision. I want to thank my family and my friends, who have given me their unconditional love and support to finish my Master's studies. Finally, I would like to again thank Vingroup and the Vingroup Innovation Foundation, who have supported my studies through their Domestic Master/Ph.D Scholarship
program. Parts of this work were published in the paper "Q-learning-based, Optimized On-demand Charging Algorithm in WRSN" by La Van Quan, Phi Le Nguyen, Thanh-Hung Nguyen, and Kien Nguyen in the Proceedings of the 19th IEEE International Symposium on Network Computing and Applications, 2020. La Van Quan was funded by Vingroup Joint Stock Company and supported by the Domestic Master/Ph.D Scholarship Programme of Vingroup Innovation Foundation (VINIF), Vingroup Big Data Institute, code VINIF.2020.ThS.BK.03.

Abstract

In recent years, Wireless Sensor Networks (WSNs) have attracted great attention worldwide. A WSN consists of sensor nodes deployed over a surveillance area to monitor and control the physical environment. In a WSN, every sensor node needs to perform several important tasks, two of which are sensing and communication. Each time these tasks are performed, the sensor consumes energy, so over time some sensor nodes may die. A sensor node is considered dead when it runs out of energy. Correspondingly, the lifetime of a WSN is defined as the time from the start of operation until a sensor dies [1]. Thus, one of the important issues in improving the quality of WSNs is maximizing the lifetime of the network. In classical WSNs, sensor nodes have a fixed energy supply that always degrades over time. The limited battery capacity of the sensors is a "bottleneck" that greatly affects the lifetime of the network. To solve this problem, Wireless Rechargeable Sensor Networks (WRSNs) were born. A WRSN includes sensors equipped with rechargeable batteries and one or more Mobile Chargers (MCs) responsible for adding power to the sensors. In WRSNs, the MCs move around the network, stopping at specific locations (called charging sites) and charging the sensors. It is therefore necessary to find a charging route for the MC that improves the lifetime of the WRSN [2], [3].

Keywords: Wireless Rechargeable Sensor Network, Fuzzy Logic, Reinforcement Learning, Q-Learning, Network Lifetime

Author: La Van Quan
Contents

1 Introduction
  1.1 Problem overview
  1.2 Thesis contributions
  1.3 Thesis structure
2 Theoretical Basis
  2.1 Wireless Rechargeable Sensor Networks
  2.2 Q-learning
  2.3 Fuzzy Logic
3 Literature Review
  3.1 Related Work
  3.2 Problem definition
4 Fuzzy Q-charging algorithm
  4.1 Overview
  4.2 State space, action space and Q table
  4.3 Charging time determination
  4.4 Fuzzy logic-based safe energy level determination
    4.4.1 Motivation
    4.4.2 Fuzzification
    4.4.3 Fuzzy controller
    4.4.4 Defuzzification
  4.5 Reward function
  4.6 Q table update
5 Experimental Results
  5.1 Impacts of parameters
    5.1.1 Impacts of α
    5.1.2 Impacts of γ
  5.2 Comparison with existing algorithms
    5.2.1 Impacts of the number of sensors
    5.2.2 Impacts of the number of targets
    5.2.3 Impacts of the packet generation frequency
    5.2.4 Non-monitored targets and dead sensors over time
Bibliography

List of Figures

2.1 A wireless sensor network
2.2 A sensor structure
2.3 Network model
2.4 Q-learning overview
3.1 Network model
4.1 The flow of the Fuzzy Q-learning-based charging algorithm
4.2 Illustration of the Q-table
4.3 Fuzzy input membership functions
4.4 Fuzzy output membership function
5.1 Impact of α on the network lifetime
5.2 Impact of γ on the network lifetime
5.3 Network lifetime vs. the number of sensors
5.4 Network lifetime vs. the number of targets
5.5 Network lifetime vs. the packet generation frequency
5.6 Comparison of non-monitored targets over time
5.7 Comparison of dead sensors over time

List of Tables

4.1 Input variables with their linguistic values and corresponding membership function
4.2 Output variable with its linguistic values and membership function
4.3 Fuzzy rules for safe energy level determination
4.4 Inputs of linguistic variables
4.5 Fuzzy rules evaluation
5.1 System parameters

Chapter 1

Introduction

1.1 Problem overview
Wireless Sensor Networks (WSNs) have found various applications, such as air quality monitoring and environmental management [4, 5]. A WSN typically includes many battery-powered sensor nodes that monitor several targets and send sensed data to a base station for further processing. In a WSN, it is necessary to provide sufficient monitoring quality around the targets (i.e., to guarantee target coverage). Moreover, the WSN needs adequate capacity for the communication between the sensors and the base station (i.e., to ensure connectivity) [6][7][8]. Target coverage and connectivity are severely affected by the depletion of the batteries on sensor nodes. When a node runs out of battery, it becomes a dead node without sensing and communication capability, damaging the whole network as a consequence. Wireless Rechargeable Sensor Networks (WRSNs) leverage the advantages of wireless power transfer technology to solve this critical issue in WSNs. A WRSN uses a mobile charger (MC) to wirelessly compensate for the energy consumed from a sensor node's rechargeable battery, aiming to guarantee both target coverage and connectivity. In normal operation, the MC moves around the network and performs a charging strategy, which can be classified as either periodic [9][1][10][11][12] or on-demand [13][2][14][15][16][17][18]. In the former, the MC, following a predefined trajectory, stops at charging locations to charge the nearby sensors' batteries. In the latter, the MC moves and charges upon receiving requests from sensors whose remaining energy has fallen below a threshold. The periodic strategy is limited since it cannot adapt to the dynamics of the sensors' energy consumption rates. On the contrary, the on-demand approach can deal with the uncertainty of the energy consumption rate. Since a sensor with a draining battery triggers the on-demand operation, the MC's charging strategy faces a new time-constraint challenge. The MC needs to handle
two crucial issues: deciding the next charging location and the staying period at that location. Although numerous, the existing on-demand charging schemes in the literature face two serious problems. The first is the assumption that all sensor nodes in a WRSN play the same role. That is somewhat unrealistic since, intuitively, some sensors, depending on their locations, impact the target coverage and connectivity more significantly than others. Hence, the existing charging schemes may enrich unnecessary sensors' power while letting necessary ones run out of energy, leading to inefficient charging algorithms. It is of great importance to take target coverage and connectivity into account simultaneously. The second problem concerns the MC's charging amount, which is either the full capacity (of the sensor battery) or a fixed amount of energy. The former case may cause: 1) a long waiting time for other sensors near the charging location; 2) quick exhaustion of the MC's energy. In contrast, charging too small an amount to a node may leave it without enough power to operate until the next charging round. Therefore, the charging strategy should adjust the transferred energy level dynamically following the network condition.

1.2 Thesis contributions

Motivated by the above, this thesis proposes a novel on-demand charging scheme for WRSNs that assures target coverage and connectivity and dynamically adjusts the energy level charged to the sensors. My proposal, named Fuzzy Q-charging, aims to maximize the network lifetime, which is the time until the first target is no longer monitored. First, this work exploits Fuzzy logic in an optimization algorithm that determines the optimal charging time at each charging location, aiming to maximize the numbers of alive sensors and monitored targets. Fuzzy logic is used to cope with network dynamics by taking various network parameters into account when determining the optimal charging time. Second, this thesis leverages the Q-learning
technique in a new algorithm that selects the next charging location to maximize the network lifetime. The MC maintains a Q-table containing the charging locations' Q-values, which represent the charging locations' goodness. The Q-values are updated in real time whenever there is a new charging request from a sensor. I design the Q-value to prioritize charging locations at which the MC can charge nodes according to their critical roles. After finishing its tasks in one place, the MC chooses the next location with the highest Q-value and determines an optimal charging time. The main contributions of this thesis are as follows:

• This thesis proposes a Fuzzy logic-based algorithm that determines the energy level to be charged to the sensors. The energy level is adjusted dynamically following the network condition.
• Based on the above algorithm, this thesis introduces a new method that determines the optimal charging time at each charging location. It considers sev-

Chapter 5

Experimental Results

This thesis compares the performance of Fuzzy Q-charging with the three most relevant existing algorithms. The first is INMA [17], in which the MC determines the next sensor to charge based on factors including the residual energy of the sensors and the distance from the sensors to the MC. The next sensor to charge is chosen to minimize the number of other requesting nodes that may suffer from energy depletion. The second is GSA [18]. At each charging round in GSA, the MC uses the gravitational search algorithm to determine a near-optimal charging order that fulfills all charging requests. In both INMA and GSA, the MC always charges a sensor to its maximum battery capacity. The last comparison benchmark is our previous work, namely Q-charging [31]. Q-charging leverages Q-learning to determine the next charging location. However, different from Fuzzy Q-charging, Q-charging tries to maximize the number of sensors charged to a predefined energy level. Besides, we also measure the network
lifetime when no charging scheme is applied. Hereafter, we call this option the "no charging" scheme. I conduct two experiments, of which the first complements the second. The first experiment investigates the impact of the parameters γ and α on the performance of our proposal. Based on the results of the first experiment, we determine the optimal values of γ and α. They are used in the second experiment, which compares the performance of Fuzzy Q-charging to the existing works. The metrics of interest in the evaluation include the network lifetime and the number of non-monitored targets over time.

In all experiments, the network area is fixed at the size of 1000 m × 1000 m. The sensors and targets were randomly scattered in the simulated region. The charging locations are located on a square grid. Each value plotted on the curves is the average obtained from 10 runs. Regarding the energy model, we adopted the parameters proposed in [33]. More specifically, we set λ = 36, β = 30, e_MC = 10 J/s, e_move = 0.01 J/s. Moreover, the initial energy of the sensors and the MC is 10 J and 100 J, respectively. Each sensor has a battery capacity of 10 J. The sensors' sensing range and transmission range are set to 40 m and 80 m, respectively. The velocity of the MC is 5 m/s. The average energy consumption rate of the sensors is estimated by the base station, as mentioned in Section 3.2. The parameters are summarized in Table 5.1.

Table 5.1: System parameters

Factor                                      | Value
λ                                           | 36
β                                           | 30
Initial energy of the MC                    | 100 J
Battery capacity of the MC                  | 500 J
Velocity of the MC                          | 5 m/s
Initial energy of sensors                   | 10 J
Battery capacity of sensors                 | 10 J
Eth                                         | 4 J
Sensing range                               | 40 m
Transmission range                          | 80 m
Number of sensors                           | 200 ∼ 400
Number of targets                           | 100 ∼ 300
Per-second packet generation probability    | 0.3 ∼ 0.7

Figure 5.1: Impact of α on the network lifetime

5.1 Impacts of parameters

This section studies the impacts of the parameters α and γ on our proposed algorithm's performance.
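The roles of α and γ studied in the next two subsections follow the standard tabular Q-value update the thesis refers to as (4.1): the new Q-value blends the current Q-value (weight 1 − α) with the reward plus the discounted estimate of the maximal future Q-value (weight α, discount γ). The following is a minimal illustrative sketch, not the thesis implementation; the location names and the reward value are invented for demonstration.

```python
# Sketch of the tabular Q-value update analyzed below (cf. Eq. (4.1)).
# Charging locations play the role of actions; alpha and gamma are the
# parameters whose impact Sections 5.1.1 and 5.1.2 measure.
def update_q(q_table, location, reward, alpha=0.5, gamma=0.5):
    """Blend the old Q-value with the reward and the best future estimate."""
    q_max = max(q_table.values())  # estimated maximal Q-value over locations
    q_table[location] = ((1 - alpha) * q_table[location]
                         + alpha * (reward + gamma * q_max))

q = {"loc_A": 0.0, "loc_B": 0.0}   # hypothetical Q-table over two locations
update_q(q, "loc_A", reward=1.0)   # new Q = 0.5*0 + 0.5*(1.0 + 0.5*0) = 0.5
next_location = max(q, key=q.get)  # the MC moves to the highest-Q location
```

With α = 0.5, the accumulated estimate and the newly attained knowledge are weighted equally, which matches the later observation that an overly large α (or γ) makes the agent ignore its accumulated experience.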
Although we have conducted experiments with various settings, the results show similar trends. Therefore, we only present the results for a scenario with 300 sensors and 200 targets.

5.1.1 Impacts of α

This experiment varies the value of α from 0.3 to 0.8 and measures the variation of the network lifetime. The results are shown in Fig. 5.1. We can see that the network lifetime grows significantly when α increases from 0.3 to 0.5. It drops dramatically when α reaches 0.6 and becomes stable afterwards. This phenomenon can be explained as follows. As shown in (4.1), the new Q-value is calculated from the current Q-value, the reward, and the estimated maximal Q-value. α is the weight of the last two components, while 1 − α is the weight of the first one. Intuitively, the current Q-value reflects the experience the agent has learned so far. Meanwhile, the reward and the estimated maximal Q-value can be seen as the knowledge the agent has just attained through the current action and the future prediction, respectively. When α is relatively small, e.g., less than 0.5, increasing α helps exploit the experience and the future forecast in making the decision, thus improving the quality of the actions. However, when α is significantly large, the current reward and the future prediction dominate the Q-value. This means the agent makes decisions primarily based on the current reward and future forecast, ignoring all the experience it has learned so far. Q-learning then converges to a greedy approach. That is why the performance drops severely when α increases from 0.5 to 0.6 and becomes stable beyond that. From the experiment results, α should take a moderate value of around 0.5.

5.1.2 Impacts of γ

Figure 5.2: Impact of γ on the network lifetime

Similarly, the impacts of γ are shown in Fig. 5.2. As can be observed, the network lifetime increases when γ goes from 0.3 to 0.5 and decreases after that. This is because γ is the weight of the
predicted maximal Q-value in the future. The greater γ is, the more the future prediction contributes to the agent's action. When γ is significantly small, the role of the future prediction (i.e., Qmax) in the Q-value is minor. Increasing γ helps the agent exploit more future information in making action decisions, thus improving the quality of the decisions and thereby extending the network lifetime. However, when γ is significantly large, e.g., more than 0.6, increasing γ eliminates the impact of the current Q-value on the decision. In other words, the agent tends to ignore all experience learned so far and relies primarily on the future prediction. As the future prediction is not entirely correct, the performance of Fuzzy Q-charging degrades severely. From the experiment results, the optimal value of γ is from 0.4 to 0.6.

Figure 5.3: Network lifetime vs. the number of sensors

5.2 Comparison with existing algorithms

This section presents the comparison of our proposal to the existing ones. Following the previous observations, we set the values of α and γ to 0.5.

5.2.1 Impacts of the number of sensors

Figure 5.3 depicts the network lifetime when the number of sensors varies from 200 to 400. In this experiment, the packets are generated randomly with a probability of 0.3 per second; the number of targets is 200. We can see that the network lifetime increases with the number of sensors in all algorithms, because each sensor's traffic load is reduced. However, Fuzzy Q-charging consistently outperforms the others. Ours can extend the network lifetime by at least 27 times. Moreover, the performance gaps between Fuzzy Q-charging and the others are proportional to the number of sensors. When the number increases from 200 to 250, the gaps are small. However, the gaps change dramatically when reaching 300 sensors.
After that, Fuzzy Q-charging extends the network lifetime indefinitely, while Q-charging, INMA, and GSA can only attain a limited network lifetime. The reason is that when the number of sensors is small, the traffic imposed on each sensor is large. Therefore, the energy consumption rate of all sensors becomes immensely high. Under all charging algorithms, the MC cannot charge all sensors in time. That explains why the performance gap between the algorithms is insignificant with a small number of sensors. When the number of sensors becomes sufficiently large, the energy consumption rate is slower. Fuzzy Q-charging, with its effectiveness, significantly improves the network lifetime and outperforms the other algorithms. Compared to Q-charging, i.e., the second-best charging algorithm, with fewer than 350 nodes, Fuzzy Q-charging's network lifetime is 1.7 times longer. With more than 300 sensors, Fuzzy Q-charging's network lifetime is infinite, while Q-charging's is only prolonged to less than 550 × 10³ seconds. This result proves the effectiveness of our algorithm, which uses Fuzzy logic to automatically adjust the charging energy level. Concerning the two other algorithms, GSA and INMA, Fuzzy Q-charging improves the network lifetime by more than 4.8 times with fewer than 300 nodes. Moreover, when there are more than 300 nodes, GSA and INMA only prolong the network lifetime to less than 300 × 10³ seconds, while that of Fuzzy Q-charging is infinite.

5.2.2 Impacts of the number of targets

We evaluate the impact of the number of targets in a scenario with 300 nodes and a packet generation probability of 0.3. We investigate the variation of the network lifetime when the number of targets increases from 100 to 300. The results are presented in Fig. 5.4. As shown, Fuzzy Q-charging performs much better than the other algorithms. With fewer than 200 targets, Fuzzy Q-charging may achieve an infinite network lifetime. Meanwhile, Q-charging, INMA, GSA, and no
charging attain lifetimes of at most 120 × 10³ s, 320 × 10³ s, and 11.3 × 10³ s. That is because the traffic load imposed on the sensors is small, leading to a small number of sensors that need to be charged. Fuzzy Q-charging favors the sensors with more essential roles in covering targets and transferring data to the base station. It can hence maintain the essential sensors' lifetime and ensure that all targets are monitored. The other algorithms do not concurrently consider the target coverage and connectivity constraints. Therefore, the essential sensors may not be charged in time, causing some targets to be unmonitored. Compared to Q-charging: when the number of targets is small (e.g., 100 and 150), both Q-charging and Fuzzy Q-charging can maintain the network lifetime indefinitely. When the number of targets increases, the network lifetime achieved by the two algorithms decreases gradually. At 200 targets, Fuzzy Q-charging shows clear superiority. Their performance gap narrows as the number of targets decreases.

5.2.3 Impacts of the packet generation frequency

Figure 5.5 shows the impact of the packet generation probability on the network lifetime. In this experiment, the numbers of sensors and targets are set to 300 and 200, respectively. In all algorithms, the network lifetime tends to decrease when the packet generation probability increases. When the probability is too large (i.e., more than 0.5), the energy consumption rate of all sensors (especially those in the base station's vicinity) becomes high. Therefore, the sensors' batteries are exhausted quickly. In such a critical case, the difference between the algorithms is minor. We can see the improvement of Fuzzy Q-charging over the existing algorithms clearly
under the condition of a small packet generation probability.

Figure 5.4: Network lifetime vs. the number of targets

Figure 5.5: Network lifetime vs. the packet generation frequency

When the probability is smaller than 0.5, Fuzzy Q-charging's network lifetime is 1.4 times longer than Q-charging's, 6.4 times longer than INMA's, 4.9 times longer than GSA's, and 32.7 times longer than the no-charging scheme's. The performance gaps between Fuzzy Q-charging and the other algorithms decrease when the packet generation probability increases. Even when the probability is 0.7, the network lifetime achieved by Fuzzy Q-charging is 1.2, 1.5, 1.4, and 4.0 times longer than Q-charging's, INMA's, GSA's, and no charging's, respectively. In summary, we can conclude that Fuzzy Q-charging outperforms the existing algorithms. Moreover, the performance gaps between Fuzzy Q-charging and the others increase when the number of sensors increases, the number of targets decreases, or the packet generation probability decreases.

Figure 5.6: Comparison of non-monitored targets over time

5.2.4 Non-monitored targets and dead sensors over time

We present the number of non-monitored targets and the number of dead sensors over time caused by the different algorithms in Fig. 5.6 and Fig. 5.7, respectively. In Fig. 5.7, as time elapses, the number of sensors exhausting their energy and becoming dead nodes increases. Accordingly, more targets become non-monitored, as shown in Fig. 5.6. Fuzzy Q-charging outperforms the other algorithms on both metrics. There is a huge gap between the performance of Fuzzy Q-charging and the others in Fig. 5.6. Fuzzy Q-charging, with its better charging strategy, slows down the increase of non-monitored targets over time. Another interesting observation is that while the gaps between the number of dead
sensors caused by INMA and GSA are relatively small (Fig. 5.7), the gaps concerning the number of non-monitored targets are huge (Fig. 5.6). The reason is that INMA and GSA do not consider the target coverage and connectivity constraints. Therefore, the next charging location is not optimized to prioritize the sensors with essential roles. Those sensors may die under INMA and GSA, leaving some targets non-monitored. Meanwhile, in Fuzzy Q-charging, the charging location determination algorithm can identify the sensor nodes with a specific priority. Therefore, the dead sensors caused by Fuzzy Q-charging are the less important ones. In many cases, these dead nodes may not affect, or have only minor impacts on, the monitored targets.

Figure 5.7: Comparison of dead sensors over time

Conclusion and Future Work

This thesis addresses the optimization of the MC's charging schedule in WRSNs, considering the target coverage and connectivity constraints. Unlike the existing approaches, ours takes into account both the charging location and the charging time in the newly proposed Fuzzy Q-charging. Fuzzy Q-charging has an optimal charging time determination algorithm that relies on Fuzzy logic to adjust the charging energy level dynamically. The algorithm is applied at every charging location to maximize the number of alive sensors. Moreover, Fuzzy Q-charging uses Q-learning in an optimal charging scheme to maximize the number of monitored targets. We have extensively evaluated Fuzzy Q-charging in comparison with the previous charging schemes in WRSNs. The evaluation results show that Fuzzy Q-charging outperforms the others. Specifically, Fuzzy Q-charging prolongs the network lifetime indefinitely under certain conditions on the numbers of targets and sensors, while the other algorithms cannot. In other cases, Fuzzy Q-charging extends the network lifetime by 7.3 times on average, and 24.5 times in the best
case, compared to the existing algorithms. In the future, we plan to extend this work to handle WRSNs with multiple mobile chargers.

Publications

Parts of this thesis have been submitted and accepted as papers at the following conferences and journals:

• Nguyen, P.L.; La, V.Q.; Nguyen, A.D.; Nguyen, T.H.; Nguyen, K. An On-Demand Charging for Connected Target Coverage in WRSNs Using Fuzzy Logic and Q-Learning. Sensors 2021, 21, 5520. https://doi.org/10.3390/s21165520
• L. Van Quan, P. L. Nguyen, T.-H. Nguyen and K. Nguyen, "Q-learning-based, Optimized On-demand Charging Algorithm in WRSN," 2020 IEEE 19th International Symposium on Network Computing and Applications (NCA), 2020, pp. 1-8, doi: 10.1109/NCA51143.2020.9306695.
• La Van Quan, Minh Hieu Nguyen, Thanh Hung Nguyen, Kien Nguyen, and Phi Le Nguyen. 2022. On the Global Maximization of Network Lifetime in Wireless Rechargeable Sensor Networks. ACM Trans. Sen. Netw. Just Accepted (January 2022). DOI: https://doi.org/10.1145/3510423
• L. Van Quan, T. Hung Nguyen and P. Le Nguyen, "Extending Network Lifetime by Exploiting Wireless Charging in WSN," 2020 RIVF International Conference on Computing and Communication Technologies (RIVF), 2020, pp. 1-6, doi: 10.1109/RIVF48685.2020.9140727.
• La Van Quan, Minh Hieu Nguyen, Nguyen Phi Le, Le Van An, Huynh Thi Thanh Binh, Thanh Hung Nguyen, and Yusheng Ji. Joint Optimization of Charging Time and Charging Path for Network Lifetime Maximization in WRSN. IEEE Access. Just submitted.

Bibliography

[1] G. Jiang, S. Lam, Y. Sun, L. Tu, and J. Wu. Joint charging tour planning and depot positioning for wireless sensor networks using mobile chargers. IEEE/ACM Trans. Netw., 25(4):2250–2266, Aug. 2017.
[2] C. Lin, J. Zhou, C. Guo, H. Song, G. Wu, and M. S. Obaidat. TSCA: A temporal-spatial real-time charging scheduling algorithm for on-demand architecture in wireless rechargeable sensor networks. IEEE Trans. Mobile Comput., 17(1):211–224, Jan. 2018.
[3] R. M. Al-Kiyumi, C. H. Foh, S. Vural, P. Chatzimisios, and R. Tafazolli. Fuzzy
logic-based routing algorithm for lifetime enhancement in heterogeneous wireless sensor networks. IEEE Trans. Green Commun. Netw., 2(2):517–532, 2018.
[4] G. Han, X. Yang, L. Liu, M. Guizani, and W. Zhang. A disaster management-oriented path planning for mobile anchor node-based localization in wireless sensor networks. IEEE Trans. Emerg. Topics Comput., pages 1–1, 2017.
[5] Tamoghna Ojha, Sudip Misra, and Narendra Singh Raghuwanshi. Wireless sensor networks for agriculture: The state-of-the-art in practice and future challenges. Comput. Electron. Agric., 118:66–84, 2015.
[6] Phi Le Nguyen, Kien Nguyen, Huy Vu, and Yusheng Ji. TELPAC: A time and energy efficient protocol for locating and patching coverage holes in WSNs. J. Netw. Comput. Appl., 147, 2019.
[7] P. Le Nguyen, Y. Ji, K. Le, and T. Nguyen. Load balanced and constant stretch routing in the vicinity of holes in WSNs. In Proc. IEEE CCNC, pages 1–6, 2018.
[8] N. T. Hanh, P. Le Nguyen, P. T. Tuyen, H. T. T. Binh, E. Kurniawan, and Y. Ji. Node placement for target coverage and network connectivity in WSNs with multiple sinks. In Proc. IEEE CCNC, pages 1–6, 2018.
[9] Z. Lyu, Z. Wei, J. Pan, H. Chen, C. Xia, J. Han, and L. Shi. Periodic charging planning for a mobile WCE in wireless rechargeable sensor networks based on hybrid PSO and GA algorithm. Appl. Soft Comput., 75:388–403, 2019.
[10] Y. Ma, W. Liang, and W. Xu. Charging utility maximization in wireless rechargeable sensor networks by charging multiple sensors simultaneously. IEEE/ACM Trans. Netw., 26(4):1591–1604, Aug. 2018.
[11] W. Xu, W. Liang, H. Kan, Y. Xu, and X. Zhang. Minimizing the longest charge delay of multiple mobile chargers for wireless rechargeable sensor networks by charging multiple sensors simultaneously. In Proc. IEEE ICDCS, pages 881–890, 2019.
[12] C. Lin, Y. Zhou, F. Ma, J. Deng, L. Wang, and G. Wu. Minimizing charging delay for directional charging in wireless rechargeable sensor networks. In Proc. IEEE INFOCOM, pages 1819–1827, 2019.
[13] Y. Feng, N. Liu, F. Wang, Q. Qian, and X. Li. Starvation avoidance mobile
energy replenishment for wireless rechargeable sensor networks. In Proc. IEEE ICC, pages 1–6, 2016.
[14] C. Lin, Y. Sun, K. Wang, Z. Chen, B. Xu, and G. Wu. Double warning thresholds for preemptive charging scheduling in wireless rechargeable sensor networks. Comput. Netw., 148:72–87, 2019.
[15] Abhinav Tomar, Lalatendu Muduli, and Prasanta K. Jana. A fuzzy logic-based on-demand charging algorithm for wireless rechargeable sensor networks with multiple chargers. IEEE Trans. Mobile Comput., 1233(c):1–1, 2020.
[16] X. Cao, W. Xu, X. Liu, J. Peng, and T. Liu. A deep reinforcement learning-based on-demand charging algorithm for wireless rechargeable sensor networks. Ad Hoc Netw., 2020.
[17] J. Zhu, Y. Feng, M. Liu, G. Chen, and Y. Huang. Adaptive online mobile charging for node failure avoidance in wireless rechargeable sensor networks. Comput. Netw., 126:28–37, 2018.
[18] Amar Kaswan, Abhinav Tomar, and Prasanta K. Jana. An efficient scheduling scheme for mobile charger in on-demand wireless rechargeable sensor networks. J. Netw. Comput. Appl., 114:123–134, 2018.
[19] V. Krishnaswamy and S. S. Manvi. Fuzzy and PSO based clustering scheme in underwater acoustic sensor networks using energy and distance parameters. Wireless Pers. Commun., 108:1529–1546, 2019.
[20] P. Kofinas, A. I. Dounis, and G. A. Vouros. Fuzzy Q-learning for multi-agent decentralized energy management in microgrids. Appl. Energy, 219:53–67, 2018.
[21] W. Xu, W. Liang, X. Jia, and Z. Xu. Maximizing sensor lifetime in a rechargeable sensor network via partial energy charging on sensors. In Proc. IEEE SECON, pages 1–9, 2016.
[22] R. Krishnamurthi and M. Goyal. Hybrid neuro-fuzzy method for data analysis of brain activity using EEG signals. Soft Computing and Signal Processing, 900, 2019.
[23] S. K. Behera, L. Jena, A. K. Rath, and P. K. Sethy. Disease classification and grading of orange using machine learning and fuzzy logic. In Proc. ICCSP, pages 0678–0682, 2018.
[24] C. Yang, Y. Jiang, J. Na, Z. Li, L. Cheng, and C. Su. Finite-time convergence adaptive fuzzy control for
dual-arm robot with unknown kinematics and dynamics IEEE Trans Fuzzy Syst, 27(3):574–588, 2019 [25] Oscar Castillo and Leticia Amador-Angulo A generalized type-2 fuzzy logic approach for dynamic parameter adaptation in bee colony optimization applied to fuzzy controller design Inf Sci., 460-461:476 – 496, 2018 [26] C S Yang, C K Kim, J Moon, S Park, and C G Kang Channel access scheme with alignment reference interval adaptation (aria) for frequency reuse in unlicensed band lte: Fuzzy q-learning approach IEEE Access, 6:26438–26451, 2018 [27] A Jain and A.K Goel Energy efficient fuzzy routing protocol for wireless sensor networks Wireless Pers Commun., 110:1459–1474, 2020 [28] N Ghosh, I Banerjee, and R.S Sherratt On-demand fuzzy clustering and antcolony optimisation based mobile data collection in wireless sensor network Wireless Netw., 25:1829–1845, 2019 [29] Runze Wan, Naixue Xiong, Qinghui Hu, Haijun Wang, and Jun Shang Similarity-aware data aggregation using fuzzy c-means approach for wireless sensor networks EURASIP J Wirel Commun Netw., 59, 2019 [30] N.K Avin and R Sharma A fuzzy reinforcement learning approach to thermal unit commitment problem Neural Comput & Applic., 31:737–750, 2019 [31] L Van Quan, P L Nguyen, T H Nguyen, and K Nguyen Q-learning-based, optimized on-demand charging algorithm in wrsn In Proc IEEE NCA, pages 1–8, 2020 [32] S He, J Chen, F Jiang, D K Y Yau, G Xing, and Y Sun Energy provisioning in wireless rechargeable sensor networks IEEE Trans Mobile Comput., 12 (10):1931–1942, Oct 2013 36 [33] L Fu, P Cheng, Y Gu, J Chen, and T He Optimal charging in wireless rechargeable sensor networks IEEE Trans Veh Technol., 65(1):278–291, Jan 2016 [34] Chuanxin Zhao, Hengjing Zhang, Fulong Chen, Siguang Chen, Changzhi Wu, and Taochun Wang Spatiotemporal charging scheduling in wireless rechargeable sensor networks Comput Commun., 152:155 – 170, 2020 [35] J Zhu, Y Feng, M Liu, G Chen, and Y Huang Adaptive online mobile charging for node failure 
avoidance in wireless rechargeable sensor networks Comput Netw., 126:28–37, 2018 [36] J Luo and J Hubaux Joint sink mobility and routing to maximize the lifetime of wireless sensor networks: The case of constrained mobility IEEE/ACM Trans Netw., 18(3):871–884, June 2010 [37] ILOG CPLEX Optimization Studio ILOG CPLEX Optimization Studio https://www.ibm.com/products/ilog-cplex-optimization-studio [38] L Fu, L He, P Cheng, Y Gu, J Pan, and J Chen Esync: Energy synchronized mobile charging in rechargeable wireless sensor networks IEEE Trans Veh Technol., 65(9):7415–7431, Sep 2016 [39] Jinqi Zhu, Yong Feng, Ming Liu, Guihai Chen, and Yongxin Huang Adaptive online mobile charging for node failure avoidance in wireless rechargeable sensor networks Comput Commun., 126:28 – 37, 2018 [40] C Lin, Y Zhou, H Dai, J Deng, and G Wu Mpf: Prolonging network lifetime of wireless rechargeable sensor networks by mixing partial charge and full charge In Proc of IEEE Annu Int Conf on Sensing, Communication, and Networking (SECON), pages 1–9, June 2018 [41] W Xu, W Liang, X Jia, Z Xu, Z Li, and Y Liu Maximizing sensor lifetime with the minimal service cost of a mobile charger in wireless sensor networks IEEE Trans Mobile Comput., 17(11):2564–2577, Nov 2018 [42] W Liang, Z Xu, W Xu, J Shi, G Mao, and S K Das Approximation algorithms for charging reward maximization in rechargeable sensor networks via a mobile charger IEEE/ACM Trans Netw., 25(5):3161–3174, Oct 2017 [43] W Xu, W Liang, X Jia, and Z Xu Maximizing sensor lifetime in a rechargeable sensor network via partial energy charging on sensors In Proc of IEEE Annu Int Conf on Sensing, Communication, and Networking (SECON), pages 1–9, June 2016 37 [44] A Aggarwal and J Park Notes on searching in multidimensional monotone arrays In Proc of Annu Symp on Foundations of Computer Science, pages 497–512, 1988 [45] Nadeem Ahmed, Salil S Kanhere, and Sanjay Jha The holes problem in wireless sensor networks: A survey Mob Comput Commun Rev., 
9(2):4–18, April 2005 [46] Binay K Bhattacharya and Asish Mukhopadhyay On the minimum perimeter triangle enclosing a convex polygon Lect Notes Comput Sci., pages 8496 Springer, 2002 [47] Alexander Krăoller, Sándor P Fekete, Dennis Pfisterer, and Stefan Fischer Deterministic boundary recognition and topology extraction for large sensor networks In Proc of Annu ACM-SIAM Symp on Discrete Algorithm, SODA ’06, pages 1000–1009, 2006 ISBN 0-89871-605-5 [48] Yue Wang, Jie Gao, and Joseph S.B Mitchell Boundary recognition in sensor networks by topological methods In Proc of MOBICOM’06, MobiCom ’06, pages 122–133 ACM, 2006 ISBN 1-59593-286-0 [49] I F Akyildiz, W Su, Y Sankarasubramaniam, and E Cayirci Wireless sensor networks: a survey Computer Networks, 38:393–422, 2002 [50] C F Garcia-Hernandez, P H Ibarguengoytia-Gonzalez, J Garcia-Hernandez, and J A Perez-Diaz Wireless Sensor Networks and Applications: a Survey IJCSNS Int Journal of Computer Science and Network Security, 7(3):264–273, 2007 38 ... coverage and connectivity constraints in charging schedule optimization This thesis uniquely considers the optimization of charging time and charging location simultaneously I use Fuzzy logic and Q- learning. .. coverage and connectivity Declaration of Authorship I hereby declare that my thesis, titled ? ?An Efficient, On- demand Charging for WRSNs Using Fuzzy Logic and Q- Learning? ??, is the work of myself and my... Science and Artificial Intelligence Topic An Efficient, On- demand Charging for WRSNs Using Fuzzy Logic and Q- Learning Contributions • Propose a Fuzzy logic- based algorithm that determines the
