Handbook of Wireless Networks and Mobile Computing, Part 10

DATA MANAGEMENT IN WIRELESS MOBILE ENVIRONMENTS

26.6 PERFORMANCE ANALYSIS

We develop a simple model to analyze the performance of the proposed cache management scheme. Specifically, we want to estimate the miss probability and the mean query delay for the proposed scheme. For the purposes of analysis, we consider the performance in a single cell (as mobility is assumed to be handled transparently) with one MSS and N mobile hosts. We make the following assumptions:

• The total number of data items is M, each of size b_a bits.
• The queries generated by a sleeping MH (i.e., when the MH is disconnected from the MSS) are lost.
• A single wireless channel of bandwidth C is assumed for all transmissions taking place in the cell. All messages are queued to access the wireless channel and are serviced according to the FCFS (first come, first served) scheduling policy.
• Queries are of size b_q bits and invalidations are of size b_i bits.
• Software overheads are ignored.
• Modeling the query-update pattern. The time between updates to any data item is assumed to follow an exponential distribution with mean 1/μ. Each MH generates queries according to a Poisson process with mean rate λ. These queries are uniformly distributed over all data items in the database. The query-update model is shown in Figure 26.6.
• Modeling the sleep pattern of an MH. An MH alternates between sleep and awake modes. The sleep/wake-up pattern of an MH is modeled using two parameters (see Figure 26.7): (1) the fraction s, 0 ≤ s ≤ 1, of the total time spent by the MH in sleep mode; (2) the frequency ω at which it changes state (sleeping or awake). We consider an exponentially distributed interval of time t with mean 1/ω; the MH is in sleep mode for time st and in awake mode for time (1 - s)t. By varying the value of ω, different frequencies of state change can be obtained for the same total sleep time.

Figure 26.6 Modeling query-update pattern.
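The sleep model above lends itself to a quick simulation check. The sketch below is not from the chapter; the function name and the placement of the awake interval at the start of each cycle are our own simplifying assumptions. It draws exponential cycles of mean 1/ω, keeps a fraction 1 - s of each cycle awake, and counts the Poisson queries that survive; the surviving query rate should approach (1 - s)λ.

```python
import random

def effective_query_rate(lam, s, omega, horizon=200_000.0, seed=1):
    """Monte Carlo sketch of the sleep model: cycles ~ Exp(mean 1/omega),
    a fraction s of each cycle spent asleep. Queries arrive as a Poisson
    process of rate lam; queries generated while asleep are lost.
    Returns the observed rate of surviving queries."""
    rng = random.Random(seed)
    t, kept = 0.0, 0
    while t < horizon:
        cycle = rng.expovariate(omega)      # cycle length, mean 1/omega
        awake = (1.0 - s) * cycle           # awake portion of the cycle
        u = t
        while True:                         # Poisson arrivals in this cycle
            u += rng.expovariate(lam)
            if u >= t + cycle:
                break
            if u < t + awake:               # arrival fell in the awake part
                kept += 1
        t += cycle
    return kept / t
```

For rates such as λ = 0.5 and s = 0.2 the observed rate settles near (1 - s)λ = 0.4, matching the approximation used in the analysis.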
We want to estimate the average hit ratio at the local cache of an MH, i.e., the percentage of queries generated at an MH that would be satisfied by the local cache, and the average delay experienced in answering a query.

26.6.1 Estimation of Hit Ratio

Since the queries generated by a sleeping MH are lost, the effective rate of query generation by an MH, λ_e, can be approximated as

  λ_e = (1 - s)λ

Since queries are uniformly distributed over the M data items, the rate λ_x at which queries are generated for a given data item x is

  λ_x = λ_e/M = (1 - s)λ/M

A query made for a specific data item x by an MH would be a miss in the local cache (and would require an up-link request) in the case of either of the following two events (consider the time interval t between the current query for x and the immediately preceding query for x by the MH):

1. Event 1. During the interval t, data item x has been invalidated at least once [see Figure 26.8(a)].

2. Event 2. Data item x has not been invalidated during the interval t; the MH has gone to sleep at least once during the interval t, it woke up the last time at time t - t_1, and the current query is the very first one after waking up from its last sleep [Figure 26.8(b)]. Note that the first query generated by an MH after waking up needs an up-link request to the MSS regardless of whether the required data item is in the local cache.

We compute the probabilities of Event 1 and Event 2 as follows.

Figure 26.7 Modeling sleep behavior of an MH.
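The two rates can be made concrete with a small helper (the function name is our own; the example values use the defaults that appear later in Table 26.3):

```python
def query_rates(lam, s, M):
    """Effective query rate lambda_e = (1 - s)*lam (queries generated
    while asleep are lost) and per-item rate lambda_x = lambda_e / M
    under the uniform query distribution."""
    lam_e = (1.0 - s) * lam
    return lam_e, lam_e / M

# With lam = 1/120 queries/s, s = 20%, and M = 100 data items,
# lambda_e = 1/150 queries/s and lambda_x = 1/15000 queries/s per item.
```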
• Probability of a miss due to the absence of a valid data item in the cache:

  P(Event 1) = ∫_0^∞ λ_x e^(-λ_x t) (1 - e^(-μt)) dt = μ/(λ_x + μ) = Mμ/[(1 - s)λ + Mμ]

• Probability of a miss due to disconnection:

  P(Event 2) = ∫_0^∞ P(no invalidation and a query for x during time t) × P(the query for x is the first after wake-up) dt
             = ∫_0^∞ λ_x e^(-λ_x t) e^(-μt) α dt
             = [ωλ_x / (λ_e(ω + λ_e))] [λ_e/(μ + λ_x) - (ω + λ_e)/(μ + λ_x + ω) + ω/(μ + λ_x + λ_e + ω)]

  where α denotes P(the query for x is the first after wake-up).

Figure 26.8 Two mutually exclusive events when up-link request messages are needed.

The probability P_miss of a query requiring an up-link request is the sum of the probabilities of Event 1 and Event 2,

  P_miss = P(Event 1) + P(Event 2)

and the probability of a hit is

  P_hit = 1 - P_miss

26.6.2 Estimation of Mean Query Delay

We now estimate the mean query delay T_delay. Recall that a single wireless channel of bandwidth C is assumed for all transmissions taking place in the cell, that all messages are queued to access the wireless channel and serviced according to the FCFS scheduling policy, and that queries are of size b_q bits and invalidations of size b_i bits. In order to determine T_delay we do the following:

• Model the servicing of up-link queries as an M/D/1 queue under the assumption that there is a dedicated up-link channel of bandwidth C. The query arrival rate λ_q is estimated to be N·P_miss·λ_e, since there are N MHs in the cell and each MH generates up-link queries at rate P_miss·λ_e. The query service rate μ_q is then estimated to be C/(b_q + b_a).

• Model the servicing of invalidations on the down-link channel as an M/D/1 queue under the assumption that there is a dedicated down-link channel of bandwidth C. The average invalidation arrival rate λ_i is estimated to be Mμ, and the invalidation service rate μ_i is then estimated to be C/b_i.
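Taken together, the closed forms can be evaluated numerically. In the sketch below, P(Event 1) follows the expression above exactly; the Event 2 expression reflects our reading of the typographically damaged formula, so treat it as an approximation rather than the book's exact algebra.

```python
def p_miss(lam, s, M, mu, omega):
    """Miss probability P_miss = P(Event 1) + P(Event 2) from the
    closed-form expressions of Section 26.6.1 (Event 2 term as
    reconstructed; a sketch, not a verified transcription)."""
    lam_e = (1.0 - s) * lam          # effective query rate
    lam_x = lam_e / M                # per-item query rate
    p1 = mu / (lam_x + mu)           # invalidated during the gap
    p2 = (omega * lam_x) / (lam_e * (omega + lam_e)) * (
        lam_e / (mu + lam_x)
        - (omega + lam_e) / (mu + lam_x + omega)
        + omega / (mu + lam_x + lam_e + omega)
    )                                # first query after a wake-up
    return p1 + p2
```

With the Table 26.3 defaults, the miss probability grows with the update rate μ, as expected: a faster-changing database invalidates cached items more often.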
• In order to model a single wireless channel of bandwidth C shared by up-link and down-link traffic, and to estimate the mean up-link query delay T_q on this shared channel, we combine the up-link and down-link M/D/1 models. Since we are interested only in the query delay, the invalidations on the channel merely add to the delay in servicing the queries. We therefore take the service rate of the channel for both types of traffic to be μ_q (the service rate for queries) and rescale the arrival rate of invalidations so that they impose the same load at this service rate; the effective arrival rate of invalidations is taken as λ̃_i = (μ_q/μ_i)λ_i. The combined M/D/1 queue is shown in Figure 26.9. Using the standard queuing theory result for an M/D/1 queue, the average delay experienced by a query going up-link is

  T_q = [2μ_q - (λ_q + λ̃_i)] / (2μ_q[μ_q - (λ_q + λ̃_i)])

• Queries that are cache hits experience no delay. Thus, the average delay experienced by any query in the system is

  T_delay = P_miss · T_q

26.7 PERFORMANCE COMPARISON

26.7.1 Experimental Setup

In this section, we present the simulated performance results of the AS cache invalidation scheme. The simulation results are for a single cell with a base station and a varying number of mobile hosts. The purpose of these experiments was twofold: (1) to investigate how closely the experimental results coincide with the values of the performance metrics (delay, up-link requests) predicted by the simple model; (2) to investigate how efficiently the AS scheme handles disconnection in a mobile environment; for the latter purpose, the AS scheme was experimentally compared with the Ideal Scheme. The default parameters used for each scenario are shown in Table 26.3. Delay is defined as the time it takes to answer a query; the delay is taken to be zero when there is a local cache hit.
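The delay estimate can be assembled as follows. This is a sketch under our own assumptions: Table 26.3 lists message sizes in bytes while C is in bits/s, hence the factor of 8, and the function and parameter names are ours.

```python
def mean_query_delay(N, M, lam, s, mu, C, b_a, b_q, b_i, P_miss):
    """Mean query delay per Section 26.6.2: up-link queries and
    down-link invalidations share one channel, modeled as a combined
    M/D/1 queue with service rate mu_q. Invalidation arrivals are
    rescaled so they impose the same load at that service rate."""
    lam_e = (1.0 - s) * lam
    lam_q = N * P_miss * lam_e           # up-link query arrival rate
    mu_q = C / (8 * (b_q + b_a))         # queries/s; sizes are in bytes
    mu_i = C / (8 * b_i)                 # invalidations/s
    lam_i = M * mu                       # invalidation arrival rate
    lam_i_eff = (mu_q / mu_i) * lam_i    # load-preserving rescaling
    rho = lam_q + lam_i_eff
    assert rho < mu_q, "combined queue must be stable"
    # M/D/1 mean sojourn time with total arrival rate rho
    T_q = (2 * mu_q - rho) / (2 * mu_q * (mu_q - rho))
    return P_miss * T_q
```

As a sanity check, when the load vanishes the up-link delay reduces to a single deterministic service time, 1/μ_q = 8(b_q + b_a)/C.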
Figure 26.9 Combining the up-link and down-link M/D/1 queues.

TABLE 26.3 Default parameters for AS and AT/TS

  Parameter   Value
  N           25
  M           100
  λ           1/120 queries/s
  μ           10^-4 updates/s (low), 1/1800 updates/s (high)
  b_a         1200 bytes
  b_q         64 bytes
  b_i         64 bytes
  C           10,000 bits/s
  s           20%
  ω           1800 sec
  L           10 sec
  w (TS)      100L

26.7.2 Experimental Results

26.7.2.1 Comparison at Low Data Invalidation Rate

The first scenario studies the performance of the TS and AS schemes when the data changes infrequently. The value of μ used is 10^-4 updates/s. Delay and the number of up-links per query were studied by varying s. The TS scheme is used for this comparison because it performs better than the AT scheme at low invalidation rates.

Delay. Figure 26.10(a) shows the variation in average delay with the sleep characteristics of the mobile hosts. For the TS scheme, delay has two components: the query's waiting time to be serviced by the MSS and the time it must wait for the next invalidation report to be broadcast (to ensure that the data item has not been invalidated since the last update was received). Only the network delay component is shown; if the time to wait for the next invalidation report is added, an additional delay of L/2 is seen for AT/TS. A significant improvement in total delay is seen when the AS strategy is used. The network delays are higher for TS, as queries tend to go up-link in bursts, causing congestion in the network; this is a consequence of waiting for the next invalidation report to arrive before answering the query. The delay for the TS scheme increases with s, as the cache has to be discarded more often, reducing the hit rate and generating more up-link queries. However, the total query rate falls as the sleep rate increases; at very high sleep rates this effect dominates and the delay decreases.

Up-links. Figure 26.10(b) shows the variation in the number of up-links per query with the sleep characteristics of the mobile hosts.
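For reference, the channel service rates implied by the defaults in Table 26.3 can be computed directly. The byte-to-bit conversion is our reading of the table's units, and the names are our own:

```python
# Table 26.3 defaults: message sizes in bytes, channel bandwidth in bits/s
PARAMS = dict(N=25, M=100, lam=1 / 120, b_a=1200, b_q=64, b_i=64,
              C=10_000, s=0.20)

def service_rates(p):
    """Service rates for the two M/D/1 models: a query plus its reply
    occupies b_q + b_a bytes on the channel, an invalidation b_i bytes."""
    mu_q = p["C"] / (8 * (p["b_q"] + p["b_a"]))   # roughly 0.99 queries/s
    mu_i = p["C"] / (8 * p["b_i"])                # roughly 19.5 invalidations/s
    return mu_q, mu_i
```

The invalidation service rate is about twenty times the query service rate, which is why the load-preserving rescaling of invalidation arrivals in Section 26.6.2 shrinks them considerably.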
AS requires an up-link for the first query after every sleep interval. If the item queried is already in the cache and has not been invalidated, this up-link is additional to those of the ideal scheme. Thus, AS has a marginally higher number of up-links per query than the ideal scheme, and the difference grows as the sleep rate increases. The shape of the TS curve is explained by the fact that TS has to discard its cache after an extended sleep, which results in low hit rates and more up-link queries. The effect is not so dominant when the sleep rate is low, but it increases as the average sleep percentage increases.

Figure 26.10 Delay and up-links at low invalidation rate: (a) delay; (b) up-links.

26.7.2.2 Comparison at High Data Invalidation Rate

In this section, we present results for a scenario in which the data in the network changes frequently; the data change rate μ is assumed to be 1/1800 updates/s, and all other parameters are as in Table 26.3. The AS scheme is compared only to the AT scheme, since the AT scheme performs better than the TS scheme at high invalidation rates; the AT scheme wastes less bandwidth than TS by not repeating the same invalidation report multiple times.

Delay. Figure 26.11(a) shows a plot of average delay against s. The network delays are slightly higher than in Figure 26.10, owing to the additional invalidation reports sent in the AS scheme and the increase in the size of each report for the AT scheme. The decrease at very high sleep rates is due to the number of queries falling as most hosts are sleeping. There is an almost seven- to eightfold improvement in overall delay (network delay plus the wait for the next report) when the AS strategy is compared with AT, similar to Figure 26.10.

Up-links. Figure 26.11(b) shows the variation of the number of up-links per query with the sleep rate. As the invalidation rate is high, the cache is less effective and more queries result in up-links than in the scenario of Section 26.7.2.1. In the AT/TS strategy, the MH must wait for the next invalidation report to arrive before it can answer a query. Since the time window for answering a query is larger, there is a greater probability that the item will have been invalidated by then; the hit rate of the cache is therefore poorer. This, combined with the drop in hit rate due to discarding the cache, contributes to the number of up-links for AT. AS performs marginally worse than the ideal scheme, as in the case of the low invalidation rate.

Figure 26.11 Delay and up-links at high invalidation rate: (a) delay; (b) up-links.

26.7.2.3 Comparison with the Ideal Scheme

Figures 26.12(a) and (b) show the delay and up-links per query, respectively, at different query rates for the ideal and AS schemes. As the number of queries per second decreases, both the number of up-links per query and the average delay to answer a query increase. This is a consequence of the decrease in hit rate: with a greater interval between queries, the probability of an invalidation occurring between successive queries increases, which results in more up-links and higher delay. As the sleep rate increases, the probability of an invalidation between successive queries increases even further. In all cases, the plots for AS closely follow those of the ideal scheme, with the gap increasing as the sleep rate increases.

26.7.2.4 Model Validation

Figure 26.13 compares the miss rate for the proposed AS scheme obtained through simulation with that predicted by the mathematical model (MM). Figure 26.14 makes the same comparison for the delay (simulation results are denoted Sim in the plots). The model captures the behavior very well, and the results are closer at low invalidation rates.
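The validation idea, closed form against simulation, can be illustrated for the Event 1 component alone, which does not depend on the sleep process. This Monte Carlo check is our own illustration, not the chapter's simulator:

```python
import random

def p_event1_closed(lam_x, mu):
    """Closed form from Section 26.6.1: P(Event 1) = mu / (lam_x + mu)."""
    return mu / (lam_x + mu)

def p_event1_mc(lam_x, mu, n=200_000, seed=7):
    """Monte Carlo estimate of Event 1: draw the inter-query gap for
    item x as Exp(lam_x) and, by memorylessness of updates, the time
    to the next invalidation as Exp(mu); Event 1 occurs when the
    invalidation falls inside the gap."""
    rng = random.Random(seed)
    hits = sum(rng.expovariate(mu) < rng.expovariate(lam_x)
               for _ in range(n))
    return hits / n
```

With λ_x = μ the two exponential races are symmetric, so both the closed form and the simulation give a miss probability of one half.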
This is because of the heuristic used in the modeling for estimating the equivalent arrival rate of invalidation messages.

26.8 SUMMARY

Cache maintenance is one of the most important issues in providing data to mobile applications. We have described a cache maintenance (invalidation) strategy called AS for a distributed system that is predisposed to frequent disconnections; such disconnections happen in a mobile wireless environment for various reasons. The proposed algorithm minimizes overhead, preserving bandwidth and reducing both the number of up-link requests and the average latency. State information about the data items in the local cache of an MH is maintained at its home MSS; by sending asynchronous call-backs and buffering them until implicit acknowledgments are received, the cache continues to be valid even after the MH is temporarily disconnected from the network. The performance analysis and simulation results show the benefits in terms of bandwidth savings (a reduction in up-link queries) and data access latency compared with other caching schemes that provide data currency guarantees similar to those of the AS scheme.

Figure 26.12 Delay and up-links compared with the ideal scheme at low invalidation rate: (a) delay; (b) up-links.

Handbook of Wireless Networks and Mobile Computing, Edited by Ivan Stojmenovic. Copyright © 2002 John Wiley & Sons, Inc. ISBNs: 0-471-41902-8 (Paper); 0-471-22456-1 (Electronic)

CHAPTER 27

Mobile, Distributed, and Pervasive Computing
… architectures and protocols are public documents developed by neutral organizations. Key specifications are required to handle mobility, service discovery, and distributed computing.

In this chapter, we review the main characteristics of applications of pervasive computing in Section 27.2, discuss the architecture of pervasive computing software in Section 27.3 …

… digital assistants (PDAs) and cell phones are the first widely available and widely used pervasive computing devices. Next-generation devices are being designed; several of them will be portable and even wearable, such as glass-embedded displays, watch PDAs, and ring mice. Several pervasive computing devices and users are wireless and mobile. Devices and applications are continuously running and always available. … From an architectural point of view, …

… could capture and store locations, times, and descriptions of payments made by a traveler. Back home, the traveler could use the recorded events to generate an expense report.

27.3 ARCHITECTURE OF PERVASIVE COMPUTING SOFTWARE

The engineering of pervasive computing software is discussed in [1, 2]. The software of pervasive computing applications is subject to the support of everyday use and continuous execution …
… occurs, a signal is sent from the PDA to the proxy. The signal contains the identity of the event and the identity of the object in which the event occurred. The handler of the event is the proxy. The result translates into updates of the screen layout, prepared in the proxy and rendered on the PDA.

27.4 OPEN PROTOCOLS

Open protocols are required by pervasive computing …

… Attribute color, modeling support of color printing, is mandatory and has the default value false. Attribute speed specifies the speed of the printer, in pages per minute, and is optional. Attribute pagequeue is also optional and indicates the current number of pages in the queue of the printer. Interactions between DAs, SAs, and UAs are based on the following …
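To make the attribute-based discovery pattern concrete, here is a toy sketch of matching a UA's requirements against an SA's advertised attributes, using the printer attributes discussed in the text. The helper and the predicate encoding are hypothetical illustrations; real SLP expresses such requirements as LDAPv3-style search filters defined by the protocol specification.

```python
def matches(advertised, required):
    """Toy service-discovery match (hypothetical helper, not SLP wire
    format): every required attribute must be advertised and must
    satisfy its predicate."""
    return all(name in advertised and pred(advertised[name])
               for name, pred in required.items())

# SA advertisement for the printer example in the text:
# color is mandatory with default false; speed and pagequeue are optional.
printer = {"color": False, "speed": 12, "pagequeue": 3}

# UA requests: any printer doing at least 10 pages/minute,
# and a color printer doing at least 10 pages/minute.
fast = {"speed": lambda v: v >= 10}
color_fast = {"speed": lambda v: v >= 10, "color": lambda v: v}
```

Here `matches(printer, fast)` succeeds, while `matches(printer, color_fast)` fails because the printer's color attribute has its default value false.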
… limiting the network coverage of a request and the number of replies. Each scope is named. UAs and SAs can be members of several scopes; they can learn their scope name(s) by, for example, reading a configuration file.

27.4.1.3 Jini. The architecture of a Jini system consists of clients, lookup services, and service providers, which are analogous to the concepts of UAs, DAs, and SAs in SLP. As in SLP, Jini proposes …

MICHEL BARBEAU
School of Computer Science, Carleton University, Ottawa, Canada

27.1 INTRODUCTION

Pervasive computing aims at availability and invisibility. On the one hand, pervasive computing can be defined as the availability of software applications and information anywhere and anytime. On the other hand, pervasive computing also means that computers are … where, and why something was done, and can use that information as input to solve new tasks. Pervasive computing is characterized by a high degree of heterogeneity: devices and distributed components are from different vendors and sources. Support of mobility and distribution in such a context requires open distributed computing architectures and open protocols. Openness means that specifications of …
