Network Traffic and Performance Evaluation, Part 1

1 SELF-SIMILAR NETWORK TRAFFIC: AN OVERVIEW

Kihong Park
Network Systems Lab, Department of Computer Sciences, Purdue University, West Lafayette, IN 47907

Walter Willinger
Information Sciences Research Center, AT&T Labs-Research, Florham Park, NJ 07932

Self-Similar Network Traffic and Performance Evaluation, Edited by Kihong Park and Walter Willinger. Copyright (c) 2000 by John Wiley & Sons, Inc. Print ISBN 0-471-31974-0, Electronic ISBN 0-471-20644-X.

1.1 INTRODUCTION

1.1.1 Background

Since the seminal study of Leland, Taqqu, Willinger, and Wilson [41], which set the groundwork for considering self-similarity an important notion in the understanding of network traffic, including the modeling and analysis of network performance, an explosion of work has ensued investigating the multifaceted nature of this phenomenon.(1)

The long-held paradigm in the communication and performance communities has been that voice traffic and, by extension, data traffic are adequately described by certain Markovian models (e.g., Poisson), which are amenable to accurate analysis and efficient control. The first property stems from the well-developed field of Markovian analysis, which allows tight equilibrium bounds on performance variables, such as the waiting time in various queueing systems, to be found. This also forms a pillar of performance analysis from the queueing theory side [38]. The second feature is, in part, due to the simple correlation structure generated by Markovian sources, whose performance impact (for example, as affected by the likelihood of prolonged occurrence of "bad events" such as concentrated packet arrivals) is fundamentally well behaved. Specifically, if such processes are appropriately rescaled in time, the resulting coarsified processes rapidly lose dependence, taking on the properties of an independent and identically distributed (i.i.d.) sequence of random variables with its associated niceties. Principal among them is the exponential smallness of rare events, a key observation at the center of large deviations theory [70]. The behavior of a process under rescaling is an important consideration in performance analysis and control, since buffering and, to some extent, bandwidth provisioning can be viewed as operating on the rescaled process. The fact that Markovian systems admit this avenue of taming variability has helped shape the optimism permeating the late 1980s and early 1990s regarding the feasibility of achieving efficient traffic control for quality of service (QoS) provisioning. The discovery and, more importantly, succinct formulation and recognition that data traffic may not exhibit the hitherto accustomed scaling properties [41] has significantly influenced the networking landscape, necessitating a reexamination of some of its fundamental premises.

(1) For a nontechnical account of the discovery of the self-similar nature of network traffic, including parallel efforts and important follow-up work, we refer the reader to Willinger [71]. An extended list of references that includes works related to self-similar network traffic and performance modeling up to about 1995 can be found in the bibliographical guide [75].
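The rescaling argument is easy to reproduce numerically. The following is a minimal illustrative sketch (not taken from the works cited here; function names and parameters are arbitrary choices) that aggregates a memoryless Poisson packet-count series over progressively larger windows: the relative variability of the rescaled series shrinks like 1/sqrt(m) and its lag-1 correlation stays near zero, which is precisely the rapid loss of dependence described above. Self-similar traffic, introduced next, is the kind of input for which this smoothing under aggregation fails.

```python
import numpy as np

def aggregate(x, m):
    """Non-overlapping block sums of size m: the series 'rescaled' by a factor m."""
    n = (len(x) // m) * m
    return x[:n].reshape(-1, m).sum(axis=1)

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(1)
counts = rng.poisson(lam=10.0, size=2**18).astype(float)  # packets per unit time, memoryless source

for m in (1, 10, 100, 1000):
    y = aggregate(counts, m)
    # For Markovian input the coefficient of variation shrinks like 1/sqrt(m)
    # and correlations stay negligible at every scale.
    print(f"m={m:5d}  cv={y.std()/y.mean():.4f}  lag-1 corr={lag1_autocorr(y):+.4f}")
```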
1.1.2 What Is Self-Similarity?

Self-similarity and fractals are notions pioneered by Benoit B. Mandelbrot [47]. They describe the phenomenon where a certain property of an object, for example, a natural image, the convergent subdomain of certain dynamical systems, or a time series (the mathematical object of our interest), is preserved with respect to scaling in space and/or time. If an object is self-similar or fractal, its parts, when magnified, resemble, in a suitable sense, the shape of the whole. For example, the two-dimensional (2D) Cantor set living on A = [0,1] x [0,1] is obtained by starting with a solid or black unit square, scaling its size by 1/3, then placing four copies of the scaled solid square at the four corners of A. If the same process of scaling followed by translation is applied recursively to the resulting objects ad infinitum, the limit set thus reached defines the 2D Cantor set. This constructive process is illustrated in Fig. 1.1. The limiting object, defined as the infinite intersection of the iterates, has the property that if any of its corners are "blown up" suitably, then the shape of the zoomed-in part is similar to the shape of the whole; that is, it is self-similar. Of course, this is not too surprising since the constructive process, by its recursive action, endows the limiting object with the scale-invariance property.

[Fig. 1.1 Two-dimensional Cantor set.]

The one-dimensional (1D) Cantor set, for example, as obtained by projecting the 2D Cantor set onto the line, can be given an interpretation as a traffic series X_t in {0, 1}, call it "Cantor traffic," where X_t = 1 means that there is a packet transmission at time t. This is depicted in Fig. 1.2 (left). If the constructive process is terminated at iteration n >= 0, then the contiguous line segments of length 1/3^n may be interpreted as on periods or packet trains of duration 1/3^n, and the segments between successive on periods as off periods or absence of traffic activity. Nonuniform traffic intensities may be imparted by generalizing the constructive framework via the use of probability measures. For example, for the 1D Cantor set, instead of letting the left and right components after scaling have identical "mass," they may be assigned different masses, subject to the constraint that the total mass be preserved at each stage of the iterative construction. This modification corresponds to defining a probability measure µ on the Borel subsets of [0,1] and distributing the measure at each iteration nonuniformly left and right. Note that the classical Cantor set construction, viewed as a map, is not measure-preserving. Figure 1.2 (middle) shows such a construction with weights a_L = 2/3, a_R = 1/3 for the left and right components, respectively. The probability measure is represented by "height"; we observe that scale invariance is exactly preserved. In general, the traffic patterns producible with fixed weights a_L, a_R are limited, but one can extend the framework by allowing possibly different weights associated with every edge in the weighted binary tree induced by the 1D Cantor set construction. Such constructions arise in a more refined characterization of network traffic, called multiplicative processes or cascades, and are discussed in Chapter 20.

[Fig. 1.2 Left: One-dimensional Cantor set interpreted as on/off traffic. Middle: One-dimensional nonuniform Cantor set with weights a_L = 2/3, a_R = 1/3. Right: Cumulative process corresponding to 1D on/off Cantor traffic.]
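To make the construction concrete, here is a small sketch, assuming NumPy; the function names and the iteration depth are arbitrary choices, not anything prescribed by the text. It generates the on/off indicator of the 1D Cantor traffic at iteration n, the nonuniform variant with weights a_L and a_R, and the normalized cumulative curve of Fig. 1.2 (right).

```python
import numpy as np

def cantor_onoff(n):
    """On/off indicator over the 3**n slots of width 1/3**n after n iterations:
    keep the outer thirds, blank the middle third (Fig. 1.2, left)."""
    x = np.array([1.0])
    for _ in range(n):
        x = np.concatenate([x, np.zeros_like(x), x])
    return x

def weighted_cantor(n, a_left=2/3, a_right=1/3):
    """Nonuniform variant: the surviving thirds carry masses a_left and a_right,
    so total mass is preserved at every iteration (Fig. 1.2, middle)."""
    m = np.array([1.0])
    for _ in range(n):
        m = np.concatenate([a_left * m, np.zeros_like(m), a_right * m])
    return m

X = cantor_onoff(2)                 # [1 0 1 0 0 0 1 0 1]: on periods of duration 1/9
Y = np.cumsum(X) / X.sum()          # normalized cumulative traffic (Fig. 1.2, right)
print(X)
print(weighted_cantor(2).round(3))  # masses sum to 1 at every iteration
```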
Further generalizations can be obtained by defining different affine transformations with variable scale factors and translations at every level in the "traffic tree." The corresponding traffic pattern is self-similar if, and only if, the infinite tree can be compactly represented as a finite directed cyclic graph [8].

Whereas the previous constructions are given interpretations as traffic activity per unit time, we will find it useful to consider their corresponding cumulative processes, which are nondecreasing processes whose differences, also called the increment process, constitute the original process. For example, for the on/off Cantor traffic construction (cf. Fig. 1.2 (left)), let us assign the interpretation that time is discrete such that at step n >= 0, it ranges over the values t = 0, 1/3^n, 2/3^n, ..., (3^n - 1)/3^n, 1. Thus we can equivalently index the discrete time steps by i = 0, 1, 2, ..., 3^n. With a slight abuse of notation, let us redefine X(.) as X(i) = 1 if, and only if, in the original process X(i/3^n) = 1 and X(i/3^n - e) = 1 for all 0 < e < 1/3^n. That is, for i values for which an on period in the original process X(t) begins at t = i/3^n, X(i) is defined to be zero. Thus, in the case of n = 2, we have

X(0) = 0, X(1) = 1, X(2) = 0, X(3) = 1, X(4) = 0, X(5) = 0, X(6) = 0, X(7) = 1, X(8) = 0, X(9) = 1.

Now consider the continuous time process Y(t) shown in Fig. 1.2 (right), defined over [0, 3^n] for iteration n. Y(t) is nondecreasing and continuous, and it can be checked by visual inspection that

X(i) = Y(i) - Y(i - 1), i = 1, 2, ..., 3^n,

and X(0) = Y(0) = 0. Thus Y(t) represents the total traffic volume up to time t, whereas X(i) represents the traffic intensity during the ith interval. Most importantly, we observe that exact self-similarity is preserved even in the cumulative process. This points toward the fact that self-similarity may be defined with respect to a cumulative process, with its increment process (which is of more relevance for traffic modeling) "inheriting" some of its properties, including self-similarity. An important drawback of our constructions thus far is that they admit only a strong form of recursive regularity, that of deterministic self-similarity, and need to be further generalized for traffic modeling purposes, where stochastic variability is an essential component.

1.1.3 Stochastic Self-Similarity and Network Traffic

Stochastic self-similarity admits the infusion of nondeterminism as necessitated by measured traffic traces but, nonetheless, is a property that can be illustrated visually. Figure 1.3 (top left) shows a traffic trace, where we plot throughput, in bytes, against time, with a time granularity of 100 s. That is, a single data point is the aggregated traffic volume over a 100 second interval. Figure 1.3 (top right) is the same traffic series whose first 1000 second interval is "blown up" by a factor of ten. Thus the truncated time series has a time granularity of 10 s. The remaining two plots zoom in further on the initial segment by rescaling successively by factors of 10. Unlike deterministic fractals, the objects corresponding to Fig. 1.3 do not possess exact resemblance of their parts with the whole at finer details. Here, we assume that the measure of "resemblance" is the shape of a graph with the magnitude suitably normalized.

[Fig. 1.3 Stochastic self-similarity, in the "burstiness preservation sense," across time scales 100 s, 10 s, 1 s, 100 ms (top left, top right, bottom left, bottom right).]
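The zoom-in procedure behind Fig. 1.3 can be mimicked in a few lines of code. The sketch below is only illustrative (a placeholder Poisson trace stands in for a measured byte-count series, and the function name is arbitrary): it repeatedly keeps the first tenth of the time axis and views it at ten times finer granularity, so every view has the same number of points, as in the figure.

```python
import numpy as np

def zoom_views(finest_bins, levels=4, factor=10):
    """Emulate the Fig. 1.3 procedure: view the whole trace at the coarsest
    granularity, then repeatedly keep the first 1/factor of the time axis and
    view it factor times finer.  Every view has the same number of points.

    finest_bins: traffic volume per finest-resolution bin (e.g., bytes per 100 ms).
    Returns a list of (bin_width_in_finest_bins, aggregated_series) pairs.
    """
    views, x = [], np.asarray(finest_bins, dtype=float)
    width = factor ** (levels - 1)
    for _ in range(levels):
        n = (len(x) // width) * width
        views.append((width, x[:n].reshape(-1, width).sum(axis=1)))
        x = x[: len(x) // factor]   # zoom into the initial segment ...
        width //= factor            # ... at a factor-of-ten finer granularity
    return views

# Placeholder input: swap in a measured byte-count trace binned at 100 ms.
rng = np.random.default_rng(7)
trace = rng.poisson(5_000, size=10**6).astype(float)

for width, view in zoom_views(trace):
    # For a genuinely self-similar trace the relative burstiness (cv) stays
    # comparable across all four views; this memoryless placeholder smooths out.
    print(f"bin = {width:4d} x 100 ms  points = {len(view)}  cv = {view.std()/view.mean():.4f}")
```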
Indeed, for measured traffic traces, it would be too much to expect to observe exact, deterministic self-similarity, given the stochastic nature of many network events (e.g., source arrival behavior) that collectively influence actual network traffic. If we adopt the view that traffic series are sample paths of stochastic processes and relax the measure of resemblance, say, by focusing on certain statistics of the rescaled time series, then it may be possible to expect exact similarity of the mathematical objects and approximate similarity of their specific realizations with respect to these relaxed measures. Second-order statistics are statistical properties that capture burstiness or variability, and the autocorrelation function is a yardstick with respect to which scale invariance can be fruitfully defined. The shape of the autocorrelation function, above and beyond its preservation across rescaled time series, will play an important role. In particular, correlation, as a function of time lag, is assumed to decrease polynomially as opposed to exponentially. The existence of nontrivial correlation "at a distance" is referred to as long-range dependence. A formal definition is given in Section 1.4.1.

1.2 PREVIOUS RESEARCH

1.2.1 Measurement-Based Traffic Modeling

The research avenues relating to traffic self-similarity may broadly be classified into four categories. In the first category are works pertaining to measurement-based traffic modeling [13, 26, 34, 42, 56, 74], where traffic traces from physical networks are collected and analyzed to detect, identify, and quantify pertinent characteristics. They have shown that scale-invariant burstiness or self-similarity is a ubiquitous phenomenon found in diverse contexts, from local-area and wide-area networks to IP and ATM protocol stacks to copper and fiber optic transmission media. In particular, Leland et al. [41] demonstrated self-similarity in a LAN environment (Ethernet), Paxson and Floyd [56] showed self-similar burstiness manifesting itself in pre-World Wide Web WAN IP traffic, and Crovella and Bestavros [13] showed self-similarity for WWW traffic. Collectively, these measurement works constituted strong evidence that scale-invariant burstiness was not an isolated, spurious phenomenon but rather a persistent trait existing across a range of network environments.

Accompanying the traffic characterization efforts has been work in the area of statistical and scientific inference that has been essential to the detection and quantification of self-similarity or long-range dependence.(2) This work has specifically been geared toward network traffic self-similarity [28, 64] and has focused on exploiting the immense volume, high quality, and diversity of available traffic measurements; for a detailed discussion of these and related issues, see Willinger and Paxson [72, 73].

(2) The relationship between self-similarity and long-range dependence (they need not be one and the same) is explained in Section 1.4.1.
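As a crude first illustration of the yardstick just described, one can simply compute the sample autocorrelation at increasing lags: exponentially decaying (short-range dependent) input loses correlation within a few dozen lags, whereas long-range dependent input keeps nontrivial correlation "at a distance." The sketch below is not from any of the cited works and is only illustrative; the careful estimators discussed here exist precisely because such eyeballing is fragile.

```python
import numpy as np

def sample_acf(x, lags):
    """Biased sample autocorrelation r(k) at the given positive lags."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return {k: float(np.dot(x[:-k], x[k:]) / denom) for k in lags}

# Short-range dependent reference: an AR(1) series, whose ACF decays
# exponentially (r(k) = phi**k) and disappears into noise after a few dozen lags.
rng = np.random.default_rng(3)
n, phi = 200_000, 0.7
noise = rng.normal(size=n)
ar1 = np.zeros(n)
for t in range(1, n):
    ar1[t] = phi * ar1[t - 1] + noise[t]

print(sample_acf(ar1, lags=(1, 5, 10, 50, 100, 500)))
# For a long-range dependent trace, r(k) ~ c * k**(-beta) with 0 < beta < 1:
# correlations at lags 100 or 500 remain clearly positive, and plotting
# log r(k) against log k gives a roughly straight line with slope -beta.
```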
At a formal level, the validity of an inference or estimation technique is tied to an underlying process that presumably generated the data in the first place. Put differently, correctness of system identification only holds when the data or sample paths are known to originate from specific models. Thus, in general, a sample path of unknown origin cannot be uniquely attributed to a specific model, and the main (and only) purpose of statistical or scientific inference is to deal with this intrinsically ill-posed problem by concluding whether or not the given data or sample paths are consistent with an assumed model structure. Clearly, being consistent with an assumed model does not rule out the existence of other models that may conform to the data equally well. In this sense, the aforementioned works on measurement-based traffic modeling have demonstrated that self-similarity is consistent with measured network traffic and have resulted in adding yet another class of models, that is, self-similar processes, to an already long list of models for network traffic. At a practical level, many of the commonly used inference techniques for quantifying the degree of self-similarity or long-range dependence (e.g., Hurst parameter estimation) have been known to exhibit different idiosyncrasies and robustness properties. Due to their predominantly heuristic nature, these techniques have been generally easy to use and apply, but the ensuing results have often been difficult to interpret [64]. The recent introduction of wavelet-based techniques to the analysis of traffic traces [1, 23] represented a significant step toward the development of more accurate inference techniques that have been shown to possess increased sensitivity to different types of scaling phenomena, with the ability to discriminate against certain alternative modeling assumptions, in particular, nonstationary effects [1]. Due to their ability to localize a given signal in scale and time, wavelets have made it possible to detect, identify, and describe multifractal scaling behavior in measured network traffic over fine time scales [23]: a nonuniform (in time) scaling behavior that emerges when studying measured TCP traffic over fine time scales, one that allows for more general scaling phenomena than the ubiquitous self-similar scaling property, which holds for a range of sufficiently large time scales.
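One of the classical heuristic Hurst estimators alluded to above is the aggregated-variance (variance-time) method: average the series over blocks of size m and regress log Var against log m; for second-order self-similar input the variance of the block means scales like m^(2H - 2). The sketch below is an illustrative implementation under those assumptions, not any specific estimator from the cited works; like the other heuristics, it is biased and easily fooled by nonstationarity, which is part of what motivated the wavelet-based estimators [1, 23].

```python
import numpy as np

def hurst_variance_time(x, block_sizes=(1, 2, 4, 8, 16, 32, 64, 128, 256)):
    """Crude aggregated-variance (variance-time) estimate of the Hurst parameter H.

    For each block size m, average the series over non-overlapping blocks and
    record the variance of the block means; for (asymptotically) second-order
    self-similar series, Var ~ m**(2H - 2), so the log-log slope gives 2H - 2.
    """
    x = np.asarray(x, dtype=float)
    variances = []
    for m in block_sizes:
        n = (len(x) // m) * m
        means = x[:n].reshape(-1, m).mean(axis=1)
        variances.append(means.var())
    slope, _ = np.polyfit(np.log(block_sizes), np.log(variances), 1)
    return 1.0 + slope / 2.0

rng = np.random.default_rng(5)
iid = rng.normal(size=2**18)
print("i.i.d. noise, estimated H ~", round(hurst_variance_time(iid), 2))  # expect ~0.5
# Genuinely self-similar input yields an estimate strictly between 0.5 and 1.
```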
1.2.2 Physical Modeling

In the second category are works on physical modeling that try to explicate the physical causes of self-similarity in network traffic based on network mechanisms and empirically established properties of distributed systems that, collectively, collude to induce self-similar burstiness at multiplexing points in the network layer. In view of traditional time series analysis, physical modeling affects model selection by picking, among competing and (in a statistical sense) equally well-fitting models, those that are most congruent to the physical networking environment where the data arose in the first place. Put differently, physical modeling aims for models of network traffic that relate to the physics of how traffic is generated in an actual network, are capable of explaining empirically observed phenomena such as self-similarity in more elementary terms, and provide new insights into the dynamic nature of the traffic.

The first type of causality, also the most mundane, is attributable to the arrival pattern of a single data source, as exemplified by variable bit rate (VBR) video [10, 26]. MPEG video, for example, exhibits variability at multiple time scales, which, in turn, is hypothesized to be related to the variability found in the time duration between successive scene changes [25]. This "single-source causality," however, is peripheral to our discussions for two reasons: one, self-similarity observed in the original Bellcore data stems from traffic measurements collected during 1989-1991, a period during which VBR video payload was too minimal, if not nonexistent, to be considered an influencing factor(3); and two, it is well known that VBR video can be approximated by short-range dependent traffic models, which, in turn, makes it possible to investigate certain aspects of the performance impact of long-range correlation structure within the confines of traditional Markovian analysis [32, 37].

(3) The same holds true for the LBL WAN data considered by Paxson and Floyd [56] and the BU WWW data analyzed by Crovella and Bestavros [13].

The second type of causality, also called structural causality [50], is more subtle in nature, and its roots can be attributed to an empirical property of distributed systems: the heavy-tailed distribution of file or object sizes. For the moment, a random variable obeying a heavy-tailed distribution can be viewed as giving rise to a very wide range of different values, including, as its trademark, "very large" values with nonnegligible probability. This intuition is made more precise in Section 1.4.1. Returning to the causality description, in a nutshell, if end hosts exchange files whose sizes are heavy-tailed, then the resulting network traffic at multiplexing points in the network layer is self-similar [50]. This causal phenomenon was shown to be robust in the sense of holding for a variety of transport layer protocols such as TCP (for example, Tahoe, Reno, and Vegas) and flow-controlled UDP, which make up the bulk of deployed transport protocols, and a range of network configurations. Park et al. [50] also showed that research in UNIX file systems carried out during the 1980s gives strong empirical evidence, based on file system measurements, that file sizes in UNIX file systems are heavy-tailed. This is, perhaps, the most simple, distilled, yet high-level physical explanation of network traffic self-similarity. Corresponding evidence for Web objects, which are of more recent relevance due to the explosion of the WWW and its impact on Internet traffic, can be found in Crovella and Bestavros [13].

Of course, structural causality would be meaningless unless there were explanations that showed why heavy-tailed objects transported via TCP- and UDP-based protocols would induce self-similar burstiness at multiplexing points. As hinted at in the original Leland et al. paper [41] and formally introduced in Willinger et al. [74], the on/off model establishes that the superposition of a large number of independent on/off sources with heavy-tailed on and/or off periods leads to self-similarity in the aggregated process, a fractional Gaussian noise process, whose long-range dependence is determined by the heavy-tailedness of the on or off periods. Space aggregation is inessential to inducing long-range dependence (it is responsible for the Gaussian property of aggregated traffic by an application of the central limit theorem); however, it is relevant to describing multiplexed network traffic.
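A stripped-down version of this superposition is easy to simulate. The sketch below is an illustrative simplification, not the model of Willinger et al.: it uses heavy-tailed Pareto on periods only, exponential off periods, a fixed peak rate, and coarse unit-time slots. Even so, it shows the key effect: the aggregate of many independent sources stays bursty under time aggregation instead of smoothing out.

```python
import numpy as np

def onoff_source(rng, horizon, alpha_on=1.5, mean_off=10.0, rate=1.0):
    """One on/off source on a unit-time grid: Pareto(alpha_on) on periods
    (infinite variance for 1 < alpha_on < 2), exponential off periods,
    constant transmission rate while on."""
    load = np.zeros(horizon)
    t = 0.0
    while t < horizon:
        on = 1.0 + rng.pareto(alpha_on)            # classical Pareto with minimum 1
        load[int(t):int(min(t + on, horizon))] += rate
        t += on + rng.exponential(mean_off)
    return load

rng = np.random.default_rng(11)
horizon, n_sources = 50_000, 200
aggregate = sum(onoff_source(rng, horizon) for _ in range(n_sources))

for m in (1, 10, 100, 1000):
    blocks = aggregate[: (horizon // m) * m].reshape(-1, m).sum(axis=1)
    # With heavy-tailed on periods the relative burstiness decays far more
    # slowly than the 1/sqrt(m) of a memoryless source: scale-invariant burstiness.
    print(f"m={m:5d}  cv={blocks.std()/blocks.mean():.4f}")
```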
The on/off model has its roots in a certain renewal reward process introduced by Mandelbrot [46] (and further studied by Taqqu and Levy [63]) and provides the theoretical underpinning for much of the recent work on physical modeling of network traffic. This theoretical foundation, together with the empirical evidence of heavy-tailed on/off durations (as, e.g., given for IP flow measurements [74]), represents a more low-level, direct explanation of the physical causality of self-similarity and forms the principal factor distinguishing the on/off model from other mathematical models of self-similar traffic. The linkage between high-level and low-level descriptions of causality is further facilitated by Park et al. [50], where it is shown that the application layer property of heavy-tailed file sizes is preserved by the protocol stack and mapped to approximately heavy-tailed busy periods at the network layer. The interpacket spacing within a single session (or, equivalently, transfer/connection/flow), however, has been observed to exhibit its own distinguishing variability. This refined short time scale structure and its possible causal attribution to the feedback control mechanisms of TCP are investigated in Feldmann et al. [22, 23] and are the topics of ongoing work.

1.2.3 Queueing Analysis

In the third category are works that provide mathematical models of long-range dependent traffic with a view toward facilitating performance analysis in the queueing theory sense [2, 3, 17, 43, 49, 53, 66]. These works are important in that they establish basic performance boundaries by investigating queueing behavior with long-range dependent input, which exhibits performance characteristics fundamentally different from corresponding systems with Markovian input. In particular, the queue length distribution in infinite buffer systems has a slower-than-exponentially (or subexponentially) decreasing tail, in stark contrast with short-range dependent input, for which the decay is exponential. In fact, depending on the queueing model under consideration, long-range dependent input can give rise to Weibullian [49] or polynomial [66] tail behavior of the underlying queue length distributions. The analysis of such non-Markovian queueing systems is highly nontrivial and provides fundamental insight into the performance impact question. Of course, these works, in addition to providing valuable information on network performance issues, advance the state of the art in performance analysis and are of independent interest.

The queue length distribution result implies that buffering, as a resource provisioning strategy, is rendered ineffective when input traffic is self-similar, in the sense of incurring a disproportionate penalty in queueing delay vis-à-vis the gain in reduced packet loss rate. This has led to proposals advocating a small buffer capacity/large bandwidth resource provisioning strategy due to its simple, yet curtailing, influence on queueing: if buffer capacity is small, then the ability to queue or remember is accordingly diminished. Moreover, the smaller the buffer capacity, the more relevant short-range correlations become in determining buffer occupancy. Indeed, with respect to first-order performance measures such as packet loss rate, they may become the dominant factor. The effect of small buffer sizes and finite time horizons, in terms of their potential role in delimiting the scope of influence of long-range dependence on network performance, has been studied [29, 58].
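The qualitative tail contrast can be seen even in a toy discrete-time queue, which is all the sketch below attempts; it is not one of the cited analytical models, and the parameters are arbitrary. The same Lindley recursion is fed once with Poisson work and once with a heavy-tailed on/off stand-in of comparable mean load, and the empirical P(Q > b) is compared.

```python
import numpy as np

def queue_backlog(arrivals, capacity):
    """Lindley recursion for a discrete-time, infinite-buffer queue:
    q[t] = max(q[t-1] + a[t] - capacity, 0)."""
    q = np.empty(len(arrivals))
    backlog = 0.0
    for t, a in enumerate(arrivals):
        backlog = max(backlog + a - capacity, 0.0)
        q[t] = backlog
    return q

def onoff_heavy(rng, n, peak=20.0, alpha=1.5, scale=5.0):
    """Heavy-tailed on/off stand-in for long-range dependent input: Pareto-type
    on periods, exponential off periods of matching mean, so the load is ~peak/2."""
    x = np.zeros(n)
    mean_on = 1.0 + scale / (alpha - 1.0)
    t = 0
    while t < n:
        on = int(1 + scale * rng.pareto(alpha))
        x[t:t + on] = peak
        t += on + int(rng.exponential(mean_on))
    return x

rng = np.random.default_rng(23)
n, capacity = 200_000, 12.0
srd = rng.poisson(10.0, n).astype(float)   # memoryless input, mean 10 per slot
lrd = onoff_heavy(rng, n)                  # bursty input, mean ~10 per slot
levels = (10, 50, 100, 200, 400)

for name, work in (("Poisson", srd), ("heavy-tailed on/off", lrd)):
    q = queue_backlog(work, capacity)
    tail = ", ".join(f"P(Q>{b})={np.mean(q > b):.1e}" for b in levels)
    print(f"{name:>20} (mean load {work.mean():.1f}): {tail}")
# The Poisson queue's tail collapses essentially geometrically, while the
# heavy-tailed on/off queue retains substantial probability mass at large
# backlogs: the subexponential behavior described above.
```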
A major weakness of many of the queueing-based results [2, 3, 17, 43, 49, 53, 66] is that they are asymptotic, in one form or another. For example, in infinite buffer systems, upper and lower bounds are derived for the tail of the queue length distribution as the queue length variable approaches infinity. The same holds true for "finite buffer" results, where bounds on buffer overflow probability are proved as buffer capacity becomes unbounded. There exist interesting results for zero buffer capacity systems [18, 19], which are discussed in Chapter 17. Empirically oriented studies [20, 33, 51] seek to bridge the gap between asymptotic results and observed behavior in finite buffer systems. A further drawback of current performance results is that they concentrate on first-order performance measures that relate to (long-term) packet loss rate, but less so on second-order measures, for example, the variance of packet loss or delay, generically referred to as jitter, which are of importance in multimedia communication. For example, two loss processes may have the same first-order statistic, but if one has higher variance than the other in the form of concentrated periods of packet loss (as is the case in self-similar traffic), then this can adversely impact the efficacy of packet-level forward error correction used in the QoS-sensitive transport of real-time traffic [11, 52, 68]. Even less is known about transient performance measures, which are more relevant in practice when convergence to long-term steady-state behavior is too slow to be of much value for engineering purposes. Lastly, most queueing results obtained for long-range dependent input are for open-loop systems that ignore the feedback control issues present in actual networking environments (e.g., TCP). Since feedback can shape and influence the very traffic arriving at a queue [22, 50], incorporating their effect in feedback-controlled closed queueing systems looms as an important challenge.

1.2.4 Traffic Control and Resource Provisioning

The fourth category deals with works relating to the control of self-similar network traffic, which, in turn, has two subcategories: resource provisioning and dimensioning, which can be viewed as a form of open-loop control, and closed-loop or feedback traffic control. Due to their feedback-free nature, the works on queueing analysis with self-similar input have direct bearing on the resource dimensioning problem. The question of quantitatively estimating the marginal utility of a unit of additional resource, such as bandwidth or buffer capacity, is answered, in part, with the help of these techniques. Of importance are also works on statistical multiplexing using the notion of effective bandwidth, which point toward how efficiently resources can be utilized when shared across multiple flows [27]. A principal lesson learned from the resource provisioning side is the ineffectiveness of allocating buffer space vis-à-vis bandwidth for self-similar traffic, and the consequent role of short-range correlations in affecting first-order performance characteristics when buffer capacity is indeed provisioned to be "small" [29, 58].
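The marginal-utility question can likewise be probed with a toy finite-buffer queue; the sketch below is only illustrative and is not one of the provisioning models cited here. Feeding it the same kind of heavy-tailed on/off input as in the previous sketch and multiplying the buffer size by five at each step typically buys only a modest, rather than exponential, reduction in loss.

```python
import numpy as np

def loss_rate(arrivals, capacity, buffer_size):
    """Fraction of offered work dropped by a finite-buffer, discrete-time queue:
    work beyond buffer_size plus one slot of service is discarded, then the
    server removes up to `capacity` per slot."""
    backlog, dropped = 0.0, 0.0
    for a in arrivals:
        backlog += a
        excess = backlog - (buffer_size + capacity)
        if excess > 0:
            dropped += excess
            backlog -= excess
        backlog = max(backlog - capacity, 0.0)
    return dropped / arrivals.sum()

# Heavy-tailed on/off input with mean load ~10 against service capacity 12.
rng = np.random.default_rng(42)
n, capacity = 200_000, 12.0
bursty = np.zeros(n)
t = 0
while t < n:
    on = int(1 + 5.0 * rng.pareto(1.5))
    bursty[t:t + on] = 20.0
    t += on + int(rng.exponential(11.0))

for buffer_size in (10, 50, 250, 1250):
    print(f"buffer = {buffer_size:5d}: loss rate ~ {loss_rate(bursty, capacity, buffer_size):.4f}")
# Each fivefold increase in buffer space yields only a modest reduction in
# loss while lengthening delays, illustrating the diminishing marginal
# utility of buffering for self-similar input noted above.
```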
On the feedback control side is the work on multiple time scale congestion control [67, 68], which tries to exploit the correlation structure that exists across multiple time scales in self-similar traffic for congestion control purposes. In spite of the negative performance impact of self-similarity, on the positive side, long-range dependence admits the possibility of utilizing correlation at large time scales, transforming the latter to harness predictability structure, which, in turn, can be used to guide congestion control actions at smaller time scales to yield significant performance gains. The problem of designing control mechanisms that allow correlation structure at large time scales to be effectively engaged is a nontrivial technical challenge for two principal reasons: one, the correlation structure in question exists at time scales typically an order of magnitude or more above that of the feedback loop; and two, the information extracted is necessarily imprecise due [...]

[...] discusses how the causality of self-similar network traffic can be traced back to a high-level structural property of the underlying networked system, namely, the heavy-tailed nature of file size or Web document distributions at the application layer. The authors show that when objects sampled from such distributions are exchanged via the mediation of a "typical" protocol stack: application layer (e.g., [...]
