Resource Management in Satellite Networks (part 32)

The above discussion leads to the conclusion that satellite systems could benefit from adaptive algorithms for choosing the transmission parameters by means of cross-layer interactions between transport and physical layers. An additional possibility is that MAC and physical layers interact by inserting a link-layer erasure code [20],[27] just above the MAC layer, which could be an all-software solution, independent of the underlying hardware characteristics. The recent DVB-S2 standard [28] considers very powerful error-correcting codes. For ideal AWGN channel conditions, an optimization based on channel coding would be useless, because the curves that give PER versus E_b/N_0 are very steep [29], causing a sort of on-off behavior of the physical channel: either PER is negligible, or it is so high that it collapses TCP performance. However, optimizing channel parameters makes sense in non-ideal channel conditions and, in general, on the satellite return channel [3].

9.4 Cross-layer interaction between TCP and MAC

The interaction between TCP and MAC protocols in a shared network can greatly improve the efficiency of satellite systems. MAC protocols play a fundamental role in guaranteeing good performance to higher-level protocols by managing the arbitration of uplink access. Two cases must be distinguished: (i) TCP operates end-to-end (as in the general Internet standard, or when an end-to-end IPsec protection scheme is used); (ii) a PEP scheme breaks the end-to-end semantics. Without loss of generality, hereafter we consider the latter case, where, referring to a DVB-RCS network (see Chapter 1, Section 1.4), the Gateway acts as a PEP (i.e., it is a local TCP receiver for the remote RCSTs, located in the Earth station).

Satellite networks employing a DAMA scheme introduce an additional contribution to the end-to-end delay, called the access delay, that can significantly impact the end-to-end performance of TCP flows. In a DVB-RCS-like network, the Network Control Center (NCC) assigns return link capacity in response to explicit requests received from RCSTs [3]. This capacity negotiation requires a signaling exchange that regulates the data flow. Therefore, when TCP is used as the transport protocol, two nested control loops exist with the same time constant (i.e., the RTT):

• At the MAC layer: the resource request - resource assignment loop;
• At the TCP layer: the TCP segment - acknowledgement loop.

The consequence of this interaction is an increase in the latency perceived by the end-systems. To mitigate this effect, it is possible to reduce the access delay with a preventive allocation scheme driven by a cross-layer interaction between the MAC and TCP layers. The idea is to use TCP parameters, such as cwnd and ssthresh, to estimate in advance the resources needed by a given TCP flow [30],[31]. In fact, from the comparison of these two quantities it is possible to determine the TCP congestion control status (i.e., SS or CA). Consequently, the MAC layer can know the law according to which cwnd grows on an RTT basis and can predict with very good accuracy the resource allocation needed by each TCP flow. In this way, queuing delay is expected to be reduced significantly, while an efficient utilization of the shared satellite capacity is also achieved. More details on this approach are provided in the following sub-Sections.
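As a simple illustration of the congestion-control information the MAC layer can exploit, the following Python sketch (not taken from the cited works; the variable names and segment-level granularity are assumptions) infers the TCP phase from cwnd and ssthresh and predicts the cwnd expected one RTT later:

```python
def tcp_phase(cwnd: int, ssthresh: int) -> str:
    """Infer the congestion-control status from cwnd vs. ssthresh:
    Slow Start (SS) while cwnd < ssthresh, Congestion Avoidance (CA) otherwise."""
    return "SS" if cwnd < ssthresh else "CA"


def predict_cwnd_next_rtt(cwnd: int, ssthresh: int) -> int:
    """Predict cwnd (in segments) one RTT ahead, following the standard growth
    laws: roughly doubling per RTT in SS, about +1 segment per RTT in CA."""
    if tcp_phase(cwnd, ssthresh) == "SS":
        return min(2 * cwnd, ssthresh)  # SS growth is capped by ssthresh
    return cwnd + 1


# Example: a flow with cwnd = 8 and ssthresh = 32 is in SS and is expected to
# offer roughly twice as much data to the MAC queue in the next RTT.
print(tcp_phase(8, 32), predict_cwnd_next_rtt(8, 32))    # SS 16
print(tcp_phase(40, 32), predict_cwnd_next_rtt(40, 32))  # CA 41
```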
9.4.1 A novel TCP-driven dynamic resource allocation scheme

The implementation of a dynamic access scheme allows resource sharing to be optimized. The DVB-RCS standard defines the following set of capacity request methods (see Chapter 1, sub-Section 1.4.3 for more details): CRA, RBDC, VBDC, AVBDC, and FCA. In particular, VBDC issues a capacity request whenever new data arrive in the RCST queue. The amount of capacity per frame that a generic RCST requests at the k-th super-frame can be expressed by the formula defined in [32]:

r(k) = \left\lceil \frac{q(k) - n_s \cdot a(k) - n_s \cdot \sum_{j=1}^{L-1} r(k-L+j) - n_s \cdot w(k)}{n_s} \right\rceil    (9.7)

where:
• ⌈·⌉ denotes rounding to the upper positive integer;
• q(k) = amount of queued data;
• n_s = number of frames per super-frame;
• n_s · a(k) = capacity assigned in the k-th super-frame;
• L = system response time expressed in super-frames (also indicated as the allocation period); it represents the time elapsed from the transmission of a capacity request to the actual assignment of the requested capacity;
• n_s · Σ_{j=1}^{L-1} r(k−L+j) = resources requested in the previous super-frames, but not yet assigned;
• n_s · w(k) = resources requested in previous allocation periods and not yet assigned.

Unfortunately, the VBDC allocation method leads to a large increase in the end-to-end delay perceived by systems running TCP applications. In this case, the above-mentioned access delay comprises the following contributions:

• Reservation delay: since requests are sent at a fixed rate in dedicated slots, a time interval elapses between the arrival of data in the MAC buffer and the transmission of the corresponding capacity request;
• RTD contribution: the sum of the time to propagate the capacity request from the RCST to the NCC and the time to deliver the Terminal Burst Time Plan (TBTP) in the opposite direction;
• Processing (and synchronization) delay: the time spent by the DAMA controller (in the NCC) to transmit the TBTP message with the capacity assignment;
• Forwarding delay: the time between the reception of the TBTP by the RCST and the actual transmission of data.

On the basis of the above delay contributions, the RTT values corresponding to the VBDC case can be of the order of 1.6 s (see note 1 below) in a standard GEO bent-pipe system [33]. The DVB-RCS standard also supports an RBDC capacity request method. In this case, resources are allocated on the basis of the rate at which an RCST wishes to transmit (usually based on monitoring the arrival rate at its layer 2 queue). This method reduces the access delay. Most RCS systems provide a wide range of Bandwidth on Demand (BoD) schemes based on a combination of both methods (VBDC and RBDC).

As already anticipated in Section 9.4, our interest here is in reducing the access delay, while keeping optimal network efficiency, by using TCP status information to predict the amount of data that will feed the RCST queue in the future. In order to exchange cross-layer signaling between layer 2 and layer 4, dedicated local messages [31] are generated each time TCP parameters (e.g., cwnd) cross a given threshold; this follows an explicit cross-layer method.
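Before turning to the TCP-driven extension, the baseline VBDC request rule in (9.7) can be summarized in code. This is an illustrative sketch only: slot-level units, the non-negativity clamp, and the variable names (which mirror the symbols defined above) are assumptions, not part of the standard.

```python
import math

def vbdc_request(q_k, a_k, pending, w_k, n_s):
    """Per-frame capacity request at the k-th super-frame, following (9.7).

    q_k     : amount of data currently queued at the RCST (in slots)
    a_k     : capacity per frame assigned in the k-th super-frame
    pending : list of requests r(k-L+1) ... r(k-1) issued but not yet served
    w_k     : resources requested in earlier allocation periods, still unserved
    n_s     : number of frames per super-frame
    """
    outstanding = n_s * (a_k + sum(pending) + w_k)   # already assigned or in flight
    needed = q_k - outstanding
    # ceil(.) = rounding to the upper positive integer; never request less than 0
    return max(0, math.ceil(needed / n_s))


# Example: 500 slots queued, 4 slots/frame just assigned, 16 frames per
# super-frame, and 6 slots/frame still pending from the previous super-frame.
print(vbdc_request(q_k=500, a_k=4, pending=[6], w_k=0, n_s=16))  # -> 22
```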
In computing the r(k) request, let us assume a system response time greater than the physical RTD (see note 2 below). This assumption allows the proposed algorithm to predict the additional data that will be present in the RCST queue when the resources are actually allocated, according to both the amount of data transmitted in the k-th super-frame and the TCP phase (SS or CA):

Q'(k) = \begin{cases} 2 \cdot n_s \cdot a(k) & \text{Slow Start} \\ n_s \cdot a(k) \cdot \left(1 + \frac{1}{cwnd}\right) & \text{Congestion Avoidance} \end{cases}    (9.8)

Therefore, in our TCP-driven RRM a new term is added to (9.7), so that the amount of resources per frame requested at the k-th super-frame, r(k), becomes:

r(k) = \left\lceil \frac{q(k) - n_s \cdot a(k) - n_s \cdot \sum_{j=1}^{L-1} r(k-L+j) - n_s \cdot w(k)}{n_s} + \frac{Q'(k)}{n_s} \right\rceil    (9.9)

Note 1: The value RTT ≈ 3·RTD is due to the use, in the simulations, of an architecture where the NCC is separated from the Gateway.
Note 2: This assumption is appropriate for current DVB-RCS systems when the TCP flow is not encrypted, especially when PEP mechanisms are used at the satellite Gateway to terminate TCP connections within the satellite segment.

Finally, in addition to r(k), the TCP phase is also communicated by the RCST to the NCC in the capacity request message by setting the following flag (TCP phase flag):

• 1 → Slow Start;
• 0 → Congestion Avoidance.

On the other side, the NCC serves all incoming requests by considering two priority levels: a High priority level associated with requests whose TCP phase flag is set to 1, and a Low priority level associated with requests whose TCP phase flag is set to 0. Our aim is to prioritize connections in the SS phase over those operating in the CA phase, so as to favor both short transfers and newly started connections. In each queue (i.e., the queue for requests in the SS phase and the queue for requests in the CA phase), requests are satisfied according to the Maximum Legal Increment (MLI) algorithm [34] to guarantee a fair allocation among the different competing flows.

If the amount of needed resources exceeds those available in a super-frame, the NCC creates a "waiting list" to assign the resources in the next super-frames and stops the cwnd growth of all the connections coming from the RCSTs that have not obtained the requested resources. In particular, the proposed allocation scheme at the NCC performs the following two tasks:

• Assure that resources are fairly shared among all the active TCP connections;
• Provide a further cross-layer action that sets a new variable, named cwnd*, in order to modify the current cwnd value used by the TCP source in the RCST as follows: cwnd ← cwnd*.

Note that the NCC (acting like a PEP) sends back the cwnd* value using a TCP options field in the headers of the layer 4 ACKs. The rationale of this modification to the TCP protocol is to avoid internal congestion on the RCST side and, consequently, the possibility of layer 2 buffer overflows.

The main expected effects of the proposed cross-layer-based access scheme are:

• Reduction of the access delay: since the request algorithm also predicts the amount of data that will feed the RCST queue due to the TCP congestion control mechanism, the access delay is reduced by one RTD;
• Avoidance of internal congestion at the RCSTs: the cross-layer interaction between the RRM and TCP layers helps prevent layer 2 buffer overflows due to satellite network congestion;
• Efficient and dynamic resource allocation: resources are dynamically assigned on a super-frame basis according to explicit requests, thus allowing a better utilization of the available capacity.
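The RCST-side computation just described (the prediction Q'(k) in (9.8), the extended request (9.9), and the TCP phase flag) can be sketched as follows. This is illustrative only, not the authors' implementation; slot-level bookkeeping and the clamping to non-negative requests are assumptions.

```python
import math

def predict_extra_data(a_k, n_s, phase, cwnd):
    """Q'(k) from (9.8): data expected to reach the RCST queue before the
    requested capacity is actually granted, driven by TCP's growth law."""
    if phase == "SS":                       # Slow Start: cwnd doubles per RTT
        return 2 * n_s * a_k
    return n_s * a_k * (1 + 1 / cwnd)       # Congestion Avoidance: ~+1 segment per RTT


def tcp_driven_request(q_k, a_k, pending, w_k, n_s, phase, cwnd):
    """Extended per-frame request r(k) from (9.9), plus the TCP phase flag
    (1 = Slow Start, 0 = Congestion Avoidance) carried in the request message."""
    outstanding = n_s * (a_k + sum(pending) + w_k)
    q_prime = predict_extra_data(a_k, n_s, phase, cwnd)
    r_k = max(0, math.ceil((q_k - outstanding + q_prime) / n_s))
    phase_flag = 1 if phase == "SS" else 0
    return r_k, phase_flag


# Example: same queue state as in the previous sketch, but the flow is in
# Slow Start, so the request anticipates the cwnd doubling over the next RTT.
print(tcp_driven_request(q_k=500, a_k=4, pending=[6], w_k=0,
                         n_s=16, phase="SS", cwnd=8))   # -> (30, 1)
```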
Analysis of the allocation process

A simulator has been implemented using ns-2 (release 2.27) [35] in order to evaluate the cross-layer allocation process and the resulting performance. In particular, the ns-2 extensions that reproduce a traditional GEO satellite network have been modified to simulate a centralized Multi-Frequency Time Division Multiple Access (MF-TDMA) scheme and the NCC functionalities. The interaction between the TCP cwnd trend and the corresponding allocation process has been analyzed by means of the average resources assigned (in slots) as a function of time; this parameter has been monitored for one or more TCP connections sharing the return link of a communication network compliant with Scenario 2 described in Chapter 1, sub-Section 1.4.5. The main simulation parameters are detailed in Table 9.1.

Physical parameters
  Physical RTT (RTD): ~515 ms
  Return link bandwidth: 2048 kbit/s
  Maximum number of RCSTs: 32
Frame parameters
  Super-frame duration: 96 ms
  Number of slots per frame: 32
Protocols
  Transport protocol: TCP NewReno
  Application protocol: FTP
TCP parameters
  TCP packet size: 1500 bytes
  PER: variable, from 0 to 0.0001

Table 9.1: Main simulation parameter values.

In particular, considering a file transfer (where the application layer is implemented by means of the File Transfer Protocol, FTP) from an RCST to the NCC, Figure 9.3 highlights how, with our scheme, the allocated resources (continuous line) are strictly correlated with the cwnd trend (dotted line). In particular, three different phases can be recognized in the allocation process, according to the following sequence:

1. An initial exponential growth corresponding to the TCP SS phase;
2. A clear reduction of the allocated resources (approximately one half) when the Fast Recovery mechanism is invoked as a reaction to the detection of a loss;
3. A linear growth corresponding to the TCP CA phase.

Fig. 9.3: Comparison between allocated resources and cwnd trend versus time (1 TCP connection, PER = 10^-4).

Referring to our TCP-driven RRM scheme, Figure 9.4 focuses on the fair resource sharing between two TCP connections when losses occur. At the beginning, the capacity is saturated (i.e., the NCC stops the cwnd growth of both connections in order to prevent congestion and losses): the overall capacity is evenly divided between the two connections. When a connection is affected by a transmission error (loss), with a consequent cwnd reduction, the NCC temporarily re-assigns the unused capacity to the other connection in order to optimize the utilization of resources. A simplified sketch of this NCC-side behavior is given below.

Fig. 9.4: Comparison among allocated resources in the RTT versus time (2 TCP connections, PER = 10^-4).
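The NCC-side behavior observed in Figures 9.3 and 9.4 (priority to Slow Start requests, a waiting list when the super-frame capacity is exhausted, and freezing of cwnd through the cwnd* feedback) can be approximated by the following toy model. Requests within each priority queue are served in arrival order here, whereas the scheme in the text uses the MLI algorithm [34] for fair sharing, so this sketch is illustrative only.

```python
from collections import deque

def serve_requests(requests, slots_per_superframe):
    """Toy model of the NCC allocation described in the text.

    requests: list of dicts {"rcst": id, "r": slots requested, "flag": 1|0},
              where flag = 1 -> Slow Start (High priority), 0 -> CA (Low priority).
    Returns (grants, waiting_list, cwnd_clamped), where cwnd_clamped lists the
    RCSTs whose cwnd growth should be frozen via the cwnd* feedback.
    """
    # Two priority queues: SS requests are served before CA requests.
    high = deque(r for r in requests if r["flag"] == 1)
    low = deque(r for r in requests if r["flag"] == 0)

    grants, waiting_list, cwnd_clamped = {}, [], []
    free = slots_per_superframe
    for queue in (high, low):
        while queue:
            req = queue.popleft()
            granted = min(req["r"], free)        # grant what fits in this super-frame
            grants[req["rcst"]] = granted
            free -= granted
            if granted < req["r"]:
                # Unserved remainder goes to the waiting list for the next
                # super-frames, and the source's cwnd growth is stopped (cwnd*).
                waiting_list.append({"rcst": req["rcst"], "r": req["r"] - granted})
                cwnd_clamped.append(req["rcst"])
    return grants, waiting_list, cwnd_clamped


# Example: two Slow Start flows and one CA flow competing for 32 slots.
reqs = [{"rcst": 1, "r": 20, "flag": 1},
        {"rcst": 2, "r": 20, "flag": 1},
        {"rcst": 3, "r": 10, "flag": 0}]
print(serve_requests(reqs, slots_per_superframe=32))
```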
Performance evaluation

TCP performance strictly depends on the latency perceived at the end-systems, as shown by (9.1) and (9.2). Therefore, RTT can represent a valid parameter to evaluate TCP performance. Hence, we have compared our TCP-driven RRM scheme with the classical CRA and VBDC capacity allocation techniques [3]. The main simulation parameters, compliant with Scenario 2, are those provided in Table 9.1 above. Figure 9.5 shows the average perceived RTT for the three considered access schemes. In particular, the obtained results allow the following considerations:

• VBDC presents the highest delay, equal to about three times the physical RTD (see Chapter 1 for RTD characteristics) [33]: 1 RTD for the capacity request (issued on the basis of new data in the layer 2 queue, RCST side) and notification exchange; 1 RTD for the TCP segment and ACK exchange; 1 RTD for the capacity allocation that makes the channel available for ACK transmissions (Gateway side);
• In the CRA case, RTT is only affected by the physical delay RTD, since the capacity is not negotiated, but permanently assigned in the set-up phase of a connection;
• The proposed TCP-driven RRM scheme (also simply called the "cross-layer scheme" in what follows) reduces the overall VBDC delay by almost 1 RTD, by trying to predict the amount of data that will feed the RCST queue.

Fig. 9.5: Comparison among average RTT values obtained with the following techniques: VBDC, CRA and the cross-layer scheme.

Hence, considering only the end-to-end performance in terms of RTT for a single TCP connection, the proposed cross-layer technique represents a good trade-off solution between VBDC and CRA. The principle of assigning capacity on the basis of the real needs of the data sources leads to significant improvements in terms of both end-to-end performance and network utilization when multiple TCP connections compete for the overall capacity. The following simulations have been performed considering 20 FTP transfers coming from different RCSTs, with start instants spaced 5 s apart; 10-Mbyte files have to be uploaded to a remote system through the satellite Gateway. As a reference, a fixed allocation scheme (i.e., CRA) is considered, where the capacity is equally divided among the RCSTs at the beginning in a static manner. The average file transfer time has been measured for different PER values and then compared with the mean transfer time of the proposed cross-layer scheme. The results, shown in Figure 9.6, highlight that the TCP-driven RRM scheme with cross-layer information allows a reduction of the mean transfer time ranging from 12.3% (PER = 0) to 26.5% (PER = 0.01).

Fig. 9.6: Average file transfer time versus PER (20 FTP transfers starting at instants spaced 5 s apart).

Finally, Figure 9.7 highlights the benefits derived from the use of the proposed cross-layer scheme with respect to CRA in terms of channel utilization. The continuous line indicates the percentage increase in average utilization of the cross-layer scheme with respect to CRA, when 5 FTP transfers (10 Mbytes each) are started at instants spaced 5 s apart with PER = 10^-3. This figure also shows the curve representing the instantaneous channel utilization when the cross-layer scheme is used (dashed line), in order to show the optimal values constantly achieved.

Fig. 9.7: Cross-layer access scheme: utilization and percentage increase in average utilization with respect to the CRA scheme (5 FTP transfers starting at instants spaced 5 s apart, PER = 10^-3).

9.5 Overview of UDP-based multimedia over satellite

This Section focuses on multimedia transport in satellite networks, with specific reference to Scenario 2 described in Chapter 1, sub-Section 1.4.5. Cross-layer methods offer new opportunities for satellite systems to adapt RRM to the needs of multimedia traffic. The challenge is the design of cross-layer mechanisms that can optimize the overall end-to-end multimedia application performance over satellite links, while minimizing the utilized radio resource.
This topic requires a combination of expertise in propagation analysis, channel modeling, coding and modulation, jointly with consideration of link framing design and transport protocol design/evaluation. Analysis can be performed by combining physical simulation (based on propagation models) with packet-level protocol simulation (including application modeling).

9.5.1 Cross-layer methods for UDP

Examples of multimedia cross-layer methods include adapting transport protocols and application mechanisms to make them more robust to changes in link quality conditions [36]. A first type of cross-layer method uses RRM and QoS techniques to tailor lower layer parameters to the characteristics of particular multimedia flows (as proposed for TCP in Section 9.4 above). The requirements for multimedia traffic can differ from application to application. This kind of cross-layer communication also implies some form of signaling exchange between the different protocol layers.

Recognizing the emergence of error-tolerant codecs, the IETF has recently standardized a new multimedia transport protocol, named UDP-Lite [37], allowing an application to specify the required level of payload protection while maintaining end-to-end delivery checks (a minimal socket-level sketch of this mechanism is given below). In order to benefit from using UDP-Lite, the changes at the transport layer must be reflected in the design of the satellite link and physical layers. Hence, it is important to tune the characteristics of the lower layers in terms of modulation and coding (trading BER for IBR).

Cross-layer signaling may also be valuable to indicate the prevailing system performance to transport entities (in PEPs or end-hosts); this could also permit multimedia applications to adjust their choice of media codec in response to increased delay or reduced capacity. Hence, the use of cross-layer methods can provide increased information to the transport layer and applications concerning the quality and characteristics of the channel they are using. This new flexibility gives higher-layer protocols the opportunity to react in appropriate ways. The success of multimedia cross-layer approaches relies not only on the development of suitable techniques, but also on the selection of appropriate signaling methods and on the adoption of design methodologies that will permit cross-layer systems to inter-work and to evolve.

9.6 Conclusions

This Chapter provides an overview of the key issues that concern transport protocol performance over paths that include a GEO satellite segment. In particular, it gives a detailed survey of several approaches that permit a better interaction of transport layer protocols with the RRM and physical layers.
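As a concrete illustration of the partial checksum coverage offered by UDP-Lite (sub-Section 9.5.1), the sketch below opens a UDP-Lite socket on a Linux host and limits the checksum to the first bytes of each datagram. The protocol and socket-option numbers are the Linux values for RFC 3828 support; availability depends on operating system support, and the coverage length and endpoint in the usage note are hypothetical.

```python
import socket

# Linux constants for UDP-Lite (RFC 3828): protocol number and coverage options.
IPPROTO_UDPLITE = 136
UDPLITE_SEND_CSCOV = 10
UDPLITE_RECV_CSCOV = 11

def open_udplite_socket(coverage: int = 20) -> socket.socket:
    """Create a UDP-Lite socket whose checksum covers only the first `coverage`
    bytes of each datagram (counted from the 8-byte UDP-Lite header); the rest
    of the payload may be delivered even if corrupted, which suits
    error-tolerant media codecs."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, IPPROTO_UDPLITE)
    sock.setsockopt(IPPROTO_UDPLITE, UDPLITE_SEND_CSCOV, coverage)
    # Require at least the same coverage on received datagrams.
    sock.setsockopt(IPPROTO_UDPLITE, UDPLITE_RECV_CSCOV, coverage)
    return sock

# Usage (hypothetical endpoint): protect only a short application header,
# leaving the media payload outside the checksum coverage.
# s = open_udplite_socket(20)
# s.sendto(b"\x00" * 12 + media_payload, ("example.net", 5004))
```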