[...] to a different ISP because their UDP-based application does not work anymore. If a large majority of applications use UDP instead of DCCP, the latter loss may be quite significant. Thus, an ISP might have to wait for DCCP to be used by applications before installing penalty boxes, which in turn would motivate application designers to use DCCP – two parties waiting for each other, and what have we learned from history? Take a look at this paragraph about QoS deployment from RFC 2990 (Huston 2000):

    No network operator will make the significant investment in deployment and support of distinguished service infrastructure unless there is a set of clients and applications available to make immediate use of such facilities. Clients will not make the investment in enhanced services unless they see performance gains in applications that are designed to take advantage of such enhanced services. No application designer will attempt to integrate service quality features into the application unless there is a model of operation supported by widespread deployment that makes the additional investment in application complexity worthwhile and clients who are willing to purchase such applications. With all parts of the deployment scenario waiting for the others to move, widespread deployment of distinguished services may require some other external impetus.

Will we also need such other external impetus for DCCP, and what could it be?

6.2.4 Congestion control and QoS

All the incentive-related problems discussed so far have one common reason: implementing congestion control is expensive, and its direct or indirect benefit to whoever implements it is unclear. Thus, promoting a QoS-oriented view of congestion control may be a reasonable (albeit modest) first step towards alleviating these issues. In other words, the question 'what is the benefit for a single well-behaved application?' should be a central one.

In order to answer it, let us focus on the specific goal of congestion control mechanisms: they try to use the available bandwidth as efficiently as possible. One could add 'without interfering with others' to this statement, but this is the same as saying 'without exceeding it', which is pretty much the same as 'efficiently using it'. When you think about it, it really all seems to boil down to this single goal.

So what exactly does 'efficiently using the available bandwidth' mean? There are two sides to this. We have already seen the negative effects that loss can have on TCP in Chapter 3 – clearly, avoiding it is the most important goal. This is what was called operating the network at the knee in Section 2.2 and put in the form of a rule on Page 8: 'queues should generally be kept short'. Using as much of the available bandwidth as possible is clearly another goal – nobody wants a congestion control mechanism that always sends at a very low rate across a high-capacity path. This is underlined by the many research efforts towards a more-efficient TCP that we have discussed in Chapter 3. To summarize, we end up with two very simple goals, which, in order of preference, are as follows:

1. Keep queues short.

2. Utilize as much as possible of the available capacity.

The benefits of the first goal are low loss and low delay. The benefit of the second goal is high throughput – ideally, a congestion control mechanism would yield the highest possible throughput that does not lead to increased delay or loss.
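To make these two goals tangible, the sketch below shows one way in which per-flow measurements of loss, delay and throughput could be folded into a single score, with loss and delay weighted more heavily than raw utilization. This is only an illustration of the idea: the class name, the weights and the normalization used here are assumptions, not something defined in this book.

/**
 * A hypothetical combined performance score for a congestion control
 * mechanism: high throughput is rewarded, but loss and queueing delay
 * are penalized more heavily. All weights are illustrative assumptions.
 */
public final class CcScore {

    /**
     * @param throughput    measured throughput of the flow (bit/s)
     * @param capacity      bottleneck capacity available to the flow (bit/s)
     * @param lossRate      fraction of packets lost (0..1)
     * @param queueingDelay average queueing delay (s)
     * @param baseRtt       minimum (propagation-only) RTT (s)
     * @return a score between 0 and 1; higher is better
     */
    public static double score(double throughput, double capacity,
                               double lossRate, double queueingDelay,
                               double baseRtt) {
        double utilization = Math.min(throughput / capacity, 1.0);       // goal 2
        double delayPenalty = queueingDelay / (queueingDelay + baseRtt); // 0 when queues stay empty
        // Goal 1 (short queues: low loss and low delay) gets more weight than goal 2.
        double wLoss = 0.4, wDelay = 0.4, wUtil = 0.2;
        return wUtil * utilization
             + wLoss * (1.0 - lossRate)
             + wDelay * (1.0 - delayPenalty);
    }

    public static void main(String[] args) {
        // Example: 80% utilization, 1% loss, 20 ms queueing delay on a 100 ms path
        System.out.println(score(80e6, 100e6, 0.01, 0.020, 0.100));
    }
}

A single number of this kind would allow two mechanisms (or a congestion-controlled and an uncontrolled application) to be compared directly, which is exactly the kind of argument that the following paragraphs call for.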
At the 'Protocols for Fast Long-Distance Networks' workshop of 2005 in Lyon, France, a panel discussion revolved around the issue of benchmarking: there are many papers that somehow compare congestion control mechanisms with each other, but the community lacks a clearly defined set of rules for doing so. How exactly does one judge whether the performance of mechanism a is better than the performance of mechanism b? I think that someone should come up with a measure that includes (i) low loss and low delay and (ii) high capacity, and assigns a greater weight to the first two factors.

Fulfilling these two goals is certainly attractive, and if a single, simple measure can be used to show that a 'proper' congestion control mechanism (as in TCP) does a better job at attaining these goals than the simplistic (or non-existent) mechanisms embedded in the UDP-based applications of today, this could yield a clear incentive to make use of a congestion control mechanism. Still, the burden of having to implement it within the application certainly weighs on the negative side, and DCCP could help here – but DCCP might face the aforementioned deployment issues unless using congestion control is properly motivated. To summarize, I believe that we might need an integrated approach, where DCCP is provided and the benefits of using it are clearly stated using a simple measure as described above.

Looking at congestion control mechanisms from a QoS-centric perspective has another interesting facet: as discussed in (Welzl and Mühlhäuser 2003), such mechanisms could be seen as a central element for achieving fine-grain QoS in a scalable manner. This would be an alternative to IntServ over DiffServ as described in Section 5.3.5, and it could theoretically be realized with an architecture that comprises the following elements:

• A new service class would have to be defined; let us call it 'CC' service. The CC service would consist of flows that use a single specific congestion control mechanism only (or perhaps traffic that adheres to a fairness framework such as 'TCP-friendliness' or the one described in Section 6.1.2). Moreover, it would have to be protected from the adverse influence of other traffic, for example, by placing it in a special DiffServ class.

• Flows would have to be checked for conformance to the congestion control rules; this could be done by monitoring packets, tracking the sending rate and comparing it with a calculated rate in edge routers (a sketch of such a check is given at the end of this subsection).

• Admission control would be needed in order to prevent unpredictable behaviour – there can be no guarantees if an arbitrary number of flows can enter and leave the network at any time. This would require some signalling; for example, a flow could negotiate its service with a 'bandwidth broker' that can also say 'not admitted at this time'. Again, this function could be placed at the very edges of the network.

This approach would scale, and the per-flow service quality attained with this approach could be calculated from environment conditions (e.g. number of flows present) and an analytic model of the congestion control mechanism or framework that is used. The throughput of TCP, for instance, can rather precisely be predicted with Equation 3.7 under such circumstances. Note that the idea of combining congestion control with QoS is not a new one; similar considerations were made in (Katabi 2003) and (Harrison et al. 2002), and a more detailed sketch of one such architecture can be found in (Welzl 2005a).
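As a rough illustration of the conformance check in the list above, the following sketch compares a flow's measured sending rate against the rate that a conforming TCP would attain under the same loss and RTT. It assumes that Equation 3.7 is the well-known TCP throughput formula of Padhye et al. (the response function that TFRC is also based on); the tolerance factor and all variable names are illustrative assumptions rather than part of any specification.

/**
 * Sketch of a TCP-friendliness conformance check at the network edge,
 * assuming the TCP throughput equation of Padhye et al. All parameter
 * names and the tolerance factor are illustrative.
 */
public final class ConformanceCheck {

    /** Expected TCP throughput in bytes/s for segment size s (bytes),
     *  round-trip time rtt and retransmission timeout rto (seconds),
     *  and loss event rate p. */
    static double tcpRate(double s, double rtt, double rto, double p) {
        if (p <= 0) return Double.POSITIVE_INFINITY;   // no loss observed: no bound derivable
        double b = 1.0;                                // packets acknowledged per ACK
        double denom = rtt * Math.sqrt(2 * b * p / 3)
                     + rto * Math.min(1.0, 3 * Math.sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p);
        return s / denom;
    }

    /** A flow is flagged if it sends notably faster than a conforming TCP would. */
    static boolean conformant(double measuredRate, double s, double rtt,
                              double rto, double p) {
        double tolerance = 1.5;                        // illustrative slack factor
        return measuredRate <= tolerance * tcpRate(s, rtt, rto, p);
    }

    public static void main(String[] args) {
        // 1500-byte packets, 100 ms RTT, 400 ms RTO, 1% loss; the flow sends 500,000 bytes/s
        double allowed = tcpRate(1500, 0.1, 0.4, 0.01);
        System.out.printf("TCP-friendly rate: %.0f bytes/s, conformant: %b%n",
                allowed, conformant(500_000, 1500, 0.1, 0.4, 0.01));
    }
}

An edge device that keeps per-flow estimates of the loss rate and RTT could apply such a test periodically and, for example, move non-conforming flows out of the 'CC' service class.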
6.3 Tailor-made congestion control

Nowadays, the TCP/IP transport layer does not offer much beyond TCP and UDP – but if we assume that the more-recent IETF developments will actually be deployed, we can expect Internet application programmers to face quite a choice of transport services within the next couple of years:

• Simple unreliable datagram delivery (UDP)
  – with or without delivery of erroneous data (UDP-Lite)

• Unreliable congestion-controlled datagram delivery (DCCP)
  – with or without delivery of erroneous data
  – with a choice of congestion control mechanisms

• Reliable congestion-controlled in-order delivery of
  – a consecutive data stream (TCP)
  – multiple data streams (SCTP)

• Reliable congestion-controlled unordered but potentially faster delivery of logical data chunks (SCTP)

This is only a very rough overview: each protocol has a set of features and parameters that can be tuned – for example, the size of an SCTP data chunk (or UDP or DCCP datagram) represents a trade-off between end-to-end delay and bandwidth utilization (small packets are delivered faster but have a larger per-packet header overhead than large packets). We have already discussed TCP parameter tuning in Section 4.3.3, and both SCTP and DCCP clearly have even more knobs that can be turned.

Nowadays, the problem of the transport layer is its lack of flexibility. In the near future, this may be alleviated by DCCP and SCTP – but then, it is going to be increasingly difficult to figure out which transport protocol to use, how to use it and how to tune its parameters. The problem will not go away. Rather, it will turn into the issue of coping with transport layer complexity.

6.3.1 The Adaptation Layer

'Tailor-made Congestion Control' is a research project that just started at the University of Innsbruck. In this project, we intend to alleviate the difficulty of choosing and tuning a transport service by hiding transport layer details from applications; an underlying 'Adaptation Layer' could simply 'do its best' to fulfil application requirements by choosing from what is available in the TCP/IP stack. This method also makes the application independent of the underlying network infrastructure – as soon as a new mechanism becomes available, the Adaptation Layer can use it and the application could automatically work better.

[Figure 6.5 The Adaptation Layer. Diagram labels: Applications, Adaptation layer, Transport layer; arrows: Requirements, Traffic specification, Control of network resources, Performance measurements, Feedback]

The envisioned architecture is shown in Figure 6.5. As indicated by the arrows, an application specifies

• its network requirements (e.g. by providing appropriate weights to tune the trade-off between factors such as bandwidth, delay, jitter, rate smoothness and loss);

• the behaviour that it will show (e.g. by specifying whether it is 'greedy' (i.e. does it use all the resources it is given)).
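As a thought experiment, such a specification could be handed over roughly as in the following sketch. This is not the interface of the actual project; the class names, the weight fields and the crude selection rule are invented here purely to illustrate how an application might express its requirements and leave the choice of transport protocol to the Adaptation Layer.

/** Hypothetical requirement specification handed to the Adaptation Layer.
 *  All names and the selection rule below are invented for illustration only. */
class Requirements {
    double bandwidth, delay, jitter, smoothness, lossTolerance; // relative importance weights (0..1)
    boolean reliable;  // must every byte arrive?
    boolean greedy;    // will the application use all the resources it is given?
}

enum Transport { TCP, SCTP, DCCP, UDP_LITE, UDP }

class AdaptationLayer {
    /** Pick a transport service from what the local TCP/IP stack offers. */
    Transport choose(Requirements r) {
        if (r.reliable) {
            // Reliable delivery: TCP for a single consecutive stream, SCTP when
            // low delay matters more than the smoothness of one ordered stream.
            return (r.delay > r.smoothness) ? Transport.SCTP : Transport.TCP;
        }
        if (r.lossTolerance > 0.5) {
            // Erroneous data is still useful to the application (e.g. some codecs):
            // UDP-Lite, or DCCP if congestion-controlled datagrams are preferred.
            return r.greedy ? Transport.DCCP : Transport.UDP_LITE;
        }
        return Transport.DCCP;  // unreliable, but congestion controlled by default
    }

    public static void main(String[] args) {
        Requirements video = new Requirements();
        video.lossTolerance = 0.8;   // damaged frames are better than no frames
        video.delay = 0.9;
        System.out.println(new AdaptationLayer().choose(video));  // prints UDP_LITE
    }
}

A real implementation would, of course, also tune parameters (for example pick a DCCP congestion control mechanism, buffer sizes or packet sizes) and could revisit its choice as performance measurements arrive.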
On the basis of this information, the Adaptation Layer controls resources as follows:

• By choosing and tuning a mechanism from the stack – depending on the environment, this could mean choosing an appropriate congestion control mechanism and tuning its parameters via the DCCP protocol, making use of a protocol such as UDP-Lite, which is tailored for wireless environments, or even using an existing network QoS architecture.

• By performing additional functions such as buffering or simply choosing the ideal packet size.

It is not intended to mandate strict layering underneath the Adaptation Layer; it could also directly negotiate services with interior network elements. The Adaptation Layer needs to monitor the performance of the network. It provides QoS feedback to the application, which can then base its decisions upon this tailored high-level information rather than generic low-level performance measurements. As shown in Figure 6.6, it is yet another method to cope with congestion control state.

6.3.2 Implications

Introducing a layer above TCP/IP comes at the cost of service granularity – it will not be possible to specify an API that covers each and every possible service and at the same time ideally exploit all available mechanisms in any situation.

[Figure 6.6 Four different ways to realize congestion control: (a) congestion controlled applications, (b) DCCP, (c) congestion manager, (d) tailor-made congestion control]

On the positive side, the fact that applications could transparently use new services as they become available – much like SCTP-based TCP would provide multihoming without requiring applications to actually implement or even know about the feature – makes it possible to incrementally enhance implementations of the Adaptation Layer. This would automatically have a positive impact on the quality that is attained by users. For instance, if an ISP provides QoS to its users (who are running Adaptation Layer-enabled applications), there would be an immediate benefit; this is different from the situation of today, where someone would first have to write a QoS-enabled application that may not run in different environments.

We believe that the Adaptation Layer could effectively break the vicious circle that is described in RFC 2990 (see Page 212) and thereby add a new dimension of competitiveness to the ISP market as well as the operating system vendor market. The latter would be possible by realizing sophisticated mechanisms in the Adaptation Layer – this could even become a catalyst for related research, which would encompass the question whether it would be better to statically choose the right service or whether the Adaptation Layer should dynamically adapt to changing network characteristics. While it seems obvious that the latter variant could be more efficient, this brings about a number of new questions; for example, if the middleware switches between several TCP-friendly congestion control mechanisms depending on the state of the network, is the outcome still TCP-friendly? Since the Adaptation Layer could be aware of the network infrastructure, it could also make use of TCP-unfriendly but more-efficient transport protocols such as the ones described in Chapter 4. Would this be a feasible way to gradually deploy these mechanisms?

I have asked many questions and given few answers; while I am convinced that the Adaptation Layer would be quite beneficial, it could well be that only a small percentage of what I have described in this section is realistic.
In our ongoing project, we work towards our bold goal in very small steps, and I doubt that we will ever realize everything that I have described here. On the other hand, as we approach the end of the chapter, I would like to remind you that being controversial was the original goal; I wanted to stir up some thoughts and provoke you to come up with your own ideas, and it would surprise me if all these bold claims and strange ideas did not have this effect on some readers. This chapter is an experiment of sorts, and as such it can fail. I did my best, and I sincerely hope that you found it interesting.

Appendix A
Teaching congestion control with tools

Teaching concepts of congestion control is not an easy task. On the one hand, it is always good to expose students to running systems and let them experience the 'look and feel' of real things; on the other, congestion control is all about traffic dynamics and effects that may be hard to see unless each student is provided with a test bed of her own.

At the University of Innsbruck, we typically have lectures with 100 or more students, and accompanying practical courses with approximately 15 to 20 students per group. Every week, a set of problems must be solved by the students, and some of them are picked to present their results during a practical course. Compared to the other universities that I have seen, I would assume that our infrastructure is pretty good; we have numerous PC labs where every student has access to a fully equipped PC that runs Linux and Windows and is connected to the Internet. Still, carrying out real-life congestion control tests with these students would be impossible, and showing the effects of congestion control in an active real-life system during the lecture is a daunting task.

At first, having students carry out simulations appeared to solve the problem, but our experience was not very positive. Sadly, the only network simulator that seemed to make sense for us – ns – has quite a steep learning curve. Since we only wanted them to work on short and simple exercises that illustrate some important aspects of congestion control, students had to spend more than half of their time learning how to use the tool, while the goal was to have them spend almost all of the time with congestion control and nothing else. ns is not interactive, and the desired 'look and feel' exposure is just not there.

In order to enhance this situation somewhat, two tools were developed at the University of Innsbruck. Simplicity was the main goal of this endeavour; the tools are small Java programs that only provide the minimum amount of features that is required, making them easy to use. They are presented in the following two sections along with hints on how they can be used in a classroom. The tools can be obtained from the accompanying web site of this book (http://www.welzl.at/congestion). You are encouraged to download, use and extend them at will; any feedback is welcome.

Note that these tools were specifically designed for our environment, where we would like to show animations during the lecture and have groups of 15 or 20 students solve a problem in the classroom or as a homework (in which case it should be very simple and easy to present the outcome afterwards). If your setting allows for more flexibility (e.g.
your groups of students are smaller, or they are doing projects that take longer), you may want to make use of more sophisticated tools such as the following:

tcpdump: This shows traffic going across an interface (http://www.tcpdump.org).

ethereal: This does the same, but has a graphical user interface that can also be used to visualize tcpdump data (http://www.ethereal.com).

dummynet: This emulates a network and can be used to analyse the behaviour of an application under controlled conditions (http://freshmeat.net/projects/dummynet).

NIST Net: This is like a dummynet, but runs under Linux and comes with a graphical user interface (http://www-x.antd.nist.gov/nistnet).

A good overview of such tools can be found in (Hassan and Jain 2004).

A.1 CAVT

As we have seen in Section 2.5.1 of Chapter 2, vector diagrams are a very simple means of analysing the behaviour of congestion control mechanisms. At first sight, the case of two users sharing a single resource may appear to be somewhat unrealistic; on the other hand, while these diagrams may not suffice to prove that a mechanism works well, it is likely that a mechanism that does not work in this case is generally useless. Studying the dynamic behaviour of mechanisms with these diagrams is therefore worthwhile – but in scenarios that are slightly more complex than the ones depicted in Figure 2.4 (e.g. if one wants to study interactions between different mechanisms with homogeneous or even heterogeneous RTTs), imagining what the behaviour would look like or drawing a vector diagram on paper can become quite difficult.

This gap is filled by the Congestion-Avoidance Visualization Tool (CAVT), a simple yet powerful program to visualize the behaviour of congestion control mechanisms. Essentially being a simulator that builds upon the well-known Chiu/Jain vector diagrams, the small Java application provides an interactive graphical user interface where the user can set a starting point and view the corresponding trajectory by clicking the mouse in the diagram. This way, it is easy to test a congestion control mechanism in a scenario where the RTTs are not equal: the rate of a sender simply needs to be updated after a certain number of time units. In the current version, precise traffic information is fed to the sources and no calculations are done at routers. Also, there is no underlying queuing model; thus, assuming perfectly fluid behaviour, the state of the network during time instances t_{i-x}, t_i and t_{i+x} is seen by the source with RTT x at time instances t_i, t_{i+x} and t_{i+2x}, respectively.

[Figure A.1 Screenshot of CAVT showing an AIMD trajectory]

Figure A.1 is a screenshot of CAVT showing an AIMD trajectory; time is visualized by the line segments becoming brighter as they are drawn later in (simulated) time, but this effect is not noticeable with AIMD and equal RTTs because its convergence causes several lines to overlap. In addition to tuning individual parameters for each rate-allocation strategy, it is possible to individually change the RTTs and generate log files. Depending on the mechanism that was chosen, a varying number of parameter values with corresponding sliders is visible on each side of the drawing panel.
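The kind of trajectory shown in Figure A.1 is easy to reproduce outside of CAVT. The following sketch, which only assumes the idealized fluid model described above with equal RTTs, binary congestion feedback and arbitrarily chosen parameter values, lets two AIMD sources share one bottleneck and prints the points of the resulting vector diagram together with their distance from the optimal (fair and fully utilized) operating point.

/** Minimal fluid-model reproduction of a Chiu/Jain AIMD trajectory:
 *  two sources, one bottleneck, equal RTTs, binary congestion feedback.
 *  Capacity, increase/decrease parameters and starting rates are arbitrary. */
public final class AimdTrajectory {
    public static void main(String[] args) {
        double capacity = 10.0;          // bottleneck capacity (abstract units)
        double alpha = 0.5, beta = 0.5;  // additive increase, multiplicative decrease
        double x1 = 1.0, x2 = 8.0;       // starting rates of the two sources
        double opt = capacity / 2;       // fair and efficient operating point

        for (int t = 0; t < 50; t++) {
            boolean congested = (x1 + x2) > capacity;   // feedback seen by both sources
            double d = Math.hypot(opt - x1, opt - x2);  // distance from the optimum
            System.out.printf("t=%2d  x1=%5.2f  x2=%5.2f  d=%5.2f%n", t, x1, x2, d);
            if (congested) { x1 *= beta;  x2 *= beta; }   // multiplicative decrease
            else           { x1 += alpha; x2 += alpha; }  // additive increase
        }
    }
}

With heterogeneous RTTs, each rate would simply be updated on its own timescale, which is exactly what CAVT automates; the printed distance corresponds to the quantity that CAVT writes to its log files (Equation A.1 below).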
Log files contain the trajectory seen in the GUI as well as the evolution (time on the x-axis) of the rate of each user and the distance d of the current point p from the optimum o, which is simply the Euclidean distance:

d = \sqrt{(o_x - p_x)^2 + (o_y - p_y)^2}    (A.1)

The white space in the lower left corner is used for status and error messages. CAVT has the following features:

• A facility to choose between various existing mechanisms such as AIMD, MIMD, AIAD and CADPC (see Section 4.6.4). These mechanisms were defined with a very simple script language, which can be used for further experimentation.

• An additional window that shows time on the x-axis plotted against the rate of each user as well as the distance from the optimum during operation (enabled or disabled [...]

[The remaining text consists of disconnected fragments from later parts of the book:]

[...] August 1991, and the working group concluded in the same year. hostreq: This group, which also concluded in 1991, wrote RFC 1122 (Braden 1989), which was the first RFC to prescribe usage of the congestion control and RTO calculation mechanisms from (Jacobson 1988) in TCP.

B.3 Finding relevant documents

The intention of this appendix is to provide you with some guidance in your quest for congestion control-related [...]

[...] (fragment of the book's list of abbreviations; expansions only:) Backwards ECN; Binary Increase TCP; Congestion Avoidance with Distributed Proportional Control; Congestion Avoidance with Proportional Control; Congestion Avoidance Visualization Tool; Constant Bit Rate; Congestion Control ID; Congestion Experienced; CHOose and Keep for responsive flows, CHOose and Kill for unresponsive flows; Current Limiting Receiver; Congestion Manager; Congestion Window; Congestion Window Reduced; Choose Your Response Function; Datagram Congestion Control Protocol; Distributed Denial of Service; Dynamic-RED; (abbreviations only:) DRS, DupACK, DWDM, D-SACK, EC, ECE, ECN, ECT, EPD, EPRCA, ERICA, ETEN, EWMA, FACK, FC, FEC, FIFO, FIN, FLID-DL [...]

[...] to congestion control, having these options supported by a new header compression standard is likely to have some impact on congestion control in the Internet. The following concluded working groups are noteworthy because of their congestion control-related efforts: ecm: The 'Endpoint Congestion Management' working group wrote two RFCs: RFC 3124 (Balakrishnan and Seshan 2001), which specifies the congestion [...]

[...] other related information to the community (e.g. RFC 2488 (Allman et al. 1999a), which describes how to use certain parts of the existing TCP specification for improving performance over satellites). pcc: The 'Performance and Congestion Control' working group wrote RFC 1254 (Mankin and Ramakrishnan 1991), which is a survey of congestion control approaches in routers; among other things, it contains some interesting [...]

[Figure A.9 The congestion collapse scenario with 1 Mbps from source 1 to router 2 – plot of throughput (kbit) over time (s); footnote 9: http://www.welzl.at/congestion]

Appendix B
Related IETF work

B.1 Overview

The 'Internet Engineering Task Force (IETF)' is the standardization body of the Internet. It is open to everybody, and participation takes place [...]
[...] arbitrary congestion control mechanisms as long as their decisions are based on traffic feedback only and do not use factors such as delay (as with TCP Vegas (Brakmo et al. 1994)); this covers all the TCP-friendly mechanisms that can be designed with the framework presented in (Bansal and Balakrishnan 2001) or even the 'CYRF' framework in (Sastry and Lam 2002b). As mentioned earlier, a congestion control [...]

[...] will never be removed from the official list once they are published, they can have a different status. The three most important RFC categories are 'Standards Track', 'Informational' and 'Experimental'. While the latter two mean just what the words say (informational [...] (footnote 1: http://www.ietf.org)

[...] the only important recent one is RFC 3714 (Floyd and Kempf 2004), 'IAB Concerns Regarding Congestion Control for Voice Traffic in the Internet'. If you really need to check the RFC list for exceptions such as this one, the ideal starting point is http://www.rfc-editor.org, but for most other relevant congestion control work, the sources above – combined with the links and bibliography of this book – [...]

[...] (fragment of the book's list of abbreviations; expansions only:) Increase/Decrease with Dynamic Layering; Flow RED; General AIMD; Geostationary Earth Orbit Satellite; HighSpeed TCP; Internet Control Message Protocol; Internet Engineering Task Force; Inverse Increase/Additive Decrease; Integrated Services; Internet Protocol; Inter-Packet Gap; Internet Research Task Force; Internet Service Provider; Initial Window; Indirect-TCP; Loss-Delay Based Adjustment Algorithm; Label Distribution [...]