ADMINISTERING CISCO QoS IP NETWORKS - CHAPTER 6

Chapter 6: Queuing and Congestion Avoidance Overview

Solutions in this chapter:

■ Using FIFO Queuing
■ Using Priority Queuing
■ Using Custom Queuing
■ Using Weighted Fair Queuing
■ Using Random Early Detection

Introduction

In this chapter, we introduce the default queuing mechanisms used on interfaces of various speeds and examine the reasons for using different queuing mechanisms on different types of interfaces. You should pay particular attention to the fact that the choice of different queuing methods as the default behavior for an interface is based on bandwidth.

This chapter also introduces some basic types of queuing, such as priority and custom queuing. We will investigate, in detail, the benefits and drawbacks of these methods, and thereby provide the basis for deciding whether these queuing methods are appropriate for implementation on your network.

This chapter also introduces the theory of random discard congestion avoidance. We discuss where it might be used, where it may or may not be needed, and what the benefits and drawbacks of the random discard method of congestion avoidance are. This discussion will be continued in later chapters, as some of these technologies are the foundation of other, more advanced, methods.

Using FIFO Queuing

As network traffic reaches an ingress or egress point, such as a router interface, network devices must be able to process this traffic adequately as it is received. FIFO (first in/first out) queuing is the most basic approach to ordering traffic for proper communication. Like a line at a carnival ride, a FIFO queue places all packets in a single line as they enter the interface. Packets are processed by the router in the same order they enter the interface. No priority is assigned to any packet. This approach is often described using the "leaky bucket" analogy.
Packets are placed in the bucket from the top and "leak" out of the bucket through a controlled opening at the bottom. Why, then, is a FIFO queue necessary if no priority or packet control is applied? Why not just let packets flow as they come into the interface? The reason is simple. During the routing process, when a packet goes from one router interface to another, it often changes interface type and speed. Consider, for example, a single communication stream going from a 10BaseT Ethernet segment to a 256 Kbps serial connection. Figure 6.1 shows the FIFO queuing process for this stream. The stream encounters a speed mismatch. The Ethernet segment feeds the stream to the router at 10 Mbps, whereas the outbound serial connection sends the stream out at 256 Kbps. The FIFO queue is used to order the packets and hold them until the serial link can properly process them.

The FIFO queue enables the router to process higher speed communications exiting through a slower speed medium. In cases where the Ethernet communication consists of short bursts, the FIFO queue handles all the packets without difficulty. However, an increased amount of higher speed traffic coming from the Ethernet interface can often cause the FIFO queue to overflow. This situation is called "tail drop," because packets are dropped from the tail of the queue. The queue will continue to drop packets at the tail until it processes packets from the head, thus freeing space within the queue to accommodate new packets arriving at the tail end. Figure 6.2 shows a tail end drop situation.

The disadvantage of FIFO queuing comes from its simplicity. Since it does not have a mechanism to distinguish the packets that it handles, it has no way to ensure that it processes the packets fairly and equally.
It simply processes them in the same order that they enter the queue. This means that high traffic protocols, such as FTP (File Transfer Protocol), can use significant portions of the FIFO queue, leaving time-sensitive protocols such as Telnet with little bandwidth to operate. In such a case, the Telnet session would seem interrupted and unresponsive, since the greater share of the queue is used by the FTP transfer.

[Figure 6.1: FIFO Queue in Operation. Input packets enter a single FIFO queue and exit in arrival order.]

[Figure 6.2: FIFO Tail End Drop. The full FIFO queue discards arriving packets at the tail; input arrives at 256 Kbps while output drains at 128 Kbps.]

It is thus apparent that FIFO is a very basic queuing mechanism that allows the router to order and process packets as they compete to exit an interface. These packets may come from one or multiple other interfaces connected to the router. For example, if a router has one serial interface connected to a wide area network and two Ethernet interfaces connected to two different local IP networks, packets from both Ethernet interfaces would compete for a spot in the outgoing FIFO queue running on the serial interface.

This single queue principle is the base of all other queuing mechanisms offered by the Cisco Internetwork Operating System (IOS). All other queuing mechanisms build on this single queue principle to offer better quality of service (QoS), depending on the traffic requirements.

High Speed versus Low Speed Links

By default, the Cisco IOS uses Weighted Fair Queuing (WFQ) on any link with a speed of E1 (2.048 Mbps) or lower. This is an "invisible" feature of the IOS, as it does not show up in the configurations. If you want to use FIFO queuing on an interface of E1 speed or lower, WFQ must be manually disabled through the IOS configuration. This feature first appeared in version 11.0 of the IOS.
TIP: Cisco has good reason for placing these default settings within the IOS configuration. FIFO queuing normally is not the preferred queuing method on slow speed links. If you use FIFO on these links, you must be aware of the consequences.

When Should I Use FIFO?

FIFO may not seem like a very sophisticated, or even desirable, queuing method, considering the rich features of other queuing mechanisms offered by the Cisco IOS. However, FIFO can be a very efficient queuing method in certain circumstances. Imagine, for example, a 10BaseT Ethernet segment connected to a router that in turn connects to a wide area network (WAN) through a T3 segment (approximately 45 Mbps). In this case, there is no chance that the inbound 10 Mbps communication can overwhelm the 45 Mbps outbound pipe. The router still requires the FIFO queue to order the packets into a single line in order to feed them to the T3 interface for processing. Using a simple queuing mechanism reduces the delay experienced by the packets as the router processes them. In delay-sensitive applications such as voice or video, this can be an important factor.

QoS with FIFO

One negative consequence of packets being tail dropped is that retransmissions are required at the upper layers of the OSI model. With TCP/IP, for example, the Transport layer would detect a break in the communication through the acknowledgement process. It would then adjust the transmission window size and start sending packets in smaller numbers. This retransmission process can be controlled for our purposes through techniques such as random early detection (RED) and weighted random early detection (WRED). These techniques are used in conjunction with FIFO queuing to maximize the throughput on congested links. They will be discussed in detail later in this chapter.
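Before moving on, the FIFO and tail-drop behavior described in this section can be sketched with a bounded queue. This is a toy model for illustration (the queue depth is a made-up number, not an IOS default):

```python
from collections import deque

QUEUE_LIMIT = 6  # hypothetical queue depth, in packets

def fifo_enqueue(queue, packet):
    """Accept a packet at the tail, or tail-drop it if the queue is full."""
    if len(queue) >= QUEUE_LIMIT:
        return False  # tail drop: the arriving packet is discarded
    queue.append(packet)
    return True

# A burst of 10 packets arrives faster than the line can drain the queue.
queue = deque()
dropped = [p for p in range(10) if not fifo_enqueue(queue, p)]

# Packets leave the head in exactly the order they arrived.
serviced = [queue.popleft() for _ in range(len(queue))]
print(serviced)  # [0, 1, 2, 3, 4, 5]
print(dropped)   # [6, 7, 8, 9]
```

Note that the drops always come from the tail of the burst: the queue protects the packets that arrived first, which is precisely why a sustained overload starves late arrivals.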
Using Priority Queuing

Priority queuing (PQ) enables network administrators to prioritize traffic based on specific criteria. These criteria include protocol or sub-protocol type, source interface, packet size, fragments, or any parameter identifiable through a standard or extended access list. PQ offers four different queues:

■ low priority
■ normal priority
■ medium priority
■ high priority

Through the proper configuration of PQ, each packet is assigned to one of these queues. If no classification is assigned to a packet, it is placed in the normal priority queue.

How Does Priority Queuing Work?

The priority of each queue is absolute. As packets are processed, PQ examines the state of each queue, always servicing higher priority queues before lower priority queues. This means that as long as there is traffic in a higher priority queue, lower priority queues will not be processed. PQ, therefore, does not use a fair allocation of resources among its queues. It services them strictly on the basis of the priority classifications configured by the network administrator. Figure 6.3 shows PQ in action.

Queue Sizes

Each of these queues acts as an individual "leaky bucket" which is prone to tail discards. The default queue sizes for PQ are shown in Table 6.1. These queue sizes can be manually adjusted from 0 to 32,767 packets.

Table 6.1 Priority Queuing Default Queue Sizes

    Limit                          Size
    High priority queue limit      20 packets
    Medium priority queue limit    40 packets
    Normal priority queue limit    60 packets
    Low priority queue limit       80 packets

Why Do I Need Priority Queuing on My Network?

Priority queuing can seem like a coarse or "brute force" approach to traffic prioritization, but it allows you to give certain traffic classes absolute priority over others.
For example, many legacy systems such as mainframes use Systems Network Architecture (SNA) as their method of transport. SNA is very susceptible to delays and so would be an excellent candidate for a high priority queue. If Telnet is the core business of an enterprise, it could also be given high priority over all other traffic. This ensures that high volume protocols such as FTP or HTTP do not negatively impact business-critical applications.

[Figure 6.3: Priority Queuing in Operation. Input packets are classified into the high, medium, normal, and low priority queues before output.]

Remember that the configuration of PQ dictates how the queuing process will operate on that link. If new applications using new protocols are deployed within the networking environment, PQ will simply place these unaccounted-for protocols in the normal priority queue. The configuration of PQ should therefore be periodically reviewed to ensure the validity of the queuing configuration.

Queue Starvation

When using PQ, you must give serious consideration to your traffic prioritization. If the traffic assigned to the high priority queue is heavy, lower priority queues will never be serviced. This leads to the traffic in these queues never being transmitted, and additional traffic assigned to these queues being tail dropped. Figure 6.4 depicts such a situation.

NOTE: Priority queuing does not work with any type of tunnel interface. Make sure you remember this fact when engineering QoS in a network that includes tunnels.

[Figure 6.4: Queue Starvation in Priority Queuing. The low priority queue never gets serviced because the high priority queue is never empty.]
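The strict service order just described can be sketched in a few lines. This is a simplified model of the scheduling decision only (the traffic names are invented for the example):

```python
from collections import deque

# The four PQ levels, always examined from highest to lowest.
PRIORITIES = ["high", "medium", "normal", "low"]

def pq_dequeue(queues):
    """Return the next packet to transmit, or None if every queue is empty."""
    for level in PRIORITIES:
        if queues[level]:
            return queues[level].popleft()
    return None

queues = {level: deque() for level in PRIORITIES}
queues["low"].extend(["ftp-1", "ftp-2"])   # bulk traffic waits...
queues["normal"].append("web-1")
queues["high"].append("sna-1")             # ...while SNA jumps the line

order = []
while (pkt := pq_dequeue(queues)) is not None:
    order.append(pkt)
print(order)  # ['sna-1', 'web-1', 'ftp-1', 'ftp-2']
```

If fresh "high" packets kept arriving between calls, the lower queues would never be reached at all; that is the starvation case shown in Figure 6.4.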
Using Custom Queuing

We have seen how priority queuing allows us to assign traffic to different queues, each queue being serviced strictly according to its priority. Custom queuing (CQ) shifts the service of queues from an absolute mechanism based on priority to a round-robin approach, servicing each queue sequentially. CQ allows the creation of up to 16 user queues, each serviced in succession by the CQ process. There is also one additional queue, called queue 0, which is created automatically by the CQ process. This queue is user-configurable, but modifying it is not recommended. We will discuss this queue in greater detail later in this section. Each of the user-configurable queues, and even queue 0, represents an individual "leaky bucket" which is also susceptible to tail drops. Unlike priority queuing, however, custom queuing ensures that each queue gets serviced, thereby avoiding the potential situation in which a certain queue never gets processed.

Custom queuing gets its name from the fact that network administrators can control the number of queues in the queuing process. In addition, the number of bytes, or "byte count," for each queue can be adjusted so that the CQ process spends more time on certain queues. CQ can therefore offer a more refined queuing mechanism, but it cannot ensure absolute priority the way PQ can.

How Does Custom Queuing Work?
Custom queuing operates by servicing the user-configured queues individually and sequentially, up to a specific number of bytes each. The default byte count for each queue is 1500 bytes, so without any customization, CQ would process 1500 bytes from queue 1, then 1500 bytes from queue 2, then 1500 bytes from queue 3, and so on. Traffic can be classified and assigned to any queue through the same methods as priority queuing, namely, protocol or sub-protocol type, source interface, packet size, fragments, or any parameter identifiable through a standard or extended access list. Figure 6.5 shows CQ in action.

Through judicious use of the byte count of each queue, it is possible to perform bandwidth allocation using custom queuing. Imagine, for example, an enterprise wanting to restrict Web traffic to 25 percent of the total bandwidth, Telnet traffic to 25 percent of the total bandwidth, and the remaining 50 percent for all other traffic. They could configure custom queuing with three queues. Queue 1 would handle all Web traffic with a default byte count of 1500 bytes. Queue 2 would handle all Telnet traffic, also with a default byte count of 1500 bytes. Queue 3 would handle all remaining traffic, but it would be manually assigned a byte count of 3000 bytes. Figure 6.6 shows this CQ configuration.

In this case, CQ would process 1500 bytes of Web traffic, then 1500 bytes of Telnet traffic, and finally 3000 bytes of remaining traffic, giving us the desired 25 percent, 25 percent, 50 percent allocation. If more bandwidth is available owing to a light network traffic load, CQ can actually process more information from each queue. In Figure 6.6, if only queues 1 and 2 had traffic in them, they would each be allocated 50 percent of the total bandwidth. The byte count values indicate the bandwidth allocation in a congested situation.
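The round-robin byte-count service can be sketched as a toy model (not IOS code; the packet sizes and queue names are invented). It reproduces both the 25/25/50 split above and CQ's whole-packet behavior, where a positive remaining budget admits the next packet in full:

```python
from collections import deque

def cq_serve_queue(queue, byte_count):
    """One CQ pass over a queue: whole packets are sent while any budget
    remains, so an oversized packet at the head still goes out in full."""
    sent, budget = [], byte_count
    while queue and budget > 0:
        pkt = queue.popleft()
        sent.append(pkt)
        budget -= pkt
    return sent

def cq_round_robin(queues, byte_counts, rounds):
    """Serve each queue in turn, counting the bytes each one transmits."""
    total = {name: 0 for name in queues}
    for _ in range(rounds):
        for name, q in queues.items():
            total[name] += sum(cq_serve_queue(q, byte_counts[name]))
    return total

# Always-full queues of 500-byte packets: web, telnet, and everything else.
queues = {name: deque([500] * 200) for name in ("web", "telnet", "other")}
sent = cq_round_robin(queues, {"web": 1500, "telnet": 1500, "other": 3000}, 10)
print(sent)  # {'web': 15000, 'telnet': 15000, 'other': 30000} -> 25/25/50

# No fragmentation: a 3000-byte packet is sent whole in a 1500-byte pass,
# and a 1499-byte packet leaves 1 byte of budget, admitting the next packet.
print(cq_serve_queue(deque([3000, 100]), 1500))         # [3000]
print(cq_serve_queue(deque([1499, 1500, 1500]), 1500))  # [1499, 1500]
```

The last two calls anticipate the warning that follows: the byte count is a threshold for starting a packet, not a hard cap on bytes transmitted.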
[Figure 6.5: Custom Queuing in Action. CQ services each queue up to its byte count limit in round-robin fashion.]

[Figure 6.6: CQ with Custom Byte Count. Even though packets remain in each queue, CQ services each queue only up to the value set by its byte count.]

WARNING: Custom queuing does not perform packet fragmentation. If a packet is larger than the total byte count allocation of its queue, CQ processes the entire packet anyway. This means that a 1500-byte queue will service a 3000-byte packet in its 1500-byte interval. In an Ethernet or HDLC environment where the maximum transmission unit (MTU) size is 1500, the default byte count value of CQ is appropriate. In other environments, such as Token Ring, where the MTU can climb to 4098 bytes, using 1500-byte queues to allocate bandwidth can lead to inaccurate allocation of resources. In the previous example, increasing the byte count ratio to 4098/4098/8196 would be more appropriate. Be aware of your environment.

This warning applies in a variety of situations. We have seen that if a queue is allowed to send 1500 bytes and the first packet in the queue is 1501 bytes or more, the entire packet will be sent. However, it is also true that if the first packet is 1499 bytes and the second packet is 1500 bytes or more, the entire first packet will be sent, and because an additional 1 byte is still allowed to be transmitted, the entire second packet will also be sent.

The disadvantage of custom queuing is that, like priority queuing, you must create policy statements on the interface to classify the traffic to the queues.
If you do not create custom queuing policies on the custom queuing interface, all traffic is placed in a single queue (the default queue) and is processed on a first in/first out basis, in the same manner as a FIFO queuing interface.

Queue Sizes

As with priority queuing, each custom queue is an individual "leaky bucket." The default queue size for each CQ queue is 20 packets. Individual queue sizes can be manually adjusted from 0 to 32,767 packets.

Protocol Interactions with Custom Queuing

It is important to understand that custom queuing does not provide absolute guarantees with respect to bandwidth allocation. CQ supports network protocols, but it is also dependent on their operation. For example, consider the windowing mechanism of TCP/IP. On an Ethernet segment (MTU of 1500 bytes), if the TCP/IP transmission window is set to 1, the Transport layer will send one packet and wait for an acknowledgement (ACK) before sending another packet. If the byte count of the queue configured to handle TCP/IP is set to 3000 bytes, the queue will always remain half empty when serviced by CQ, since TCP/IP will not send more than 1500 bytes at a time. [...]

What Is Queue 0?

Queue 0 is a special queue used by the system to pass "network control packets," such as keepalive packets, signaling packets, and so forth. It is user-configurable, but modifying it is not recommended. Queue 0 has priority over all other queues and so is emptied before any user-defined queues. [...]

[...] characterized using the information in Table 6.2.

Table 6.2 Weighted Fair Queuing Flow Identification Fields

    Protocol    WFQ Flow Identification Fields
    TCP/IP      IP protocol
                Source IP address
                Destination IP address
                Source port
                Destination port
                Type of service (ToS) field
    [...]       Source network, node, and socket
                Destination network, node [...]

[...] bandwidth. Where, then, does the "weighted" factor start affecting the queuing process? The answer is: when the ToS field, or IP precedence field, is different. WFQ takes into account IP precedence and gives preferential treatment to higher precedence flows by adjusting their weight. If all packets have the default IP precedence of 0, they are each given a weight of 1 (0 + 1). The total weight is 3 (1 + 1 + 1), and each flow is given one-third of the total bandwidth. On the other hand, if two flows have an IP precedence of 0 and a third flow has a precedence of 5, the total weight is 8 (1 + 1 + 6). The first two flows are each given one-eighth of the bandwidth, whereas the third flow receives six-eighths. [...]

[...] situations where UDP traffic is predominant, because RED has no appreciable effect on it. We will see why later in this section.

TCP/IP Sliding Window

In order to fully understand how RED operates, it is important to understand the underlying mechanism that RED uses to reduce communications. Figure 6.7 shows the [...] point, TCP recovers at the last successful ACK sequence and reduces the window size in an attempt to achieve successful communication. [...]

When multiple TCP connections operate on a common link, they will all increase the size of their sliding windows as successful ACKs are received. This synchronized [...] congestion point of the link. Figure 6.8 shows the effect of RED on a TCP sliding window size.

[Figure 6.8: The Effect of RED on a TCP Sliding Window Size. The sender transmits segments 1 through 3 and receives ACK 4 with a window size of 5; it then sends segments 4 through 8, one of which is dropped by RED; the receiver answers with ACK 4 and a window size of 3; the sender retransmits segments 4 through 6 and receives ACK 7.]

[...] classification attributes. It is important to understand the mechanics behind each queuing process in order to select the one that best suits your environment. Typical use of these techniques gives priority to delay-sensitive applications such as voice and video, business-critical traffic, and traffic that [...]

[...] high priority queue and having the remaining traffic flow through the normal priority queue. Do I need to configure PQ on every router or just the edge routers?

[Figure 6.9: Priority Queuing Across Multiple Routers. Routers R1 through R5 interconnect Network A and Network B.]

A: Priority queuing, custom queuing, and [...]
