20 Frame Relay

For data transfer, X.25-based packet switching has established itself worldwide as a standard and very reliable means. However, X.25 is not a technique suited to the higher quality and speeds of modern data communications networks, and so it is beginning to be supplanted by new techniques, among them 'frame relay'. In this chapter we start by discussing the shortcomings of X.25-based packet switching in carrying high-speed bitrates and explain how frame relay was designed to overcome these problems. We conclude with a more detailed review of the frame relay protocols themselves.

20.1 THE THROUGHPUT LIMITATIONS OF X.25 PACKET SWITCHING

The reliability of X.25 packet switching has resulted from worldwide accepted standards and the huge availability of compatible hardware and software products, enabling computer devices made by different manufacturers and strewn around the world to intercommunicate without difficulty. X.25 was the first universal data communication protocol, and it stimulated rapid growth in data communication traffic volumes because of its reliability and robustness. Paradoxically, this robustness is now leading to the demise of X.25, because one of the main limitations of packet switching based on X.25 is its unsuitability for the carriage of high speed information channels and its relative inefficiency when used in conjunction with high quality transmission networks.

When X.25 was developed in the late 1970s, the relative speed of the communicating devices was very low (in comparison with today's devices, typically under 9600 bit/s) and the quality of wide area digital lines was comparatively poor. As a result (and to their credit), X.25 packet networks are highly robust against poor line quality. X.25 networks are able to survive and recover from even extensive bit errors on digital lines. The problem is that the cost of this robustness is the very limited linespeeds which are possible, and the relative inefficiency of line utilization in the case of higher quality lines.

The problems which arise when attempting to operate the X.25 protocol at high speeds are due to the windowing technique employed by X.25 to help avoid errors. To illustrate the problem, we consider trying to use a 2 Mbit/s line to carry X.25 data over a distance of 1000 km. As Figure 20.1 illustrates, on a high speed data transmission line there are always a large number of bits in transit on the line at any point in time (because of its length); in our example around 20 000 bits or 2500 bytes (line length x bitrate / signal propagation speed). (That should blow any preconception you might have had that electricity travels so fast that we can consider sender and receiver to be in synchronism with one another!) These bits in transit on the line must be considered when designing high speed data networks, if the network is to operate efficiently.

[Figure 20.1  Bits in transit in an X.25 packet-switched data network: a pulse travels at around 10^8 m/s, so the bit length = propagation speed / bitrate = 10^8 / (2.048 x 10^6) = approximately 50 m; the number of bits in transit = line length / bit length = 1000 km / 50 m = 20 000 bits = 2500 bytes.]

X.25 lays a very high priority on the safe arrival of bits, in the correct order and without errors. One of the methods used to ensure safe arrival is the use of an acknowledgement window. Only so many packets (as defined by the window size, typically 7) may be transmitted by the sending device before an acknowledgement is received confirming safe arrival. As the typical maximum packet size is defined as 256 bytes, this means that a maximum of 1792 bytes (7 x 256) may be transmitted by the sender before an acknowledgement is returned by the receiver to confirm safe arrival. This compares with the 2500 bytes actually in transit on the line, so that even before considering the inefficiencies caused by packet overheads (the protocol control information in the X.25 header), the X.25 window constrains the efficiency of the line of Figure 20.1 to a maximum of 1792/2500 (maximum bits allowed in transit / bits actually in transit), or around 70%!
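The arithmetic behind this limit is easy to reproduce. The following is a minimal sketch (Python is used here purely for illustration and is not part of the original text), assuming the propagation speed of roughly 10^8 m/s used in Figure 20.1:

    PROPAGATION_SPEED = 1e8   # metres per second, approximate figure from Figure 20.1

    def bits_in_transit(line_length_m, bitrate_bps):
        # number of bits 'on the wire' at any instant
        return line_length_m * bitrate_bps / PROPAGATION_SPEED

    def window_efficiency(window_size, packet_bytes, line_length_m, bitrate_bps):
        # upper bound on line utilisation imposed by the acknowledgement window
        window_bits = window_size * packet_bytes * 8
        return min(1.0, window_bits / bits_in_transit(line_length_m, bitrate_bps))

    print(bits_in_transit(1_000_000, 2_048_000) / 8)        # 2560.0 - roughly the 2500 bytes quoted above
    print(window_efficiency(7, 256, 1_000_000, 2_048_000))  # 0.7 - around 70% maximum efficiency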
Simple, you might say: increase the maximum window size! Unfortunately this only generates new problems. First, the end devices need to provide much greater storage buffers for retaining copies of the sent but unacknowledged information. Second, because the window size is greater, so is the likelihood of errors within a window. The probability of the need for a retransmission of the information to eliminate the errors is thus also greater. Also, because of the increased window size, the time required for retransmission is longer. So increasing the window size may actually reduce throughput!

Today's digital transmission is several orders of magnitude better in quality than that of the 1970s, so that the heavy duty error detection and correction techniques used by X.25 have become redundant. Windowing and acknowledgement are now largely superfluous. Frame relay (or frame relaying) was one of the first techniques designed to dispense with heavy duty error detection and correction techniques. Instead of it being undertaken by the network, the job of error detection and correction or recovery is left to higher layer protocols (i.e. the end user's device, computer or software) to sort out. The frame relay network, meanwhile, may concentrate on raw information carriage and is thus more efficient and also more capable of higher information throughput. Thus for wide area computer and LAN-to-LAN connection needs of 64 kbit/s or greater, frame relay is today's preferred method. However, for bitrates above 2 Mbit/s, native ATM (see Chapter 26) should be considered.

20.2 THE NEED FOR FASTER RESPONSE DATA NETWORKS

Although the basic need to carry high bandwidth signals drove the development of the frame relay protocols, so did the need for faster responding networks. It may not be immediately obvious, but the time required to propagate even low bandwidth information across a digital network is dependent on the bitspeed employed: the higher the bitspeed, the lower the propagation delay. Even data applications with limited information transport needs appear to run faster when carried by a high speed network, even though sufficient bandwidth may already have been available. The following discussion gives two reasons why.

Let us imagine two rainwater conduits, one of small bore and one of large bore. Let us assume that the first has a throughput capacity of 5 litres/s and the second of 10 litres/s. Now let us assume that the rainfall rate is 4 litres/s. Why should I bother with the large bore conduit? The answer is that the rainfall rate is not constant.
Over the course of time the rate may vary between, say, 2 and 6 litres per second, so that during moments when the rate of rainfall exceeds 5 litres/s, water accumulates in the roof gutter rather than flowing down the conduit. The accumulation clears when the rainfall rate drops momentarily below 5 litres/s. As a result of the momentary accumulation, some of the rainwater is delayed slightly, thereby increasing the propagation delay. The analogy is relevant to data transmission, where the rate of generation of typed characters or other data to be transferred is not constant, but can fluctuate wildly. The first reason why high speed lines give better performance is that they cope better with short high speed bursts. The second reason is that frames can be conveyed more quickly, as we explain next.

Figure 20.2 illustrates a more detailed example of data transmission across a telecommunication transmission line. In this case, the carriage of the electrical signals (i.e. the waveform pulses representing individual bits of a digital signal pattern) is at a speed close to the speed of light. Thus the leading edge of an individual pulse traverses the network at around 10^8 metres/s (Figure 20.2(a)). The time required, however, to transmit an entire frame of 1 byte (8 bits) is sensitive to the transmission bitspeed. The propagation time in this case is equal to the sum of the raw propagation time and the signal duration (Figure 20.2(b)).

[Figure 20.2  Signal propagation time across a data network: (a) a single pulse travels at around 10^8 m/s; (b) the signal pattern also travels at 10^8 m/s, but the signal duration = number of bits / bitspeed.]

Taking the example of a 9600 bit/s dataline of 100 km length (as might be employed in a corporate packet switching network today), we can calculate propagation times for both cases (a) and (b) of our example:

• single bit (pulse) propagation time = 10^5 m / 10^8 m/s = 10^-3 s = 1 ms

• byte propagation time = pulse propagation time + signal duration = 1 ms + 8 bits / 9600 bit/s = approximately 1.8 ms

In other words, before enough bits (8) have been received to interpret the frame (in our case a data or ASCII character), 1.8 ms have elapsed. This compares with the 1 ms needed for conveyance of a single pulse across the line. Despite the fact that the average throughput required from the line may be far less than the 9600 bit/s available (say 2400 bit/s or 300 characters/s), the effective propagation time of characters across the line is much longer than the 1 ms that you might expect.

No human being will notice the extra 0.8 ms, you might say? Indeed they will not where a simple one-way transmission is involved with a human end user. However, where an interactive dialogue is taking place between two computers (question-answer-question-answer), then this will take around 80% longer to conduct. A human waiting for the computer's response may see a response in around 4 seconds, where previously it was around 2 seconds. Such intercomputer dialogues are the main cause of delays for modern computer software. (Typical dialogues run 'please send first character' - 'first character' - 'received first character OK, please send next character' - 'second character' - 'received second character OK...' etc.)

The perhaps surprising reality is that it may indeed make sense to use a network with 64 kbit/s transmission links rather than 9600 bit/s links even though the average throughput is only 4000 bit/s! (Reduction in byte propagation time from 1.8 ms to 1.1 ms.)
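The effect of the bitspeed on the effective delay can be checked with a short calculation. A minimal sketch (Python, illustrative only; the 10^8 m/s propagation speed is the approximate figure used above):

    def byte_propagation_time_ms(line_length_m, bitrate_bps, frame_bits=8, speed_mps=1e8):
        # effective delay = raw pulse propagation time + signal duration of the whole frame
        pulse_time = line_length_m / speed_mps        # seconds
        signal_duration = frame_bits / bitrate_bps    # seconds
        return (pulse_time + signal_duration) * 1000.0

    print(byte_propagation_time_ms(100_000, 9_600))   # about 1.8 ms on a 100 km, 9600 bit/s line
    print(byte_propagation_time_ms(100_000, 64_000))  # about 1.1 ms when the same line runs at 64 kbit/s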
The following points are thus critical in the response time performance of data networks and associated computer applications software:

• transmission line bitspeed (e.g. 9600 bit/s, 64 kbit/s, 2 Mbit/s, etc.)

• message, packet or frame length in number of bits

• the number of inter-computer interactions (request and response dialogues) necessary to complete an action before responding to the human user.

Though our example is of a lowspeed data application, similar principles apply for all types of data applications. Thus the higher the bitspeeds employed in the network, the faster the application response time.

20.3 THE EMERGENCE AND USE OF FRAME RELAY

The recent explosion in the number of LANs (local area networks, networks connecting personal computers within office buildings) and the need to interconnect LANs, as well as the growing number of client/server computing (UNIX) environments, have created the demand for high speed networking in data networks. Frame relay provides for relatively cheap wide area data communication at rates between 9600 bit/s and 2 Mbit/s, and has proved a viable alternative to leaselines, particularly in router networks.

[Figure 20.3  Typical use of frame relay to improve the wide area efficiency of LAN/router networks: routers connected via frame relay switches to a wide-area frame relay network.]

Initially, frame relay networks only supported PVC (permanent virtual channel) service between pairs of fixed end-points. The service to the user was therefore somewhat akin to a 64 kbit/s leaseline, but without the full costs of a leaseline, because statistical multiplexing in the wide area part of the network allowed resources to be shared across a number of users and therefore costs to be saved by each of them (Figure 20.3). The pricing strategy of public network operators has also encouraged the use of the frame relay service as a leaseline replacement where high speed but relatively low volume usage is required, because flat rate charging based on the committed information rate (CIR) has become the industry standard.

20.4 FRAME RELAY UNI

In the arrangement of Figure 20.3, which is typical for a frame relay network, each of the routers is connected to the frame relay network using a single connection, typically of 64 kbit/s, employing the frame relay UNI (user-network interface). Over this single physical connection, up to 1024 (2^10) logical channels (PVCs or, in the correct frame relay terminology, data links) may be connected, each to a separate end destination. These channels are available on a permanent basis, but the capacity of the wide area part of the network is only used when there are actually frames requiring to be relayed from one end of the data link to the other. In the frame header (like the HDLC header of X.25) is a numbered value identifying the logical connection to which the frame belongs. This is called the data link connection identifier (DLCI). Unlike X.25, there are no real end-to-end network functions supported at OSI layer 3. These are the functions which provide within the network for reliable end-to-end transfer. Saving these functions simplifies the frame relay protocol (in comparison with X.25), making it more efficient and faster running.

As is also shown in Figure 20.3, it is common in frame relay networks to build fully meshed networks of logical connections (PVCs) between individual routers (a triangle in our case). This circumvents the need for the routers themselves to act as transit nodes for inter-router traffic, and so improves the overall performance perceived by the LAN users without having much effect on the overall cost of either the network hardware or the transmission lines needed in the wide area network (WAN). The number of PVCs needed for a full mesh grows rapidly with the number of routers, as the sketch below illustrates.
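A minimal sketch (Python, illustrative only; the router names are invented) of how the number of PVCs in a full mesh grows with the number of routers:

    from itertools import combinations

    def full_mesh_pvcs(routers):
        # every pair of routers gets its own PVC: n * (n - 1) / 2 logical connections
        return list(combinations(routers, 2))

    sites = ["router_A", "router_B", "router_C"]
    print(len(full_mesh_pvcs(sites)))                                # 3 PVCs - the triangle of Figure 20.3
    print(len(full_mesh_pvcs([f"router_{i}" for i in range(10)])))   # 45 PVCs for 10 routers

This rapid growth in the number of PVCs to be configured and maintained is one motivation for the SVC service described in the next section.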
20.5 FRAME RELAY SVC SERVICE

The main drawback of the arrangement shown in Figure 20.3 is the management effort needed to establish and maintain the large number of PVC connections within the network. Potentially, each time a link in the network fails or a new link is added, administrative work may be necessary to reconfigure some or all of the PVCs to new, more efficient paths. To get around this problem, the Europeans in particular have driven the development of an enhancement of the frame relay UNI to include the capability for on-demand establishment of data links (i.e. switched virtual circuits, SVCs) using a dial-up procedure. Although this adds layer 3 functions to the frame relay protocol stack, these are only connection set-up and clearing functions, which do not affect the subsequent end-to-end carriage characteristics (high speed, low delay, as discussed). The main benefit of an SVC network is that individual data links need only be established when needed and may be cleared afterwards. This simplifies the network and its management, and in addition has the effect of automatically optimizing the routing of connections each time they are newly established.

20.6 CONGESTION CONTROL IN FRAME RELAY NETWORKS

The high speed of the computer devices using frame relay networks leads to the need for special measures to control network congestion, because the layer 3 protocol is almost non-existent. Figure 20.4 illustrates a case in which one of the intermediate links within the wide area or backbone part of the network is congested. As a result, frames accumulate rapidly (and unabated) in the buffer immediately preceding the congested link, as they wait to be transmitted over the link. Ultimately, the buffer will overflow and frames will be lost, so affecting all the data links sharing the congested link. Worse still, once the end user devices detect the loss of information, retransmission will commence, and the load on the network only increases further.

[Figure 20.4  Congestion in a frame relay network: excess frames overflow the buffer ahead of the congested link and are discarded, affecting traffic between the sending and receiving end user devices.]

As the connection from the sending device of Figure 20.4 to the network is not congested, the sending of frames would continue unabated, were it not for the congestion notification procedures within the frame relay protocols. Any time a frame underway within a frame relay network encounters congestion (the exceeding of a given waiting time or a given buffer queue length), the frame header is tagged with a notification message, called the forward explicit congestion notification (FECN). This notifies the receiving device and the switch to which it is connected of the congestion, which may then be communicated back to the source end by means of a backward explicit congestion notification (BECN) message. The sending device may voluntarily respond to receipt of the BECN by reducing its transmitted output, or may be forced by the network to do so. By reducing the frame transmission rate from the relevant causal sources, the congestion will ease, and the transmission rate restriction may be removed. Meanwhile, further service degradation within the network as a whole is avoided.
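The standards leave the sender's exact reaction to congestion notification open, so the following is only a rough sketch of the idea (Python, with invented names; for simplicity the receiving side is shown mirroring FECN back as BECN, and the halving back-off is an assumed policy, not a rule from the text): a congested switch tags passing frames with FECN, the condition is reported back to the source as BECN, and the sender throttles its output.

    from dataclasses import dataclass

    @dataclass
    class Frame:
        dlci: int
        payload: bytes
        fecn: bool = False    # set in the forward direction when congestion is encountered
        becn: bool = False    # set in the backward direction to warn the sender

    def forward_through_switch(frame, queue_depth, threshold=100):
        # a congested switch tags the frame rather than holding it back
        if queue_depth > threshold:
            frame.fecn = True
        return frame

    def build_reply(received, reply):
        # congestion seen in the forward direction is reported back towards the source
        if received.fecn:
            reply.becn = True
        return reply

    class ThrottlingSender:
        def __init__(self, rate_bps):
            self.rate_bps = rate_bps
        def on_returned_frame(self, frame):
            if frame.becn:
                self.rate_bps = max(self.rate_bps // 2, 1000)   # voluntary back-off; policy is implementation-specific

    sender = ThrottlingSender(128_000)
    f = forward_through_switch(Frame(dlci=100, payload=b"data"), queue_depth=250)
    sender.on_returned_frame(build_reply(f, Frame(dlci=100, payload=b"ack")))
    print(sender.rate_bps)    # 64000 - halved after the BECN warning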
A further refinement of the congestion control procedure of frame relay is provided by the committed information rate (CIR) and excess information rate (EIR) parameters. The committed information rate (CIR) is the agreed minimum bitrate to be provided by the network between the two ends of the frame relay data link. The CIR is agreed at the time of setting up the connection. Provided the frame transmission rate of the sending device is at or below the CIR, the network is not permitted to force a reduction in the frame sending rate of the sending device, and is not permitted wilfully to discard frames. However, where the sending device is exceeding the CIR at the time when a BECN message is received, the network may first request a reduction in the rate of frame transmission to the CIR. Should the reduction not be undertaken (for example, because the sending device cannot respond to the request), the network is permitted to discard the excess frames.

At times of no congestion, sending devices are permitted for defined short periods of time (called the excess burst, or excess burst duration, Be) to transmit at bitrates higher than the CIR. The maximum bitrate at which the device may send is termed the EIR (excess information rate). The EIR is always greater than or equal to the CIR. It is a management decision for the network operator how high the EIR and CIR may be set for a given connection, and usually these values are included in the contract or order for the user's connection (for a PVC) or negotiated at connection set-up time (for an SVC). The ability to handle short bursts of high speed information (above the CIR) is what makes frame relay networks attractive to data applications requiring fast response times, as we discussed earlier in the chapter.
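How a network might police these parameters can be sketched roughly as follows (Python, illustrative only; the one-second measurement interval and the simple byte-counting approach are assumptions, not taken from the text): traffic up to the CIR within the interval is accepted, traffic between the CIR and the EIR is accepted but marked discard eligible (the DE bit discussed later in this chapter), and traffic above the EIR may be discarded.

    def police_frame(frame_bytes, sent_bytes_in_interval, cir_bps, eir_bps, interval_s=1.0):
        # classify one frame against the committed and excess rates for the current interval
        committed_limit = cir_bps * interval_s / 8     # bytes allowed at the committed rate
        excess_limit = eir_bps * interval_s / 8        # bytes allowed at the excess rate
        total = sent_bytes_in_interval + frame_bytes
        if total <= committed_limit:
            return "accept"
        if total <= excess_limit:
            return "accept_de"                          # DE would be set by the first switch
        return "discard"

    # example: 64 kbit/s CIR, 128 kbit/s EIR, one-second interval
    print(police_frame(1000, 6000, 64_000, 128_000))    # within the CIR -> 'accept'
    print(police_frame(1000, 9000, 64_000, 128_000))    # above CIR, below EIR -> 'accept_de'
    print(police_frame(1000, 20000, 64_000, 128_000))   # above the EIR -> 'discard'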
20.7 FRAME RELAY NNI

As in packet-switched data networks, it is common for all the switches within a given operator's network to be purchased from and supplied by a single manufacturer. The leading manufacturers of frame relay network components are the Stratacom and Cascade companies and Northern Telecom (Nortel). As the switches are supplied by a single manufacturer, it is not necessary to use a standardized interface between the nodes within the network. As a result, the individual manufacturers have tended to develop extra congestion controls, network management and service features over and above those required by the frame relay standards, to try to improve the market value of their products. All very well; except, of course, when a given frame relay connection needs to be switched across two different networks or sub-networks supplied by different manufacturers (as in Figure 20.5). For this a standardized interface, the NNI (network-network interface), is required.

[Figure 20.5  Frame relay NNI (network-network interface): frame relay network A (manufacturer A) and frame relay network B (manufacturer B) interconnected via the NNI, with a UNI at each outer edge.]

Although the frame relay NNI allows for the interconnection of sub-networks of switches supplied by different manufacturers, and although reliable data transfer is possible, it is true that the congestion control and management capabilities of the combined network are much more restricted than the capabilities available within each of the sub-networks independently. This reflects the relative youth of the frame relay NNI standard.

20.8 FRAME FORMAT

Figure 20.6 illustrates the format of a single frame. It consists of five basic information fields, much like the data link layer format of X.25 (i.e. HDLC).

[Figure 20.6  Frame format for frame relay: flag, address field, control field, information field, frame check sequence (FCS).]

The flag marks the beginning of the frame, delineating it from the previous frame. The address field carries the DLCI (data link connection identifier), the equivalent of the logical channel number (LCN) of HDLC (i.e. it is an OSI layer 2 address). In addition, the address field also contains control information (command/response), the forward and backward explicit congestion notifications (FECN and BECN) discussed previously in this chapter, the discard eligibility (DE) indication and some extra fields used for extended addressing. The control field contains supervision indications for the connection, like receiver ready (RR), receiver not ready (RNR), etc. For user information frames, this field indicates the length of the frame. Such controls were discussed more fully in Chapter 18 on X.25. They enable the two end devices to coordinate with one another for the communication. The information field contains the user information, which may be up to 65 536 bytes in length. Finally, the frame check sequence (FCS) is a cyclic redundancy check (CRC) code providing for error detection. We discussed CRC codes in Chapter 9.

20.9 ADDRESS FIELD FORMAT

Figure 20.7 illustrates the two octet (i.e. two byte) address field used in frame relay. The main function of the address field is to carry the data link connection identifier (DLCI), which identifies the end device to which the frame is to be sent (an OSI layer 2 address). The DLCI is a 10 bit field, allowing up to 1024 separate virtual connections to share the same physical connection.

[Figure 20.7  Address field format for frame relay (two octets, bit 1 transmitted first). Abbreviations: BECN = backward explicit congestion notification; C/R = command/response bit; DE = discard eligibility; DLCI = data link connection identifier; EA = extended addressing; lsb/msb = least/most significant bits.]

Should congestion be encountered by a given frame during its transit through the network, the affected intermediate switch will set the FECN (forward explicit congestion notification) bit, as we discussed earlier in the chapter. This alerts the receiving device to the congestion. In response, returned frames are marked using the BECN (backward explicit congestion notification). This allows the flow control procedures to be undertaken. Should congestion become so serious that frames need to be discarded, then frames marked with the discard eligibility (DE) bit set to '1' will be discarded first. The DE bit is set to '1' by the first frame relay switch (near the origin) on excess frames (i.e. those causing the information rate to exceed the committed information rate (CIR)).
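A short sketch of packing and unpacking the two-octet address field may help fix the ideas (Python, illustrative only). The bit positions used here, with the 10 bit DLCI split 6 + 4 across the two octets alongside the C/R, FECN, BECN, DE and EA bits, follow the usual Q.922 layout and are assumed rather than taken from the garbled figure; check them against Figure 20.7.

    def pack_address(dlci, cr=0, fecn=0, becn=0, de=0):
        # build the two-octet frame relay address field (assumed Q.922 layout)
        if not 0 <= dlci < 1024:
            raise ValueError("DLCI is a 10 bit field")
        octet1 = ((dlci >> 4) << 2) | (cr << 1) | 0                               # 6 msb of DLCI, C/R, EA = 0
        octet2 = ((dlci & 0xF) << 4) | (fecn << 3) | (becn << 2) | (de << 1) | 1  # 4 lsb of DLCI, FECN, BECN, DE, EA = 1
        return bytes([octet1, octet2])

    def unpack_address(two_octets):
        o1, o2 = two_octets
        return {"dlci": ((o1 >> 2) << 4) | (o2 >> 4), "cr": (o1 >> 1) & 1,
                "fecn": (o2 >> 3) & 1, "becn": (o2 >> 2) & 1, "de": (o2 >> 1) & 1}

    print(unpack_address(pack_address(100, fecn=1)))
    # {'dlci': 100, 'cr': 0, 'fecn': 1, 'becn': 0, 'de': 0}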
20.10 ITU-T RECOMMENDATIONS PERTINENT TO FRAME RELAY

The following ITU-T recommendations define frame relay.

• Recommendation I.233 describes the frame relay service.

• Recommendation I.122 defines the framework of recommendations which specify frame relay, referring to the complete list of relevant recommendations.

• Recommendation Q.922 is perhaps the most important. It defines the core aspects of frame relay, specifically the data link procedure (i.e. frame format, address field, etc.).

• Recommendation I.370 defines the congestion management procedures.

• Recommendation Q.933 defines the signalling procedures used to set up switched virtual connections. This recommendation is not relevant for the permanent virtual circuit (PVC) service.

Figure 20.8 shows the layered protocol structure of frame relay.

[Figure 20.8  Protocol stack for frame relay: physical layer; link layer, Q.922 (core aspects); network layer, Q.933 (signalling); higher layer information carried end-to-end. Note: the Q.933 protocol is the network layer protocol used for establishing switched connections (SVCs); it is not used in the PVC (permanent virtual connection) service.]

20.11 FRAD (FRAME RELAY ACCESS DEVICE)

Frame relay has historically been offered by public telecommunication operators as a cheap alternative to a leaseline in cases where high bitrates were desirable but the number of hours of usage per day was relatively low. To access a frame relay network, a