5 Asynchronous Transfer Mode (ATM)

'All this buttoning and unbuttoning'
18th century suicide note

The history of telecommunications is basically a history of technology. Advances in technology have led to new networks, each new network offering a range of new services to the user. The result is that we now have a wide range of networks supporting different services: the telex network, the telephone network, the ISDN, packet-switched data networks, circuit-switched data networks, mobile telephone networks, the leased line network, local area networks, metropolitan area networks, and so on. More recently we have seen the introduction of networks to support Frame Relay and SMDS services.

The problem is that the increasing pace of developments in applications threatens to make new networks obsolete before they can produce a financial return on the investment. To avoid this problem it has long been the telecommunications engineer's dream to develop a universal network capable of supporting the complete range of services, including of course those that have not yet been thought of. The key to this is a switching fabric flexible enough to cater for virtually any service requirements. ATM is considered by many to be as close to this as we are likely to get in the foreseeable future.

This chapter explains the basic ideas of ATM, how it can carry different services in a unified way, and how it will provide seamless networking over both the local and wide areas, i.e. Total Area Networking. Section 5.1 gives a general overview of the key features of ATM with an explanation of the underlying principles. Section 5.2 puts a bit more flesh on the skeleton. Section 5.3 looks at how SMDS and Frame Relay are carried over an ATM network, and section 5.4 looks briefly at ATM in local area networks.

Total Area Networking: ATM, IP, Frame Relay and SMDS Explained.
Second Edition. John Atkins and Mark Norris. Copyright © 1995, 1999 John Wiley & Sons Ltd. Print ISBN 0-471-98464-7; Online ISBN 0-470-84153-2.

5.1 THE BASICS OF ATM

Cell switching

The variety of networks has arisen because the different services have their own distinct requirements. But despite this variety, services can be categorised broadly as continuous bit-stream oriented, in that the user wants the remote end to receive the same continuous bit-stream that is sent; or as bursty, in that the information generated by the user's application arises in distinct bursts rather than as a continuous bit-stream. Generally speaking, continuous bit-stream oriented services map naturally on to a circuit-switched network, whereas bursty services tend to be better served by packet-switched networks. Any 'universal' switching fabric therefore needs to combine the best features of circuit switching and packet switching, while avoiding the worst.

There is also great diversity in the bit rates that different services need. Interactive screen-based data applications might typically need a few kilobits per second. Telephony needs 64 kbit/s. High-quality moving pictures may need tens of megabits per second. Future services (such as holographic 3D television or interactive virtual reality) might need many tens of megabits per second. So the universal network has to be able to accommodate a very wide range of bit rates.

The technique that seems best able to satisfy this diversity of needs is what has come to be called cell switching, which lies at the heart of ATM. In cell switching the user's information is carried in short fixed-length packets known as cells. As standardised for ATM, each cell contains a 5-octet header and a 48-octet information field, as shown in Figure 5.1.

Figure 5.1 ATM cells: the universal currency of exchange
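The fixed 53-octet layout can be sketched in a few lines of Python; this is an illustrative model only (the function name and field handling are ours, not drawn from the standard's bit-level definitions):

```python
# Sketch of the standardised ATM cell dimensions: a 5-octet header plus a
# 48-octet information field, 53 octets in all.
HEADER_OCTETS = 5
PAYLOAD_OCTETS = 48
CELL_OCTETS = HEADER_OCTETS + PAYLOAD_OCTETS  # 53

def make_cell(header: bytes, payload: bytes) -> bytes:
    """Assemble one cell, padding a short payload with zero octets."""
    if len(header) != HEADER_OCTETS:
        raise ValueError("ATM cell header must be exactly 5 octets")
    if len(payload) > PAYLOAD_OCTETS:
        raise ValueError("payload cannot exceed 48 octets")
    return header + payload.ljust(PAYLOAD_OCTETS, b"\x00")

cell = make_cell(b"\x00" * 5, b"hello")
assert len(cell) == CELL_OCTETS
```

Whatever the service, every cell on the wire has exactly this shape; only the header contents and the payload differ.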
On transmission links, both between the user and the network and between switches within the network, cells are transmitted as continuous streams with no intervening spaces. So if there is no information to be carried, empty cells are transmitted to maintain the flow. User information is carried in the information field, though for reasons that will become clear the payload that a cell carries is sometimes not quite 48 octets.

The cell header contains information that the switches use to route the cell through the network to the remote terminal. Because it is only 5 octets long, the cell header is too short to contain a full address identifying the remote terminal; instead it contains a label that identifies a connection. So cell switching, and therefore ATM, is intrinsically connection-oriented (we will see later how connectionless services can be supported by ATM). By using a different label for each connection a terminal can support a large number of simultaneous connections to different remote terminals. Different connections can support different services. Those requiring high bit rates (such as video) will naturally generate more cells per second than those needing more modest bit rates. In this way ATM can accommodate very wide diversity in bit rate.

The basic idea of ATM is that the user's information, after digital encoding if not already in digital form, is accumulated by the sending terminal until a complete cell payload has been collected; a cell header is then added and the complete cell is passed to the serving local switch for routing through the network to the remote terminal. The network does not know what type of information is being carried in a cell; it could be text, it could be speech, it could be video, it might even be telex!

Figure 5.2 Cell switching
Cell switching provides the universal switching fabric because it treats all traffic the same (more or less; read on for more detail), whatever service is being carried. Figure 5.2 illustrates the principle of cell switching. A number of transmission links terminate on the cell switch, each carrying a continuous stream of cells. All cells belonging to a particular connection arrive on the same transmission link and are routed by the switch to the desired outgoing link, where they are interleaved for transmission on a first-come-first-served basis with cells belonging to other connections. For simplicity only one direction of transmission is shown. The other direction is treated in the same way, though logically the two directions of transmission for a connection are quite separate. Indeed, as we shall see, one of the features of ATM is that the nature of the two channels forming a connection (one for each direction of transmission) can be configured independently.

Following usual packet-switching parlance, ATM connections are more correctly known as 'virtual' connections to indicate that, in contrast with real connections, a continuous end-to-end connection is not provided between the users. But to make for easier reading in what follows, the term 'virtual' is generally omitted; for connection read virtual connection.

A connection is created through the network by making appropriate entries in routing look-up tables at every switch en route. This would be at subscription time for a permanent virtual circuit (PVC) or at call set-up time for a switched virtual circuit (SVC) (for simplicity, aspects of signalling are omitted here). Each (horizontal) entry in the routing look-up table relates to a specific connection and associates an incoming link, and the label used on that link to identify the connection, with the desired outgoing link and the label used on that link to identify the connection.
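The label-swapping behaviour of a routing look-up table can be sketched as follows; the link and label names mirror those used in Figure 5.2, and the data structures are purely illustrative:

```python
# Minimal sketch of per-switch label swapping. Each routing-table entry maps
# (incoming link, incoming label) to (outgoing link, outgoing label).
routing_table = {
    ("m", "x"): ("o", "y"),
    ("m", "w"): ("p", "z"),
}

def switch_cell(in_link, in_label, payload):
    """Identify the connection from link and label, then forward the cell
    on the outgoing link with the new label for that link."""
    out_link, out_label = routing_table[(in_link, in_label)]
    return out_link, out_label, payload

# A cell with label x arriving on link m leaves on link o relabelled y.
assert switch_cell("m", "x", b"...") == ("o", "y", b"...")
```

Note that the payload passes through untouched; only the label changes hop by hop.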
Note that different labels are used on the incoming and outgoing transmission links to identify the same connection (if they happen to be the same it is pure coincidence). Figure 5.2 shows successive cells arriving on incoming link m, each associated with a different connection, i.e. they have different labels on that link. The routing table shows that the cell with incoming label x should be routed out on link o. It also shows that the label to be used for this connection on outgoing link o is y. Similarly, the incoming cell with label w should be routed out on link p, with the new label z. It is clear therefore that different connections may use the same labels, but not if they are carried on the same transmission link.

Because of the statistical nature of traffic, no matter how carefully an ATM network is designed there will be occasions (hopefully rare) when resources (usually buffers) become locally overloaded and congestion arises. In this situation there is really no choice but to throw cells away. To increase the flexibility of ATM, bearing in mind that some services are more tolerant of loss than others, a priority scheme has been added so that when congestion arises the network can discard cells more intelligently than would otherwise be the case. There is a single bit in the cell header (see Figure 5.10) known as the cell loss priority (CLP) bit that gives an indication of priority. Cells with CLP set to 1 are discarded by the network before cells with CLP set to 0. As will be seen, different cells belonging to the same connection may have different priorities.

Choice of cell length

Cells arriving on different incoming links may need to be routed to the same outgoing link. Since only one cell can actually be transmitted at a time, buffer storage is needed to hold contending cells until they can be transmitted: contending cells queue for transmission on the outgoing links.
By choosing a short cell length, the queueing delay that cells incur en route through the network can be kept acceptably short. Another important consideration that favours a short cell length is the time it takes for a terminal to accumulate enough information to fill a cell, usually referred to as the packetisation delay. For example, a digital telephone generating digitally encoded speech at 64 kbit/s takes about 6 ms to fill a cell. This delay is introduced between the speaker and the listener, in addition to any queueing delays imposed by the network. Speech is particularly sensitive to delay because of the unpleasant effects of echo that arise when end-to-end delays exceed about 20 ms.

One important effect of queueing in the network is the introduction of cell delay variation: not all cells associated with a particular connection will suffer the same delay in passing through the network. Although cells may be generated by a terminal at regular intervals (as, for example, for 64 kbit/s speech) they will not arrive at the remote terminal with the same regularity; some will be delayed more than others. To reconstitute the 64 kbit/s speech at the remote terminal a reconstruction buffer is needed to even out the variation in cell delay introduced by the network. This buffer introduces yet more delay, often referred to as depacketisation delay.

Clearly, the shorter the cell the less cell delay variation there will be and the shorter the depacketisation delay. So the shorter the cell the better. But this has to be balanced against the higher overhead that the header represents for a shorter cell, and the 53-octet cell has been standardised for ATM as a compromise. The saga of this choice is interesting and reflects something of the nature of international standardisation.
Basically, Europe wanted very short cells, with an information field of 16 to 32 octets, so that speech could be carried without the need to install echo suppressors, which are expensive. The USA on the other hand wanted longer cells, with a 64 to 128 octet information field, to increase transmission efficiency; the transmission delays on long-distance telephone circuits in the USA meant that echo suppressors were commonly fitted anyway. CCITT (now ITU-T) went halfway and agreed an information field of 48 octets, thought by many to combine the worst of both worlds!

Network impairments

The dynamic allocation of network resources inherent in cell switching brings the flexibility and transmission efficiency of packet switching, whereas the short delays achieved by having short fixed-length cells tend towards the more predictable performance of circuit switching. Nevertheless, impairments do arise in the network, as we have seen, and they play a central role in service definition and network design, as we shall see. The main impairments are as follows.

• Delay: especially packetisation delay, queueing delay and depacketisation delay, though additionally there will be switching delays and propagation delay.

• Cell delay variation: different cells belonging to a particular virtual connection will generally suffer different delays in passing through the network because of queueing.

• Cell loss: may be caused by transmission errors that corrupt cell headers, by congestion due to traffic peaks, or by equipment failure.

• Cell misinsertion: corruption of the cell header may cause a cell to be routed to the wrong recipient. Such cells are lost to the intended recipient and inserted into the wrong connection.

Controlling these impairments, in order to provide an appropriate quality of service over a potentially very wide range of services, is one of the dominating themes of ATM.
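The arithmetic behind the cell-length compromise is easy to reproduce. The short sketch below computes, for the payload sizes debated during standardisation, the packetisation delay for 64 kbit/s speech and the header overhead (a 5-octet header is assumed throughout):

```python
# Packetisation delay versus header overhead for candidate payload sizes.
# Delay = time for a 64 kbit/s source to fill one cell payload;
# overhead = header octets as a percentage of the whole cell.
HEADER = 5
SPEECH_RATE_BPS = 64_000

for payload in (16, 32, 48, 64, 128):
    delay_ms = payload * 8 / SPEECH_RATE_BPS * 1000
    overhead = HEADER / (HEADER + payload) * 100
    print(f"{payload:3d} octets: {delay_ms:5.1f} ms fill, {overhead:4.1f}% overhead")
```

The chosen 48-octet payload gives a 6 ms packetisation delay and a header overhead of about 9.4%; the small European payloads roughly halve the delay but push the overhead past 13%, while the large American payloads reduce the overhead at the cost of 8 ms or more of delay.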
The traffic contract

From what we have seen of cell switching so far it should be clear that new connections compete for the same network resources (transmission capacity, switch capacity and buffer storage) as existing connections. It is important, therefore, to make sure that creating a new connection will not reduce the quality of existing connections below what is acceptable to the users.

But what is acceptable to users? We have seen that one of the key attractions of ATM is its ability to support a very wide range of services. These will generally have different requirements, and what would be an acceptable network performance for one service may be totally unacceptable for another. Voice, for example, tends to be more tolerant of cell loss than data, but much less tolerant of delay. Furthermore, for the network to gauge whether it has the resources to handle a new connection it needs to know what the demands of that connection would be.

A key feature of ATM is that for each connection a traffic contract is agreed between the user and the network. This contract specifies the characteristics of the traffic the user will send into the network on that connection, and it specifies the quality of service (QoS) that the network must maintain. The contract places an obligation on the network: the user knows exactly what service he is paying for and will doubtless complain to the service provider if he does not get it. And it places an obligation on the user: if he exceeds the agreed traffic profile the network can legitimately refuse to accept the excess traffic, on the grounds that carrying it might compromise the quality of the existing connections and thereby breach the contracts already agreed for those connections. But provided that the user stays within the agreed traffic profile, the network should support the quality of service requested.
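Checking that a user stays within the agreed profile is commonly done with a leaky-bucket scheme akin to the standardised generic cell rate algorithm. The sketch below is a simplified illustration under our own parameter names, not the standard's definition: a cell conforms if it does not arrive too far ahead of its theoretical arrival time.

```python
# Simplified leaky-bucket (virtual-scheduling) policer. 'increment' is the
# nominal spacing between cells (the reciprocal of the agreed peak cell
# rate) and 'limit' is the tolerance allowed for cell delay variation.
class LeakyBucket:
    def __init__(self, increment: float, limit: float):
        self.increment = increment
        self.limit = limit
        self.tat = 0.0  # theoretical arrival time of the next cell

    def conforming(self, arrival: float) -> bool:
        if arrival < self.tat - self.limit:
            return False  # too early: outside the agreed profile
        self.tat = max(arrival, self.tat) + self.increment
        return True

bucket = LeakyBucket(increment=1.0, limit=0.5)
print([bucket.conforming(t) for t in (0.0, 1.0, 1.2, 1.3)])
# → [True, True, False, False]: the third and fourth cells arrive too soon
```

Non-conforming cells need not be discarded outright; the network may instead tag them (for instance by setting CLP to 1) so that they are the first to go if congestion arises.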
The contract also provides the basis on which the network decides whether it has the resources available to support a new connection; if it does not, the new connection request is refused.

Traffic characteristics are described in terms of parameters such as the peak cell rate, together with an indication of the profile of the rate at which cells will be sent into the network. The quality of service is specified in terms of parameters relating to accuracy (such as cell error ratio), dependability (such as cell loss ratio) and speed (such as cell transfer delay and cell delay variation). Some of these parameters are self-explanatory, some are not. They are covered in more detail later, but serve here to give a flavour of what is involved. We may summarise this as follows:

• for each connection the user indicates his service requirements to the network by means of the traffic contract;

• at connection set-up time the network uses the traffic contract to decide, before agreeing to accept the new connection, whether it has the resources available to support it while maintaining the contracted quality of service of existing connections; the jargon for this is connection admission control (CAC);

• during the connection the network uses the traffic contract to check that the users stay within their contracted service; the jargon for this is usage parameter control (UPC).

How this is achieved is considered in more detail in section 5.2.

Adaptation

ATM, then, offers a universal basis for a multiservice network by reducing all services to sequences of cells and treating all cells the same (more or less). But first we have to convert the user's information into a stream of cells, and of course back again at the other end of the connection. This process is known as ATM adaptation, and is easier said than done!
The basic idea behind adaptation is that the user should not be aware of the underlying ATM network infrastructure (we will look at exceptions to this later when we introduce native-mode ATM services).

Circuit emulation: an example of ATM adaptation

Suppose, for example, that the user wants a leased-line service; this should appear as a direct circuit connecting him to the remote end, i.e. the ATM network should emulate a real circuit. The user transmits a continuous clocked bit stream, at say 256 kbit/s, and expects that bit stream to be delivered at the remote end with very little delay and with very few transmission errors (and similarly in the other direction of transmission).

Figure 5.3 ATM adaptation for circuit emulation

As shown in Figure 5.3, at the sending end the adaptation function divides the user's bit stream into a sequence of octets. When 47 octets of information have been accumulated they are loaded into the information field of a cell together with a one-octet sequence number. The appropriate header is added, identifying the connection, and the cell is sent into the network for routing to the remote user as described above. This process of chopping the user's information up so that it fits into ATM cells is known as segmentation. At the receiving end the adaptation function performs the inverse operation of extracting the 47 octets of user information from the cell and clocking them out to the recipient as a continuous bit stream, a process known as re-assembly.

This is not trivial. The network will inevitably have introduced cell delay variation, which has to be compensated for by the adaptation process. The clock also has to be recreated, so that the bit stream can be clocked out to the recipient at the same rate at which it was input by the sender.
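The segmentation step just described can be sketched as follows; this is an illustration of the idea in the text (47 octets of user data plus a one-octet sequence number per cell), not an implementation of the standardised adaptation layer:

```python
# Sketch of circuit-emulation segmentation: chop the user's octet stream
# into 47-octet chunks and prefix each with a one-octet sequence number,
# giving the 48-octet cell information field.
def segment(data: bytes):
    cells, seq = [], 0
    for i in range(0, len(data), 47):
        chunk = data[i:i + 47].ljust(47, b"\x00")  # pad the final chunk
        cells.append(bytes([seq]) + chunk)
        seq = (seq + 1) % 256  # sequence number wraps at one octet
    return cells

cells = segment(bytes(100))          # 100 octets of user data
assert all(len(c) == 48 for c in cells)
assert [c[0] for c in cells] == [0, 1, 2]
```

Re-assembly at the far end is the inverse: strip the sequence number, concatenate the 47-octet payloads, and clock the result out at the original bit rate.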
The one-octet sequence number sent in every cell allows the terminating equipment to detect whether any cells have been lost in transit through the network (not that anything can be done in this case to recover the lost information, but the loss can be signalled to the application).

To overcome the cell delay variation the adaptation process uses a re-assembly buffer (sometimes called the play-out buffer). The idea is that the re-assembly buffer stores the payloads of all the cells received for that connection for a period equal to the maximum time a cell is expected to take to transit the network, which includes cell delay variation. This means that if the information is clocked out of the re-assembly buffer at the same clock rate as the original bit stream (256 kbit/s in this example) the re-assembly buffer should never empty and the original bit stream will be recreated (neglecting any loss of cells). The re-assembly buffer is also used to recreate the play-out clock. Typically a phase-locked loop would be used to generate the clock: the fill level of the buffer, i.e. the number of cells stored, is continuously compared with the long-term mean fill level to produce an error signal for the phase-locked loop, which maintains the correct clock rate.

It is clear from this simple (and simplified!) example that the adaptation process must reflect the nature of the service to be carried, and that a single adaptation process such as that outlined above will not work for all services. But it is equally clear that having a different adaptation process for every possible application is not practicable. CCITT has therefore defined a small number of adaptation processes, four in all, each applicable to a broad class of services having features in common.
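The loss-detection role of the sequence number can be sketched as below; the helper function is our own illustration, assuming the one-octet (modulo-256) numbering described above:

```python
# Sketch of re-assembly-side loss detection: a gap between consecutive
# sequence numbers (modulo 256) reveals how many cells the network
# discarded, even though the lost data itself cannot be recovered.
def count_lost(received_seqs):
    lost, expected = 0, None
    for seq in received_seqs:
        if expected is not None:
            lost += (seq - expected) % 256
        expected = (seq + 1) % 256
    return lost

assert count_lost([0, 1, 2, 3]) == 0
assert count_lost([0, 1, 4, 5]) == 2   # cells 2 and 3 were lost
assert count_lost([254, 255, 1]) == 1  # loss detected across the wrap-around
```

A real adaptation layer would report such gaps upwards so that the application can conceal or flag the missing information.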
The example shown above (circuit emulation) could be used to support any continuous bit-rate service, though the bit rate and quality of service needed would depend on the exact service required.

The ATM protocol reference model

A layered reference model for ATM has been defined as a framework for the detailed definition of standard protocols and procedures, as shown in Figure 5.4 (I.321). There are essentially three layers relating to ATM: the physical layer, the ATM layer and the ATM adaptation layer. (It should be noted that these layers do not generally correspond exactly with those of the OSI 7-layer model.) Each of the layers is composed of distinct sublayers, as shown. Management protocols, not shown, are also included in the reference model, for both layer management and plane management; for the sake of brevity these are not covered here.

Figure 5.4 The ATM protocol reference model

Figure 5.5 ATM bearer service

The physical layer

The physical layer is concerned with transporting cells from one interface through a transmission channel to a remote interface. The standards embrace a number of types of transmission channel, both optical and electrical, including SDH (synchronous digital hierarchy) and PDH (plesiochronous digital hierarchy). The physical layer may itself generate and insert cells into the transmission channel, either to fill the channel when there are no ATM cells to send or to convey physical layer operations and maintenance information; these cells are not passed to the ATM layer.

The physical layer is divided into a physical medium (PM) sublayer, which is concerned only with medium-dependent functions such as line coding, and a transmission convergence (TC) sublayer, which is concerned with all the other aspects mentioned above of converting cells from the ATM layer into bits for transmission, and vice versa for the other direction of transmission.
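The idea that the cell stream is continuous, with the physical layer padding empty transmission slots, can be sketched as follows (the slot model and the idle marker are illustrative; real idle cells are identified by a reserved header pattern):

```python
# Sketch of idle-cell insertion at the transmission convergence sublayer:
# every transmission slot must carry a cell, so slots with no ATM-layer
# cell to send are filled with idle cells, which the far end discards
# rather than passing up to the ATM layer.
IDLE = b"idle"  # stand-in marker for an idle cell

def fill_slots(atm_cells, slots):
    """Fill a fixed number of transmission slots, padding with idle cells."""
    queue = list(atm_cells)
    return [queue.pop(0) if queue else IDLE for _ in range(slots)]

sent = fill_slots([b"cell1", b"cell2"], slots=5)
assert sent == [b"cell1", b"cell2", IDLE, IDLE, IDLE]
```

The receiving transmission convergence sublayer performs the mirror operation, filtering the idle cells out so that only genuine ATM cells reach the ATM layer.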