Optical Networks: A Practical Perspective - Part 42


CLIENT LAYERS OF THE OPTICAL LAYER

It is quite common to use multiple overlaid rings, particularly in backbone networks, each operating over a different wavelength provided by an underlying optical layer. Two types of ring architectures are used: unidirectional path-switched rings (UPSRs) and bidirectional line-switched rings (BLSRs). The BLSRs can use either two fibers (BLSR/2) or four fibers (BLSR/4). We will discuss these architectures and the protection mechanisms that they incorporate in detail in Chapter 10. In general, UPSRs are used in the access part of the network to connect multiple nodes to a hub node residing in a central office, and BLSRs are used in the interoffice part of the network to interconnect multiple central offices.

Another major component in the SONET infrastructure is a digital crossconnect (DCS). A DCS is used to manage all the transmission facilities in the central office. Before DCSs arrived, the individual DS1s and DS3s in a central office were manually patched together using a patch panel. Although this worked fine for a small number of traffic streams, it is impossible to manage today's central offices, which handle thousands of such streams, using this approach. A DCS automates this process and replaces a patch panel by crossconnecting these individual streams under software control. It also does performance monitoring and has grown to incorporate multiplexing as well. DCSs started out handling only PDH streams but have evolved to handle SONET streams as well. Although the overall network topology including the DCSs is a mesh, note that only rings have been standardized so far. A variety of DCSs are available today, as shown in Figure 6.9. Typically, these DCSs have hundreds to thousands of ports.

The term grooming refers to the grouping together of traffic with similar destinations, quality of service, or traffic type.
It includes multiplexing of lower-speed streams into higher-speed streams, as well as extracting lower-speed streams from different higher-speed streams and combining them based on specific attributes. In this context, the type of grooming that a DCS performs is directly related to the granularity at which it switches traffic. If a DCS is switching traffic at granularities of DS1 rates, then we say that it grooms the traffic at the DS1 level. At the bottom of the hierarchy is a narrowband DCS, which grooms traffic at the DS0 level. Next up is a wideband DCS, which grooms traffic at DS1 rates, and then a broadband DCS, which grooms traffic at DS3/STS-1 rates. These DCSs typically have interfaces ranging from the grooming rate to much higher-speed interfaces. For instance, a wideband DCS will have interfaces ranging from DS1 to OC-12, while a broadband DCS will have interfaces ranging from DS3 to OC-48 or OC-192. Today we are also seeing a new generation of DCSs that groom at DS3 rates and above, with primarily high-speed optical interfaces. While such a box could be called a broadband DCS, it is more commonly called an optical crossconnect. However, we also have other types of optical crossconnects that groom traffic at STS-48 rates, and yet others that use purely optical switch fabrics and groom traffic in units of wavelengths or more.

Figure 6.9 Different types of crossconnect systems. (The figure shows the grooming hierarchy: all-optical crossconnects grooming at wavelength, waveband, or fiber granularity; optical crossconnects at STS-48; broadband DCSs at DS3; wideband DCSs at DS1; and narrowband DCSs at DS0.)

Instead of having this hierarchy of crossconnect systems, why not have a single DCS with high-speed interfaces, which grooms at the lowest desired rate, say, DS0? This is not possible due to practical considerations of scalability, cost, and footprint. For instance, it is difficult to imagine building a crossconnect with hundreds to thousands of 10 Gb/s OC-192 ports that grooms down to the DS1 level.
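The grooming hierarchy just described can be summarized in a small sketch. This is illustrative only: the rate values are the nominal rates of the named signals, and the table and function names are our own, not from any standard.

```python
# Illustrative sketch of the crossconnect grooming hierarchy of Figure 6.9.
GROOMING_HIERARCHY = [
    # (crossconnect type, grooming granularity, approx. granularity in Mb/s)
    ("all-optical crossconnect", "wavelength/waveband/fiber", 2488.32),
    ("optical crossconnect",     "STS-48",                    2488.32),
    ("broadband DCS",            "DS3/STS-1",                 44.736),
    ("wideband DCS",             "DS1",                       1.544),
    ("narrowband DCS",           "DS0",                       0.064),
]

def coarsest_dcs_for(rate_mbps):
    """Return the highest (coarsest) crossconnect in the hierarchy whose
    grooming granularity is no larger than the given stream rate."""
    for name, unit, granularity in GROOMING_HIERARCHY:
        if granularity <= rate_mbps:
            return name, unit
    return "narrowband DCS", "DS0"
```

For example, a DS3 stream (44.736 Mb/s) would be groomed by a broadband DCS, while a DS1 (1.544 Mb/s) falls to a wideband DCS.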
In general, the higher the speed of the desired interfaces on the crossconnect, the higher up it will reside in the grooming hierarchy of Figure 6.9. DCSs can also incorporate ADM functions and perform other network functions such as restoration against failures, the topic of Chapter 10.

6.2 ATM

Voice and data networks have traditionally been separate even though almost the entire telephone network is digital. ATM (asynchronous transfer mode) is a networking standard that was developed with many goals, one of which was the integration of voice and data networks. An ATM network uses packets, or cells, with a fixed size of 53 bytes; this packet size is a compromise between the conflicting requirements of voice and data applications. A small packet size is preferable for voice since the packets must be delivered with only a short delay. A large packet size is preferable for data since the overheads involved in large packets are smaller. Of the 53 bytes in an ATM packet, 5 bytes constitute the header, which is the overhead required to carry information such as the destination of the packet. ATM networks span the whole gamut from local-area networks (LANs) to metropolitan-area networks (MANs) to wide-area networks (WANs). One of the key advantages of ATM is its ability to provide quality-of-service guarantees, such as bandwidth and delay, to applications even while using statistical multiplexing of packets to make efficient use of the link bandwidth (see Chapter 1). ATM achieves this by using a priori information about the characteristics of a connection (say, a virtual circuit), for example, the peak and average bandwidth required by it. ATM uses admission control to block new connections when necessary to satisfy the guaranteed quality-of-service requirements.
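The 53-byte compromise can be made concrete with two quick calculations: the fraction of each cell consumed by the 5-byte header, and the time needed to fill one cell payload from a standard 64 kb/s voice stream (the packetization delay that argues for small cells).

```python
CELL_SIZE = 53                            # bytes per ATM cell
HEADER_SIZE = 5                           # bytes of header
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE    # 48 bytes of payload

# Header overhead: fraction of each cell spent on the header (~9.4%).
overhead = HEADER_SIZE / CELL_SIZE

# Packetization delay for voice: time to fill one cell payload from a
# 64 kb/s PCM voice stream. Larger cells would mean longer delays.
VOICE_RATE_BPS = 64_000
fill_delay_s = (PAYLOAD_SIZE * 8) / VOICE_RATE_BPS   # 6 ms
```

So a full 48-byte payload costs 6 ms of packetization delay at voice rates, while the fixed header costs roughly 9.4% of the link bandwidth.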
Another advantage of ATM is that it employs switching even in a local-area environment, unlike other LAN technologies like Ethernets, token rings, and FDDI, which use a shared medium such as a bus or a ring. This enables it to provide quality-of-service guarantees more easily than these other technologies. The fixed size of the packets used in an ATM network is particularly advantageous for the development of low-cost, high-speed switches.

Various lower, or physical, layer standards are specified for ATM. These range from 25.6 Mb/s over twisted-pair copper cable to 622.08 Mb/s over single-mode optical fiber. Among the optical interfaces is a 100 Mb/s interface whose specifications, such as transmit power, maximum allowed attenuation, and line coding, are identical to those of FDDI, which we have described. A 155.52 Mb/s optical interface that operates over distances up to 2 km using LEDs over multimode fiber in the 1300 nm band is also defined. Using the specified minimum transmit and receive powers, the loss budget for this interface is 9 dB. The line code used in this case is the (8, 10) line code specified by the Fibre Channel standard.

These two interfaces are called private user-network interfaces in ATM terminology, since they are meant for interconnecting ATM users and switches in networks that are owned and managed by private enterprises. A number of public user-network interfaces, which are meant for connecting ATM users and switches to the public or carrier network, are also defined. In these latter interfaces, ATM uses either PDH or SONET/SDH as the immediately lower layer. These interfaces are defined at many of the standard PDH and SONET/SDH rates shown in Tables 6.1 and 6.2, respectively. Among these are DS3, STS-3c, STS-12c, and STS-48c interfaces. In the terminology of the ATM standards, since the layer below ATM is called the physical layer, these interfaces to PDH and SONET/SDH are called physical layer interfaces.
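A quick sanity check on the (8, 10) line code mentioned above: mapping every 8 data bits into 10 line bits inflates the signaling rate by a factor of 10/8, so a 155.52 Mb/s interface runs at 194.4 Mbaud on the fiber. This is a sketch of the coding overhead only; the exact framing of the interface is not modeled.

```python
def line_rate_mbaud(data_rate_mbps, data_bits=8, line_bits=10):
    """Line (baud) rate after an (8, 10)-style block line code:
    every `data_bits` of payload become `line_bits` on the fiber."""
    return data_rate_mbps * line_bits / data_bits
```

The same calculation gives 125 Mbaud for the 100 Mb/s FDDI-compatible interface if an equivalent 8-out-of-10 expansion is assumed.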
On the other hand, in the classical layered view of networks, which we discussed in Section 1.4, PDH and SONET/SDH must be viewed as data link layers when ATM is viewed as a network layer.

6.2.1 Functions of ATM

ATM data can either be transmitted from an ATM user to an ATM network across a user-to-network interface (UNI), or the data can be transmitted across a network-to-network interface (NNI) between two ATM switches. Of the 53 bytes in an ATM cell, 48 bytes form the payload, that is, carry information sent from the higher layers, and 5 bytes constitute the header inserted by the ATM layer.

Figure 6.10 The header structure of ATM cells across (a) the UNI and (b) the NNI. The GFC field is used for flow control across the UNI. The VPI and VCI fields are used for forwarding the cells within the network. PT indicates the payload type and CLP is the cell loss priority bit. The HEC field provides error checking for the ATM header.

The structure of the 5-byte ATM header is slightly different for the UNI and NNI. The two headers are shown in Figure 6.10. The fields in the ATM header are as follows.

- GFC or Generic Flow Control: 4 bits on the UNI, not present on the NNI.
- VPI or Virtual Path Identifier: 8 bits on the UNI, 12 bits on the NNI.
- VCI or Virtual Circuit Identifier: 16 bits.
- PT or Payload Type: 3 bits.
- CLP or Cell Loss Priority: 1 bit.
- HEC or Header Error Control: 8 bits. The HEC constitutes a CRC on the 5 ATM header bytes and is used to detect corrupted ATM cells.

The functions of each of these fields are described in the following sections.

Connections and Cell Forwarding

ATM establishes a connection between two end points for the purpose of transferring data between them. This is unlike IP (which we study in the next section), which transfers data in a connectionless manner. ATM connections are termed virtual channels and are assigned a virtual channel identifier (VCI).
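The UNI field widths of Figure 6.10(a) (GFC 4 bits, VPI 8, VCI 16, PT 3, CLP 1, followed by the 8-bit HEC) can be captured in a small packing sketch. Field names follow the text; the HEC is taken as a given byte here rather than computed.

```python
def pack_uni_header(gfc, vpi, vci, pt, clp, hec):
    """Pack the 5-byte ATM UNI cell header of Figure 6.10(a).
    Field widths: GFC=4, VPI=8, VCI=16, PT=3, CLP=1, then HEC=8 bits."""
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    return word.to_bytes(4, "big") + bytes([hec])

def unpack_uni_header(hdr):
    """Recover the fields from a 5-byte UNI header."""
    word = int.from_bytes(hdr[:4], "big")
    return {
        "gfc": (word >> 28) & 0xF,
        "vpi": (word >> 20) & 0xFF,
        "vci": (word >> 4) & 0xFFFF,
        "pt":  (word >> 1) & 0x7,
        "clp": word & 0x1,
        "hec": hdr[4],
    }
```

The NNI variant of Figure 6.10(b) would simply drop the GFC field and widen the VPI to 12 bits.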
The VCI for a connection is unique for each link that the ATM connection traverses between its end points but can vary from link to link on the path, as illustrated in Figure 6.11(a). For example, the top connection has a VCI of a1, a2, and b on the three links it traverses. The VCIs for each connection on every link of the path are determined at the time of connection setup and released when the connection is torn down. Each node (switch) maintains a VCI table as illustrated in Figure 6.11(b). The table specifies, for each incoming VCI, the outgoing link and the outgoing VCI.

Figure 6.11 The use of ATM VCIs for cell forwarding across a path. The ATM switches use the VCI to determine the outgoing link for a cell. The switches also rewrite the VCI field with the value assigned to the virtual channel on the outgoing link. (a) Illustration of the cell forwarding and VCI swapping. (b) The VCI table maintained at node 1 of (a):

  Incoming VCI | Outgoing link | Outgoing VCI
  a1           | 1 to 2        | a2
  c1           | 1 to 2        | c2

For example, at node 1, incoming cells with a VCI of a1 are sent on the link 1-2 with a VCI of a2.

Virtual Paths

There could be millions of virtual channels sharing a link. Looking up a VCI table larger than 2^16 = 65,536 entries for forwarding every single cell is expensive. Thus we need to have some mechanism for bundling or aggregating virtual channels for the purpose of forwarding. It is quite likely that thousands of virtual channels will have the same path, if not end to end, at least over significant parts of the network. This property of virtual channels can be used for aggregation and is accomplished by the use of VPIs. The use of VPIs can be understood through the following example. Consider Figure 6.12. Here we have four links, connecting the nodes 0, 1, 2, and 3, as shown. The two virtual circuits shown share the links 0-1 and 1-2.
These virtual channels can be assigned a common VPI on each of these links (which can be, and generally is, different on the two individual links). For example, a VPI of x can be assigned on link 0-1, and a VPI of y on link 1-2. The set of two links constitutes a virtual path in the network, with node 0 constituting the beginning of the virtual path, and node 2 constituting the end of the virtual path. All cells belonging to any virtual circuit assigned to this path are routed on these links based on the smaller VPI value. When the cells reach the end of the virtual path, node 2 in this example, they are again forwarded based on the VCI values. Simply put, the virtual channels treat each virtual path as a segment in their route between the source and destination: the switches within a virtual path forward cells based only on the VPI field. The use of the two-level labels, VPI and VCI, simplifies the cell forwarding process and enables the development of cost-effective ATM switches.

Figure 6.12 The use of ATM VPIs for simplifying cell forwarding across a shared route segment. Virtual channels sharing a common route segment are assigned the same VPI values on the links of this segment, and routing within this segment is based on the smaller VPI field rather than on the VCI field. (a) The two virtual channels are assigned the same VPIs, x and y, on the links 0-1 and 1-2, respectively. (b) The switching at node 1 is now based on the VPI field and thus results in a smaller table, enabling more efficient switching.

If a single field were used, it would be 24 bits long across the UNI and 28 bits long across the NNI. Such a large field would make the cell forwarding process expensive.
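The two-level lookup can be sketched with the labels of Figure 6.12 (the node, link, and outgoing label names are illustrative): a switch interior to the virtual path consults only a small VPI table and leaves the VCI untouched, while the switch at the end of the path forwards on the VCI again.

```python
# Node 1 lies inside the virtual path: it switches on the VPI alone.
vpi_table = {("0-1", "x"): ("1-2", "y")}

# Node 2 ends the virtual path: it switches on the VCI again.
# (These outgoing labels are made up for illustration.)
vci_table = {("1-2", "a"): ("2-3", "b")}

def forward_interior(in_link, vpi, vci):
    """VPI switching within a virtual path; the VCI passes through."""
    out_link, out_vpi = vpi_table[(in_link, vpi)]
    return out_link, out_vpi, vci

def forward_path_end(in_link, vci):
    """At the end of the virtual path, forwarding reverts to the VCI."""
    out_link, out_vci = vci_table[(in_link, vci)]
    return out_link, out_vci
```

Note how the interior table has one entry per virtual path, not one per virtual channel, which is the source of the savings described above.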
Another advantage of the use of virtual paths is that it enables the creation of logical links between nodes: the virtual path between two nodes is treated like a logical link by the virtual channels. In the example of Figure 6.12, the virtual path from node 0 to node 2 is treated as a logical link by the virtual channels.

6.2.2 Adaptation Layers

ATM uses fixed-size cells for transport, but applications using ATM either are continuous media such as voice or video, or use variable (and large) packets like IP. In this case, it is necessary to map the user data (voice, video, IP packets) into ATM cells. This is accomplished by an ATM adaptation layer (AAL). The main function of an AAL is segmentation and reassembly (SAR): an AAL segments the user data at the source into ATM cells and reassembles the ATM cells into user data at the destination. Four ATM adaptation layers, AAL-1, AAL-2, AAL-3/4, and AAL-5, are described in ITU recommendation I.363. (AAL-3 and AAL-4 started life separately but have since been merged into a single AAL.) We briefly describe AAL-1 and AAL-5.

AAL-1

AAL-1 is meant for the transport of constant bit rate data such as circuits, voice, and video. Here, the source can be considered to send a continuous stream of data. This data is segmented by AAL-1 into 47-byte AAL payloads. AAL-1 adds a 1-byte header, containing a sequence number field, and sends the resulting 48-byte packet, which constitutes the ATM payload, to the ATM layer for transport to the peer AAL-1 process at the destination node in the network. While the sequence number field is protected by a CRC (4 bits of SN are protected by a 3-bit CRC and a 1-bit parity check), the 47-byte payload is unprotected. This is considered adequate for the circuit emulation and voice applications that AAL-1 is designed to support.

AAL-5

AAL-5 is designed to transport variable-sized packets, up to 2^16 = 65,536 bytes in length, over an ATM network.
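AAL-1's segmentation, described above, can be sketched as follows. This is a simplification: the real 1-byte header also carries the CRC and parity protection over the sequence number, which is omitted here.

```python
def aal1_segment(stream: bytes):
    """Cut a constant-bit-rate byte stream into 48-byte ATM payloads:
    a 1-byte AAL-1 header (sequence number only, simplified) plus 47 data
    bytes. The final chunk is zero-padded purely for illustration."""
    cells = []
    for seq, i in enumerate(range(0, len(stream), 47)):
        chunk = stream[i:i + 47].ljust(47, b"\x00")
        # Real AAL-1 protects the SN with a 3-bit CRC and a parity bit.
        cells.append(bytes([seq % 16]) + chunk)
    return cells
```

A 100-byte stretch of stream thus yields three 48-byte ATM payloads, each carrying 47 bytes of data.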
Its most significant use is for the transport of IP packets over an ATM network. AAL-5 segments the user packets into cells but does not add any overhead (AAL header or trailer) in every cell. Instead, it uses the Payload Type field in the ATM header to indicate whether a cell is the last cell of a segmented IP packet or not. If a cell is the last cell of a segmented IP packet, the last 8 bytes of the cell constitute the AAL-5 trailer and contain the length of the IP packet and a CRC covering the entire IP packet. Thus, in all but one cell, the AAL-5 payload is equal to the 48-byte ATM payload, and AAL-5 has lower overhead compared to AAL-1. Also note that AAL-5 provides error detection for its payload through the use of a CRC, whereas AAL-1 does not.

6.2.3 Quality of Service

The primary motivation for the use of ATM is that it is capable of providing quality-of-service (QoS) guarantees for connections. These guarantees take the form of bounds on cell loss, cell delay, and jitter. ATM is able to provide such guarantees through a combination of traffic shaping and admission control. Roughly speaking, this works as follows:

1. Traffic Shaping: ATM requires that all user traffic adhere to a contract that has been established between the user and the network. This contract usually specifies the peak cell rate, the average cell rate, and the burst size (number of consecutive cells at the peak cell rate) that the user can transfer across the UNI. The ATM network may monitor these contracted parameters for each connection across the UNI and can drop those cells that violate this contract. Alternatively, it can admit the violating cells but mark the CLP bit for these cells so that they are preferentially dropped in the event of congestion. As a result of this, ATM can carefully control the traffic from each connection that enters the network. The network's half of this bargain is the QoS guarantees that it provides to the user in terms of cell loss, delay, and jitter.
2. Admission Control: Based on the knowledge of the user traffic characteristics that are enforced through traffic shaping, the ATM network can determine the set of connections it can admit without violating the guaranteed QoS for the connections when the cells from these connections are transferred across the network. A new connection will not be admitted if it would potentially result in the violation of QoS guarantees provided to connections that have already been established.

Based on the QoS parameters that the network can guarantee (cell loss, delay, jitter) and the traffic parameters that the user can specify (peak cell rate, average cell rate, burst size), ATM identifies a number of service classes to which a connection can belong. Among these are the constant bit rate (CBR) and the unspecified bit rate (UBR) service classes. A CBR connection specifies only the peak cell rate and is guaranteed a specified cell loss, delay, and jitter. A UBR connection also specifies only the peak cell rate but has no QoS guarantees. AAL-1 has been designed specifically to support CBR connections, whereas AAL-5 is used for UBR connections.

Another aspect of guaranteeing QoS, in addition to traffic shaping and admission control, is the use of queueing policies. ATM uses sophisticated queueing techniques to ensure that the QoS guarantees for each service class are met in the face of misbehaving traffic from other service classes. ATM also uses sophisticated mathematical techniques to determine the admission control policy so that QoS guarantees are met.

6.2.4 Flow Control

ATM also provides a mechanism to control the traffic from a user, not based on a prespecified contract, but based on feedback about congestion levels in the network. Such a mechanism is applicable to some service classes designed primarily for data traffic, such as file transfers, which are capable of being flow controlled (but not for CBR).
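The contract policing described under traffic shaping above can be sketched as a leaky-bucket check that marks, rather than drops, nonconforming cells by setting their CLP bit. The parameterization below is a simplification of the standardized generic cell rate algorithm (GCRA); the function and parameter names are ours.

```python
def police(arrivals, peak_rate, bucket_depth):
    """Leaky-bucket policer sketch: cells arriving faster than the
    contracted peak rate (beyond a small tolerance) get CLP set to 1.
    `arrivals` is a list of cell arrival times in seconds."""
    increment = 1.0 / peak_rate          # ideal inter-cell spacing
    limit = bucket_depth * increment     # tolerance for bursts
    tat = 0.0                            # theoretical arrival time
    marked = []
    for t in arrivals:
        if t >= tat - limit:             # conforming cell
            tat = max(tat, t) + increment
            marked.append((t, 0))        # CLP = 0
        else:
            marked.append((t, 1))        # nonconforming: mark CLP = 1
    return marked
```

With a peak rate of 1 cell/s and a one-cell burst tolerance, a burst of three simultaneous cells has its third cell marked, while a later on-schedule cell conforms again.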
The flow control is implemented across the UNI using the GFC bits in the ATM UNI header. Using messages encoded by these bits, the ATM network can instruct the user across the UNI whether data can be transmitted, or if data transmission should be halted.

6.2.5 Signaling and Routing

While the VCI and VPI fields are used for forwarding ATM cells on a given route, the determination of this route is the responsibility of a routing protocol. The routing protocols used in ATM networks are the PNNI (private network-to-network interface) and B-ICI (broadband intercarrier interface) protocols standardized by the ATM Forum. Here we provide a brief overview of PNNI routing.

The goal of PNNI routing is to determine a path through the network from the source to the destination. This path should be capable of meeting the QoS requirements of the user. Each link in the network is characterized by a set of parameters, which describes the state of the link. Examples of link state parameters include cell loss, maximum cell delay, and available link bandwidth. Another parameter for each link is its administrative cost or weight. This is meant to reflect the cost to the network for using this link. These parameters are advertised by each ATM switch for all the links outgoing from it. The link state advertisements are flooded to all other ATM switches in the network. As a result of these link state advertisements, each ATM switch has the current topology of the network with the states of all the links. Using this topology and link state information, the ingress switch in the network that receives an ATM connection request can calculate a path through the network that is capable of satisfying the QoS requested by the connection and that also minimizes some administrative cost in the network. Once a route has been computed, each switch on the route should be informed of the new connection and its QoS requirement.
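The ingress switch's path computation can be sketched as constraint-based routing: prune links whose advertised available bandwidth cannot meet the request, then find the minimum administrative-cost path over what remains. This is a simplified stand-in for PNNI's actual route computation; the node names and the link-state encoding are illustrative.

```python
import heapq

def qos_path(links, src, dst, min_bw):
    """Prune links below the requested bandwidth, then run Dijkstra on
    administrative cost. `links` maps (u, v) -> (admin_cost, avail_bw)."""
    adj = {}
    for (u, v), (cost, bw) in links.items():
        if bw >= min_bw:                  # constraint-based pruning
            adj.setdefault(u, []).append((v, cost))
            adj.setdefault(v, []).append((u, cost))
    dist = {src: (0, [src])}
    heap = [(0, src, [src])]
    while heap:
        d, u, path = heapq.heappop(heap)
        if u == dst:
            return path
        for v, c in adj.get(u, []):
            if v not in dist or d + c < dist[v][0]:
                dist[v] = (d + c, path + [v])
                heapq.heappush(heap, (d + c, v, path + [v]))
    return None                           # no path satisfies the request
```

Note how the same topology yields different routes for different bandwidth requests: a high-bandwidth request is forced around links that a low-bandwidth request may use.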
The VCI/VPI labels also need to be set up at each switch. This is accomplished by the PNNI signaling protocol. Once the signaling protocol terminates successfully, the connection setup is complete and data traffic can begin to flow. The signaling protocol is invoked again to tear down the connection.

6.3 IP

IP (Internet Protocol) is by far the most widely used wide-area networking technology today. IP is the underlying network protocol used in the all-pervasive Internet and is equally important in most private intranets to link up computers. IP is a networking technology, or protocol, that is designed to work above a wide variety of lower layers, which are termed data link layers in the classical layered view of networks (Section 1.4). This is one of the important reasons for its widespread success. Figure 6.13 shows IP within the layered architecture framework. Some traditional data link layers over which IP operates are those associated with popular local-area networks such as Ethernet and token ring. IP also operates over low-speed serial lines as well as high-speed optical fiber lines using well-known data link layer protocols, for example, high-level data link control (HDLC) or the point-to-point protocol (PPP).

Figure 6.13 IP in the layered hierarchy, working along with a variety of data link layers and transport layers. (The figure shows applications such as telnet, ftp, rlogin, and SNMP over the TCP and UDP transport layers; IP as the network layer; data link layers including Ethernet, token ring, PPP, and HDLC; and physical layers including coaxial/twisted-pair cable, the SONET layer, and the optical layer.)

Several layering structures are possible to map IP into the optical layer. The term "IP over WDM" is commonly used to refer to a variety of possible mappings shown in Figure 6.14. Figure 6.14(a) shows an implementation where IP packets are mapped into ATM cells, which are then encoded using SONET framing.
The SONET-framed signal is then transmitted over a wavelength. Figure 6.14(b) shows the packet-over-SONET (POS) implementation. Here, IP packets are mapped into PPP frames, and then encoded into SONET frames for transmission over a wavelength. Figure 6.14(c) shows an implementation using Gigabit or 10-Gigabit Ethernet as the underlying link (media access control) layer and Gigabit/10-Gigabit Ethernet physical layer (PHY) for encoding the frames for transmission over a wavelength. We will study the implications of these different approaches in Chapter 13.

Figure 6.14 Various implementations of IP over WDM. (a) A traditional implementation, which maps IP packets into ATM cells, which are then encoded using SONET framing, before being transmitted over a wavelength. (b) The packet-over-SONET (POS) variant, where IP packets are mapped into PPP frames and then encoded using SONET framing. (c) Using Gigabit or 10-Gigabit Ethernet media access control (MAC) as the link layer and Gigabit or 10-Gigabit Ethernet physical layer (PHY) for encoding the frames for transmission over a wavelength.

Posted: 02/07/2014, 12:21
