Mission-Critical Network Planning, Part 8

10.9 Summary and Conclusions

Magnetic tape has long been a staple for data backup. Advancements in tape technology have given rise to tape formats that can store up to 100 GB of data in a single cartridge. However, advancements in disk technology have resulted in disk alternatives that enable faster and less tedious backup and recovery operations. Storage vault services have been in use for some time. They involve using a service provider to pick up data backups from a customer's site and transport them to a secure facility on a regular basis. Electronic vaulting goes a step further by mirroring or backing up data over a network to a remote facility. Firms can also elect to outsource the management of their entire data operations to SSPs.

A storage network is dedicated to connecting storage devices to one another. A SAN is a network that is separate from the primary production LAN and provides connectivity among storage devices and servers. SANs are not intrinsically fault tolerant and require careful network planning and design. But they do improve scalability and availability by making it easier for applications to share data. They also offer a cost-effective way to implement data mirroring and backup mechanisms. Fibre Channel technology has found widespread use in SANs. Fibre Channel transfers data in large blocks with little overhead. Point-to-point, FC-AL, and FC-SW are common Fibre Channel SAN architectures.

Rather than implementing entirely new networking technologies to accommodate storage, alternative strategies are being devised that leverage more conventional legacy technologies. A NAS is a storage device that attaches directly to a LAN, instead of attaching to a host server. This enables users to access data from all types of platforms over an existing network. NAS can also be used in conjunction with SANs. There are also several approaches that further leverage legacy IP networks for block storage transport. FCIP and iFCP carry Fibre Channel traffic over IP, while iSCSI carries SCSI commands directly over IP. iSCSI, in particular, is of great interest, as it leverages the installed base of SCSI storage systems.

SANs enable isolated storage elements to be connected and managed. Many vendors offer automated software tools and systems to assist in this management. HSMs are software and hardware systems that manage data across many devices. They compress and move files from operational disk drives to slower, less expensive media for longer-term storage. Virtualization is a technique that abstracts the data view from that of the physical storage device. This can be most appealing to organizations with large, complex IT environments because it offers a way to centrally manage data and make better use of storage resources. Data redundancy for mission-critical needs can be more easily managed across different storage platforms.

Data restoration is just one component of an overall recovery operation. One highly effective approach to recovering primary data images is to make snapshot copies of the data at specified time intervals and restore from the most recent snapshot. Recovery software and systems can automate the recovery process. Regardless of the approach, there are several basic steps to data recovery that should be followed; these were reviewed earlier in this chapter.

Networked storage and storage management applications provide the ability to efficiently archive and restore information, while improving scalability, redundancy, and diversity. Not only do such capabilities aid survivability, they improve accountability and auditing in an era when organizations are being held liable for information and, depending upon the industry, must comply with regulatory requirements [55].
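As an illustration of the snapshot-and-restore approach described above, the sketch below takes point-in-time copies on a schedule and rebuilds the primary image from the most recent one. It is a minimal file-copy sketch, not any vendor's product; the directory layout, the naming scheme, and the use of Python's shutil are assumptions made for clarity.

```python
import shutil
from datetime import datetime, timezone
from pathlib import Path

def take_snapshot(data_dir: Path, snap_root: Path) -> Path:
    """Copy the current data image into a timestamped snapshot directory."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    target = snap_root / f"snap-{stamp}"
    shutil.copytree(data_dir, target)   # full point-in-time copy
    return target

def restore_latest(data_dir: Path, snap_root: Path) -> Path:
    """Rebuild the primary data image from the most recent snapshot."""
    snaps = sorted(snap_root.glob("snap-*"))
    if not snaps:
        raise RuntimeError("no snapshots available to restore from")
    latest = snaps[-1]                  # UTC timestamps sort lexicographically
    if data_dir.exists():
        shutil.rmtree(data_dir)         # discard the damaged image
    shutil.copytree(latest, data_dir)
    return latest

# Example use: snapshot at each interval, restore after a failure.
# take_snapshot(Path("/data/primary"), Path("/backup/snapshots"))
# restore_latest(Path("/data/primary"), Path("/backup/snapshots"))
```

A production system would add retention limits and integrity checks, but the recovery logic stays the same: always restore from the most recent verified copy.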
References

[1] Kirkpatrick, H. L., "A Business Perspective: Continuous Availability of VSAM Application Data," Enterprise Systems Journal, October 1999, pp. 57–61.
[2] Golick, J., "Distributed Data Replication," Network Magazine, December 1999, pp. 60–64.
[3] Collar, R. A., "Data Replication Is the Key to Business Continuity (Part 2)," Disaster Recovery Journal, Summer 2001, pp. 64–65.
[4] Bruhahn, B. R., "Continuous Availability…A Reflection on Mirroring," Disaster Recovery Journal, Summer 2001, pp. 66–70.
[5] Dhondy, N. R., "Round 'Em Up: Geographically Dispersed Parallel Sysplex," Enterprise Systems Journal, September 2000, pp. 25–32.
[6] Adcock, J., "High Availability for Windows NT," Enterprise Systems Journal, July 1999, pp. 46–51.
[7] Gordon, C., "High Noon—Backup and Recovery: What Works, What Doesn't and Why," Enterprise Systems Journal, September 2000, pp. 42, 46–48.
[8] Flesher, T., "Special Challenges Over Extended Distance," Disaster Recovery Journal, Winter 2002, pp. 54–58.
[9] Mikkelsen, C., "Disaster Recovery Using Real-Time Disk Data Copy," Enterprise Systems Journal, November 1998, pp. 34–39.
[10] Talon, M., "Achieve the Level of Disaster Recovery that the Enterprise Requires," Tech Republic, January 25, 2002, www.techrepublic.com.
[11] Toigo, J. W., "Storage Disaster: Will You Recover?" Network Computing, March 5, 2001, pp. 39–46.
[12] Elstein, C., "Reliance on Technology: Driving the Change to Advanced Recovery," Enterprise Systems Journal, July 1999, pp. 38–40.
[13] Rhee, K., "Learn the Different Types of Data Backups," Tech Republic, December 7, 2001, www.techrepublic.com.
[14] Chevere, M., "Dow Chemical Implements Highly Available Solution for SAP Environment," Disaster Recovery Journal, Spring 2001, pp. 30–34.
[15] Marks, H., "The Hows and Whens of Tape Backups," Network Computing, March 5, 2001, pp. 68, 74–76.
[16] Baltazar, H., "Extreme Backup," eWeek, Vol. 19, No. 32, April 15, 2002, pp. 39–41, 43, 45.
[17] Wilson, B., "Think Gambling Only Happens in Casinos? E-Businesses Without Business Continuity Processes Take High Risk Chances Daily," Disaster Recovery Journal, Winter 2001, pp. 62–64.
[18] Fletcher, D., "The Full Range of Data Availability," Computing News & Review, November 1998, p. 26.
[19] "Using RAID Arrays for Data Recovery," Tech Republic, November 6, 2001, www.techrepublic.com.
[20] Rigney, S., "Server Storage: Rely on RAID," ZD Tech Brief, Spring 1999, pp. S7–S8.
[21] Fetters, D., "The Emerging Tape Backup Market," Network Computing, July 24, 2000, pp. 86–92.
[22] Pannhausen, G., "A Package Deal: Performance Packages Deliver Prime Tape Library Performance," Enterprise Systems Journal, November 1999, pp. 52–55.
[23] Rigney, S., "Tape Storage: Doomsday Devices," ZD Tech Brief, Spring 1999, pp. S9–S12.
[24] "Vaulting Provides Disaster Relief," Communications News, July 2001, pp. 48–49.
[25] Murtaugh, J., "Electronic Vaulting Service Improves Recovery Economically," Disaster Recovery Journal, Winter 2001, pp. 48–50.
[26] Edwards, M., "Storage Utilities Make Case for Pay-as-You-Go Service," Communications News, August 2000, pp. 110–111.
[27] Connor, D., "How to Take Data Storage Traffic off the Network," Network World, April 10, 2000, p. 30.
[28] Gilmer, B., "Storage Area Networks," Broadcast Engineering, December 2000, pp. 42–44.
[29] Eaton, S., "The Fibre Channel Infrastructure," Enterprise Systems Journal, November 1999, pp. 38–40.
[30] Fetters, D., "Building a Storage Area Network," Network Computing, May 15, 2000, pp. 169–180.
[31] Toigo, J. W., "Mission: Impossible? Disaster Recovery and Distributed Environments," Enterprise Systems Journal, June 1998, pp. 48–52.
[32] Karve, A., "Lesson 136: Storage Area Networks," Network Magazine, November 1, 1999, pp. 28–30.
[33] Massiglia, P., "New I/O System Possibilities with Fibre Channel," Computer Technology Review, April 1998, pp. 52–54.
[34] Hubbard, D., "The Wide Area E-SAN: The Ultimate Business Continuity Insurance," Enterprise Systems Journal, November 1999, pp. 42–43.
[35] Gilmer, B., "Fibre Channel Storage," Broadcast Engineering, June 2001, pp. 38–42.
[36] Gilmer, B., "Fibre Channel Storage," Broadcast Engineering, August 2000, pp. 34–38.
[37] Fetters, D., "Siren Call of Online Commerce Makes SANs Appealing," Storage Area Networks, CMP Media Supplement, May/June 1992, pp. 4SS–22SS.
[38] Clark, T., "Evolving IP Storage Switches," Lightwave, April 2002, pp. 56–63.
[39] Helland, A., "SONET Provides High Performance SAN Extension," Network World, January 7, 2002.
[40] Jacobs, A., "Vendors Rev InfiniBand Engine," Network World, March 4, 2002.
[41] McIntyre, S., "Demystifying SANs and NAS," Enterprise Systems Journal, July 2000, pp. 33–37.
[42] Baltazar, H., "Deciphering NAS, SAN Storage Wars," eWeek, April 9, 2001, p. 26.
[43] Wilkinson, S., "Network Attached Storage: Plug and Save," ZD Tech Brief, Spring 1999, pp. S12–S13.
[44] Clark, E., "Networked Attached Storage Treads New Turf," Network Magazine, July 2002, pp. 38–42.
[45] Lewis, M., "Creating the Storage Utility—The Ultimate in Enterprise Storage," Enterprise Systems Journal, November 1999, pp. 67–72.
[46] Tsihlis, P., "Networks: How to Avoid the 'SAN Trap'," Enterprise Systems Journal, July 2000, pp. 38–41.
[47] Rigney, S., "On-line Storage: Protecting the Masses," ZD Tech Brief, Spring 1999, p. S18.
[48] Connor, D., "IP Storage NICs May Disappoint," Network World, February 25, 2002, pp. 1, 65.
[49] Wilkinson, S., "Hierarchical Storage: Taking it from the Top," ZD Tech Brief, Spring 1999, p. S20.
[50] Swatik, D., "Easing Management of Storage Devices," Network World, March 20, 2000, p. 69.
[51] Toigo, J. W., "Nice, Neat Storage: The Reality," Network Computing, May 27, 2002, pp. 36–45.
[52] Connor, D., "Every Byte into the Pool," Network World, March 11, 2002, pp. 60–64.
[53] Zaffos, S., and P. Sargeant, "Designing to Restore from Disk," Gartner Research, November 14, 2001.
[54] Cozzens, D. A., "New SAN Architecture Benefit Business Continuity: Re-Harvesting Storage Subsystems Investment for Business Continuity," Disaster Recovery Journal, Winter 2001, pp. 72–76.
[55] Piscitello, D., "The Potential of IP Storage," www.corecom.com.

CHAPTER 11
Continuity Facilities

All network infrastructures must be housed in some kind of physical facility. A mission-critical network facility is one that guarantees continued operation, regardless of prevailing conditions. As part of the physical network topology, the facility's design will often be driven by the firm's business, technology, and application architecture. The business architecture will drive the level of continuity, business functions, and processes that must be supported by the facility.
The technology architecture will define the physical requirements of the facility. The application architecture will often drive the location and placement of service operations and connectivity, as well as the physical facility requirements.

11.1 Enterprise Layout

Any individual facility is a single point of failure. By reducing a firm's dependence on a single location and distributing the organization's facilities across several locations, the likelihood of an adverse event affecting the entire organization is reduced. Geographic diversity in facilities buys time to react when an adverse event occurs, such as a physical disaster. Locations unaffected by the same disaster can provide mutual backup and fully or partially pick up service, reducing the recovery time. Recognizing this, many firms explore geographic diversity as a protection mechanism.

Decentralization avoids concentrating assets and information processing systems in a single location. It also implies greater reliance on communication networking. However, a decentralized architecture is more complex and usually more costly to maintain. For this reason, many firms have shifted to centralizing their data center operations in at least two data centers, or one data center and a recovery site. In the end, a compromise solution is usually the best approach.

The level of organizational centricity will often drive the logical and physical network architecture. The number and location of branch offices, for instance, will dictate the type of wide area network (WAN) connectivity with headquarters. Quite often, firms locate their information centers, Internet access, and key operations personnel at a headquarters facility, placing greater availability requirements on the WAN architecture. In many cases, Internet access is centralized at the corporate facility such that all branch office traffic must travel over the WAN for access. On the other hand, firms with extranets that rely heavily on Internet access will likely use a more distributed Internet access approach for greater availability.
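The benefit of geographic diversity described above can be put in rough numbers. The sketch below compares the chance that a single site is disabled with the chance that two independently exposed sites are disabled at once. The probabilities are hypothetical placeholders, and the independence assumption is exactly what regional disasters violate, which is why the sites should sit in different regions.

```python
# Hypothetical annual probability that a disaster disables a given site.
p_single_site = 0.02          # one facility carrying all operations

# Two facilities in different regions, assumed to fail independently.
p_site_a = 0.02
p_site_b = 0.02
p_both_down = p_site_a * p_site_b   # total outage requires both sites to be hit

print(f"Single site disabled:        {p_single_site:.2%}")
print(f"Both diverse sites disabled: {p_both_down:.4%}")
# A regional event (hurricane, earthquake) correlates the two failures and
# erodes this advantage, as noted in the ruralization discussion below.
```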
11.1.1 Network Layout

Convergence has driven the consolidation of voice, data, and video infrastructure. It has also driven the need for this infrastructure to be scalable so that it can be cost-effectively planned and managed over the long term. The prerequisite for this is an understanding of a network's logical topology. A layered hierarchical architecture makes planning, design, and management much easier. The layers that comprise an enterprise network architecture usually consist of the following:

• Access layer. This is where user host systems connect to a network. Elements at this layer include hubs, switches, routers, and patch panels found in wiring or telecom closets (TCs). A failure of an access layer element usually affects local users. Depending on the availability requirements, these devices may need to be protected for survivability. These devices also see much of the network administrative activity in the form of moves, adds, or changes and are prone to human mishaps.

• Distribution layer. This layer aggregates access layer traffic and provides connectivity to the core layer. WAN, campus, and virtual local area network (VLAN) traffic originating in the access layer is distributed among access devices or through a network backbone. At this layer, one will see high-end switching and routing systems, as well as security devices such as firewalls. Because these devices carry greater volumes of traffic, they likely require greater levels of protection within a facility than access devices. They will usually require redundancy in terms of multiple components in multiple locations. Redundant and diverse cable routing between access and distribution switches is necessary. Access-layer devices are often grouped together with their connecting distribution-layer devices in switch blocks. An access-layer switch will often connect to a pair of distribution-layer switches, forming a switch block (see the availability sketch following this list).

• Core layer. Switch blocks connect to one another through the core layer. Core layer devices are typically high-density switches that process traffic at very high line speeds. Administrative and security processing of the traffic is less likely to occur at this layer, so as not to affect traffic flow. These devices normally require physical protection because of the amount of traffic they carry and their high embedded capital cost.
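To see what dual-homing an access switch to a pair of distribution switches buys, the sketch below compares a path that relies on a single distribution switch with one that can use either of two. The per-device availability figures are hypothetical placeholders and failures are assumed independent; this is an illustrative model, not a vendor calculation.

```python
def series(*avail):
    """Availability of components that must all work (probabilities multiply)."""
    result = 1.0
    for a in avail:
        result *= a
    return result

def parallel(*avail):
    """Availability of redundant components where any one suffices."""
    unavail = 1.0
    for a in avail:
        unavail *= (1.0 - a)
    return 1.0 - unavail

# Hypothetical per-device availabilities.
access, distribution, core = 0.999, 0.999, 0.9999

single_path = series(access, distribution, core)
switch_block = series(access, parallel(distribution, distribution), core)

print(f"Access -> one distribution switch -> core: {single_path:.5f}")
print(f"Access -> dual-homed switch block -> core: {switch_block:.5f}")
```

The same series/parallel reasoning extends to redundant core switches and diverse cable paths; the weakest non-redundant element dominates the result.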
11.1.2 Facility Location

Network layout affects facility location. Locations that carry enterprisewide traffic should be dependable. As geographically redundant and diverse routing is necessary for mission-critical networks, locations that are less vulnerable to disasters and that have proximity to telecom services and infrastructure are desirable. The location of branch offices and their proximity to carrier point of presence (POP) sites influences the type of network access services and architecture.

Peripheral to a decentralized facility strategy is the concept of ruralization. This entails organizations locating their critical network facilities in separate locations but in areas that are local to a main headquarters. Typically, these are rural areas within the same metropolitan region as the headquarters. The location is far enough away from headquarters that a disaster is less likely to damage both locations simultaneously, yet close enough to remain accessible by ground transportation. If one location is damaged, staff can relocate to the other location. However, this strategy may not necessarily protect against sweeping regional disasters such as hurricanes or earthquakes.

11.1.3 Facility Layout

Compartmentalization of a networking facility is fundamental for keeping it operational in light of a catastrophic event [1]. Many facilities are organized into a central equipment room that houses technical equipment. Sometimes, these rooms are subdivided into areas based on their network functionality. For instance, networking, data center, and Web-hosting systems are located in separate areas. Because these areas are the heart of a mission-critical facility, they should be reinforced so that they can function independently from the rest of the building.

Size and location within a facility are key factors. As many of these rooms are not within public view, they are often not spaciously designed. Such rooms should have adequate spacing between rows of racks so that personnel can safely move equipment without damage. Although information technology (IT) equipment is getting smaller, greater numbers of units are being implemented, resulting in higher density environments. Swing space should be allocated to allow new technology equipment to be installed or staged prior to the removal of older equipment. A facility should be prewired for growth and provide enough empty rack space for expansion.

11.2 Cable Plant

A mission-critical network's integrity starts with the cable plant—the cables, connectors, and devices used to tie systems together. Cable plant is often taken for granted and viewed as a less important element of an overall network operation. However, many network problems that are unsolvable at the systems level are the result of lurking cabling oversights. Quite often, cabling problems are still the most difficult to troubleshoot and solve, and sometimes they are irreparable. For these reasons, a well-developed structured cabling plan is mandatory for a mission-critical facility [2]. Mission-critical cabling plant should be designed with the following features:

• Survivability and self-healing capabilities, typically in the form of redundancy, diversity, and zoning, such that a problem in a cable segment will minimally impact network operation;
• Adequate support for the transmission and performance characteristics of the prevailing networking and host system technologies;
• Easy identification, testing, troubleshooting, and repair of cable problems;
• Easy and cost-effective moves, adds, and changes without service disruption;
• Scalable long-term growth.

11.2.1 Cabling Practices

Network cabling throughout a facility should connect through centralized locations, so that different configurations can be created through cross connection, or patching. But inside an equipment room, a decentralized approach is required. Physically separating logically clustered servers and connecting them with diversely routed cabling avoids problems that could affect a rack or section of racks. Centralization of cable distribution can increase risk, as it becomes a single point of failure.

Redundant routing of cables involves placing multiple cable runs between locations. Quite often, two redundant host systems will operate in parallel for reliability. However, more often than not, the redundant devices connect to wires or fibers in the same cable, defeating the purpose of the redundancy altogether. Placing extra cables in diverse routes between locations provides redundancy and accommodates future growth or additional backup systems that may be required in an emergency.

This notion revisits the concept of logical versus physical link reliability, discussed earlier in this book. If a cable is damaged and a redundant cable path is available, then the systems on each end must continue to either send traffic on the same logical path over the redundant cable path or redirect traffic onto a new logical path over the redundant cable path. In WANs, synchronous optical network (SONET) ring networks have inherent disaster avoidance, so that the physical path is switched to a second path, keeping the logical path intact. In a local area network (LAN), a redundant physical path can be used between two devices only after reconvergence of the spanning tree. Newer LAN switching devices can logically tie together identical physical links so that they can load share as well as provide redundancy.
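The warning above about redundant links that quietly ride the same physical cable can be caught with a simple inventory cross-check. The sketch below assumes a hypothetical data model in which each logical circuit lists the cable sheaths or conduits it traverses; it is not a standard cable-management schema.

```python
# Hypothetical inventory: each logical circuit lists the physical segments it rides.
circuits = {
    "server-A-primary": ["sheath-12", "conduit-N"],
    "server-A-backup":  ["sheath-12", "conduit-S"],   # shares sheath-12 with primary!
    "server-B-primary": ["sheath-07", "conduit-N"],
    "server-B-backup":  ["sheath-21", "conduit-S"],
}

def shared_segments(primary: str, backup: str) -> set:
    """Return physical segments carried by both the primary and backup circuit."""
    return set(circuits[primary]) & set(circuits[backup])

for name in ("server-A", "server-B"):
    shared = shared_segments(f"{name}-primary", f"{name}-backup")
    if shared:
        print(f"{name}: redundancy defeated, circuits share {sorted(shared)}")
    else:
        print(f"{name}: primary and backup are physically diverse")
```

In practice, the same cross-check can be run against a cable-management database before a redundant circuit is accepted as truly diverse.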
11.2.1.1 Carrier Entry

A service entrance is the location where a service provider's cable enters a building or property [3]. Because a service provider is only required to provide one service entrance, a redundant second entrance is usually at the expense of the property owner or tenant. Thus, a redundant service entrance must be used carefully in order to fully leverage its protective features and costs. The following are some practices to follow:

• A redundant service entrance should be accompanied by diverse cable paths outside and within the property. A cable path from the property to the carrier's central office (CO) or POP should be completely diverse from the other cable path, never converging at any point. Furthermore, the second path should connect to a CO or POP that is different from the one that serves the other path. (See Chapter 7 for different access path configurations.)
• The redundant cable should be in a separate cable sheath or conduit, instead of sharing the same sheath or conduit as the primary cable.
• Circuits, channels, or traffic should be logically split across both physical cables if possible. In a ring network access topology, as in the case of SONET, one logical path will enter through one service entrance and exit through the other.
• Having the secondary access connect to an additional carrier can protect against situations where one access provider might fail while another survives, as in the case of a disaster.

There should be at least one main cross connect (MC) [4]. The MC should be in close proximity to, or collocated with, the predominant equipment room. For redundancy or security purposes, selected cable can be passed through the MC to specific locations.

11.2.1.2 Multiple Tenant Unit

Multiple tenant units (MTUs) are typical of high-rise buildings. Figure 11.1 illustrates a typical MTU cable architecture. It is clear that many potential single points of failure exist. Vertical backbone cables, referred to as riser cables, are run from the MC to a TC on each floor in a star topology. For very tall MTUs, intermediate cross connects (ICs) are inserted between the TC and MC for better manageability. Sometimes, they are situated within a TC. Although risers are predominantly fiber cables, many installations include copper twisted pair and coaxial cable. All stations on each floor connect to a horizontal cross connect (HC) located in the TC [5].

[Figure 11.1: MTU cable example. The diagram shows the service entrance and demarc connecting to the main cross connect (MC), riser and intermediate cables running through an intermediate cross connect (IC), and horizontal cables connecting per-floor TCs through horizontal cross connects (HCs).]
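One way to reason about the MTU topology in Figure 11.1 is to model it as a graph and ask which single element, if lost, cuts floors off from the service entrance. The sketch below does this for a plain star riser; the node names are hypothetical, and the check deliberately ignores capacity and logical-layer recovery.

```python
# Star riser topology: service entrance -> MC -> per-floor TCs.
links = [
    ("entrance", "MC"),
    ("MC", "TC-floor1"),
    ("MC", "TC-floor2"),
    ("MC", "TC-floor3"),
]

def reachable(links, start, dead=frozenset()):
    """Nodes reachable from start once the 'dead' nodes are removed."""
    adj = {}
    for a, b in links:
        if a in dead or b in dead:
            continue
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(adj.get(node, ()))
    return seen

floors = ["TC-floor1", "TC-floor2", "TC-floor3"]
for failed in ("MC", "TC-floor1"):
    alive = reachable(links, "entrance", dead=frozenset({failed}))
    cut_off = [f for f in floors if f not in alive]
    print(f"Losing {failed} cuts off: {cut_off}")
# Losing the MC strands every floor; losing one TC strands only its own floor.
```

Rerunning the same check after adding a second MC with its own separately routed riser is a quick way to confirm that a proposed redundancy scheme actually removes the single point of failure.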
[...] ... generated by 1W of computing power, the amount required to run a 20-MHz 80386 chip of 1980s vintage. This implies about 50% more power consumption for cooling [55]. Today's 800-MHz Pentium III chip requires about 75W of power [56]. If this trend continues, a fully equipped rack of servers could approach 18 kW of power in 5 years (as of this writing). Countering this trend are ...

[...] ac power availability is typically in the range of 99.9% to 99.98%. If each power disruption requires systems to reboot and services to reinitialize, additional availability can be lost. Because this implies downtimes on the order of 2 to 8 hours, clearly a high-uptime mission-critical facility will require power protection capabilities. A mission-critical facility should be supplied with sufficient power ...

[...] high-density installations can see on the order of 40 to 80 systems per rack; this is expected to climb to 200 to 300 systems per rack in 5 years. Although advances in chip making will reduce power consumption per system, unprecedented power densities on the order of 3,000 to 5,000W per rack are still envisioned. Telecom networking systems use –48V of direct current (dc). Rectifiers are used, which are ...

[...] not governed by National Electrical Code (NEC) requirements [31]. Because of the trend in convergence, networking facilities are seeing more widespread use of ac-powered systems. In telecom facilities, ac power is delivered via inverters, which are devices that convert dc to ac, drawing power from the –48V dc plant. Mission-critical facilities obtain their power from a combination of two primary sources. Public power ...

[...] Summary and Conclusions

A mission-critical network facility must provide an environment for continued network operation. The first step to achieving this is through proper location of the facility. Spreading operations across multiple facilities can reduce a firm's dependence on a single location. Network operations should be viewed in a layered hierarchical fashion for facility planning purposes. These ...

[...] or another, has become a critical component in mission-critical networking. Fiber has found a home in long-haul WAN implementations and metropolitan area networks (MANs). As Gigabit Ethernet has been standardized for use with fiber-optic cabling, the use of fiber in conjunction with Ethernet LAN systems has grown more attractive [18]. Yet, many organizations cannot easily justify ...

[...] [Figure: mission-critical MTU cable architecture example, showing a redundant riser cable] ... other for added redundancy. From each MC, a separate riser cable runs to each IC. This creates a ring-like architecture and can thus leverage networking technologies such as SONET or resilient packet ring (RPR). The redundant riser is routed via a separate conduit or shaft for diversity [8]. Although not explicitly ...

[...] the case of the UPS, unprotected computer-driven intelligent HVAC equipment can tell other equipment that it has failed, forcing unwanted shutdowns.

11.4.2 Fire Protection Planning

Another challenge in designing a mission-critical network facility is creating capabilities to prevent, detect, contain, and suppress smoke and fire to limit injuries and damage to equipment and the facility. Fire detection and suppression ...

[...] topology also enables patching a new building into the network from the central MC [10]. The campus network design can be enhanced with features to add survivability, similar to the MTU case (Figure 11.4). Redundant MCs and tie links between ICs add further redundancy in the event a building, cross connect, or cable is damaged. The result is a mesh-like network, which adds versatility for creating point-to-point, ...

[...] location and layout is highly affected by the composition of these layers. Cable plant is often taken for granted and viewed as a less important element of an overall network operation. However, many network problems result from cabling oversights. Network cabling throughout a facility should entail redundant routing of cables, multiple carrier service entrances, and multiple cross connects. Whether in an MTU ...
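The utility-power figures quoted in the excerpts above translate directly into expected annual downtime. A quick check, assuming the percentages describe availability measured over a full year:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

for availability in (0.999, 0.9998):
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"{availability:.2%} available -> about {downtime_hours:.1f} hours of downtime per year")
# 99.90% -> ~8.8 hours/year, 99.98% -> ~1.8 hours/year,
# consistent with the "2 to 8 hours" range cited for utility ac power.
```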
