Mission-Critical Network Planning, Part 6

8.4 Network Platforms

The trend in network convergence is having far-reaching effects. In addition to the merging of voice and data traffic over a common network, it also implies consolidation of platform form factors. For the first time, we are seeing switches that look like servers and servers that look like switches. Many of the principles just discussed pertaining to servers can also apply to network platforms.

Network switching and routing platforms contain many of the same elements as a server: OS, CPU, memory, backplane, and I/O ports (in some cases, storage). But network platforms typically run a more focused application aimed at the switching or routing of network traffic. They also must have more versatile network ports to accommodate various networking technologies and protocols. Because they are more deeply entangled with the network than a server host is, they are subject to higher availability and performance requirements. Routers, for instance, were once known to have availability several orders of magnitude lower than carrier-class requirements would allow. Much has changed in the last several years. Convergence of voice, video, and data onto IP-based packet-switching networks and e-commerce, and a blurring of the distinction between enterprise and service provider, have placed high-availability requirements on data networking similar to those placed on telephony equipment, to the point where these devices must now have the inherent recovery and performance mechanisms found in FT/FR/HA platforms.

Switches and routers have become complex devices with numerous configuration and programmable features. Unplanned outages are typically attributed to hardware failures in system controller cards or line cards, software failures, or even memory leaks. Some of the largest outages in recent years were attributed to problematic software and firmware upgrades. The greater reliance on packet (IP) networks as a routing fabric for all services has unintentionally placed greater responsibility on IP-based network platforms. As seen in earlier discussions in this book, problems in an IP network device can compound across a network and have wide-scale effects.

8.4.1 Hardware Architectures

The service life of a network platform will typically be longer than that of a host server platform. With this comes the need for reliability and serviceability. Network platform reliability has steadily improved over the years, due in large part to improvements in platform architecture. The differentiation between carrier-class and enterprise-class products has steadily narrowed. For one thing, architectures have become simpler, with fewer hardware components, added redundancy, and more modularity. On the other hand, many network platform adjunct devices, such as voice mail systems and interactive voice response (IVR) systems, are often built on or in conjunction with a general-purpose platform. A switch-like fabric of some type is interconnected with the server, which maintains all of the software processes that drive the platform.

Figure 8.9 illustrates some of the generic functional components in a network platform [18, 19]. A physical network interface provides functions for interpreting incoming and outgoing line signals, such as encoding/decoding and multiplexing/demultiplexing. Protocol processing performs media access control (MAC) layer processing and segmentation/reassembly of frames. The classification function classifies frames based on their protocol. The network processor provides the network-layer functional processing. The security function applies any needed encryption or decryption. The traffic-management function applies inherent or network-management-based traffic controls. The fabric portion manages port-to-port connectivity and traffic activity. Depending on the networking function (e.g., switching, routing, or transmission), functional architectures will vary and can be realized through various software and hardware architectures.

Figure 8.9 Generic network platform functions.
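To make the division of labor concrete, the sketch below models these functional stages as a chain of Python functions. It is only an illustration of the flow in Figure 8.9; the Frame type, stage names, and classification rule are invented for the example, not taken from any vendor's platform.

```python
# Minimal sketch of the generic data path in Figure 8.9, modeled as a chain
# of stages. All names and the classification rule are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Frame:
    raw: bytes
    protocol: str = ""          # set by classification
    out_port: int = -1          # set by network processing
    meta: dict = field(default_factory=dict)

def physical_interface(frame: Frame) -> Frame:
    # Decode line signals and demultiplex; a no-op on already-captured bytes.
    return frame

def protocol_processing(frame: Frame) -> Frame:
    # MAC-layer processing and segmentation/reassembly would occur here.
    return frame

def classification(frame: Frame) -> Frame:
    # Classify by protocol; a real device inspects headers in hardware.
    frame.protocol = "ipv4" if frame.raw[:1] == b"\x45" else "other"
    return frame

def network_processing(frame: Frame) -> Frame:
    # Network-layer lookup selects an egress port.
    frame.out_port = 1 if frame.protocol == "ipv4" else 0
    return frame

def security_processing(frame: Frame) -> Frame:
    # Apply encryption or decryption if the flow requires it.
    return frame

def traffic_management(frame: Frame) -> Frame:
    # Apply policing/shaping controls before handing off to the fabric.
    frame.meta["queued"] = True
    return frame

PIPELINE = [physical_interface, protocol_processing, classification,
            network_processing, security_processing, traffic_management]

def forward(frame: Frame) -> Frame:
    for stage in PIPELINE:
        frame = stage(frame)
    return frame  # the switch fabric then moves it port-to-port

print(forward(Frame(raw=bytes([0x45, 0x00]))).out_port)  # -> 1 (illustrative IPv4)
```

In a real platform most of these stages run in dedicated hardware or firmware; the point here is only the ordering of functions between the line interface and the switch fabric.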
Many of the platform attributes already discussed in this chapter apply. The following are some general principles regarding network platforms for use in mission-critical networks. Because of the variety of form factors on the market, they may not apply to all platforms. Many of the principles discussed with respect to server platform hardware architecture will in many places apply:

• Modularity. Modularity, as discussed earlier in this book, is a desirable feature of mission-critical systems. Modular architectures that incorporate externally developed components are steadily on the rise. This trend is further characterized by the ever-increasing integration of higher level processing modules optimized for a specific feature versus lower level generic components. For example, one will likely find systems having individual boards with firmware and on-board processing to provide T1/E1, SS7, voice over IP (VoIP), asynchronous transfer mode (ATM), Ethernet, frame relay, and integrated services digital network (ISDN) services. This kind of modularity reduces the number of components, board-level interconnects, EMI, power, and ultimately cost. It also enables more mixing and matching of features through various interface configurations. Additionally, it lengthens the platform-technology curve by enabling incremental board-level upgrades, versus costly quantum-platform upgrades.

• Backplanes. Redundant components with failover capabilities are becoming the norm. This includes redundant backplanes. Alternate pathing is often used to keep modules uncoupled so that if one fails the others are unaffected. Path redundancy, particularly between I/O modules and switch-fabric modules, allows concurrent support of bearer traffic (i.e., data and voice) and network-management instructions. Additionally, establishing redundant bearer paths between I/O and switch modules can further enhance traffic reliability.

• Dual controllers. The presence of redundant controllers or switch processors, used either in a hot or cold standby configuration, can further improve platform reliability. This feature can also improve network reliability, depending on the type of network function the platform provides. For example, failover to a hot standby secondary controller in a router is prone to long initialization times, as it must converge, that is, relearn all of the IP-layer routing and forwarding sessions. This process has been known to take several minutes.
To get around some of these issues, vendors are producing routers in which routing sessions and packet-forwarding state are mirrored between processors, thereby reducing extended downtime. Such routers, often referred to as hitless routers, are finding popularity as edge network gateway devices, which are traditionally known to be a single point of failure (see the sketch following this list).

• Clocking. Clocking plays a key role in time division multiplexing (TDM)-based devices but also has a prominent role in other network applications. Having dual clocking sources protects against the possibility of a clock outage and supports maintenance and upgrade of a system-timing source. Improper clocking, particularly in synchronous network services, can destroy the integrity of transmitted bits, making a service useless. Recently, many systems have been utilizing satellite-based global positioning system (GPS) timing for accuracy. Regardless, it is imperative to use a secure source where reliability and survivability are guaranteed.

• Interface modules. Traditionally, network devices have often used a distributed approach to the network-interface portion of the platform. Line cards and switch port modules have been a mainstay in switching equipment to support scalable subscriber and user growth. Interface modules, however, are a single point of failure. A failure in one processor card can literally bring down an entire LAN. Simply swapping out an interface card was usually the most popular restoration process. However, the higher tolerance mandated by today's environment requires better protective mechanisms. N+K redundant network interface boards with alternate backplane paths to the switch or routing processor can provide added survivability. Use of boards with multiple different protocols enables diversity at the networking-technology level as well. Edge gateway devices, in particular, are being designed with individual processor cards for the serial uplink ports and each user interface port. Some designs put routing or switching intelligence inside the port modules in case a catastrophic switch-fabric or controller failure takes place.

• Port architecture and density. Port density has always been a desirable feature in network platforms. The more channels that can be supported per node (or per rack unit/shelf), the greater the perceived value and capacity of the platform. Dense platforms result in fewer nodes and links, simplifying the network. But one must question whether the added density truly improves capacity. For one thing, real-time processing in a platform is always a limiting factor to platform capacity. Products such as core switches are typically high-end devices that have a nonblocking architecture. In these architectures, the overall bandwidth capacity that the device can support is equivalent to the sum of the bandwidth over all of the ports. Lower end or less expensive workgroup or edge switches have a blocking architecture, which means that the total switching bandwidth capacity is less than the sum across all of the ports. The bandwidth across all user ports will typically exceed the capacity of an uplink port. These switches are designed under the assumption that not all ports will be engaged at the same time. Some devices use gigabit uplinks and stacking ports to give the sense of nonblocking operation.

• Hot swapping. As discussed earlier, components that are hot swappable are desirable. This means not only that a component can be swapped while a platform remains powered and running, but also that service operation is nondisruptive during the swap. Network interface modules should have the ability to be swapped while preserving all active sessions (either data or voice) during failover to another module.

• Standards compliance. High-end carrier-grade equipment is usually subject to compliance with the Telcordia NEBS and/or open systems modification of intelligent network elements (OSMINE) process. NEBS has become a de facto standard for organizations, typically service providers, looking to purchase premium quality equipment. NEBS certification implies that a product has passed certain shock, earthquake, fire, environmental, and electrostatic discharge test requirements. Equipment will usually be required to comply with Federal Communications Commission (FCC) standards as well. Some systems may also require interface modules to satisfy conformance with communication protocols. In addition, many vertical-market industries have certain equipment standards as well, such as those of the American Medical Association (AMA), the Securities and Exchange Commission (SEC), and the military.
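The following minimal sketch, with invented class names, contrasts the two standby styles described under the dual-controllers item: a standby with no mirrored state must reconverge after failover, while a hitless design keeps the standby's tables current so failover preserves routing state.

```python
# Hypothetical model of dual controllers. A standby without mirrored state
# must reconverge after failover; a hitless design copies routing state to
# the standby as it is learned, so failover keeps the tables intact.
import time

class Controller:
    def __init__(self):
        self.routes = {}        # prefix -> next hop
        self.sessions = set()   # active routing sessions

class RouterChassis:
    def __init__(self, mirrored: bool):
        self.active, self.standby = Controller(), Controller()
        self.mirrored = mirrored

    def learn(self, prefix: str, next_hop: str, session: str) -> None:
        targets = [self.active, self.standby] if self.mirrored else [self.active]
        for ctrl in targets:
            ctrl.routes[prefix] = next_hop
            ctrl.sessions.add(session)

    def failover(self) -> float:
        start = time.monotonic()
        self.active = self.standby
        if not self.active.routes:      # cold tables: must reconverge
            time.sleep(0.1)             # stands in for minutes of relearning
            self.active.routes["10.0.0.0/8"] = "192.0.2.1"
        return time.monotonic() - start

hitless = RouterChassis(mirrored=True)
hitless.learn("10.0.0.0/8", "192.0.2.1", "bgp-peer-1")
print(f"hitless failover took {hitless.failover():.3f}s, routes intact")

cold = RouterChassis(mirrored=False)
cold.learn("10.0.0.0/8", "192.0.2.1", "bgp-peer-1")
print(f"cold failover took {cold.failover():.3f}s after reconvergence")
```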
8.4.2 Operating Systems

An OS in a networking platform must continuously keep track of state information and convey it to other components. In addition to items such as call processing, signaling, routing, or forwarding information, administrative and network-management transactions, although not as dynamic, must also be retained. During processor failover, such information must be preserved to avoid losing standing transactions. A platform should also enable maintenance functions, such as configuration and provisioning, to continue operation during a failover.

Quite often, provisioning data is stored on a hard drive device. As in previous discussions, there are numerous ways to protect stored information (see the chapter on storage). Networking platforms using mirrored processors or controllers may also require mirrored storage, depending on the platform architecture. Configuration or subscriber databases typically require continuous auditing so that their content is kept as consistent as possible and not corrupted in the event of an outage. Some appliance-based network products, in order to stay lean, offload some of this responsibility to external devices.

As discussed earlier, router OSs have classically been known to take extended amounts of time to reinitialize after a controller failure. Furthermore, the ability to retain all routing protocols and states during failover can be lacking, as the standby processor was often required to initialize the OS and undergo a convergence process. This not only led to service disruption, it also required disrupting service during upgrades.

Routing involves two functions. A routing engine obtains network topology information from neighboring routers, computes paths, and disseminates that information. A forwarding engine uses that information to forward packets to the appropriate ports. A failure of the routing engine to populate accurate routes in the forwarding table could lead to erroneous network routing. Many routers will assume a forwarding table to be invalid upon a failure, thus requiring a reconvergence process. Additionally, system configuration information must be reloaded and all active sessions must be reestablished. Before routing sessions can be restored, system configurations (e.g., frame relay and ATM virtual circuit mappings) must be loaded.
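The split between the two routing functions can be pictured in a few lines of code. In this sketch (all names and the topology are invented), a graceful design keeps the forwarding engine's table in place while the routing engine restarts, instead of flushing it and dropping traffic until reconvergence completes.

```python
# Illustrative routing-engine / forwarding-engine split. Whether the FIB is
# kept or flushed on a routing-engine failure decides whether traffic keeps
# flowing during reconvergence.
class RoutingEngine:
    def __init__(self):
        self.topology = {}                     # prefix -> candidate ports

    def compute_fib(self) -> dict:
        # Stand-in for path computation by a real routing protocol.
        return {p: hops[0] for p, hops in self.topology.items() if hops}

class ForwardingEngine:
    def __init__(self):
        self.fib = {}

    def forward(self, prefix: str) -> str:
        return self.fib.get(prefix, "drop")

engine = RoutingEngine()
engine.topology = {"10.1.0.0/16": ["port1"], "10.2.0.0/16": ["port2"]}
fwd = ForwardingEngine()
fwd.fib = engine.compute_fib()

# Routing engine fails and restarts. A conservative OS would also clear
# fwd.fib here; a graceful design leaves it intact, so forwarding continues
# from stale-but-valid state while the new engine reconverges.
engine = RoutingEngine()
print(fwd.forward("10.1.0.0/16"))   # still "port1": traffic keeps flowing
```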
A failed router can have far-reaching effects in a network, depending on where it is located. Router OSs are being designed with capabilities to work around some of these issues.

Furthermore, many switch and router platforms are coming to market with application programming interfaces (APIs) so that organizations can implement more customized features and functions that are otherwise unavailable in a platform module or module upgrade. APIs enable configuration of modules using available software libraries or customized programming. The platform OS will encapsulate many of the platform hardware functions, making them accessible through the APIs. Use of APIs can reduce the time to implement system- or service-level features.

8.5 Platform Management

Manageability is a vital quality of any mission-critical server or networking platform. The platform should enable monitoring and control for hardware and software fault detection, isolation, diagnosis, and restoration at multiple levels. The platform should also enable servicing through easy access to components and well-documented operations and procedures. Some systems come with modules, software, and procedures for emergency management. Lack of serviceability is a common cause of system outages. Human errors made during software or hardware upgrades are often the result of complex system and operational processes. Such situations can be avoided through a user-friendly element management system (EMS) with an easy-to-use graphical user interface (GUI).

8.5.1 Element Management System

An EMS integrates fault management, platform configuration, performance management, maintenance, and security functions. A mission-critical EMS should come with redundant management modules, typically in the form of system processor cards, each with SNMP (or comparable) network-management agents and interface ports for LAN (typically Ethernet) or serial access to the platform. LAN ports, each with an IP address, might be duplicated on each management board as well for redundant connectivity to the platform.

Many network-management software implementations are centered on SNMP. Other implementations include the common management information protocol (CMIP), geared toward the telecom industry, and lately Intel's Intelligent Platform Management Interface (IPMI) specification. These solutions are designed primarily to interface with platform components in some way in order to monitor their vital signs, such as temperature, fans, and power.

Much discussion has been given to monitoring thus far. In all, any effective monitoring solution must provide accurate and timely alerts if a component malfunctions, anticipate potential problems, and provide trending capabilities so that future problems are avoided.
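As a toy illustration of those three duties, the check below polls hypothetical platform vitals, raises an alert when a hard limit is crossed, and uses a simple trend test over recent samples to flag trouble before a limit is reached. The sensor names and thresholds are made up for the example.

```python
# Minimal monitoring sketch in the spirit of the EMS functions above:
# threshold alerts plus a crude trend check. Limits are illustrative.
from collections import deque

LIMITS = {"temp_c": 70.0, "fan_rpm_min": 2000.0}
history = deque(maxlen=10)  # recent temperature samples for trending

def check(sample: dict) -> list[str]:
    alerts = []
    if sample["temp_c"] > LIMITS["temp_c"]:
        alerts.append("ALERT: temperature over limit")
    if sample["fan_rpm"] < LIMITS["fan_rpm_min"]:
        alerts.append("ALERT: fan below minimum speed")
    history.append(sample["temp_c"])
    # Trending: warn on a sustained temperature rise across the window,
    # even though no hard limit has been crossed yet.
    if len(history) == history.maxlen and all(
            b > a for a, b in zip(history, list(history)[1:])):
        alerts.append("WARN: sustained temperature rise, inspect cooling")
    return alerts

print(check({"temp_c": 65.0, "fan_rpm": 1800.0}))  # fan alert fires
```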
Hardware alerts are usually provided by an alarm board subsystem. As discussed earlier, such systems have interfaces so that generic alarms can be communicated through various means, such as a network, dial-up modem, or even a pager. Alarm systems come in many different forms, ranging from an individual processor board to an entire chassis-based system. Alarm systems should have, as an option, their own power source and battery backup in case of a power outage. Many have their own on-board features, LEDs, and even some level of programmability.

Alarm communication to external systems is usually achieved using various industry-standard protocols or languages. In telecom applications, Telcordia's Man-Machine Language (MML) protocol is widely used, while enterprise networks commonly use SNMP. To the extent possible, alarm communication should be kept out of band so that it can persist during a network or platform CPU failure.

8.5.2 Platform Maintenance

There will come a time during the service life of any platform when some type of preventive maintenance or upgrade is required. Upgrades usually refer to the process of modifying the platform's hardware, such as adding or changing processors, memory, NICs, or even storage. The term also covers software modifications, such as installing a new or patch version of an OS or application. Some general rules should be observed in the upgrade process in a mission-critical environment.

First, other network nodes should be unaffected by the node undergoing an upgrade. Availability goals may warrant that upgrades are performed while a system is in an operational state, actively providing service. This requires upgrading without service interruption. Many of the platform characteristics discussed earlier, such as redundancy and failover, can be leveraged for this purpose. A good user-friendly GUI can help minimize manual errors, which are quite common during the upgrade process.

If availability requirements permit a platform to be taken out of service for an upgrade or repair, it should be taken off line in the off hours or during a time when the least disruption would result from the shutdown. Network-level redundancy techniques, many of which were discussed earlier in this book, can be leveraged so that another corresponding device elsewhere in the network can temporarily provide service during the upgrade. Once the repair or upgrade is completed and the system is reinitialized, it should be in a state identical to the one prior to the shutdown, especially with respect to transaction and connection states. Its state and health should be verified before it is actually placed on active duty. In some situations, an upgrade that has gone awry might require backing out of the upgrade. It is recommended that a service agreement be in effect with the system vendor to provide on-site repair or repair instructions by phone.

Quite often, difficulties arise during startup rather than shutdown. Retaining backup copies of configuration data and applying those configurations upon restart will ensure that the platform is in a state consistent with that prior to shutdown. Sound configuration-management practices include saving backup copies of configuration files and keeping them updated with every configuration change, even the most minor ones.
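A minimal sketch of that practice, with invented file names and a placeholder health check: snapshot the configuration before a change, apply the change, verify, and back out by restoring the snapshot if verification fails.

```python
# Hypothetical upgrade helper: snapshot config, apply, verify, roll back.
import json, pathlib, shutil

CONF = pathlib.Path("platform.conf.json")
BACKUP = pathlib.Path("platform.conf.json.bak")

def health_check(config: dict) -> bool:
    # Stand-in for verifying platform state before returning it to duty.
    return bool(config.get("version"))

def apply_upgrade(new_settings: dict) -> bool:
    shutil.copy2(CONF, BACKUP)              # snapshot before any change
    config = json.loads(CONF.read_text())
    config.update(new_settings)
    CONF.write_text(json.dumps(config))
    if not health_check(config):
        shutil.copy2(BACKUP, CONF)          # back out of the upgrade
        return False
    return True

CONF.write_text(json.dumps({"version": "1.0"}))   # seed an initial config
print(apply_upgrade({"version": "1.1"}))          # True: upgrade kept
```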
Some multiprocessor platforms can operate in split mode, which permits the upgraded environment to be tested while the preexisting environment continues to operate and provide service [20]. This allows the upgrade to be tested before it is committed into service, while the platform is in an operational service mode, minimizing service interruption and improving availability. Split mode in essence divides a platform into primary and secondary operating domains, each served by a CPU and at least one I/O component (Figure 8.10). The primary domain retains the preexisting system and continues to actively process applications. It keeps the secondary domain abreast of application states and data, so that it can eventually transfer service after testing. Some applications from the primary domain can participate in the testing of the secondary domain if administratively specified.

Figure 8.10 Split-mode operation example.

Maintaining on-site spares for the components most likely to fail should be part of any maintenance program. Of course, this also requires having trained personnel with the expertise to install and activate the component. However, keeping replacements for every component can be expensive. Platform vendors will normally ship needed components or send repair technicians, especially if it is part of a service agreement. Replacement-part availability should be a negotiated clause in the service agreement.

This last point cannot be emphasized enough. A platform vendor can be a single point of failure. If a widespread disaster occurs, chances are good that many organizations having the same platform and similar service agreements will be vying for the same replacement parts and technician repair services. Component availability typically diminishes the more extensive a widespread disaster grows, regardless of the terms in a service agreement. One countermeasure is to use a secondary vendor or component distributor. If a particular component is commonly found in platforms across an enterprise, another strategy is to maintain a pool of spares that can be shared across company locations. Spares can be stored centrally or spread across several locations, depending on how geographically dispersed the enterprise is.

The use of fixed spares requires having a spare per functional component. An alternative is the use of tunable spares, which have most of the underlying native capabilities for use but require some tuning to configure and prepare them for their service function. For example, many types of I/O components may share the same type of processor board. All they would need is last-minute configuration based on their use in the platform (e.g., network interface or device interface). This can include such things as installing firmware or software or flipping switches or jacks on the board. Figure 8.11 illustrates the concept. Thus, a pool of universal spares can be retained at low cost and configured when needed on a case-by-case basis. This reduces the size of the spares inventory.

Figure 8.11 Fixed spares versus tunable spares.
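The tunable-spares idea reduces inventory because one universal board can stand in for several fixed spares. A small sketch of the concept in Figure 8.11, with a made-up firmware table and board names:

```python
# Sketch of tunable spares: a pool of universal boards is tuned (firmware
# plus settings) into whatever function has failed. Names are hypothetical.
FIRMWARE = {"T1/E1": "fw_t1e1.bin", "Ethernet": "fw_eth.bin", "ATM": "fw_atm.bin"}

class UniversalSpare:
    def __init__(self, serial: str):
        self.serial = serial
        self.function = None

    def tune(self, function: str) -> None:
        # Last-minute configuration: load firmware, set board options.
        self.function = function
        print(f"{self.serial}: loaded {FIRMWARE[function]}, configured as {function}")

pool = [UniversalSpare(f"SP-{i}") for i in range(3)]

def replace_failed(function: str) -> UniversalSpare:
    spare = pool.pop()          # any universal board will do
    spare.tune(function)
    return spare

replace_failed("T1/E1")   # one pool covers several fixed-spare inventories
```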
8.6 Power Management

Power is a major consideration in the operation of any server or networking platform. Power supplies and modules are common points of failure. Beyond component failures, voltage surges due to lightning strikes or problematic local transformers can not only disrupt platform service but also damage a platform and destroy the embedded investment. The growing cost of power consumption is also a lurking concern for many enterprises, as advances in power-efficient integrated circuitry are offset by the trend toward high-density rack server platforms. Strengthening a platform's power supply and associated components is the first line of defense against power-related mishaps.

The following are some suggested platform-specific measures (other general precautions are discussed later in this book in a chapter on facilities):

• Redundant power supplies, once a staple in high-end computing and networking systems, have become more prevalent across a wide range of platforms. Many backplane architectures can accommodate power modules directly in the chassis, and these modules are hot swappable. Redundant or segmented backplanes will typically each have their own power supply. N+K redundancy can be used, providing more power supplies than are required to run the platform.

• Load-sharing power supplies can be used to spread power delivery among several supplies, minimizing the chance that any one of them will be overstressed. If one of the power supplies fails, the platform draws all of its power from the remaining supply. Because each supply runs at half the load during normal operation, each must be able to take up the full load if the other becomes inactive. As power consumption can vary by as much as 25%, a higher-rated power supply may be wise (a worked sizing example follows this list). Load sharing provides the added advantage of producing less heat, extending the life of a power supply, and even that of the overall platform [21].

• Independent power feeds for each power supply further eliminate a single point of failure. Each power supply should have its own cord and cabling, as well as its own power source. This ultimately includes the transformer and other power-plant components. For large-scale mission-critical operations, it may even require knowledge of the power-plant infrastructure supporting the facility and the locale. This topic is discussed in the chapter on facilities.

• Secure power control requires features that prevent the inadvertent shutting off of power to the platform. A protected on/off switch can avoid accidental or malicious shutoff of the system. Secure cabling and connections will also safeguard against power cord disconnects or cuts.

• Sparing of replacement components can facilitate hot swapping and improve availability. Many of the sparing practices discussed in the previous section can be applied to power components as well.

• Power line conditioning protects against a surge or drop in power, which can be more debilitating to certain equipment than complete power loss. Power conditioning is discussed in the chapter on facilities.
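The sizing arithmetic implied by the load-sharing item above can be made explicit. The wattage figures below are invented; the logic is simply that each shared supply must be rated for the whole platform's worst-case draw, not its normal half-share.

```python
# Worked sizing numbers for load-sharing supplies. Figures are illustrative.
platform_load_w = 400.0      # nominal platform draw
variance = 0.25              # consumption can vary by as much as 25%
supplies = 2                 # two supplies sharing the load

normal_share = platform_load_w / supplies            # 200 W each normally
worst_case_draw = platform_load_w * (1 + variance)   # 500 W peak draw

# On a single-supply failure the survivor carries everything, so each
# supply must be rated for the worst-case draw of the whole platform.
min_rating_per_supply = worst_case_draw

print(f"Normal share per supply:   {normal_share:.0f} W")
print(f"Minimum rating per supply: {min_rating_per_supply:.0f} W")
```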
8.7 Summary and Conclusions

This chapter reviewed capabilities that are desired of mission-critical platforms. The trend toward server- and appliance-based architectures has given rise to both server and networking platforms with many integrated COTS components. FT/FR/HA capabilities are the product of vendor integration of these components. Regardless of the type of platform, simplicity, cost, serviceability, redundancy, failover, certifications, and quality are characteristics commonly desired of mission-critical platforms.

FT is achieved through hardware and software by incorporating redundant components and building in mechanisms to rapidly detect, isolate, and correct faults. All of this adds cost, making FT platforms the most expensive to use. This is why they are most often found in specialized applications such as telecommunications, air-traffic control, and process control. FR and HA are lower cost alternatives that may resemble FT platforms on the surface, but they cannot guarantee the low levels of transaction loss found in FT platforms.

Server platforms have evolved into backplane systems supporting several bus standards, including ISA, PCI, cPCI, cPSB, VME, and InfiniBand. The use of a bus-based architecture enhances both survivability and performance, as it enables the connection of many redundant components within the same housing. Multiple-CPU systems, which are preferred in mission-critical environments, should have the appropriate failover and fault-management mechanisms to ensure a platform's required tolerance level. Because power supplies and modules are common points of failure, extra precautions are required to ensure continuous power. These include using redundant/load-sharing power supplies, independent power feeds, and power line conditioning.

Many of the qualities desired of mission-critical servers hold true for networking platforms, as they contain many of the same elements as servers. But networking platforms typically run a more focused application: the switching or routing of network traffic. For this purpose, they require interworking with a switch/routing fabric comprising many versatile network ports. Modularity, controller or switch-processor redundancy, reliable timing, and multiple interface modules are common characteristics of a mission-critical networking platform. The platform must also be able to retain all protocol and state information during a failover.

Stability is the most important characteristic to look for in a mission-critical platform. The ability to predict platform behavior is the key to mission-critical platform success. Manageability and serviceability are also vital qualities. The use of FT/FR/HA platforms must be accompanied by good network design to achieve tolerance at the network level. In the end, the overall efficacy of a mission-critical platform transcends its hardware and software capabilities. The finishing touch lies in an organization's operating environment, encompassing everything from network architecture and management to applications, data storage, and even business processes.

References

[1] Desmond, P., "Reliability Checklist," Network World, August 30, 1999, pp. 53–54.
[2] Grigonis, R., "Faultless Computing," Computer Telephony, May 1998, pp. 48–50, 71, 92–96.
[3] Grigonis, R., and J. Jainschigg, "Platforms and Resources," Interactive Voice Response, Supplement to Miller Freeman, 1998.
[4] Wallner, P., "Bringing Carrier-Class Reliability to IP Telephony," Telecommunications, April 1999, pp. 54–55.
[5] Ruber, P., "Server Fault Tolerance," Network Magazine, November 1998, pp. 30–37.
[6] Grigonis, R., "Bullet-Proof Software," Convergence Magazine, September 2001, pp. 70–79.
[7] Sullivan, J., "High Availability RTOSes: A Buyer's Guide," Communications Systems Design, April 2001, pp. 44–50.
[8] Grigonis, R., "Fault Resilience Takes New Forms," Computer Telephony, February 2000, pp. 112–116.
[9] Grigonis, R., "Fault Resilient Failover," Convergence Magazine, July 2001, pp. 36–46.
[10] Grigonis, R., "Fault-Resilient PCs: Zippy's Mega-Update (cPCI, Too!)," Computer Telephony, May 1999, pp. 79–82.
[11] Lelii, S., "Right Technology, Wrong Economy," VAR Business, September 30, 2002, pp. 56–58.
[12] Grigonis, R., "Fault Resilience for Communications Convergence," Supplement to Computer Telephony Magazine, Spring 2001, pp. 5–16.
[16] …, Network Reliability—Supplement to America's Network, December 2000, pp. 26S–28S.
[17] Katan, A., and J. Wright, "High Availability: A Perspective," Tech Republic, June 29, 2000, www.techrepublic.com.
[18] Telikepalli, A., "Tackling the Make-Versus-Buy Decision," Integrated Communications Design Magazine, February 2002, p. 20.
[19] Denton, C., "Modular Subsystems Will Play a Key Role in Future Network …"
