7 Clustering Servers

Exam Objectives in this Chapter:
■ Plan services for high availability
❑ Plan a high availability solution that uses clustering services
❑ Plan a high availability solution that uses Network Load Balancing
■ Implement a cluster server
❑ Recover from cluster node failure
■ Manage Network Load Balancing. Tools might include the Network Load Balancing Manager and the WLBS cluster control utility

Why This Chapter Matters
As organizations become increasingly dependent on their computer networks, clustering is becoming an increasingly important element of those networks. Many businesses now rely on the World Wide Web for all their contact with customers, including order taking and other revenue-producing tasks. If the Web servers go down, business stops. Understanding how clustering works, and how Microsoft Windows Server 2003 supports clustering, is becoming an important element of the network administrator’s job.

Lessons in this Chapter:
■ Lesson 1: Understanding Clustering  7-2
■ Lesson 2: Using Network Load Balancing  7-14
■ Lesson 3: Designing a Server Cluster  7-30

Before You Begin
This chapter assumes a basic understanding of Transmission Control Protocol/Internet Protocol (TCP/IP) communications, as described in Chapter 2, “Planning a TCP/IP Network Infrastructure,” and of Microsoft Windows network services, such as DNS and DHCP. To perform the practice exercises in this chapter, you must have installed and configured Windows Server 2003 using the procedure described in “About This Book.”

Lesson 1: Understanding Clustering

A cluster is a group of two or more servers dedicated to running a specific application (or applications) and connected to provide fault tolerance and load balancing. Clustering is intended for organizations running applications that must be available, making any server downtime unacceptable. In a server cluster, each computer is running the same critical applications, so that if one server fails, the others detect the failure and take over at a moment’s notice. This is called failover. When the failed node returns to service, the other nodes take notice and the cluster begins to use the recovered node again. This is called failback.

Clustering capabilities are installed automatically in the Windows Server 2003 operating system. In Microsoft Windows 2000 Server, you had to install Microsoft Clustering Service as a separate module.

After this lesson, you will be able to
■ List the types of server clusters
■ Estimate your organization’s availability requirements
■ Determine which type of cluster to use for your applications
■ Describe the clustering capabilities of the Windows Server 2003 operating systems
Estimated lesson time: 30 minutes

Clustering Types

Windows Server 2003 supports two different types of clustering: server clusters and Network Load Balancing (NLB). The difference between the two types of clustering is based on the types of applications the servers must run and the nature of the data they use.

Important
Server clustering is intended to provide high availability for applications, not data. Do not mistake server clustering for an alternative to data availability technologies, such as RAID (redundant array of independent disks) and regular system backups.
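The failover and failback behavior described in this lesson can be pictured as a small state machine. The following Python sketch is purely illustrative: the Node class, the health flag, and the helper functions are hypothetical names of our own, not part of the Windows Cluster service or any Microsoft API.

# Illustrative model of failover and failback in a two-node server cluster.
# All names here (Node, fail_over, fail_back) are hypothetical; the real
# behavior is handled by the Cluster service, not by user code.

class Node:
    def __init__(self, name, active=False):
        self.name = name
        self.active = active     # is this node currently serving clients?
        self.healthy = True      # simulated health state

def fail_over(nodes):
    """Promote a healthy passive node when an active node has failed."""
    for node in nodes:
        if node.active and not node.healthy:
            node.active = False
            standby = next((n for n in nodes if n.healthy and not n.active), None)
            if standby is not None:
                standby.active = True
                print(f"Failover: {standby.name} takes over for {node.name}")

def fail_back(nodes, recovered_name):
    """When the failed node returns to service, the cluster uses it again."""
    for node in nodes:
        if node.name == recovered_name:
            node.healthy = True
            node.active = True
            print(f"Failback: {node.name} has rejoined the cluster")

cluster = [Node("NODE1", active=True), Node("NODE2")]
cluster[0].healthy = False    # simulate a failure on the active node
fail_over(cluster)            # NODE2 goes active
fail_back(cluster, "NODE1")   # NODE1 returns to service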
Server Clusters

Server clusters are designed for applications that have long-running in-memory states or large, frequently changing data sets. These are called stateful applications, and they include database servers such as Microsoft SQL Server, e-mail and messaging servers such as Microsoft Exchange, and file and print services. In a server cluster, all the computers (called nodes) are connected to a common data set, such as a shared SCSI bus or a storage area network. Because all the nodes have access to the same application data, any one of them can process a request from a client at any time. You configure each node in a server cluster to be either active or passive. An active node receives and processes requests from clients, while a passive node remains idle and functions as a fallback, should an active node fail.

For example, a simple server cluster might consist of two computers running both Windows Server 2003 and Microsoft SQL Server, connected to the same Network-Attached Storage (NAS) device, which contains the database files (see Figure 7-1). One of the computers is an active node and one is a passive node. Most of the time, the active node is functioning normally, running the database server application, receiving requests from database clients, and accessing the database files on the NAS device. However, if the active node should suddenly fail, for whatever reason, the passive node detects the failure, immediately goes active, and begins processing the client requests, using the same database files on the NAS device.

Figure 7-1 A simple two-node server cluster

The obvious disadvantage of this two-node, active/passive design is that one of the servers is being wasted most of the time, doing nothing but functioning as a passive standby. Depending on the capabilities of the application, you can also design a server cluster with multiple active nodes that share the processing tasks among themselves. You learn more about designing a server cluster later in this lesson.

A server cluster has its own name and Internet Protocol (IP) address, separate from those of the individual computers in the cluster. Therefore, when a server failure occurs, there is no apparent change in functionality to the clients, which continue to send their requests to the same destination. The passive node takes over the active role almost instantaneously, so there is no appreciable delay in performance. The server cluster ensures that the application is both highly available and highly reliable, because, despite a failure of one of the servers in the cluster, clients experience few, if any, unscheduled application outages.

Windows Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition, both support server clusters consisting of up to eight nodes. This is an increase over the Microsoft Windows 2000 operating system, which supports only two nodes in the Advanced Server product and four nodes in the Datacenter Server product. Neither Windows Server 2003, Standard Edition, nor Windows 2000 Server supports server clusters at all.

Planning
Although Windows Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition, both support server clustering, you cannot create a cluster with computers running both versions of the operating system. All your cluster nodes must be running either Enterprise Edition or Datacenter Edition. You can, however, run Windows 2000 Server in a Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition, cluster.
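The node limits cited above lend themselves to a quick sanity check when you are planning a cluster. The following Python sketch simply restates the numbers from this lesson in a lookup table; the dictionary and function names are ours, not part of any Microsoft tool.

# Maximum server-cluster nodes per operating system, as stated in this lesson.
# A value of 0 means the product does not support server clusters at all.
MAX_SERVER_CLUSTER_NODES = {
    "Windows Server 2003, Enterprise Edition": 8,
    "Windows Server 2003, Datacenter Edition": 8,
    "Windows 2000 Advanced Server": 2,
    "Windows 2000 Datacenter Server": 4,
    "Windows Server 2003, Standard Edition": 0,
    "Windows 2000 Server": 0,
}

def cluster_size_supported(edition, planned_nodes):
    """Return True if the edition supports a server cluster of the planned size."""
    return 0 < planned_nodes <= MAX_SERVER_CLUSTER_NODES.get(edition, 0)

print(cluster_size_supported("Windows Server 2003, Enterprise Edition", 8))  # True
print(cluster_size_supported("Windows 2000 Advanced Server", 4))             # False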
Network Load Balancing

Network Load Balancing (NLB) is another type of clustering that provides high availability and high reliability, with the addition of high scalability as well. NLB is intended for applications with relatively small data sets that rarely change (or may even be read-only), and that do not have long-running in-memory states. These are called stateless applications, and they typically include Web, File Transfer Protocol (FTP), and virtual private network (VPN) servers. Every client request to a stateless application is a separate transaction, so it is possible to distribute the requests among multiple servers to balance the processing load.

Instead of being connected to a single data source, as in a server cluster, the servers in an NLB cluster all have identical cloned data sets and are all active nodes (see Figure 7-2). The clustering software distributes incoming client requests among the nodes, each of which processes its requests independently, using its own local data. If one or more of the nodes should fail, the others take up the slack by processing some of the requests to the failed server.

Figure 7-2 A Network Load Balancing cluster

Network Load Balancing and Replication
Network Load Balancing is clearly not suitable for stateful applications such as database and e-mail servers, because the cluster nodes do not share the same data. If one server in an NLB cluster were to receive a new record to add to the database, the other servers would not have access to that record until the next database replication. It is possible to replicate data between the servers in an NLB cluster, for example, to prevent administrators from having to copy modified Web pages to each server individually. However, this replication is an occasional event, not an ongoing occurrence.

Network Load Balancing provides scalability in addition to availability and reliability, because all you have to do when traffic increases is add more servers to the cluster. Each server then has to process a smaller number of incoming requests. Windows Server 2003, Standard Edition, Windows Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition, all support NLB clusters of up to 32 computers.

Off the Record
There is also a third type of clustering, called component load balancing (CLB), designed for middle-tier applications based on Component Object Model (COM+) programming components. Balancing COM+ components among multiple nodes provides many of the same availability and scalability benefits as Network Load Balancing. The Windows Server 2003 operating systems do not include support for CLB clustering, but it is included in the Microsoft Windows 2000 Application Center product.

Exam Tip
Be sure you understand the differences between a server cluster and a Network Load Balancing cluster, including the hardware requirements, the difference between stateful and stateless applications, and the types of clusters supported by the various versions of Windows Server 2003.
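One way to picture how an NLB cluster spreads stateless requests across its nodes is a simple hash of the client address over the number of hosts. The sketch below is a rough illustration of that idea only; it is not the actual NLB distribution algorithm or any Microsoft API, and the host names and addresses are hypothetical.

# Rough illustration of distributing stateless client requests across the
# hosts of a load-balanced cluster. This is NOT the real NLB algorithm.

import hashlib

HOSTS = ["web1", "web2", "web3", "web4"]   # hypothetical cluster members

def pick_host(client_ip, hosts=HOSTS):
    """Map a client address to one host, so each host handles a share of requests."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return hosts[int(digest, 16) % len(hosts)]

for ip in ("10.0.0.50", "10.0.0.51", "10.0.0.52", "10.0.0.53"):
    print(ip, "->", pick_host(ip))

# If one node fails, the survivors absorb its share of the requests:
print(pick_host("10.0.0.50", hosts=[h for h in HOSTS if h != "web2"]))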
Designing a Clustering Solution

The first thing to decide when you are considering a clustering solution for your network is just what you expect to realize from the cluster—in other words, just how much availability, reliability, or scalability you need. For some organizations, high availability means that any downtime at all is unacceptable, and clustering can provide a solution that protects against three different types of failures:

■ Software failures  Many types of software failure can prevent a critical application from running properly. The application itself could malfunction, another piece of software on the computer could interfere with the application, or the operating system could have problems, causing all the running applications to falter. Software failures can result from applying upgrades, from conflicts with newly installed programs, or from the introduction of viruses or other types of malicious code. As long as system administrators observe basic precautions (such as not installing software updates on all the servers in a cluster simultaneously), a cluster can keep an application available to users despite software failures.

■ Hardware failures  Hard drives, cooling fans, power supplies, and other hardware components all have limited life spans, and a cluster enables critical applications to continue running despite the occurrence of a hardware failure in one of the servers. Clustering also makes it possible for administrators to perform hardware maintenance tasks on a server without having to bring down a vital application.

■ Site failures  In a geographically dispersed cluster, the servers are in different buildings or different cities. Apart from making vital applications locally available to users at various locations, a multisite cluster enables the applications to continue running even if a fire or natural disaster shuts down an entire site.

Estimating Availability Requirements

The degree of availability you require depends on a variety of factors, including the nature of the applications you are running; the size, location, and distribution of your user base; and the role of the applications in your organization. In some cases, having applications available at all times is a convenience; in others, it is a necessity. The amount of availability an organization requires for its applications can affect its clustering configuration in several ways, including the type of clustering you use, the number of servers in the cluster, the distribution of applications across the servers in the cluster, and the locations of the servers.

Real World  High Availability Requirements
The technical support department of a software company might need the company’s customer database available to be fully productive, but can conceivably function without it for a time. For a company that sells its products exclusively through an e-commerce Web site, however, Web server downtime means no incoming orders, and therefore no income. For a hospital or police department, non-functioning servers can literally be a matter of life and death. Each of these organizations might be running similar applications and servicing a similar number of clients, but their availability requirements are quite different, and so should their clustering solutions be.

Availability is sometimes quantified in the form of a percentage reflecting the amount of time that an application is up and running. For example, 99% availability means that an application can be unavailable for up to 87.6 hours during a year. An application that is 99.9% available can be down for no more than 8.76 hours a year. Achieving a specific level of availability often involves more than just implementing a clustering solution. You might also have to install fault-tolerant hardware, create an extensive hardware and software evaluation and testing plan, and establish operational policies for the entire IT department. As availability requirements get higher, the amount of time, money, and effort needed to achieve them grows exponentially. You might find that achieving 95% to 99% reliability is relatively easy, but pushing reliability to 99.9% becomes very expensive indeed.
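The downtime figures quoted above follow directly from the number of hours in a year (365 × 24 = 8,760). A short Python check of that arithmetic (the function name is ours, used only for illustration):

# Convert an availability percentage into the maximum downtime per year.
HOURS_PER_YEAR = 365 * 24   # 8,760 hours in a (non-leap) year

def max_downtime_hours(availability_percent):
    """Hours per year an application may be down at a given availability level."""
    return HOURS_PER_YEAR * (1 - availability_percent / 100)

print(round(max_downtime_hours(99.0), 2))   # 87.6 hours per year
print(round(max_downtime_hours(99.9), 2))   # 8.76 hours per year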
Scaling Clusters

Both server clusters and Network Load Balancing are scalable clustering solutions, meaning that you can improve the performance of the cluster as the needs of your organization grow. There are two basic methods of increasing cluster performance, which are as follows:

■ Scaling up  Improving individual server performance by modifying the computer’s hardware configuration. Adding random access memory (RAM) or level 2 (L2) cache memory, upgrading to faster processors, and installing additional processors are all ways to scale up a computer. Improving server performance in this way is independent of the clustering solution you use. However, you do have to consider the individual performance capabilities of each server in the cluster. For example, scaling up only the active nodes in a server cluster might establish a level of performance that the passive nodes cannot meet when they are called on to replace the active nodes. It might be necessary to scale up all the servers in the cluster to the same degree, to provide optimum performance levels under all circumstances.

■ Scaling out  Adding servers to an existing cluster. When you distribute the processing load for an application among multiple servers, adding more servers reduces the burden on each individual computer (see the sketch that follows this list). Both server clusters and NLB clusters can be scaled out, but it is easier to add servers to an NLB cluster. In Network Load Balancing, each server has its own independent data store containing the applications and the data they supply to clients. Scaling out the cluster is simply a matter of connecting a new server to the network and cloning the applications and data. Once you have added the new server to the cluster, NLB assigns it an equal share of the processing load. Scaling out a server cluster is more complicated because the servers in the cluster must all have access to a common data store. Depending on the hardware configuration you use, scaling out might be extremely expensive or even impossible. If you anticipate the need for scaling out your server cluster sometime in the future, be sure to consider this when designing its hardware configuration.
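A back-of-the-envelope way to see the effect of scaling out is to divide the expected request load by the number of nodes and compare the result with what one server can comfortably handle. The figures and names in this Python sketch are made up purely for illustration:

# Hypothetical figures: how many servers does an NLB cluster need so that
# no single node carries more than its comfortable capacity?
import math

requests_per_second = 900          # expected total client load (hypothetical)
capacity_per_server = 250          # what one server can handle (hypothetical)

servers_needed = math.ceil(requests_per_second / capacity_per_server)
print(f"Scale out to {servers_needed} servers")                              # 4 servers
print(f"Each server then handles ~{requests_per_second / servers_needed:.0f} requests/s")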
Real World  Scalability in the Real World
Be sure to remember that the scalability of your cluster is also limited by the capabilities of the operating system you are using. When scaling out a cluster, the maximum numbers of nodes supported by the Windows operating systems are as follows:

Operating System                           Network Load Balancing    Server Clusters
Windows Server 2003, Standard Edition      32                        Not Supported
Windows Server 2003, Enterprise Edition    32                        8
Windows Server 2003, Datacenter Edition    32                        8
Windows 2000 Advanced Server               32                        2
Windows 2000 Datacenter Server             32                        4

When scaling up a cluster, the operating system limitations are as follows:

Operating System                           Maximum Number of Processors    Maximum RAM
Windows Server 2003, Standard Edition      4                               4 GB
Windows Server 2003, Enterprise Edition    8                               32 GB
Windows Server 2003, Datacenter Edition    32                              64 GB
Windows 2000 Advanced Server               8                               8 GB
Windows 2000 Datacenter Server             32                              64 GB

How Many Clusters?

If you want to deploy more than one application with high availability, you must decide how many clusters you want to use. The servers in a cluster can run multiple applications, of course, so you can combine multiple applications in a single cluster deployment, or you can create a separate cluster for each application. In some cases, you can even combine the two approaches.

For example, if you have two stateful applications that you want to deploy using server clusters, the simplest method would be to create a single cluster and install both applications on every computer in the cluster, as shown in Figure 7-3. In this arrangement, a single server failure affects both applications, and the remaining servers must be capable of providing adequate performance for both applications by themselves.

Figure 7-3 A cluster with two applications running on each server

Another method is to create a separate cluster for each application, as shown in Figure 7-4. In this model, each cluster operates independently, and a failure of one server affects only one of the applications. In addition, the remaining servers in the affected cluster have to take on the burden of only one application. Creating separate clusters provides higher availability for the applications, but it can also be an expensive solution, because it requires more servers than the first method.

Figure 7-4 Two separate clusters running two different applications

It is also possible to compromise between these two approaches by creating a single cluster, installing each of the applications on a separate active node, and using one passive node as the backup for both applications, as shown in Figure 7-5. In this arrangement, a single server failure causes the passive node to take on the burden of running only one of the applications. Only if both active nodes fail would the passive node have to take on the full responsibility of running both applications. It is up to you to evaluate the odds of such an occurrence and to decide whether your organization’s availability requirements call for a passive node server with the capability of running both applications at full performance levels, or whether a passive node scaled to run only one of the applications is sufficient.

[...]

WLBS Cluster Control Utility V2.4 (c) 1997-2003 Microsoft Corporation
Cluster 192.168.2.101
Retrieving parameters
Current time             = 3/19/2003 1:55:24 AM
HostName                 = cz3net.int.adatum.com
ParametersVersion        = 4
CurrentVersion           = 00000204
EffectiveVersion         = 00000201
InstallDate              = 3E779B7C
HostPriority             = 3
ClusterIPAddress         = 192.168.2.101
ClusterNetworkMask       = 255.255.255.0
DedicatedIPAddress       = 192.168.2.3
DedicatedNetworkMask     = 255.255.255.0
McastIPAddress           = 0.0.0.0
ClusterName              = www.int.adatum.com
ClusterNetworkAddress    = 03-bf-c0-a8-02-65
IPToMACEnable            = ENABLED
MulticastSupportEnable   = ENABLED
IGMPSupport              = DISABLED
MulticastARPEnable       = ENABLED
MaskSourceMAC            = ENABLED
AliveMsgPeriod           = 1000
AliveMsgTolerance        = 5
NumActions               = 100
NumPackets               = 200
NumAliveMsgs             = 66
DescriptorsPerAlloc      = 512
MaxDescriptorAllocs      = 512
[...]
IdentityHeartbeatPeriod  = 10000
IdentityHeartbeatEnabled = ENABLED
PortRules (1):
  VIP    Start  End  Prot  Mode      Pri  Load  Affinity
  All    80     80   TCP   Multiple       Eql   None
Statistics:
  Number of active connections =
  Number of [...]

[...] rule containing the port specified by the port variable, as follows:

WLBS Cluster Control Utility V2.4 (c) 1997-2003 Microsoft Corporation
Cluster 192.168.2.101
Retrieving state for port rule 80
Rule is enabled
Packets: Accepted=0, Dropped=17

Exam Tip
Be sure to understand that the NLB.EXE and WLBS.EXE programs are one and the same, with identical functions [...]

[...] connections can use copper-based as well as fiber-optic cable as a network medium. The most common method for implementing a Fibre Channel storage solution on a server cluster is to install a Fibre Channel host adapter in each cluster server and then use them to connect the computers to one or more external storage devices. The storage devices are typically self-contained drive [...]

[...] Open Internet Explorer, and in the Address drop-down list, type http://10.0.0.100, and then press Enter. The “Hello, world” page you created earlier appears in the browser. This test is successful because the NLB service has created the 10.0.0.100 address you specified for the cluster.
2. Next, type http://10.0.0.1 in the Address drop-down list, and then press Enter. The “Hello, world” page appears again [...]

[...] e-mail stores on a drive array (typically using RAID or some other data availability technique) that is connected to all the servers in the cluster. Therefore, all the application’s clients, no matter which server in the cluster they connect to, are working with the same data files, as shown in Figure 7-10.

Figure 7-10 [...]

Because of the limitations of the SCSI architecture, Windows Server 2003 only supports two-node clusters using SCSI, and only with the 32-bit version of Windows Server 2003, Enterprise Edition. SCSI hubs are also not supported. In addition, you cannot use SCSI for a geographically dispersed cluster, as the maximum length for a SCSI bus is 25 meters.

Real World  SCSI Clustering
[...]

[...] server cluster
4. In the Subnet Mask text box, type 255.0.0.0
5. In the Full Internet Name text box, type www.contoso.com. This fully qualified domain name (FQDN) will represent the cluster on the network. Web users type this name in their browsers to access the Web server cluster.

Important
Specifying a name for the cluster in this dialog box does not in itself make the cluster available to clients by that
[...] as in FC-AL; the full bandwidth of the network is available to all communications.

Figure 7-13 A cluster using a Fibre Channel switched fabric network

An FC-SW network that is wholly dedicated to giving servers access to data storage devices is a type of SAN. Building a SAN to service your server [...]

[...] six-byte hexadecimal value hard-coded into the adapter by the manufacturer. Three bytes of the address contain a code identifying the manufacturer, and three bytes identify the adapter itself [...]

[...] possible NLB configurations, each of which has advantages and disadvantages, as shown in Table 7-1.

Table 7-1 NLB Configuration Advantages and Disadvantages
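The wlbs display output shown earlier in this chapter is essentially a list of “Parameter = value” lines, which makes captured output easy to post-process in a script. The following Python sketch parses text of that general shape into a dictionary; the shortened sample capture and the function name are ours, and this is not a Microsoft-provided tool.

# Parse "Name = value" lines, such as those produced by wlbs/nlb display,
# into a dictionary. The sample below is a shortened, hypothetical capture.

sample_output = """\
HostPriority       = 3
ClusterIPAddress   = 192.168.2.101
ClusterNetworkMask = 255.255.255.0
ClusterName        = www.int.adatum.com
"""

def parse_wlbs_display(text):
    """Return a dict mapping parameter names to values from display-style output."""
    params = {}
    for line in text.splitlines():
        if "=" in line:
            name, _, value = line.partition("=")
            params[name.strip()] = value.strip()
    return params

params = parse_wlbs_display(sample_output)
print(params["ClusterIPAddress"])    # 192.168.2.101
print(params["ClusterName"])         # www.int.adatum.com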