BEST PRACTICE GUIDELINES FOR DEPLOYING EX8200 VIRTUAL CHASSIS CONFIGURATIONS


IMPLEMENTATION GUIDE
Copyright © 2011, Juniper Networks, Inc.

Although Juniper Networks has attempted to provide accurate information in this guide, Juniper Networks does not warrant or guarantee the accuracy of the information provided herein. Third-party product descriptions and related technical details provided in this document are for information purposes only, and such products are not supported by Juniper Networks. All information provided in this guide is provided "as is", with all faults, and without warranty of any kind, either expressed or implied or statutory. Juniper Networks and its suppliers hereby disclaim all warranties related to this guide and the information contained herein, whether expressed, implied, or statutory, including, without limitation, those of merchantability, fitness for a particular purpose, and noninfringement, or arising from a course of dealing, usage, or trade practice.

Table of Contents

Introduction
Scope
Terminology
Design Considerations
    EX8200 Virtual Chassis Configurations
    EX8200 Virtual Chassis Ports
        Virtual Chassis Ports on the XRE200 External Routing Engine
        Virtual Chassis Ports on EX8200 Member Chassis
    Comparison Between EX4200 and EX8200 Virtual Chassis Configurations
EX8200 Virtual Chassis High Availability and Resiliency
    Hardware Redundancy
    Control Plane Redundancy
        Graceful Routing Engine Switchover
        Nonstop Active Routing
    Data Plane Redundancy
    Nonstop Software Upgrade (NSSU)
The Makeup of an EX8200 Virtual Chassis
    XRE200 External Routing Engines
    EX8200 Chassis
    Member ID and Interface Numbering
    Network Ports
    Virtual Chassis Ports
    Connecting an XRE200 into an EX8200 Virtual Chassis Configuration
Building Virtual Chassis Configurations over Long Distances
New EX8200 Virtual Chassis Configuration Steps
    Upgrade All XRE200s and EX8200 Switches to the Same Version
    Prepare EX8200 Switches to be Part of a Virtual Chassis Configuration
    Create Preprovisioned Virtual Chassis Configuration on Master XRE200
    Connect the EX8200 Switches to the XRE200s
    Interconnect the XRE200s
    Convert EX8200 Switch 10GbE Network Ports to Virtual Chassis Ports
    Interconnect the EX8200 Switches via the Converted 10GbE Virtual Chassis Ports
    Virtual Chassis Show Commands
Migrating EX8200 Standalone Switches to EX8200 Virtual Chassis Configurations
    Upgrade the Switches to Junos OS Version 11.1
    Revert the Access Switch Links Back to the First EX8208, Now in Virtual Chassis Mode
    Attach the Second EX8208 to the Pair of XRE200s and Configure It in Virtual Chassis Mode
    Restore the Access Switch Links Back to the Second EX8208, Now in Virtual Chassis Mode
    Disable VRRP on the EX8200 Switches, Now in Virtual Chassis Mode
EX8200 Virtual Chassis High Availability Best Practices
EX8200 Virtual Chassis Failover Convergence Times
EX8200 Virtual Chassis over Long Distances Configuration Steps
    Physical Connections of XRE200s and Intermediate Switches
    UFD Configuration on Intermediate Switches
    XRE200 Configuration
Conclusion
About Juniper Networks

Table of Figures

Figure 1: EX8200 Virtual Chassis configuration with XRE200 connecting to every member
Figure 2: EX8200 four-member Virtual Chassis configuration with full mesh connection between every access switch and line card chassis
Figure 3: Two-member EX8200 Virtual Chassis with XRE200s connected to each other directly over a GbE interface
Figure 4: Two-member EX8200 Virtual Chassis over long distance
Figure 5: Redundant pair of EX8200 switches running VRRP and RTG with all traffic flowing over the master EX8200 switch
Figure 6: All traffic flows through the backup EX8200 switch while the master is being prepared to run an EX8200 Virtual Chassis configuration
Figure 7: Single-member EX8200 Virtual Chassis is formed while all traffic flows through the backup EX8200
Figure 8: All traffic is migrated from the backup EX8200 to the single-member EX8200 Virtual Chassis
Figure 9: All traffic flows through the single-member EX8200 Virtual Chassis while the backup EX8200 is configured to join the EX8200 Virtual Chassis configuration
Figure 10: Traffic is load-balanced over both EX8200 switches migrated to a two-member EX8200 Virtual Chassis
Figure 11: Logical network topology of an EX8200 Virtual Chassis configuration
Figure 12: Two-member EX8200 Virtual Chassis with dual-homed access devices
Figure 13: A pair of two-member EX8200 Virtual Chassis configurations with dual-homed access devices interconnected via MPLS
Figure 14: Two XRE200s connected to each other via EX2200 over long distance

Introduction

With the explosion of video, mobile services, and other real-time applications, network users now impose more stringent requirements and have greater expectations about what networks can deliver. Modern enterprise networks must deploy robust, loop-free, multipath technologies to meet user demands and to satisfy the increased reliance on highly available network infrastructures, avoiding the high cost of downtime.

Traditional LAN designs depend on Spanning Tree Protocol (STP) to prevent logical loops in switched networks with redundant links. In addition to being difficult to deploy, manage, and troubleshoot, these technologies leave costly network capacity underutilized, driving up costs. Finally, running STP in a virtualized network with redundant switches requires compute-intensive protocols such as Virtual Router Redundancy Protocol (VRRP) on each switch, limiting the number of simultaneous logical connections that can be supported.

Juniper Networks Virtual Chassis technology provides an innovative technique for building highly available and resilient Layer 2 networks without having to rely on protocols like STP or VRRP.

Scope

The purpose of this guide is to provide network architects and engineers with best practice guidelines for designing, deploying, and configuring the Juniper Networks EX8200 line of Ethernet switches with Virtual Chassis technology. The guide is divided into three parts: EX8200 Virtual Chassis architectures at the network operational level; deploying Virtual Chassis technology using hardware components and connections; and migration, failover, and nonstop software upgrade (NSSU) scenarios.

Terminology

• XRE200: Juniper Networks XRE200 External Routing Engine
• LCC: Line card chassis (used in this document to describe an EX8200 member chassis)
• SRE: Switch Fabric and Routing Engine
• VCP: Virtual Chassis port
• Virtual Chassis Control Protocol: the protocol used to create, monitor, and maintain a Virtual Chassis configuration
• FPC: Flexible PIC Concentrator (i.e., line card module)
• Network ports: Non-Virtual Chassis ports that carry only data traffic
• PFE: Packet Forwarding Engine

Design Considerations

EX8200 Virtual Chassis Configurations

Virtual Chassis technology was first introduced on the Juniper Networks EX4200 line of Ethernet switches. Virtual Chassis technology allows up to 10 interconnected EX4200 switches to operate as a single, unified, high-bandwidth device. These interconnections can be made using any combination of dedicated high-speed Virtual Chassis ports (VCPs) on the switch's rear panel or front panel gigabit Ethernet (GbE) or 10GbE fiber links.

EX8200 Virtual Chassis technology was first introduced with the Juniper Networks Junos operating system 10.4 release, enabling up to two EX8200 chassis to be interconnected as a single logical device, with the ability to expand to four with a later software release. The EX8200 Virtual Chassis architecture consists of redundant external Routing Engines, the XRE200 External Routing Engines, capable of managing up to four EX8200 line card chassis (LCCs) connected using 1GbE or 10GbE VCPs and operating as a single chassis. Unlike other virtual system technologies, EX8200 Virtual Chassis technology separates the controller of the virtual system from the chassis Routing Engine. The XRE200s connect to the EX8200 chassis via the 1GbE out-of-band management ports on the Routing Engine modules installed in the modular switch, forming a single Virtual Chassis configuration as shown in Figure 1. These interconnections, known as dedicated VCP links, constitute the control plane connection and do not carry data traffic.

Figure 1: EX8200 Virtual Chassis configuration with XRE200 connecting to every member

Figure 1 shows a fully meshed two-member Virtual Chassis configuration with two XRE200s and a single 10GbE Link Aggregation Group (LAG) between LCCs. In an EX8200 Virtual Chassis configuration, each EX8200 chassis becomes an LCC, and the LCCs are interconnected through EX8200-8XS line cards using either a single 10GbE link or a LAG bundle with up to twelve 10GbE line-rate links. This connectivity serves two functions: to allow data traffic between LCCs for single-homed access devices, and to pass control traffic between the EX8200 chassis in case of the failure of all dedicated VCP links. Since the EX8200-8XS line cards use small form-factor pluggable plus (SFP+) transceivers that can support connections up to 40 km in distance, EX8200-based Virtual Chassis configurations can, for instance, span a large metropolitan area.
If the Virtual Chassis members are located in the same or adjacent racks, low-cost direct attach cables (DACs) can be used as the interconnect mechanism. Members of an EX8200 Virtual Chassis configuration can include a mix of the Juniper Networks EX8208 Ethernet Switch (eight-slot) and EX8216 Ethernet Switch (16-slot).

EX8200 Virtual Chassis Ports

A Virtual Chassis port (VCP) is any port that is capable of sending and receiving Virtual Chassis Control Protocol traffic to create, monitor, and maintain the Virtual Chassis configuration. There are three types of VCPs on the EX8200: inter-XRE200, XRE-LCC, and intra-Virtual Chassis. The inter-XRE200 and XRE-LCC ports are called "dedicated" VCPs because they carry control traffic, while intra-Virtual Chassis ports carry data traffic between LCCs. In some cases, intra-Virtual Chassis ports may carry data as well as control traffic.

Virtual Chassis Ports on the XRE200 External Routing Engine

All GbE ports on the XRE200 Virtual Chassis Control Interface (VCCI) modules are VCPs. Any of the VCPs can be used to connect EX8200 switches to the XRE200 to form a Virtual Chassis configuration, and also to connect XRE200s together to provide redundancy within the Virtual Chassis configuration. Any link connecting an XRE200 to an EX8200 switch or to another XRE200, therefore, is a VCP link. No user configuration is required to configure these VCP links. All VCP links on the XRE200 carry only Virtual Chassis Control Protocol traffic.

Virtual Chassis Ports on EX8200 Member Chassis

An EX8200 switch in standalone mode has no VCPs. When a standalone EX8200 switch is enabled to function as a Virtual Chassis switch, the management ports on the switch's Routing Engines are converted into dedicated XRE-LCC Virtual Chassis ports that carry the Virtual Chassis Control Protocol traffic over the XRE-LCC VCP links. No further configuration is required to configure these VCP links.

Lastly, the intra-Virtual Chassis ports, which can reside only on the EX8200 switches, can only be configured on the 10GbE EX8200-8XS line cards. VCPs on the 10GbE EX8200-8XS line card are enabled in pairs, i.e., ports that reside on the same Packet Forwarding Engine (PFE). The EX8200-8XS line card offers eight ports (0 through 7), with two contiguous ports residing on the same PFE. If port 0 is enabled as a VCP, Junos OS will automatically enable port 1 as a VCP. Intra-Virtual Chassis links between member switches in a Virtual Chassis configuration are automatically configured to form a single LAG; no further user configuration is required. It is possible to configure up to 12 Ethernet ports as VCPs to form a LAG between member switches in a Virtual Chassis configuration. For highest availability, a two-member LAG is recommended at a minimum; however, a four-member LAG with two pairs of port members residing on different line cards is preferred.
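
The "Convert EX8200 Switch 10GbE Network Ports to Virtual Chassis Ports" step listed in the table of contents performs this conversion from the CLI. The following is a rough, hedged sketch of what that operational step and its verification might look like; the interface names are examples only, the exact command arguments vary by platform and Junos OS release, and lines beginning with # are annotations rather than CLI input.

    # Illustrative only: convert one 10GbE EX8200-8XS port per LCC to a VCP;
    # Junos OS automatically enables the paired port on the same PFE.
    request virtual-chassis vc-port set interface xe-0/0/6
    request virtual-chassis vc-port set interface xe-16/0/6
    # Verify the VCP links and the overall Virtual Chassis state.
    show virtual-chassis vc-port
    show virtual-chassis status
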
Comparison Between EX4200 and EX8200 Virtual Chassis Configurations

The EX4200 switch has inherent Routing Engine support for up to 10 switches in a Virtual Chassis configuration, while the EX8200 uses two scalable external Routing Engines, the XRE200s, which allow two (and in the future four) EX8200 chassis to form a Virtual Chassis configuration. One of the XRE200s is a hot-standby backup that takes over in case of failure on the active XRE200. With the EX8200, the dedicated inter-XRE200 and XRE-LCC VCPs carry control traffic only, while the intra-Virtual Chassis VCPs carry data traffic between LCCs. The intra-Virtual Chassis VCPs, which are also referred to as VCP-extension (VCPe) ports, carry control traffic only in the event of a dedicated VCP failure. With the EX4200, the same VCP or VCPe ports carry both data and control traffic. Note that EX4200 Virtual Chassis configurations can be built using dedicated high-speed (128 Gbps) VCPs on the switch's rear panel, or by using front panel GbE or 10GbE fiber links.

In an EX8200 Virtual Chassis configuration, mastership is limited to XRE200 members, as the mastership priority setting on the EX8200 chassis themselves is fixed to 0, making them ineligible for mastership election. In an EX4200 Virtual Chassis configuration, any of the members are eligible for mastership.

To reduce the amount of traffic between LCCs over the VCP links, Virtual Chassis technology on the EX8200 switch employs an intelligent chassis-local load-balancing model, which is not present on the EX4200. Figure 2 depicts a four-member EX8200 Virtual Chassis configuration with full mesh connectivity between all LCCs and access switches. If the source and destination reside on Access Switch 1 and Access Switch 4, the traffic between them will be directly switched via a single node, say Node 4, of the EX8200 Virtual Chassis configuration. The traffic never crosses multiple nodes.

Figure 2: EX8200 four-member Virtual Chassis configuration with full mesh connection between every access switch and line card chassis (traffic flow between Access Switch 1 and Access Switch 2 is switched locally)

Table 1 lists a side-by-side comparison between EX4200 and EX8200 switches with Virtual Chassis technology.

Table 1: EX4200 vs. EX8200 Virtual Chassis Configuration Side-by-Side Comparison

Native Virtual Chassis support
  EX4200 Virtual Chassis: Inherent support for Virtual Chassis by the forwarding engine.
  EX8200 Virtual Chassis: Virtual Chassis emulated using the XRE200.
Virtual Chassis ports
  EX4200 Virtual Chassis: Fixed Virtual Chassis ports.
  EX8200 Virtual Chassis: Any 10GbE port on an 8XS line card can be a Virtual Chassis port.
Virtual Chassis mastership
  EX4200 Virtual Chassis: Any member is eligible to be master or backup; the Routing Engine has an election mechanism to choose the master and backup.
  EX8200 Virtual Chassis: Master and backup are fixed to the XRE200s.
Virtual Chassis management
  EX4200 Virtual Chassis: All members are managed from the master member.
  EX8200 Virtual Chassis: All members are managed from the master XRE200.
Link Aggregation Group
  EX4200 Virtual Chassis: LAG load balancing is hash based.
  EX8200 Virtual Chassis: LAG load balancing is chassis-local.
Virtual Chassis path calculation
  EX4200 Virtual Chassis: Every PFE is a node in the Virtual Chassis topology.
  EX8200 Virtual Chassis: Every chassis is a node in the Virtual Chassis topology.

EX8200 Virtual Chassis High Availability and Resiliency

Hardware Redundancy

Any network design that is to provide high availability must be built on a solid foundation that offers stability, resiliency, and redundancy. The hardware design of the EX8200 Virtual Chassis configuration separates the control and data planes through the use of separate hardware components: Routing Engines for control functions and Flexible PIC Concentrators (FPCs) for data traffic.

Dual External Routing Engines (XRE200s): In EX8200 Virtual Chassis configurations, Routing Engine functionality is externalized in a special, purpose-built, server-class appliance called the XRE200. With its 2.1 GHz dual-core CPU, 4 GB DRAM, 160 GB RAID hard disk, and dual redundant power supplies, the XRE200 supports the control plane processing requirements of large-scale systems such as EX8200 Virtual Chassis configurations, and also provides an extra layer of availability and redundancy. Two XRE200s, running in active/hot-standby mode, are required in an EX8200 Virtual Chassis configuration to provide data, control, and management plane redundancy. To achieve high availability (HA) and resiliency goals, the EX8200 Virtual Chassis must be connected in a fully meshed topology. The master XRE200 plays the role of the protocol master and takes care of interface creation and management. All control protocols, such as OSPF, Internet Group Management Protocol (IGMP), Link Aggregation Control Protocol (LACP), 802.3ah, and the Virtual Chassis Control Protocol, as well as all management plane functions, run or reside on the master XRE200. Junos OS control plane HA features such as graceful Routing Engine switchover (GRES), nonstop active routing (NSR), and nonstop bridging (NSB), available in Junos OS 11.3, are enabled on both XRE200s. In the event of an active XRE200 failure, the standby XRE200 takes over, and Junos OS HA features ensure that the state of the Virtual Chassis, L2/L3 protocols, and forwarding information is not lost.

Dual Switch Fabric and Routing Engines (SREs): Redundant Switch Fabric and Routing Engines (SREs) are supported on the EX8200 switches in both standalone and Virtual Chassis configuration modes. One Routing Engine functions as the master, while the other is a hot standby should the master fail. In the EX8200 Virtual Chassis configuration, dual Routing Engines are installed in each of the LCCs. The master Routing Engine in an LCC provides chassis control operations such as FPC power budgeting, power supply unit (PSU) management, line card management, and environmental monitoring and control. It also provides network port connectivity and data plane functionality while maintaining the PFE forwarding tables. Lastly, it acts as a conduit, or relay agent, for communications between the master XRE200 and the FPCs. The Junos OS GRES feature is enabled by default on the SREs in the EX8200 Virtual Chassis configuration, so that the backup SRE stays in sync with the master Routing Engine in terms of line card states, PFE forwarding tables, and so forth. If the master becomes unavailable, the backup SRE takes over the functions that the master SRE performs.
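
As a hedged illustration of the control plane HA features mentioned above (GRES, NSR, and NSB, which are described in detail in the sections that follow), the following minimal sketch uses standard Junos OS configuration statements to enable them and to keep commits synchronized to the backup Routing Engine, which NSR requires. These statements are not taken from this guide, and their availability and hierarchy should be verified against the Junos OS release in use; lines beginning with # are annotations rather than CLI input.

    # Minimal sketch (assumed): enable GRES, NSR, and NSB.
    set chassis redundancy graceful-switchover
    set routing-options nonstop-routing
    set protocols layer2-control nonstop-bridging
    # NSR requires synchronized commits so the backup Routing Engine stays current.
    set system commit synchronize
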
Control Plane Redundancy

Graceful Routing Engine Switchover

In dual XRE200 and SRE configurations, the GRES feature leverages the Junos OS separation of control and data plane functionality to provide continuous packet forwarding even when one Routing Engine fails. GRES preserves interface and kernel information so that traffic is not interrupted. However, GRES alone does not preserve the control plane. Even though data traffic continues to flow through the device (switch or router) during and after a switchover between Routing Engines, protocol timers expire, the neighbor relationships between routers are dropped, and traffic is stopped at the neighbor device. Neighboring devices detect that the EX8200 Virtual Chassis switch has experienced a restart and react to the event in a manner prescribed by the individual protocol specifications. To preserve all control plane functions during an XRE200 switchover, GRES must be combined with either graceful restart protocols or nonstop active routing (NSR) and nonstop bridging (NSB). Any updates to the master XRE200 are replicated to the backup XRE200 as soon as they occur. If the kernel on the master XRE200 stops operating, the master XRE200 experiences a hardware failure, or the administrator initiates a manual switchover, mastership switches to the backup XRE200.

Nonstop Active Routing

NSR enables a routing device with redundant Routing Engines to switch from the primary Routing Engine to the backup Routing Engine without alerting peer nodes/neighbors that a change has occurred. NSR uses the same infrastructure as GRES to preserve interface and kernel information. NSR preserves routing information and protocol sessions by running the routing protocol process on both Routing Engines, so tracing options are replicated on the backup Routing Engine as well. In addition, NSR preserves TCP connections maintained in the kernel. Stateful replication of routing table adjacency information on the standby Routing Engine uses routing protocol messages such as routing updates, hello messages, and adjacency states, so the standby can immediately take over the adjacencies without disruption. The switchover is therefore transparent to neighbors. Configuring NSR requires that GRES, which is disabled by default, be enabled on the XRE200. With Junos OS 11.3, NSR supports the following routing protocols:

• RIP, OSPF, IS-IS, BGP, IGMP, and Bidirectional Forwarding Detection (BFD) for all IPv4 unicast protocols
• OSPFv3 and RIPng

IGMP snooping, Protocol Independent Multicast (PIM), and Multicast Listener Discovery (MLD) will gain NSR capabilities at a later time. These features can still be configured when NSR is enabled but will be excluded from NSR.

Data Plane Redundancy

Multi-LCC LAGs: Link aggregation on Ethernet networks is a simple yet effective way to address bandwidth limitations and lack of resilience. EX8200 Virtual Chassis technology supports combining up to 12 individual interfaces into a single bundle known as a Link Aggregation Group (LAG). Connecting access switches to the EX8200 Virtual Chassis configuration over LAGs whose member links terminate on different LCC members ensures a redundant multipath for the data plane. All links are active at the same time, and traffic is load-balanced across them using the intelligent chassis-local load-balancing model explained earlier in this guide. This reduces the amount of traffic between LCCs over the VCP links while ensuring resiliency.
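
The following is a hedged sketch of such a multi-LCC LAG, using standard Junos OS aggregated Ethernet statements: a two-member bundle (ae0) toward an access switch, with one 10GbE member link on each LCC so that the bundle survives the loss of an entire chassis. The interface names, the bundle name, and the use of LACP are illustrative assumptions rather than values from this guide; lines beginning with # are annotations.

    # Assumed example: one member link on LCC member 0 (xe-0/0/1) and one on member 1 (xe-16/0/1).
    set chassis aggregated-devices ethernet device-count 1
    set interfaces xe-0/0/1 ether-options 802.3ad ae0
    set interfaces xe-16/0/1 ether-options 802.3ad ae0
    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 unit 0 family ethernet-switching
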
Nonstop Software Upgrade (NSSU)

Nonstop software upgrade (NSSU) provides a mechanism for upgrading Junos OS on Juniper Networks EX Series Ethernet Switches (in standalone mode as well as Virtual Chassis mode) using a single command-line interface (CLI) command with minimal traffic disruption. NSSU is available on EX8200 switches with redundant Routing Engines and on EX8200 Virtual Chassis configurations with redundant XRE200 External Routing Engines. NSSU leverages underlying HA features such as GRES and NSR to upgrade the Junos OS version running on a switch or in a Virtual Chassis configuration with no disruption to the control plane and only sub-second interruption to the data plane. In addition, NSSU upgrades line cards one at a time, permitting traffic to continue to flow through the line cards that are not being upgraded. By configuring LAGs such that the member links reside on different line cards, it is possible to achieve sub-second traffic disruption when performing an NSSU.

The Makeup of an EX8200 Virtual Chassis

As mentioned in the previous sections, with Junos OS release 10.4 and later, EX8200 Virtual Chassis configurations are formed using two XRE200 External Routing Engines and two EX8200 chassis with dual SREs.

XRE200 External Routing Engines

The function of each hardware device in a Virtual Chassis configuration is determined by that device's role. The master role in an EX8200 Virtual Chassis configuration is assigned to an XRE200 External Routing Engine only. One of the XRE200s takes the role of Junos OS master Routing Engine, and the other acts as the Junos OS backup Routing Engine. The functions of the XRE200s include:

• The master XRE200 controls most Routing Engine functions for all switches in the Virtual Chassis configuration.
• The master XRE200 provides a single point for viewing and managing all functionality for all devices in the Virtual Chassis configuration.
• The backup XRE200 maintains a state of readiness to take over the master role if the master fails.

EX8200 Chassis

All EX8200 switches in an EX8200 Virtual Chassis configuration are assigned a line card role. Each switch is referred to as a line card chassis, or LCC. The LCCs are excluded from running the chassis control protocols; their primary function is to forward data traffic. The only role available for the switches in an EX8200 Virtual Chassis configuration is the line card role.

Member ID and Interface Numbering

An EX8200 Virtual Chassis configuration contains member IDs 0 through 9. Member IDs 0 through 7 must be assigned to EX8200 LCCs only. Member IDs 8 and 9 must be assigned to XRE200 External Routing Engines only. Table 2 summarizes the EX8200 Virtual Chassis member IDs and roles.

Table 2: EX8200 Virtual Chassis Member IDs and Roles

Device                     Role                               Member IDs
EX8208 or EX8216 switch    Line card                          0-7
XRE200                     Master or backup Routing Engine    8-9
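
These member IDs and roles are what the "Create Preprovisioned Virtual Chassis Configuration on Master XRE200" step listed in the table of contents declares. The following is a hedged, minimal sketch of such a preprovisioned configuration for a two-member Virtual Chassis; the serial numbers are placeholders, and the exact statement syntax should be verified against the Junos OS release in use. Lines beginning with # are annotations.

    # Assumed example: two EX8200 LCCs (members 0 and 1) and two XRE200s (members 8 and 9).
    set virtual-chassis preprovisioned
    set virtual-chassis member 0 role line-card serial-number AB0211100001
    set virtual-chassis member 1 role line-card serial-number AB0211100002
    set virtual-chassis member 8 role routing-engine serial-number XR0211100001
    set virtual-chassis member 9 role routing-engine serial-number XR0211100002
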
Network Ports

Interface numbering of the network ports in an EX8200 Virtual Chassis configuration follows the standard Junos OS interface nomenclature type-<fpc>/<pic>/<port>, for example xe-0/0/0. The FPC numbers are based on the member ID of the respective switch member and can be noncontiguous in a two-member Virtual Chassis configuration. The value for PIC is always 0.

Table 3 shows the FPC and interface numbering for EX8216 or EX8208 members populated with EX8200-8XS line card modules.

Table 3: EX8216 or EX8208 FPC and Interface Numbering

Member ID 0
  FPC numbering: 0 through 15 for EX8216; 0 through 7 for EX8208
  Network port interface range: xe-0/0/0 to xe-15/0/7 (EX8216); xe-0/0/0 to xe-7/0/7 (EX8208)
Member ID 1
  FPC numbering: 16 through 31 for EX8216; 16 through 23 for EX8208
  Network port interface range: xe-16/0/0 to xe-31/0/7 (EX8216); xe-16/0/0 to xe-23/0/7 (EX8208)
Member ID 2
  FPC numbering: 32 through 47 for EX8216; 32 through 39 for EX8208
  Network port interface range: xe-32/0/0 to xe-47/0/7 (EX8216); xe-32/0/0 to xe-39/0/7 (EX8208)
Member ID 3
  FPC numbering: 48 through 63 for EX8216; 48 through 55 for EX8208
  Network port interface range: xe-48/0/0 to xe-63/0/7 (EX8216); xe-48/0/0 to xe-55/0/7 (EX8208)

Virtual Chassis Ports

Interface numbering of the VCPs in an EX8200 Virtual Chassis configuration varies with the type of VCP and the member switch on which the port resides. When an EX8200 switch in standalone mode is enabled to function as a Virtual Chassis member, the management ports on the SREs installed in the switch are converted into the dedicated XRE-LCC VCPs. All VCPs on the SRE follow the nomenclature vcp-<slot>/<port>, for example vcp-0/0 for a port that resides on SRE0. Intra-Virtual Chassis ports that reside on EX8200-8XS line cards follow the nomenclature vcp-<fpc>/<pic>/<port>, for example vcp-2/0/0.

Connecting an XRE200 into an EX8200 Virtual Chassis Configuration

The GbE interfaces on the active XRE200 (up to eight) can be used to connect to the active Routing Engines in each of the EX8200 chassis participating in the Virtual Chassis configuration. Similarly, the GbE interfaces on the standby XRE200 (again, up to eight) can be used to connect to the standby Routing Engines in each of the EX8200 chassis in the Virtual Chassis configuration. The two XRE200s can also be connected to each other directly over any available GbE interface (Figure 3).

Figure 3: Two-member EX8200 Virtual Chassis with XRE200s connected to each other directly over a GbE interface

Building Virtual Chassis Configurations over Long Distances

In large campus or data center environments where the distance between the XRE200s and the EX8200 chassis exceeds the maximum reach of a Cat5 or Cat6 cable, dedicated low-end Layer 2 switches such as the Juniper Networks EX2200 Ethernet Switch can be deployed in each location to act as media converters, allowing the XRE200s to be connected over fiber links. At a later time, the XRE200s will have SFP-based interface modules that will eliminate the need for media converters. Currently, a port pair consisting of an RJ-45 port and an SFP port on each Layer 2 switch is required for every long-distance connection. All port pairs dedicated to supporting a long-distance connection are assigned to the same static VLAN. This simple configuration enables users to easily deploy EX8200 Virtual Chassis configurations in a wide variety of environments.
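
As a hedged sketch of the intermediate-switch configuration described above, the following standard EX Series statements place the RJ-45 port facing the local XRE200 and the SFP uplink facing the remote site into one dedicated static VLAN on an EX2200. The port names, VLAN name, and VLAN ID are illustrative assumptions rather than values from this guide; lines beginning with # are annotations.

    # Assumed example for one EX2200 media converter: copper port ge-0/0/0 faces the
    # local XRE200, and SFP uplink ge-0/1/0 faces the remote site over fiber.
    set vlans vc-control vlan-id 999
    set interfaces ge-0/0/0 unit 0 family ethernet-switching port-mode access
    set interfaces ge-0/0/0 unit 0 family ethernet-switching vlan members vc-control
    set interfaces ge-0/1/0 unit 0 family ethernet-switching port-mode access
    set interfaces ge-0/1/0 unit 0 family ethernet-switching vlan members vc-control
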
