
Cisco Networkers 2009, session BRKCOM-2986 – Unified Data Center Architecture: Integrating Unified Computing System Technology


DOCUMENT INFORMATION

Number of pages: 41
File size: 7.83 MB

Content

Unified Data Center Architecture: Integrating Unified Compute System Technology
BRKCOM-2986 – marregoc@cisco.com
© 2009 Cisco Systems, Inc. All rights reserved. Cisco Public.

Agenda
- Overview
- Design Considerations
- A Unified DC Architecture
- Deployment Scenarios

UCS Building Blocks
- UCS Manager: embedded, manages the entire system
- UCS 6100 Series Fabric Interconnect: 20-port 10Gb FCoE (UCS-6120) or 40-port 10Gb FCoE (UCS-6140)
- UCS 2100 Series Fabric Extender: remote line card
- UCS 5100 Series Blade Server Chassis: flexible bay configurations
- UCS B-Series Blade Server: industry-standard architecture
- UCS virtual adapters: choice of multiple adapters

Legend (diagram icons)
Catalyst 6500 Multilayer Switch; Catalyst 6500 L2 Switch; Virtual Switching System; Generic Cisco Multilayer Switch; Nexus 1000V (VEM); Nexus 1000V (VSM); Embedded VEM with VMs; Nexus 2000 (Fabric Extender); Generic Virtual Switch; Nexus 5K with VSM; Nexus Multilayer Switch; Nexus L2 Switch; Nexus Virtual DC Switch (Multilayer); Nexus Virtual DC Switch (L2); UCS Fabric Interconnect; UCS Blade Chassis; Virtual Blade Switch; ACE Service Module; MDS 9500 Director Switch; MDS Fabric Switch; ASA; IP Storage

Agenda
- Overview
- Design Considerations: blade chassis; rack capacity & server density; system capacity & density; chassis external connectivity; chassis internal connectivity
- A Unified DC Architecture
- Deployment Scenarios

Blade Chassis – UCS 5108 Details
- Eight blade slots in front; Fabric Extenders, power supplies, and fans in the rear
- Dimensions: 10.5" (26.7 cm) high x 17.5" (44.5 cm) wide x 32" (81.2 cm) deep
- Front-to-back airflow

UCS 5108 Chassis Characteristics
- 8 blade slots per chassis: 8 half-width servers or 4 full-width servers
- Up to two Fabric Extenders, both concurrently active
- Redundant and hot-swappable power supplies and fan modules
Additional chassis details:
- Size: 10.5" (6U) x 17.5" x 32"
- Total power consumption, nominal estimates: half-width servers 1.5 – 3.5 kW; full-width servers 1.5 – 3.5 kW
- Cabling, FEX to Fabric Interconnect, SFP+ connectors: CX1: 1, 3 & 5 meters; USR: 100 meters (late summer); SR: 300 meters

Blades, Slots, and Mezz Cards
- Half-width blade (B200-M1): uses one slot; one mezz card, so two ports per server; each port goes to a different FEX
- Full-width blade (B250-M1): uses two slots, always 1-2, 3-4, 5-6, or 7-8; two mezz cards, so four ports per server; one port from each mezz card to each FEX
- Each slot takes a single mezz card; each mezz card has two ports; each port connects to one FEX; a total of 8 mezz cards per chassis

UCS 2100 Series Fabric Extender Details
UCS 2104XP port usage:
- Two Fabric Extenders per chassis, 4 x 10GE uplinks per Fabric Extender
Fabric Extender connectivity:
- 1, 2, or 4 uplinks per Fabric Extender; all uplinks connect to one Fabric Interconnect
- Port combination across Fabric Extenders: the FEX port count must match per enclosure; any port on the FEX may be used
Fabric Extender capabilities:
- Managed as part of the Unified Compute System; 802.1q trunking and FCoE capabilities
- Uplink traffic distribution: the uplink is selected when a blade is inserted; slot assignment occurs at power-up; all of a slot's traffic is assigned to a single uplink
- Other logic: monitoring and control of environmentals; blade insertion/removal events
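The 1/2/4-uplink rule plus per-slot pinning fixes the nominal bandwidth each blade slot can count on. As a rough illustration (a minimal Python sketch; the function names and the round-robin pinning order are assumptions, not UCS Manager behavior), the per-slot numbers work out as follows:

SLOTS_PER_CHASSIS = 8
UPLINK_GBPS = 10

def per_slot_bandwidth(uplinks_per_fex, fex_per_chassis=2):
    """Nominal Gbps per slot: total FEX uplink capacity spread over 8 slots."""
    if uplinks_per_fex not in (1, 2, 4):
        raise ValueError("UCS 2104XP runs 1, 2, or 4 uplinks per FEX")
    return uplinks_per_fex * fex_per_chassis * UPLINK_GBPS / SLOTS_PER_CHASSIS

def pin_slots(uplinks_per_fex):
    """Illustrative static pinning: each slot's traffic rides exactly one uplink."""
    return {slot: slot % uplinks_per_fex for slot in range(SLOTS_PER_CHASSIS)}

for n in (1, 2, 4):
    print(f"{n} uplink(s) per FEX -> {per_slot_bandwidth(n):.1f}G per slot")
# Prints 2.5G, 5.0G, and 10.0G per slot, the figures the POD density slide uses.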
UCS 6100 Series Fabric Interconnect Details
Fabric Interconnect overview:
- Management of the compute system
- Network connectivity to/from compute nodes and to/from LAN/SAN environments
Types of Fabric Interconnect:
- UCS-6120XP: 1U, 20 fixed 10GE/FCoE ports & one expansion slot
- UCS-6140XP: 2U, 40 fixed 10GE/FCoE ports & two expansion slots
Expansion modules:
- Fibre Channel: 8 x 1/2/4G FC
- Ethernet: 6 x 10GE SFP+
- Fibre Channel & Ethernet: 4 x 1/2/4G FC & 4 x 10GE SFP+
- Fixed ports: FEX or uplink connectivity; expansion slot ports: uplink connectivity only

Network Equipment Distribution: EoR, ToR, Blade Switches, and Distributed Access Fabric
- EoR: modular switch at the end of a row of server racks. Cabling: copper, server to access; fiber, access to aggregation. Port density: 240 – 336 ports (C6500); 288 – 672 ports (N7000). Rack server density: 12 multi-RU servers. Tiers: typically 2, access and aggregation.
- ToR: low-RU, lower-port-density switch per server rack. Cabling: copper, server to ToR switch; fiber, ToR to aggregation. Port density: 40 – 48 GE ports (C49xx); 20 – 4000 GE ports (N2K-5K). Rack server density: 8 – 30 RU servers. Tiers: typically 2, access and aggregation.
- Blade switches: integrated into the blade enclosures in each server rack. Cabling: copper or fiber in-rack patch; fiber, access to aggregation. Rack server density: blade enclosures per rack, most commonly 12 – 48 blade servers. Tiers: typically 2, access and aggregation.
- Distributed Access Fabric (DAF): access fabric on top of the rack plus an access switch at the end of the row; applicable to low- and high-density server racks, the most flexible option. Cabling: fiber, access fabric to fabric switch. Rack server density: 14 – 16 servers (dual-homed); ranges from classic EoR to ToR designs. Tiers: one or two, collapsed access/aggregation or classic aggregation & access.

Agenda
- Overview
- Design Considerations
- A Unified DC Architecture: overall architecture; Ethernet environments; switch mode; EHM; UIO environments
- Deployment Scenarios

UCS and Network Environments
LAN & SAN deployment considerations, with the matching details:
- Mezz cards (CNAs): up to 10G FC (FCoE); Ethernet traffic can take the full capacity if needed
- Uplinks per Fabric Extender: drives fan-out, oversubscription & bandwidth
- Fabric Interconnect FC uplinks: flavor of expansion module; 4G FC ports and N-port channels; FC uplinks per fabric
- Uplinks per Fabric Interconnect: 10GE uplinks to each upstream switch
- Fabric Interconnect connectivity point: the Fabric Interconnect is the access layer and should connect to an L2/L3 boundary switch

UCS Fabric Interconnect Operation Modes: Switch Mode & End-Host Mode
Switch mode:
- Behaves like any other Ethernet switch: participates in the STP topology; follow STP design best practices
- Provides L2 switching functionality
- Uplink capacity is based on port-channels: 6140 (two expansion slots): 8-port port-channels; 6120 (one expansion slot): 6-port port-channels
End-host mode (EHM):
- The switch looks like a host upstream but still performs L2 switching
- MAC addresses are active on one link at a time and are pinned to uplinks
- Local MAC learning only, not on uplinks; forms a loop-free topology
- UCS Manager syncs the two Fabric Interconnects
- Uplink capacity: 6140: 12-way (12 uplinks); 6120: 6-way (6 uplinks)

UCS on Ethernet Environments: Fabric Interconnect & Network Topologies
Classic STP topologies (switch mode):
- The Fabric Interconnect runs STP and participates in the STP topology; follow STP design best practices
- Upstream devices: any L3/L2 boundary switch, e.g. Catalyst 6500 or Nexus 7000
Simplified topologies (EHM with a non-blocking upstream):
- Upstream switches provide the non-blocking path mechanism: VSS or vPC
- No special feature needed; the topology is loop-free; less reliance on STP
- Increased overall bandwidth capacity: 8-way multi-pathing
A combination of vPC/VSS and end-host mode provides an optimized network environment by reducing MAC scaling constraints, increasing available bandwidth, and lowering the processing load on aggregation switches.

UCS on Ethernet Environments: Connectivity Point
Fabric Interconnect to ToR access switches:
- Fabric Interconnect in end-host mode: no STP
- Leverages the total uplink capacity: 60G or 120G
- The L2 topology remains 2-tier
- Scalability: a 6140 pair: 10 chassis = 80 compute nodes at 3.3:1 subscription; a 5040 pair: 6 UCS systems = 480 compute nodes at 15:1 subscription; a 7018 pair: 13 5040 pairs = 6,240 compute nodes
- Enclosure: 80GE; compute nodes: 10GE attached
Fabric Interconnect to aggregation switches:
- Fabric Interconnect in switch mode
- Leverages the total uplink capacity: 60G or 80G
- The L2 topology remains 2-tier
- Scalability: a 6140 pair: 10 chassis = 80 compute nodes at 5:1 subscription; a 7018 pair: 14 6140 pairs = 1,120 compute nodes
- Enclosure: 80GE; compute nodes: 10GE attached
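The 3.3:1 and 5:1 figures fall out of simple port arithmetic. A hedged sketch (Python; the 40-downlink and 12/8-uplink counts are the 6140 values from this and the following slide, the helper name is illustrative):

def oversubscription(downlinks, uplinks, port_gbps=10):
    """Server-facing capacity vs. network-facing capacity on one fabric interconnect."""
    return (downlinks * port_gbps) / (uplinks * port_gbps)

downlinks = 10 * 4  # 10 chassis, 4 x 10GE from each chassis into this 6140

print(f"{oversubscription(downlinks, 12):.1f}:1")  # EHM, 12-way pinning -> 3.3:1
print(f"{oversubscription(downlinks, 8):.1f}:1")   # switch mode, 8-port channel -> 5.0:1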
UCS on Ethernet Environments: Connectivity Point (continued)
What other factors need to be considered?
- East-to-west traffic profiles
- Size of the L2 domain: VLANs, MAC addresses, number of ports, servers, switches, etc.
- Tiers in the L2 topology: STP domain, diameter
- Oversubscription
- Latency
- Server virtualization load: MAC addresses, VLANs, etc.
- Switch mode vs. end-host mode

Unified Compute System POD Example: Ethernet Only – Switch Mode
Access layer, UCS-6120 pair:
- 52 10GE 1:1 ports: 12 uplinks & 40 downlinks = 5 chassis
- 5 chassis = 40 half-width servers
- Chassis subscription: 16:8 = 2:1 – blade bandwidth: 10G
- Access-to-aggregation oversubscription: 20:6 ~ 3.3:1
Aggregation layer, Nexus 7010 + vPC:
- 128 10GE 1:1 ports = 8 UCS = 320 servers
Access layer, UCS-6140 pair:
- 104 10GE 1:1 ports: 16 uplinks & 80 downlinks = 10 chassis
- 10 chassis = 80 half-width servers
- Chassis subscription: 16:8 = 2:1 – blade bandwidth: 10G
- Access-to-aggregation oversubscription: 40:8 ~ 5:1
Aggregation layer, Nexus 7018 + vPC:
- 256 10GE 1:1 ports = 14 UCS = 1,120 servers
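The POD totals above are straight multiplication. A small sketch (Python; the system and chassis counts are the slide's, the helper names are not):

HALF_WIDTH_BLADES_PER_CHASSIS = 8

def chassis_subscription(mezz_ports=16, fex_uplinks=8):
    """16 mezz-card ports feeding 8 FEX uplinks: the 2:1 chassis subscription."""
    return mezz_ports / fex_uplinks

def pod_blades(ucs_systems, chassis_per_system):
    """Total half-width blades in a POD."""
    return ucs_systems * chassis_per_system * HALF_WIDTH_BLADES_PER_CHASSIS

print(chassis_subscription())   # 2.0 -> 2:1, 10G blade bandwidth
print(pod_blades(8, 5))         # Nexus 7010 pair with 6120-based UCS -> 320
print(pod_blades(14, 10))       # Nexus 7018 pair with 6140-based UCS -> 1120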
UCS POD Density: Ethernet Only
FEX uplink options set the per-slot bandwidth and the chassis count per fabric interconnect pair:
- 1 x 10GE FEX uplink, 2.5G per slot: 20 chassis / 160 blades (6120 pair); 40 chassis / 320 blades (6140 pair)
- 2 x 10GE FEX uplinks, 5G per slot: 10 chassis / 80 blades (6120 pair); 20 chassis / 160 blades (6140 pair)
- 4 x 10GE FEX uplinks, 10G per slot: 5 chassis / 40 blades (6120 pair); 10 chassis / 80 blades (6140 pair)
Nexus 7010 aggregation, 128 10GE 1:1 ports per pair: 320 – 1,280 blades
- UCS-6120 (6 uplinks, 20 downlinks per fabric interconnect): ~8 6120 pairs: 1,280, 640 & 320 blades
- UCS-6140 (8 uplinks, 40 downlinks per fabric interconnect): ~6 6140 pairs: 1,920, 960 & 480 blades
Nexus 7018 aggregation, 256 10GE 1:1 ports per pair: 1,120 – 4,480 blades
- UCS-6120: ~18 6120 pairs: 2,880, 1,440 & 720 blades
- UCS-6140: ~14 6140 pairs: 4,480, 2,240 & 1,120 blades

SAN Environments: NPV Mode
Fabric Interconnect in NPV mode:
- Treats the UCS as a host; helps addressing-domain scalability (no domain ID needed on the fabric switch)
- SAN-fabric agnostic
- Two models to consider: core-edge and edge-core-edge
Connecting point, based on scalability and port density:
- NPIV could add significant load: FLOGI, zoning, and other services
- Core layer: tends to be less scalable due to limited port count
- Edge layer: more scalable because the load is spread across multiple switches

UCS on a Storage Network: FC/FCoE Only – NPV Mode
Core: MDS 9509:
- With 6120-based edges: 12 4G FC x 7 = 84 4G FC ports = 21 UCS = 1,680 servers
- With 6140-based edges: 12 4G FC x 7 = 84 4G FC ports = 10 UCS = 1,600 servers
Edge: 6120s:
- 40 10GE 1:1 ports, 8 x 4G FC uplinks per fabric interconnect = 10 chassis = 80 half-width servers
- FC chassis bandwidth: 4 x 10GE = 5G per blade
- 6120 subscription: 8 x 4G FC against 200G = 6.25:1
Edge: 6140s:
- 80 10GE 1:1 ports, 16 x 4G FC uplinks per fabric interconnect = 20 chassis = 160 half-width servers
- Chassis subscription: 4 x 10GE = 5G per blade
- 6140 subscription: 16 x 4G FC against 400G = 6.25:1
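The 6.25:1 figures above (and the 12.5:1 on the next slide) compare the Ethernet capacity entering one fabric interconnect with its FC uplink capacity. A sketch, assuming the per-fabric-interconnect port counts given in the deck (names illustrative):

def fc_oversubscription(downlink_10ge, fc_uplinks, fc_gbps=4):
    """10GE capacity into one fabric interconnect vs. its 4G FC uplink capacity."""
    return (downlink_10ge * 10) / (fc_uplinks * fc_gbps)

print(fc_oversubscription(20, 8))   # UCS-6120: 200G vs 32G -> 6.25:1
print(fc_oversubscription(40, 16))  # UCS-6140: 400G vs 64G -> 6.25:1
print(fc_oversubscription(40, 8))   # 6140 with a single FC module -> 12.5:1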
UCS Density on LAN/SAN Environments
Per fabric interconnect:
- LAN environment: 40 x 10GE : 8 x 10GE = 5:1
- SAN environment: 40 x 10GE : 8 x 4G FC = 12.5:1
LAN environment, using a Nexus 7018 pair at the aggregation layer:
- 28 UCS based on 6120s, or 14 UCS based on 6140s
SAN environment, using an MDS 9509 pair at the aggregation layer:
- 21 UCS based on 6120s, or 10 UCS based on 6140s
In the access, for either environment:
- Using 6120s: 5 chassis (4 ports per FEX) – 40 blades
- Using 6140s: 10 chassis (4 ports per FEX) – 80 blades

Conclusion
- End-host mode vs. switch mode: EHM to an access switch is possible; switch mode to the L2/L3 boundary switch is appropriate; if aggregating to a ToR switch is desired, use EHM
- Fan-out, oversubscription & bandwidth: understand all three; network folks think in oversubscription, storage folks in fan-out, server folks in bandwidth
- Get trained early on: lots of new and useful technology, and the architecture is rapidly evolving

Complete Your Online Session Evaluation
- Give us your feedback and you could win fabulous prizes; winners are announced daily
- Receive 20 Passport points for each session evaluation you complete
- Complete your session evaluation online now (open a browser through our wireless network to access our portal) or visit one of the Internet stations throughout the Convention Center
- Don't forget to activate your Cisco Live Virtual account for access to all session material, communities, and on-demand and live activities throughout the year; activate your account at the Cisco booth in the World of Solutions or visit www.ciscolive.com

Date posted: 27/10/2019, 21:31
