1. Trang chủ
  2. » Công Nghệ Thông Tin

CRC press packet forwarding technologies dec 2007 ISBN 084938057x pdf

448 89 0

Đang tải... (xem toàn văn)

Tài liệu hạn chế xem trước, để xem đầy đủ mời bạn chọn Tải xuống

THÔNG TIN TÀI LIỆU

Thông tin cơ bản

Định dạng
Số trang 448
Dung lượng 14,59 MB

Nội dung

PACKET FORWARDING TECHNOLOGIES AU8057_C000.indd i 11/14/2007 6:15:31 PM OTHER TELECOMMUNICATIONS BOOKS FROM AUERBACH Architecting the Telecommunication Evolution: Toward Converged Network Services Vijay K Gurbani and Xian-He Sun ISBN: 0-8493-9567-4 Business Strategies for the Next-Generation Network Nigel Seel ISBN: 0-8493-8035-9 Chaos Applications in Telecommunications Peter Stavroulakis ISBN: 0-8493-3832-8 Context-Aware Pervasive Systems: Architectures for a New Breed of Applications Seng Loke ISBN: 0-8493-7255-0 Fundamentals of DSL Technology Philip Golden, Herve Dedieu, Krista S Jacobsen ISBN: 0-8493-1913-7 Introduction to Mobile Communications: Technology,, Services, Markets Tony Wakefield, Dave McNally, David Bowler, Alan Mayne ISBN: 1-4200-4653-5 IP Multimedia Subsystem: Service Infrastructure to Converge NGN, 3G and the Internet Rebecca Copeland ISBN: 0-8493-9250-0 MPLS for Metropolitan Area Networks Nam-Kee Tan ISBN: 0-8493-2212-X Performance Modeling and Analysis of Bluetooth Networks: Polling, Scheduling, and Traffic Control Jelena Misic and Vojislav B Misic ISBN: 0-8493-3157-9 A Practical Guide to Content Delivery Networks Gilbert Held ISBN: 0-8493-3649-X Security in Distributed, Grid, Mobile, and Pervasive Computing Yang Xiao ISBN: 0-8493-7921-0 TCP Performance over UMTS-HSDPA Systems Mohamad Assaad and Djamal Zeghlache ISBN: 0-8493-6838-3 Testing Integrated QoS of VoIP: Packets to Perceptual Voice Quality Vlatko Lipovac ISBN: 0-8493-3521-3 The Handbook of Mobile Middleware Paolo Bellavista and Antonio Corradi ISBN: 0-8493-3833-6 Traffic Management in IP-Based Communications Trinh Anh Tuan ISBN: 0-8493-9577-1 Understanding Broadband over Power Line Gilbert Held ISBN: 0-8493-9846-0 Understanding IPTV Gilbert Held ISBN: 0-8493-7415-4 WiMAX: A Wireless Technology Revolution G.S.V Radha Krishna Rao, G Radhamani ISBN: 0-8493-7059-0 WiMAX: Taking Wireless to the MAX Deepak Pareek ISBN: 0-8493-7186-4 Wireless Mesh Networking: Architectures, Protocols and Standards Yan Zhang, Jijun Luo and Honglin Hu ISBN: 0-8493-7399-9 Wireless Mesh Networks Gilbert Held ISBN: 0-8493-2960-4 Resource, Mobility, and Security Management in Wireless Networks and Mobile Communications Yan Zhang, Honglin Hu, and Masayuki Fujise ISBN: 0-8493-8036-7 AUERBACH PUBLICATIONS www.auerbach-publications.com To Order Call: 1-800-272-7737 • Fax: 1-800-374-3401 E-mail: orders@crcpress.com AU8057_C000.indd ii 11/14/2007 6:15:32 PM PACKET FORWARDING TECHNOLOGIES WEIDONG WU New York AU8057_C000.indd iii London 11/14/2007 6:15:32 PM Auerbach Publications Taylor & Francis Group 6000 Broken Sound Parkway NW, Suite 300 Boca Raton, FL 33487-2742 © 2008 by Taylor & Francis Group, LLC Auerbach is an imprint of Taylor & Francis Group, an Informa business No claim to original U.S Government works Printed in the United States of America on acid-free paper 10 International Standard Book Number-13: 978-0-8493-8057-0 (Hardcover) This book contains information obtained from authentic and highly regarded sources Reprinted material is quoted with permission, and sources are indicated A wide variety of references are listed Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use No part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers For permission to photocopy or use material electronically from this work, please access www.copyright.com (http:// www.copyright.com/) or contact the Copyright Clearance Center, Inc (CCC) 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400 CCC is a not-for-profit organization that provides licenses and registration for a variety of users For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe Library of Congress Cataloging-in-Publication Data Wu, Weidong Packet forwarding technologies / by Weidong Wu p cm Includes bibliographical references and index ISBN-13: 978-0-8493-8057-0 ISBN-10: 0-8493-8057-X Packet switching (Data transmission) Routers (Computer networks) I Title TK5105.W83 2008 621.39’81 dc22 2007026355 Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the Auerbach Web site at http://www.auerbach-publications.com AU8057_C000.indd iv 11/14/2007 6:15:32 PM Contents Preface xiii Acknowledgments xv About the Author xvii Chapter Introduction 1.1 Introduction 1.2 Concept of Routers 1.3 Basic Functionalities of Routers 1.3.1 Route Processing 1.3.2 Packet Forwarding 1.3.3 Router Special Services 1.4 Evolution of Router Architecture 1.4.1 First Generation—Bus-Based Router Architectures with Single Processor 1.4.2 Second Generation—Bus-Based Router Architectures with Multiple Processors 1.4.2.1 Architectures with Route Caching 1.4.2.2 Architectures with Multiple Parallel Forwarding Engines 1.4.3 Third Generation—Switch Fabric-Based Router Architecture 1.4.4 Fourth Generation—Scaling Router Architecture Using Optics 1.5 Key Components of a Router 1.5.1 Linecard 1.5.1.1 Transponder/Transceiver 1.5.1.2 Framer 1.5.1.3 Network Processor 1.5.1.4 Traffic Manager 1.5.1.5 CPU 1.5.2 Network Processor (NP) 1.5.3 Switch Fabric 1.5.3.1 Shared Medium Switch 1.5.3.2 Shared Memory Switch Fabric 2 7 8 11 12 14 14 14 14 15 15 16 16 19 19 20 v AU8057_C000.indd v 11/14/2007 6:15:32 PM vi Ⅲ Contents 1.5.3.3 Distributed Output Buffered Switch Fabric 1.5.3.4 Crossbar Switch 1.5.3.5 Space-Time Division Switch 1.5.4 IP-Address Lookup: A Bottleneck References Chapter Concept of IP-Address Lookup and Routing Table 31 2.1 IP Address, Prefi x, and Routing Table 2.2 Concept of IP-Address Lookup 2.3 Matching Techniques 2.3.1 Design Criteria and Performance Requirement 2.4 Difficulty of the Longest-Prefi x Matching Problem 2.4.1 Comparisons with ATM Address and Phone Number 2.4.2 Internet Addressing Architecture 2.5 Routing Table Characteristics 2.5.1 Routing Table Structure 2.5.2 Routing Table Growth 2.5.3 Impact of Address Allocation on Routing Table 2.5.3.1 Migration of Address Allocation Policy 2.5.3.2 Impact of Address Allocations on Routing Table Size 2.5.3.3 Impact of Address Allocation on Prefixes with 24-Bit Length 2.5.4 Contributions to Routing Table Growth 2.5.4.1 Multi-Homing 2.5.4.2 Failure to Aggregate 2.5.4.3 Load Balancing 2.5.4.4 Address Fragmentation 2.5.5 Route Update 2.6 Constructing Optimal Routing Tables 2.6.1 Filtering Based on Address Allocation Policies 2.6.1.1 Three Filtering Rules 2.6.1.2 Performance Evaluation 2.6.2 Minimization of the Routing Table with Address Reassignments 2.6.2.1 Case of a Single IP Routing Table 2.6.2.2 General Case 2.6.3 Optimal Routing Table Constructor 2.6.3.1 Description of the Algorithm 2.6.3.2 Improvements 2.6.3.3 Experiments and Results References Chapter 31 32 33 34 36 36 36 39 40 41 43 44 45 46 46 48 48 49 50 50 52 52 52 54 55 56 59 63 63 66 67 68 Classic Schemes 69 3.1 Linear Search 3.2 Caching 3.2.1 Management Policies 3.2.1.1 Cache Modeling 3.2.1.2 Trace Generation AU8057_C000.indd vi 21 22 25 27 27 69 69 70 70 71 11/14/2007 6:15:32 PM Contents Ⅲ vii 3.2.1.3 Measurement Results 72 3.2.1.4 Caching Cost Analysis 79 3.2.2 Characteristics of Destination Address Locality 80 3.2.2.1 Locality: Concepts 80 3.2.2.2 Cache Replacement Algorithms 81 3.2.2.3 Stack Reference Frequency 83 3.2.2.4 Analysis of Noninteractive Traffic 86 3.2.2.5 Cache Design Issues 87 3.2.3 Discussions 89 3.3 Binary Trie 89 3.4 Path-Compressed Trie 91 3.5 Dynamic Prefi x Trie 92 3.5.1 Definition and Data Structure 93 3.5.2 Properties of DP-Tries 95 3.5.3 Algorithms for DP-Tries 97 3.5.3.1 Insertion 97 3.5.3.2 Deletion 102 3.5.3.3 Search 104 3.5.4 Performance 105 References 105 Chapter 4.1 4.2 4.3 4.4 4.5 AU8057_C000.indd vii Multibit Tries 107 Level Compression Trie 4.1.1 Level Compression 4.1.2 Representation of LC-Tries 4.1.3 Building LC-Tries 4.1.4 Experiments 4.1.5 Modified LC-Tries Controlled Prefi x Expansion 4.2.1 Prefi x Expansion 4.2.2 Constructing Multibit Tries 4.2.3 Efficient Fixed-Stride Tries 4.2.4 Variable-Stride Tries Lulea Algorithms 4.3.1 Level of the Data Structure 4.3.2 Levels and of the Data Structure 4.3.3 Growth Limitations in the Current Design 4.3.4 Performance Elevator Algorithm 4.4.1 Elevator-Stairs Algorithm 4.4.2 log W-Elevators Algorithm 4.4.3 Experiments Block Trees 4.5.1 Construction of Block Trees 4.5.2 Lookup 4.5.3 Updates 4.5.4 Stockpiling 107 107 109 111 112 113 113 114 115 116 118 123 124 127 128 128 128 129 132 136 138 138 140 142 143 11/14/2007 6:15:33 PM viii Ⅲ Contents 4.5.5 Worst-Case Performance 4.5.6 Experiments 4.6 Multibit Tries in Hardware 4.6.1 Stanford Hardware Trie 4.6.2 Tree Bitmap 4.6.3 Tree Bitmap Optimizations 4.6.4 Hardware Reference Design References Chapter 145 148 149 149 150 154 157 162 Pipelined Multibit Tries 165 5.1 Fast Incremental Updates for the Pipelined Fixed-Stride Tries 5.1.1 Pipelined Lookups Using Tries 5.1.2 Forwarding Engine Model and Assumption 5.1.3 Routing Table and Route Update Characteristics 5.1.4 Constructing Pipelined Fixed-Stride Tries 5.1.5 Reducing Write Bubbles 5.1.5.1 Separating Out Updates to Short Routes 5.1.5.2 Node Pullups 5.1.5.3 Eliminating Excess Writes 5.1.5.4 Caching Deleted SubTrees 5.1.6 Summary and Discussion 5.2 Two-Phase Algorithm 5.2.1 Problem Statements 5.2.2 Computing MMS(W − 1, k) 5.2.3 Computing T(W − 1, k) 5.2.4 Faster Two-Phase Algorithm for k = 2, 5.2.5 Partitioning Scheme 5.2.6 Experimental Results 5.3 Pipelined Variable-Stride Multibit Tries 5.3.1 Construction of Optimal PVST 5.3.2 Mapping onto a Pipeline Architecture 5.3.3 Experimental Results References 165 165 167 169 170 177 177 178 180 181 184 185 186 186 190 192 194 195 198 199 200 202 204 Chapter Efficient Data Structures for Bursty Access Patterns 205 6.1 Table-Driven Schemes 6.1.1 Table-Driven Models 6.1.2 Dynamic Programming Algorithm 6.1.3 Lagrange Approximation Algorithm 6.2 Near-Optimal Scheme with Bounded Worst-Case Performance 6.2.1 Definition 6.2.2 Algorithm MINDPQ 6.2.3 Depth-Constrained Weight Balanced Tree 6.2.4 Simulation 6.3 Dynamic Biased Skip List 6.3.1 Regular Skip List 6.3.2 Biased Skip List AU8057_C000.indd viii 205 205 207 209 211 211 213 216 217 217 218 219 11/14/2007 6:15:33 PM Ⅲ ix 6.3.2.1 Data Structure 6.3.2.2 Search Algorithm 6.3.3 Dynamic BSL 6.3.3.1 Constructing Data Structure 6.3.3.2 Dynamic Self-Adjustment 6.3.3.3 Lazy Updating Scheme 6.3.3.4 Experimental Results 6.4 Collection of Trees for Bursty Access Patterns 6.4.1 Prefi x and Range 6.4.2 Collection of Red-Black Trees (CRBT) 6.4.3 Biased Skip Lists with Prefi x Trees (BSLPT) 6.4.4 Collection of Splay Trees 6.4.5 Experiments References 219 220 221 221 222 223 224 225 225 226 227 229 230 234 Contents Chapter 7.1 Caching Technologies 237 Suez Lookup Algorithm 7.1.1 Host Address Cache 7.1.1.1 HAC Architecture 7.1.1.2 Network Address Routing Table 7.1.1.3 Simulations 7.1.2 Host Address Range Cache 7.1.3 Intelligent HARC 7.1.3.1 Index Bit Selection 7.1.3.2 Comparisons between IHARC and HARC 7.1.3.3 Selective Cache Invalidation 7.2 Prefi x Caching Schemes 7.2.1 Liu’s Scheme 7.2.1.1 Prefi x Cache 7.2.1.2 Prefi x Memory 7.2.1.3 Experiments 7.2.2 Reverse Routing Cache (RRC) 7.2.2.1 RRC Structure 7.2.2.2 Handling Parent Prefi xes 7.2.2.3 Updating RRC 7.2.2.4 Performance Evaluation 7.3 Multi-Zone Caches 7.3.1 Two-Zone Full Address Cache 7.3.2 Multi-Zone Pipelined Cache 7.3.2.1 Architecture of MPC 7.3.2.2 Search in MPC 7.3.2.3 Outstanding Miss Buffer 7.3.2.4 Lookup Table Transformation 7.3.2.5 Performance Evaluation 7.3.3 Design Method of Multi-Zone Cache 7.3.3.1 Design Model 7.3.3.2 Two-Zone Design 7.3.3.3 Optimization Tableau AU8057_C000.indd ix 237 237 237 240 242 243 244 244 246 248 248 249 249 250 251 252 252 252 253 255 256 256 257 257 258 258 260 261 261 262 264 265 11/14/2007 6:15:33 PM Index A ABIT See Alternative base interval tree (ABIT) ACBST See Alternative collection of binary search tree (ACBST) Access control list (ACL), 330 Access patterns independently skewed, 354 ACL See Access control list (ACL) Adaptive load balancing logic, 365 Address allocation impact, 43–46 impact on prefi xes, 66 impact on size, optimal routing table construction, 52–55 policy fi ltering, 52–55 policy migration, 44–45 routing table, 43–46 Address fragmentation routing table, 50 routing table growth contributions, 50 Address prefi x length distribution, 45 Address resolution protocol (ARP), Adjusting algorithm LBBTC, 362 Aggregate failure routing table, 48–49 Algorithm, 332 See also specific name or type build-graph, 58–59 postorder-split carving, 337 Allocated address blocks over time, 44 Allocated blocks contribution to routing table size, 45, 46 routing prefi xes/24s, 46 Alternative base interval tree (ABIT), 229, 383 Alternative collection of binary search tree (ACBST), 383 American Registry for Internet Numbers (ARIN), 53 APNIC See Asia-Pacific Network Information Center (APNIC) Application-specific integrated circuit (ASIC)-based architectures, 167 Architectural elements content-address memory, 317–318 TCAM-based forwarding engine, 317–318 Architecture implementation, 364 ARIN See American Registry for Internet Numbers (ARIN) ARP See Address resolution protocol (ARP) Array element, 374 Array representation LC-trie, 109 Asia-Pacific Network Information Center (APNIC), 54 ASIC See Application-specific integrated circuit (ASIC)-based architectures Asymmetric binary search refinements, 278–280 Asymmetric Bloom fi lter optimization, 302–303 Asymmetric trees weighing functions, 278 Asynchronous transfer mode (ATM), 15, 36 IP address lookup and routing table, 36 ATM See Asynchronous transfer mode (ATM) Average prefi x length distribution, 303 Average search time, 233 delete prefi x, 392 insert prefi x, 387 nontrace, 387 pseudotrace, 387 B Basic fast hash table (BFHT), 308, 312 using extended Bloom fi lter, 307 Basu algorithm execution time, 196 Best matching prefi x (BMP), 92 BFHT See Basic fast hash table (BFHT) 415 AU8057_C011.indd 415 11/15/2007 11:32:09 AM 416 Ⅲ Index BGP See Border gateway protocol (BGP) Bi-directional × SE, 270 Bi-directional multistage interconnection, 267 Biased skip lists with prefi x trees (BSLPT), 227 Binary searches hash tables, prefi x placing, 372 recomputations, 278 routing table, 211 table, 368 tree, 211 trie levels, 277 Binary strings, 108 Binary trie, 108 classic schemes, 89–90 comb extraction scheme, 413 prefi xes, 90 Bit auxiliary trie, 174 Bit selection architecture, 331–333 Bit trie algorithm postorder-split carving, 338 buckets, 338 prefi xes, 122 Bit-masks vs code words, 127 Bit-vector, 125 Block diagram, 157 Block trees, 138–148 construction, 138–140 experiments, 148 lookup, 140 stockpiling, 143–146 updates, 142–143 worst-case performance, 145–148 Bloom fi lter, 297–313 asymmetric, 302–303 counting, 299 direct lookup array, 304 fi lter reduction, 305–306 hashing schemes, 297–313 longest prefi x matching, 300 LPM configuration, 299–300 optimization, 301–306 standard, 297–298 BMP See Best matching prefi x (BMP) Border gateway protocol (BGP), 32 average prefi x length distribution, 303 IPv4 table snapshots, 303 table, 39, 42 BSLPT See Biased skip lists with prefi x trees (BSLPT) Bubble, 167 See also Write bubbles Buckets algorithm postorder-split carving, 338 approximate probability, 293 bit trie, 337 probability, 292 recomputing, 342 AU8057_C011.indd 416 single hash function, 292 trie-based architectures, 342 Bursty access pattern algorithms, 349–356 discussions, 355–356 dynamic architecture, 352–354 optimization, 356 power-efficient TCAM, 349–356 static architecture, 352 Bursty access pattern data structure, 205–234 dynamic biased skip list, 217–224 near-optimal scheme, 211–217 table-driven scheme, 205–209 tree collection, 225–234 worst-case performance, 211–217 Bus-based router architecture, 10 multiple parallel forwarding engines, 10 multiple processors, 8–11 single processor, C Cache hit ratio vs size, 77, 80 management, 269–270 miss ratio, 252, 257 normalized search time, 86 performance flow diagram, 259 pipeline diagram, 259 size vs hit ratio, 77, 80 size vs miss ratio, 262 size vs number of packets, 78, 79, 81 size vs percentage of hits, 76, 80 Cache miss probability cache replacement algorithms, 84 noninteractive frames, 89 Cache replacement algorithms cache miss probability, 84 interfault distance, 84 Cache-oriented multistage structure (COMS), 266–272 bi-directional multistage interconnection, 267 cache management, 269–270 operations, 267 routing table partitioning, 271 SEs’ details, 270 Caching, 69–89 classic schemes, 69–89 cost analysis, 79 deleted nodes, 183 deleted sub-trees, 181–184 design issues, 87–88 destination address locality characteristics, 82–88 locality, 80 11/15/2007 11:32:10 AM Index management policies, 70–79 measuring results, 72–78 modeling, 70 noninteractive traffic analysis, 86 optimization, 182 reducing write bubbles, 177–180 replacement algorithms, 83–84 stack reference frequency, 85 sub-trees, 183 trace generation, 73 Caching technologies, 237–272 cache-oriented multistage structure, 266–272 multi-zone caches, 256–266 prefi x caching schemes, 248–255 Suez’s lookup algorithm, 237–248 CAM See Content addressable memory (CAM) CAO_OPT See Chain-ancestor ordering constraint (CAO_OPT) CBQ See Class based queuing (CBQ) CBST base interval tree, 226 CES See Comb extraction scheme (CES) Chain length statistics, 323 Chain nodes, 102–103 Chain-ancestor ordering constraint (CAO_OPT) algorithm, 322 TCAM, 323 TCAM efficient updating, 322 CIDR See Classless inter-domain routing (CIDR) CIOB See Combined input and output buffered (CIOB) Class based queuing (CBQ), Class-based addresses, 37 lookup operation, 36, 37 Classic schemes, 69–105 binary trie, 89–90 caching, 69–89 destination address locality characteristics, 80–86 discussions, 89 dynamic prefi x trie, 92–105 linear search, 69 path-compressed trie, 91 Classless inter-domain routing (CIDR), 38–39, 41–43 Code words vs bit-masks, 127 Collection Red-Black Trees (CRBT), 226 Comb extraction scheme (CES), 407–413 binary trie, 413 comparison set, 411–412 IP lookup architecture, 413 splitting rule, 407–410 sub-tables, 412 Combined input and output buffered (CIOB), 25 Compacted table mask extension, 344 Compaction techniques EaseCAM, 347 mask extension, 343–345 AU8057_C011.indd 417 Ⅲ 417 power-efficient TCAM, 343–347 prefi x aggregation and expansion, 346–347 Comparator slice, 396 Comparison set CES, 407–408 Complete prefi x tree expansion (CPTE), 250–251 Composite 2-phase algorithm, 198 Computing MMS, 188–191 Computing T two-phase algorithm, 192 COMS See Cache-oriented multistage structure (COMS) Configurable network processor, 17 Configuration, 206 Constructing data structure BSL, 221 Content addressable memory (CAM), 15, 317–320, 392 architecture blocks, 318 basic architectural elements, 317–318 binary vs ternary, 319 longest-prefi x match using TCAM, 320 structure, 318 TCAM-based forwarding engine, 317–320 Controlled prefi x expansion (CPE), 115–124, 304 constructing multibit trie, 117 efficient fi xed-stride tries, 118–119 memory comparison, 67 prefi x expansion, 116 variable-stride tries, 120–124 COP vs latency, 261 Counting Bloom fi lter, 297 CPE See Controlled prefi x expansion (CPE) CPTE See Complete prefi x tree expansion (CPTE) CPU linecard routers, 16 CRBT See Collection Red-Black Trees (CRBT) CRC See Cyclic redundancy code (CRC) Crossbar switch, 23 routers, 22–24 Cube and prefi x relationship, 346 Cut bit-vector, 127 Cyclic redundancy code (CRC) checker, 294 CRC-32, 294 D Data flow HAC, 239 NART lookup, 241 Data structure, 113, 138 levels, 126 longest matching-prefi x, 232 PMHA, 294 Database tree bitmap, 152 Datasets, 171 measured performance, 149 11/15/2007 11:32:10 AM 418 Ⅲ Index Dense-wavelength-division-multiplexing (DWDM), 27 equipment, Densities stockling sizes, 147 Design optimization, IFPLUT architecture, 395 Differentiated service (Diff Serv), 15 DIR-24-8-BASIC architecture, 149 Direct lookup array optimization, 304 prefi x lengths, 304 Distinct prefi x lengths, 282 Distributed memory TCAM organization, 358 Distributed output buffered switch fabric, 22 Distributed TCAM architecture, 357–369 adjusting algorithm, 362 complete implementation architecture, 364–366 distributed memory TCAM organization, 358 forwarding engine, 357–369 index logic, 364 LBBTC algorithm, 359–362 mathematical model, 359–361 ordering logic, 366 performance analysis, 367–369 power efficiency analysis, 363 priority selector (adaptive load balancing logic), 365 routing tables analysis, 356–357 Distribution output buffered switch fabric, 21 DP-trie See Dynamic prefi x trie (DP-trie) DWDM See Dense-wavelength-division-multiplexing (DWDM) Dynamic architecture, 353 bursty access pattern algorithms, 352–355 power consumption, 355 Dynamic biased skip list, 217–224 biased skip list, 219–221 constructing data structure, 221 data structure, 219 dynamic BSL, 221–224 dynamic self-adjustment, 222 experimental results, 224 lazy updating scheme, 223 regular skip list, 218–219 search algorithm, 220–221 Dynamic prefi x trie (DP-trie), 94–107 data structure, 93–95 definition, 93–95 properties, 95–96 Dynamic prefi x trie (DP-trie) algorithms, 97–105 deletion, 102–103 insertion, 98–101 search, 104–105 Dynamic programming algorithm, 210 MMS, 188 table-driven scheme, 209–210 Dynamic routing, Dynamic self-adjustment, 222–223 AU8057_C011.indd 418 E Early packet discard (EPD), 16 EaseCAM, 348 compaction techniques, 348 EGP See Exterior gateway protocols (EGP) Elevator algorithm, 128–137 experiments, 136–138 logW-elevators algorithm, 132–136 Elevator-stairs algorithm, 131–133 IP address search, 133 multilevel hash table lookup, 133 NHP assigned to prefi x p, 136 prefi x p deletion, 135 prefi x p insertion, 135 Eliminating excess writes, 180–181 Encoding prefi xes, 372 Encoding range LPM, 373 End node optimization, 157 End points, 386 EPD See Early packet discard (EPD) Execution time Basu algorithm, 196, 197 MMS, 197 Expansion level p, 191 Expansion prefi x tree, 125 Exterior gateway protocols (EGP), F Failure-to-aggregate, 50 Fast hash table using extended Bloom fi lter, 305–311 basic fast hash table, 307 pruned fast hash table, 308–311 shared-node fast hash table, 312–313 Fast Internet protocol lookup (FIPL) architecture, 164 Faster two-phase algorithm, 192–193 FDDI See Fiber distributed data interface (FDDI) FE See Forwarding engine (FE) Fiber distributed data interface (FDDI), FIFO See First-in-first-out (FIFO) Filter reduction optimization, 303–304 FIPL See Fast Internet protocol lookup architecture First-in-first-out (FIFO), 11, 23, 25, 72 Fixed-stride optimization (FSTO), 118 Fixed-stride tries (FST), 118 algorithm, 121 Flow diagram cache performance, 259 Footprint models, 264 Forwarding engine (FE), 249 model and assumption, 169 pipelined fixed-stride trie fast incremental updates, 167–168 11/15/2007 11:32:10 AM Index trie-based power reduction schemes, 330 VLMP, 322 Forwarding table, 151 example, 389 expanded, 390 identical enclosures, 390 lookup, 142 partial, 391 PMHA, 293 stored TCAM, 344 Forwarding trie memory allocation, 174 Fragmented from allocated blocks, 47 Framer linecard routers, 14 Free space distribution TCAM, 322 FST See Fixed-stride tries (FST) FSTO See Fixed-stride optimization (FSTO) Full 32-bit search node sequence, 160 G Geometrically distributed data, 226 Global hit rate, 264 Greedy algorithm, 333 Greedy index bit selection algorithm, 246 H HAC See Host address cache (HAC) HARC See Host address range cache (HARC) Hash probes per lookup, 303, 306 Hash table binary search, 275–286 asymmetric binary search, 278–280 hashing schemes, 275–286 mutating binary search, 281–285 precomputation to avoid backtracking, 277 refinements, 278–285 Hash table linear search, 275 Hashing schemes, 275–312 Bloom fi lter, 297–313 fast hash table using extended Bloom fi lter, 307–314 hash table binary search, 275–286 multiple hashing schemes, 290–296 parallel hashing in prefi x length, 287 performance evaluation, 286 Head-of-line (HOL) blocking, 23, 24 Heuristic partition techniques bit selection architecture, 331–333 experiments, 340 power-efficient TCAM, 330–342 route updating, 341–342 trie-based table partitioning, 334–339 Heuristic reapplications, 342 AU8057_C011.indd 419 Ⅲ 419 Hit ratio RRC-ME, 255 HOL See Head-of-line (HOL) blocking Host address cache (HAC), 237–244 architecture, 237–239 data flow, 239 miss ratio, 243 network address routing table, 240–241 simulations, 242 Host address range cache (HARC), 237 Suez’s lookup algorithm, 242–243 Hybrid optical–electrical router architecture, 12 I IANA See Internet assigned numbers authority (IANA) ID groups distribution, 359 traffic distribution, 363 IFPLUT algorithm, 388–391, 392 partition algorithm based on next-hop, 389–391 primary lookup table transformation, 388 IFPLUT architecture, 392–399 basic architecture, 392 design optimization, 399 forwarding architecture, 394 M values, 400 memory assignment scheme, 395 prefi xes imbalance distribution, 393 search unit, 393–394 selector block, 395–396 TCAM, 398 updates, 397 IGP See Interior gateway protocols (IGP) IHARC See Intelligent host address range cache (IHARC) Incoming IP addresses, 369 Incremental withdraw algorithm, 347 Index bits number of address ranges, 247 routing table, 245 selection, 244–245 Index logic complete implementation architecture, 364 schematics, 366 Initial 16-bit array, 377 Intelligent host address range cache (IHARC), 237, 244–247 HARC comparison, 246 index bit selection, 244–246 selective cache invalidation, 248 Interfault distance cache replacement algorithms, 84 noninteractive frames, 90 Interior gateway protocols (IGP), 11/15/2007 11:32:10 AM 420 Ⅲ Index Internet addressing architecture, 36–38 Internet assigned numbers authority (IANA), 52, 53 Internet performance measurement and analysis (IPMA), 169 routing tables, 232 Internet protocol (IP), 237 Internet protocol (IP) address lookup, 27, 31–68 addressing architecture, 36–38 ATM address, 36 design criteria, 34 longest-prefi x matching problem, 36–38 matching techniques, 33 optimal routing table construction, 52–68 performance requirement, 34 phone number, 36 prefi x, 31 route update, 50–52 routing table characteristics, 39–52 Internet protocol (IP) address search algorithm, 132 elevator-stairs algorithm, 133 Internet protocol (IP) lookup architecture, 413 CES, 413 engine core, 159 line cards and packet sizes, 35 speed, 35 Internet service provider (ISP), 4, 42, 43, 249 Interval partitioning, 383–384 algorithm, 386 structures ABIT, 384 Interval skip list representation, 228 IP See Internet protocol (IP) IPMA See Internet performance measurement and analysis (IPMA) IPv4 BGP table snapshots, 303 total SRAM, 405 iSLIP, 24 ISP See Internet service provider (ISP) K K position, 215 K-level-tree algorithm, 130, 134 L Lagrange approximation algorithm, 209 Largest common sub-prefi x (LCS), 347 LAT See Local area terminals (LAT) Latency vs COP, 262 Layout tables TCAM, 324 Lazy updating scheme dynamic BSL, 223 AU8057_C011.indd 420 LBBA See Load-balance-based-adjusting (LBBA) algorithm LBBTC See Load-balanced-based table construction (LBBTC) algorithm LCS See Largest common sub-prefi x (LCS) LCT See Level compression trie (LCT) Least recent use (LRU), 72 Level compression trie (LCT), 107–113, 108 array representation, 109 building, 111 compression, 107–108 experiments, 112 modified, 113 pseudo code, 110, 111 representation, 109–110 Level-partitioning technology, 322–324 Linear search classic schemes, 71 Linecard CPU, 16 framer, 14 IP lookup speed, 35 labels, 398 network processor, 15 routers, 14–16 software system, 167 traffic manager, 15 transponder/transceiver, 14 LIR See Local Internet registries (LIR) Load balancing routing table, 50 growth contributions, 49 Load-balance-based-adjusting (LBBA) algorithm, 362 Load-balanced-based table construction (LBBTC) algorithm, 357 distributed TCAM architecture, 359–362 Local area terminals (LAT), 87 traffic, 88 Local Internet registries (LIR), 53 Local preference (LocPrf), 32 Log scale power reduction factor, 334 logW-elevators algorithm, 132–136 logW-elevators lookup algorithm, 135, 136, 137, 138 Longest prefi x matching (LPM), 293 configuration with Bloom fi lter, 297–298 narrowest encoding range, 373 problem, 34 Lookup operation, 36 Lookup table transformation, 260 LPM See Longest prefi x matching (LPM) LP vs S&V, 210 LRU See Least recent use (LRU) Lulea algorithm, 123–128 growth limitations, 128 level 1, 124–126 levels and 3, 127 performance, 128 11/15/2007 11:32:11 AM Index M M values IFPLUT, 400 Macro selector architecture, 397 Mapping onto pipeline architecture, 200–201 Markers, 274 Martian address block, 53 Mask extension compacted table, 344 compaction techniques, 343–345 Maximum number of routing table entries, 340 Maximum per-stage memory, 202 normalized by PVST, 203 Maximum transmission unit (MTU), Memory, 387 worst-case time, 381 Memory access distributions, 288 Memory allocation, 176 forwarding trie, 172 Memory assignment scheme, 391 Memory requirement, 233 Memory size entry efficiency analysis, 295 MGR See Multigigabit router (MGR) MIMD See Multiple instruction multiple data (MIMD) Minimaxspace (MMS), 193 algorithm, 191 dynamic programming algorithm, 188 execution time, 197 MINMAX algorithm, 175 Miss ratio cache, 257 vs cache size, 261 comparison, 244 Mixed space-time switch architecture, 26 MMS See Minimaxspace (MMS) Modified binary search table, 374 Modified selector block, 396 MPC See Multi-zone pipelined cache (MPC) MPLS See Multi-protocol label switching (MPLS) MTU See Maximum transmission unit (MTU) Multi-homing growth contributions, 46–47 routing table, 48 Multi-protocol label switching (MPLS), 27 Multi-zone cache, 256–265 design method, 261–265 design model, 262 MPC, 257–261 optimization tableau, 265–266 two-zone design example, 264 two-zone full address cache, 256 Multi-zone pipelined cache (MPC), 257–261 architecture, 257 lookup table transformation, 260 multi-zone caches, 257–261 outstanding miss buffer, 258 AU8057_C011.indd 421 Ⅲ 421 performance evaluation, 261 search, 258 Multibit trie, 107–161 block trees, 138–148 controlled prefi x expansion, 113–122 elevator algorithm, 128–137 level compression trie, 107–113 Lulea algorithm, 123–128 multibit tries in hardware, 149–161 prefi xes, 113 pseudocode, 116 Multibit tries in hardware, 149–161 hardware reference design, 157–161 Stanford hardware trie, 149–150 tree bitmap, 151–153 tree bitmap optimization, 154–156 Multigigabit router (MGR), 11 Multilevel hash table lookup, 131 Multilevel partitioning, 380–382 Multiple forwarding engines, 11 Multiple hash function, 290–291 Multiple hashing schemes, 290–296 data structure, 294 function, 290–291 performance comparison, 297 searching algorithms, 294 update and expansion to IPv6, 295–296 using cyclic redundancy code, 292–293 Multiple hashing using cyclic redundancy code, 292–293 Multiple instruction multiple data (MIMD), 16 Multiple parallel forwarding engines bus-based router architecture, 10 bus-based with multiple processors, 9–10 Multiple RISC cluster, 18 Multiway binary search, 376–377 Mutating binary search, 283 refinements, 281–285 trie, 282 N N-port input-queued switch, 24 N-stage forwarding pipeline, 166 Naive hash table (NHT), 305, 306 NART See Network address routing table (NART) Near-optimal scheme with bounded worst-case performance, 210–217 algorithm MINDPQ, 213–215 defined, 211 depth-constrained weight balanced tree, 216 simulation, 217 Neighboring prefi xes trie, 181 Network address routing table (NART), 237, 240–241 lookup data flow, 241 11/15/2007 11:32:11 AM 422 Ⅲ Index Network processor cache architectures, 242 linecard routers, 15 multiple RISC cluster, 18 routers, 16–18 Next hop port (NHP), 129 assigned to prefi x p, 134 NHP See Next hop port (NHP) NHT See Naive hash table (NHT) Node pullups, 178 reducing write bubbles, 178–179 Node sequence, 158 Node structure and insertions, 93 Noninteractive frames cache miss probability, 87 interfault distance, 88 normalized search time, 88 Noninteractive traffic, 86 Normalized search time cache, 84 noninteractive frames, 88 Number of packets between cache misses, 76, 77, 79 O OLDP See One-level dynamic partitioning (OLDP) 1-bit auxiliary trie, 174 1-bit trie into buckets algorithm postorder-split carving, 338 1-bit trie prefi xes, 122 One-level dynamic partitioning (OLDP), 380 algorithms, 381 statistics, 381 Open shortest path first (OSPF), 3, 32 Opt values, 122 Optimal alphabetic tree corresponding to three, 212 Optimal 4-VST prefi xes, 123 Optimal PVST construction, 199 Optimal routing table constructor (ORTC), 62–67 algorithm, 63–65 example, 64 experiments and results, 67 improvements, 66–67 IP address lookup and routing table, 50–67 routing tables performance, 67 Optimization algorithm bursty access patterns, 355 independently skewed access patterns, 354 Optimization Bloom fi lter, 301–306 Optimization tableau, 265–266 Optimizing memory, 171 Ordered TCAM efficient updating, 321–324 CAO_OPT algorithm, 322 forwarding engine, 321–324 AU8057_C011.indd 422 level-partitioning technology, 322–324 prefi x-length ordering constraint algorithm, 321 Ordering logic, 366 ORTC See Optimal routing table constructor (ORTC) OSPF See Open shortest path first (OSPF) Outstanding miss buffer, 258 P Packet sizes IP lookup speed, 35 Packets between cache misses, 76, 77, 79 Page-fi lling algorithm, 349 Paged TCAM, 331 pruned search, 329 Parallel architecture, 287 Parallel hashing architecture, 288 search algorithm, 289 Parallel hashing in prefi x length, 287 hashing schemes, 287 parallel architecture, 287 simulation, 288 Parallel multiple-hashing architecture (PMHA), 292, 293 data structure, 294 forwarding table, 295 search procedure, 296 searching algorithm, 296 Partial packet discard (PPD), 16 Partial prefi x tree expansion (PPTE), 251 Partition algorithm based on next-hop, 391–392 Partitioned binary search table, 371–377 encoding prefi xes as ranges, 372 insertion into modified binary search table, 375 multiway binary search, 376–377 performance measurements, 378 recomputation, 373–374 Partitioning scheme, 194 Path-compressed trie, 91, 108 classic schemes, 91 PATRICIA See Practical algorithm to retrieve information coded in alphanumeric (PATRICIA) Percentage of cache hits vs cache size, 74, 78 Percentage of references vs time of last references, 73, 77 Performance comparison, 290 PFHT See Pruned fast hash table (PFHT) PFST See Pipelined fi xed-stride trie (PFST) Pipeline diagram cache, 259 Pipelined fi xed-stride trie fast incremental updates, 165–184 constructing pipelined fi xed-stride trie, 170–176 forwarding engine model and assumption, 167 pipelined lookups using tries, 165–166 reducing write bubbles, 177–183 routing table and route update characteristics, 169–170 11/15/2007 11:32:12 AM Index Pipelined fi xed-stride trie (PFST), 186 construction, 170–176 Pipelined lookups using tries, 165–166 Pipelined multibit trie, 165–203 PFST fast incremental updates, 165–183 PVST, 198–202 two-phase algorithm, 185–197 Pipelined trie prefi xes, 179 Pipelined variable-stride multibit tries (PVST), 198–202 experimental results, 202–203 mapping onto pipeline architecture, 200–201 optimal PVST construction, 199 total memory, 203 PMHA See Parallel multiple-hashing architecture (PMHA) Pointer index, 126 Port-based partitioning, 388–399 experimental results, 400 IFPLUT algorithm, 388 IFPLUT architecture, 393–399 Postorder-split algorithm three iterations, 339 Power consumption dynamic architecture, 354 Power efficiency analysis, 362 Power reduction, 350 factor log scale, 334 Power-efficient TCAM, 328–355 bursty access pattern algorithms, 350–355 compaction techniques, 343–349 forwarding engine, 328–355 heuristic partition techniques, 331–342 paged TCAM, 330 pruned search, 329 pruned search and paged-TCAM, 329–330 PPD See Partial packet discard (PPD) PPTE See Partial prefi x tree expansion (PPTE) Practical algorithm to retrieve information coded in alphanumeric (PATRICIA), 92, 105, 128–130, 378 Precomputation to avoid backtracking, 277 Prefi x p, 133 Prefi x aggregation, 346–347 allocation, 362 average time to delete, 234 average time to insert, 234 binary search table, 372 binary trie, 90 1-bit trie, 122 cache, 249 cache lookup flow, 250 compaction techniques, 346–347 cube relationship, 345 database, 195 distribution, 41 distribution among ID groups, 358 encoding, 372 AU8057_C011.indd 423 Ⅲ 423 expansion, 114, 346–347 IFPLUT architecture, 393 imbalance distribution, 392 Liu’s scheme, 249, 250 memory, 250 multibit trie, 115 optimal 4-VST, 123 pictorial representation, 226 pipelined trie, 179 placing, 372 RIS, 195 set, 121 start and finish, 225 table expansion, 260 tree expansion, 123 Prefi x and interval partitioning, 371–387 multilevel and interval partitioning, 379–387 partitioned binary search table, 371–378 Prefi x caching schemes, 248–255 Liu’s scheme, 249–251 prefi x cache, 249 reverse routing cache (RRC), 251–254 Prefi x length distribution direct lookup array, 304 histogram, 279 IPv4 BGP table snapshots, 303 ordered TCAM efficient updating, 321 ordering constraint algorithm, 321 Priority selector, 365 Probability buckets, 293 Processing latency and latency distribution, 369 Programmable pointer based memory management, 160 Pruned fast hash table (PFHT), 309–311, 312 illustration, 310 using Bloom fi lter, 309–311 Pruned search, 329 Pullups, 180 PVST See Pipelined variable-stride multibit tries (PVST) Q QoS See Quality of service (QoS) Quality of service (QoS), 5, 330 R RAM See Random access memory (RAM) Random access memory (RAM), 317 Random early detection (RED), 6, 16 Reactive compaction, 159 Recomputation, binary searches, 278 11/15/2007 11:32:12 AM 424 Ⅲ Index Recomputing buckets, 342 RED See Random early detection (RED) Reduced instruction set computing (RISC), 16 Reducing write bubbles, 177–182 catching deleted sub-trees, 180–182 eliminating excess writes, 180 node pullups, 178–180 separating out updates to short routes, 177 Redundancy rates throughput, 367 Regular skip list, 218 Réseaux IP Europeéns (RIPE), 53 Reverse routing cache (RRC) handling parent prefi xes, 252 ME average search time, 256 ME hit ratio, 255 performance evaluation, 255 prefi x caching schemes, 248–255 structure, 252 total memory, 196 updating, 253 Reverse routing cache with parent restriction (RRC-PR), 252 RIP See Routing information protocol (RIP) RIPE See Réseaux IP Europeéns (RIPE) RIR guidelines Telstra, 53–54 Verio, 53–54 RIS prefi x database, 195 RISC See Reduced instruction set computing (RISC) Rope, 284 search, 285 trie node, 284 Round-robin reference pattern, 85 Route caching, bus-based with multiple processors, Route table updates heuristic partition techniques, 341–342 heuristic reapplications, 342 IP address lookup and routing table, 50–52 Routers, 2–27 architecture evolution, 7–13 bus-based with multiple processors, 8–11 bus-based with single processor, components, 14–27 functionalities, 2–7 generic architecture, IP-address lookup, 27 linecard, 14–16 network processor, 16–18 packet forwarding, route processing, 2–3 scaling with optics, 12–13 special services, switch fabric, 19–27 switch fabric-based, 11 Routing information protocol (RIP), AU8057_C011.indd 424 Routing prefi xes distribution, 41 Routing prefi xes/24s, 47 Routing table, 31–68, 166, 170, 335 address allocation impact, 43–45 address allocation impact on prefi xes, 65 address allocation impact on size, address allocation policy migration, 44–45 address fragmentation, 50 analysis, 356–357 binary search tree, 211 cache-oriented multistage structure, 271 characteristics, 39–51 compaction, 347 design criteria, 34 distributed TCAM architecture, 356–357 example, 41 growth, 41, 46–51 index bits, 245 IP-address lookup, 32 IPMA project, 231 load balancing, 50 longest-prefi x matching problem, 36–38 matching techniques, 33 minimization with address reassignments, 55 optimal routing table construction, 52–67 partitioning, 271 performance ORTC, 67 pseudo-code, 344 route update, 50 statistics, 325 structure, 40 traffic traces, 217 update characteristics, 168–169 Routing table size allocated blocks, 46 failure-to-aggregate, 49 Telstra, 55 Routing-table partitioning technologies, 371–414 comb extraction scheme, 407–414 complexity analysis, 404 concept, 401 generalization, 402 port-based partitioning, 388–399 prefi x and interval partitioning, 371–387 results, 405–406 ROT-partitioning, 401–406 storage sizes, 405 worst-case lookup times, 406 RRC See Reverse routing cache (RRC) S Sample forwarding table, 408 Sample ropes, 284 11/15/2007 11:32:12 AM Index Scaling with optics, 12–13 SDS See Space-division switching (SDS) SE details, 270 Search algorithm parallel hashing architecture, 289 static architecture, 352 VLMP technique to eliminate sorting, 327 Search procedure PMHA, 296 Search time RRC-ME, 256 Search trees, 280 Search unit, 394 Searching algorithm multiple hashing schemes, 294 PMHA, 296 Segmented tree bitmap, 156 Selective cache invalidation, 247 Selector block, 395–396 SFHT See Shared-node fast hash table (SFHT) Shape shifting trie, 162 Shared medium switch, 19 fabric, 20 Shared memory switch fabric, 20, 21 Shared-node fast hash table (SFHT), 312 illustration, 313 using extended Bloom fi lter, 312–314 Single address lookup time taken, 379 Single hash function, 292 Single IP routing table, 56–59 Skip list representation, 228 Small tables layout (STL), 205 Software system, 167 SONET See Synchronous optical network (SONET) Space-division switching (SDS), 19 Space-time division switch, 25–26 Splay trees collection, 229–230 Split tree bitmap optimization, 156 Splitting rule, 408–411 SRAM See Static RAM (SRAM) Stack distance density function LAT traffic, 86 noninteractive traffic, 86 Stack reference frequency, 84 round-robin reference pattern, 85 Standard and distinct binary search, 280 Standard Bloom fi lter, 298 Bloom fi lter, 297–298 Stanford hardware trie, 149–150 Static architecture, 350 bursty access pattern algorithms, 352 power consumption, 352 search algorithm, 351 Static RAM (SRAM), 15 Static routing, STL See Small tables layout (STL) AU8057_C011.indd 425 Ⅲ 425 Stockling definition, 143 densities, 147 sizes, 147 Stockpiling, 145 block trees, 143–144 Storage cost comparison, 414 Storage sizes, 405 Stride tree bitmap search algorithm, 154 Structure with 16-bits, 112 Sub-forwarding tables, 409, 410 Sub-prefi x cost, 413 Sub-tables CES, 412 Subtree-split algorithm, 335, 336 maximum number of routing table entries, 340 Suez’s lookup algorithm, 237–247 HAC architecture, 237–239 host address cache, 237–242 host address range cache, 242–243 IHARC and HARC comparisons, 246 index-bit selection, 244–245 intelligent HARC, 244–247 network address routing table, 240–241 selective cache invalidation, 247 simulations, 242 S&V vs LP, 210 Switch fabric, 11, 19–27 Switch-based router architecture, 11 Synchronous optical network (SONET), 14 T T algorithm, 192, 194, 197 Table expansion prefi x, 260 Table-driven model, 205–206 Table-driven scheme Bursty Access Pattern efficient data structure, 205–210 dynamic programming algorithm, 207–208 Lagrange approximation algorithm, 209 table-driven model, 205–206 TBL24 entry format, 150 100Tb/s router example, 13 TCAM See Ternary content addressable memory (TCAM) TCP See Transmission control protocol/ Internet protocol (TCP/IP) TDM See Time division multiplexing (TDM) TDS See Time-division switching (TDS) Telstra RIR guidelines, 53–54 routing table size, 56 11/15/2007 11:32:12 AM 426 Ⅲ Index Ternary content addressable memory (TCAM), 251, 254, 295 based forwarding engine, 317–369 chain-ancestor ordering constraint, 323 content-address memory, 317–320 distributed TCAM architecture, 356–369 forwarding table stored, 344 free space distribution, 322 IFPLUT architecture, 398 layout tables, 324 longest-prefi x match, 319, 320 minimal cover, 345 ordered TCAM efficient updating, 321–324 power-efficient TCAM, 328–355 VLMP technique to eliminate sorting, 325–327 Three-level routing tables, 256 Three-way tree eight keys, 376 Throughput redundancy rates and traffic distribution, 367 Time division multiplexing (TDM), 19 Time of last references vs percentage of references, 73, 77 Time-division switching (TDS), 19 Time-to-live (TTL) field, Timing chart, 328 TM See Traffic manager (TM) TOS See Type of service (TOS) field Total embedded memory size vs hash probes per lookup, 303, 305 Total memory, 232 RRC01, 196 Total SRAM IPv4, 405 Traditional bus-based router architecture, Traffic distribution ID groups, 362–363 throughput, 367 Traffic manager (TM), 15 Traffic traces, 217 Transmission control protocol/Internet protocol (TCP/IP), Transponder/transceiver, 14 Tree bitmap database, 152 multibit node compression, 152 multibit tries in hardware, 151–153, 154–156 optimization, 154–156 search algorithm, 154 stride, 154 Tree collection, 225–234 BSLPT, 227 CRBT, 226 experiments, 230–234 prefi x and range, 225–226 splay trees collection, 229–230 Tree depth, 218 Tree packing heuristic, 202 Tree simulations, 368 AU8057_C011.indd 426 Trie based architectures, 342 based power reduction schemes, 334 based table partitioning, 334–339 forwarding engine architecture, 335 heuristic partition techniques, 334–339 levels binary searches, 277 mutating binary search, 282 neighboring prefi xes, 181 node rope, 284 recomputing buckets, 342 structure binary strings, 108 write bubbles, 177 TTL See Time-to-live (TTL) field Two-level dynamic partitioning, 382 algorithm, 383 statistics, 382 Two-phase algorithm, 185–197 computing MMS, 186–189 computing T, 190 experimental results, 194–197 faster two-phase algorithm, 192–193 partitioning scheme, 193 problem statements, 185 Two-zone full address cache, 256 Two-zone optimization surface, 265 Type of service (TOS) field, Typical router architecture, 15 U Usage parameter control (UPC), 15 V Variable-stride trie (VST), 118 VCI See Virtual circuit indices (VCI) Verio RIR guidelines, 53–54 Vertical logical operation with mask-encoded prefi x length (VLMP), 325–328 architecture performance, 327 forwarding engine architecture, 325–326 search algorithm, 327 technique to eliminate sorting, 325–328 Very long instruction word (VLIW), 17 Virtual circuit indices (VCI), 36 Virtual output queues (VOQ), 16, 24 N-port input-queued switch, 24 VLIW See Very long instruction word (VLIW) VLMP See Vertical logical operation with mask-encoded prefi x length (VLMP) VOQ See Virtual output queues (VOQ) VST See Variable-stride trie (VST) 11/15/2007 11:32:12 AM Index W Weighted fair queuing (WFQ), Worst-case fairness, 16 Worst-case lookup times, 407 ROT-partitioning, 406 Worst-case storage complexity, 96 Worst-case time memory, 380 AU8057_C011.indd 427 Ⅲ 427 Write bubbles, 177–182, 184 catching deleted sub-trees, 180–182 eliminating excess writes, 180 node pullups, 178–179 pullups, 180 separating out updates to short routes, 177 trie, 177 11/15/2007 11:32:12 AM AU8057_C011.indd 428 11/15/2007 11:32:12 AM ... AU8057_C001.indd 11/13 /2007 9:37:01 AM 10 Ⅲ Packet Forwarding Technologies Forwarding Engine Forwarding Engine Row Bus Forwarding Engine Resource Control Forwarding Engine Control Bus Forwarding Engine... Cataloging-in-Publication Data Wu, Weidong Packet forwarding technologies / by Weidong Wu p cm Includes bibliographical references and index ISBN- 13: 978-0-8493-8057-0 ISBN- 10: 0-8493-8057-X Packet switching (Data... of packet forwarding are described and new technologies are discussed The book will be a practical guide to aid understanding of IP routers We have done our best to accurately describe packet forwarding

Ngày đăng: 19/04/2019, 16:41