THE CLASSIC WORK NEWLY UPDATED AND REVISED

The Art of Computer Programming
Volume 3 / Sorting and Searching, Second Edition
DONALD E. KNUTH

Volume 1 / Fundamental Algorithms, Third Edition (0-201-89683-4)
This first volume begins with basic programming concepts and techniques, then focuses on information structures — the representation of information inside a computer, the structural relationships between data elements, and how to deal with them efficiently. Elementary applications are given to simulation, numerical methods, symbolic computing, software and system design.

Volume 2 / Seminumerical Algorithms, Third Edition (0-201-89684-2)
The second volume offers a complete introduction to the field of seminumerical algorithms, with separate chapters on random numbers and arithmetic. The book summarizes the major paradigms and basic theory of such algorithms, thereby providing a comprehensive interface between computer programming and numerical analysis.

Volume 3 / Sorting and Searching, Second Edition (0-201-89685-0)
The third volume comprises the most comprehensive survey of classical computer techniques for sorting and searching. It extends the treatment of data structures in Volume 1 to consider both large and small databases and internal and external memories.

Volume 4A / Combinatorial Algorithms, Part 1 (0-201-03804-8)
This volume introduces techniques that allow computers to deal efficiently with gigantic problems. Its coverage begins with Boolean functions and bitwise tricks and techniques, then treats in depth the generation of all tuples and permutations, and all combinations.

THE ART OF COMPUTER PROGRAMMING, SECOND EDITION
Volume 3 / Sorting and Searching
DONALD E. KNUTH, Stanford University
ADDISON-WESLEY
Upper Saddle River, NJ • Boston • Indianapolis • San Francisco • New York • Toronto • Montreal • London • Munich • Paris • Madrid • Capetown • Sydney • Tokyo • Singapore • Mexico City

TeX is a trademark of the American Mathematical Society.
METAFONT is a trademark of Addison-Wesley.

The author and publisher have taken care in the preparation of this book, but make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein.

The publisher offers excellent discounts on this book when ordered in quantity for bulk purposes or special sales, which may include electronic versions and/or custom covers and content particular to your business, training goals, marketing focus, and branding interests. For more information, please contact: U.S. Corporate and Government Sales, (800) 382-3419, corpsales@pearsontechgroup.com. For sales outside the U.S., please contact: International Sales, international@pearsoned.com.
Visit us on the Web: informit.com/aw

Library of Congress Cataloging-in-Publication Data
Knuth, Donald Ervin, 1938-
The art of computer programming / Donald Ervin Knuth.
xiv, 782 p. 24 cm.
Includes bibliographical references and index.
Contents: v. 1. Fundamental algorithms. -- v. 2. Seminumerical algorithms. -- v. 3. Sorting and searching. -- v. 4a. Combinatorial algorithms, part 1.
Contents: v. 3. Sorting and searching. -- 2nd ed.
ISBN 978-0-201-89683-1 (v. 1, 3rd ed.)
ISBN 978-0-201-89684-8 (v. 2, 3rd ed.)
ISBN 978-0-201-89685-5 (v. 3, 2nd ed.)
ISBN 978-0-201-03804-0 (v. 4a)
1. Electronic digital computers--Programming. 2. Computer algorithms. I. Title.
QA76.6.K64 1997
005.1 DC21

Internet page http://www-cs-faculty.stanford.edu/~knuth/taocp.html contains current information about this book and related books.

Copyright © 1998 by Addison-Wesley

All rights reserved. Printed in the United States of America. This publication is protected by copyright, and permission must be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise. For information regarding permissions, write to:

Pearson Education, Inc.
Rights and Contracts Department
501 Boylston Street, Suite 900
Boston, MA 02116
Fax: (617) 671-3447

ISBN-13: 978-0-201-89685-5
ISBN-10: 0-201-89685-0

Text printed in the United States at Courier in Westford, Massachusetts.
Twenty-eighth printing, March 2011

PREFACE

Cookery is become an art, a noble science; cooks are gentlemen.
— TITUS LIVIUS, Ab Urbe Condita XXXIX.vi (Robert Burton, Anatomy of Melancholy 1.2.2.2)

This book forms a natural sequel to the material on information structures in Chapter 2 of Volume 1, because it adds the concept of linearly ordered data to the other basic structural ideas.

The title "Sorting and Searching" may sound as if this book is only for those systems programmers who are concerned with the preparation of general-purpose sorting routines or applications to information retrieval. But in fact the area of sorting and searching provides an ideal framework for discussing a wide variety of important general issues:

• How are good algorithms discovered?
• How can given algorithms and programs be improved?
• How can the efficiency of algorithms be analyzed mathematically?
• How can a person choose rationally between different algorithms for the same task?
• In what senses can algorithms be proved "best possible"?
• How does the theory of computing interact with practical considerations?
• How can external memories like tapes, drums, or disks be used efficiently with large databases?

Indeed, I believe that virtually every important aspect of programming arises somewhere in the context of sorting or searching!
This volume comprises Chapters 5 and 6 of the complete series. Chapter 5 is concerned with sorting into order; this is a large subject that has been divided chiefly into two parts, internal sorting and external sorting. There also are supplementary sections, which develop auxiliary theories about permutations (Section 5.1) and about optimum techniques for sorting (Section 5.3). Chapter 6 deals with the problem of searching for specified items in tables or files; this is subdivided into methods that search sequentially, or by comparison of keys, or by digital properties, or by hashing, and then the more difficult problem of secondary key retrieval is considered. There is a surprising amount of interplay between both chapters, with strong analogies tying the topics together. Two important varieties of information structures are also discussed, in addition to those considered in Chapter 2, namely priority queues (Section 5.2.3) and linear lists represented as balanced trees (Section 6.2.3).

Like Volumes 1 and 2, this book includes a lot of material that does not appear in other publications. Many people have kindly written to me about their ideas, or spoken to me about them, and I hope that I have not distorted the material too badly when I have presented it in my own words.

I have not had time to search the patent literature systematically; indeed, I decry the current tendency to seek patents on algorithms (see Section 5.4.5). If somebody sends me a copy of a relevant patent not presently cited in this book, I will dutifully refer to it in future editions. However, I want to encourage people to continue the centuries-old mathematical tradition of putting newly discovered algorithms into the public domain. There are better ways to earn a living than to prevent other people from making use of one's contributions to computer science.

Before I retired from teaching, I used this book as a text for a student's second course in data structures, at the junior-to-graduate level, omitting most of the mathematical material. I also used the mathematical portions of this book as the basis for graduate-level courses in the analysis of algorithms, emphasizing especially Sections 5.1, 5.2.2, 6.3, and 6.4. A graduate-level course on concrete computational complexity could also be based on Sections 5.3 and 5.4.4, together with Sections 4.3.3, 4.6.3, and 4.6.4 of Volume 2.

For the most part this book is self-contained, except for occasional discussions relating to the MIX computer explained in Volume 1. Appendix B contains a summary of the mathematical notations used, some of which are a little different from those found in traditional mathematics books.

Preface to the Second Edition

This new edition matches the third editions of Volumes 1 and 2, in which I have been able to celebrate the completion of TeX and METAFONT by applying those systems to the publications they were designed for.

The conversion to electronic format has given me the opportunity to go over every word of the text and every punctuation mark. I've tried to retain the youthful exuberance of my original sentences while perhaps adding some more mature judgment. Dozens of new exercises have been added; dozens of old exercises have been given new and improved answers. Changes appear everywhere, but most significantly in Sections 5.1.4 (about permutations and tableaux), 5.3 (about optimum sorting), 5.4.9 (about disk sorting), 6.2.2 (about entropy), 6.4 (about universal hashing), and 6.5 (about multidimensional trees and tries).
17. [HM25] (R. W. Floyd, 1980.) Show that the lower bound of Theorem F can be improved to

    n(b ln n − ln b − 1) / (b(1 + ln(1 + m/b))),

in the sense that some initial configuration must require at least this many stops. [Hint: Count the configurations that can be obtained after s stops.]

18. [HM26] Let L be the lower bound of exercise 17. Show that the average number of elevator stops needed to take all people to their desired floors is at least L − 1, when the (bn)! possible permutations of people into bn desks are equally likely.

19. [25] (B. T. Bennett and A. C. McKellar.) Consider the following approach to keysorting, illustrated on an example file with 10 keys:

i) Original file: (50, I0)(08, I1)(51, I2)(06, I3)(90, I4)(17, I5)(89, I6)(27, I7)(65, I8)(42, I9)
ii) Key file: (50, 0)(08, 1)(51, 2)(06, 3)(90, 4)(17, 5)(89, 6)(27, 7)(65, 8)(42, 9)
iii) Sorted (ii): (06, 3)(08, 1)(17, 5)(27, 7)(42, 9)(50, 0)(51, 2)(65, 8)(89, 6)(90, 4)
iv) Bin assignments (see below): (2, 1)(2, 3)(2, 5)(2, 7)(2, 8)(2, 9)(1, 0)(1, 2)(1, 4)(1, 6)
v) Sorted (iv): (1, 0)(2, 1)(1, 2)(2, 3)(1, 4)(2, 5)(1, 6)(2, 7)(2, 8)(2, 9)
vi) (i) distributed into bins using (v):
    Bin 1: (50, I0)(51, I2)(90, I4)(89, I6)
    Bin 2: (08, I1)(06, I3)(17, I5)(27, I7)(65, I8)(42, I9)
vii) The result of replacement selection, reading first bin 2, then bin 1:
    (06, I3)(08, I1)(17, I5)(27, I7)(42, I9)(50, I0)(51, I2)(65, I8)(89, I6)(90, I4)

The assignment of bin numbers in step (iv) is made by doing replacement selection on (iii), from right to left, in decreasing order of the second component; the bin number is the run number. The example above uses replacement selection with only two elements in the selection tree; the same size tree should be used for replacement selection in both (iv) and (vii). Notice that the bin contents are not necessarily in sorted order!

Prove that this method will sort, namely that the replacement selection in (vii) will produce only one run. (This technique reduces the number of bins needed in a conventional keysort by distribution, especially if the input is largely in order already.)

20. [25] Modern hardware/software systems provide programmers with a virtual memory: Programs are written as if there were a very large internal memory, able to contain all of the data. This memory is divided into pages, only a few of which are in the actual internal memory at any one time; the others are on disks or drums. Programmers need not concern themselves with such details, since the system takes care of everything; new pages are automatically brought into memory when needed. It would seem that the advent of virtual memory technology makes external sorting methods obsolete, since the job can simply be done using the techniques developed for internal sorting. Discuss this situation; in what ways might a hand-tailored external sorting method be better than the application of a general-purpose paging technique to an internal sorting method?

21. [M15] How many blocks of an L-block file go on disk j when the file is striped on D disks?

22. [22] If you are merging two files with the Gilbreath principle and you want to store the keys with the α and β blocks, in which block should the keys of block β_t be placed in order to have the information available when it is needed?
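The keysorting scheme of exercise 19 is easy to try out by machine. The following C program is a small illustrative sketch of it (our own rendering, not a program from the book): it carries out steps (ii)-(vii) on the ten-key example file, using a two-record memory array in place of a two-element selection tree; the names pos, bin_of, and so on are ours.

#include <stdio.h>

#define N     10                 /* number of records in the example file */
#define NBINS 2                  /* two bins suffice for this example     */

static const int key[N] = {50, 8, 51, 6, 90, 17, 89, 27, 65, 42};
/* record i is the pair (key[i], I_i); only the index i is moved around   */

int main(void)
{
    int pos[N];                  /* step (iii): indices sorted by key      */
    int bin_of[N];               /* steps (iv)-(v): bin number of record i */
    int bins[NBINS][N], cnt[NBINS] = {0, 0};
    int i, j, t;

    /* steps (ii)-(iii): build the key file and sort it by key             */
    for (i = 0; i < N; i++) pos[i] = i;
    for (i = 1; i < N; i++) {
        t = pos[i];
        for (j = i; j > 0 && key[pos[j-1]] > key[t]; j--) pos[j] = pos[j-1];
        pos[j] = t;
    }

    /* step (iv): scan the sorted key file from right to left and do
       replacement selection on the record positions, forming runs in
       decreasing order of position; the run number becomes the bin number */
    {
        int mem[2], filled = 2, run = 1, last = N, next = N - 1;
        mem[0] = pos[next--];  mem[1] = pos[next--];
        while (filled > 0) {
            int pick = -1;
            for (i = 0; i < filled; i++)      /* largest position <= last  */
                if (mem[i] <= last && (pick < 0 || mem[i] > mem[pick])) pick = i;
            if (pick < 0) { run++; last = N; continue; }  /* new run = new bin */
            bin_of[mem[pick]] = run;  last = mem[pick];
            if (next >= 0) mem[pick] = pos[next--];       /* replace from input */
            else           mem[pick] = mem[--filled];     /* input exhausted    */
        }
    }

    /* step (vi): distribute the original file into bins, in position order */
    for (i = 0; i < N; i++)
        bins[bin_of[i] - 1][cnt[bin_of[i] - 1]++] = i;

    /* step (vii): replacement selection again, reading bin 2 first, then
       bin 1; exercise 19 asks for a proof that a single ascending run
       (that is, the fully sorted file) always comes out                    */
    {
        int order[N], n = 0, mem[2], filled = 2, last = -1, next;
        for (j = NBINS - 1; j >= 0; j--)
            for (i = 0; i < cnt[j]; i++) order[n++] = bins[j][i];
        mem[0] = order[0];  mem[1] = order[1];  next = 2;
        printf("result:");
        while (filled > 0) {
            int pick = -1;
            for (i = 0; i < filled; i++)      /* smallest key >= last      */
                if (key[mem[i]] >= last &&
                    (pick < 0 || key[mem[i]] < key[mem[pick]])) pick = i;
            if (pick < 0) break;              /* a second run would start here */
            printf(" (%02d,I%d)", key[mem[pick]], mem[pick]);
            last = key[mem[pick]];
            if (next < N) mem[pick] = order[next++];
            else          mem[pick] = mem[--filled];
        }
        printf("\n");
    }
    return 0;
}

Running it prints the same single run shown in step (vii) above; the proof that one run always suffices is, of course, the point of the exercise.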
23. [20] How much memory space is needed for input buffers to keep input going continuously when two-way merging is done by (a) superblock striping? (b) the Gilbreath principle?

24. [M36] Suppose P runs have been striped on D disks so that block j of run k appears on disk (x_k + j) mod D. A P-way merge will read those blocks in some chronological order such as (19). If groups of D blocks are to be input continuously, we will read at time t the chronologically tth block stored on each disk, as in (21). What is the minimum number of buffer records needed in memory to hold input data that has not yet been merged, regardless of the chronological order? Explain how to choose the offsets x_1, x_2, ..., x_P so that the fewest buffers are needed in the worst case.

25. [23] Rework the text's example of randomized striping for the case Q = 0, instead of the value of Q used in the text. What buffer contents would occur in place of (24)?

26. [26] How many output buffers will guarantee that a P-way merge with randomized striping will never have to pause for lack of a place in internal memory to put newly merged output? Assume that the time to write a block equals the time to read a block.

27. [HM27] (The cyclic occupancy problem.) Suppose n empty urns have been arranged in a circle and assigned the numbers 0, 1, ..., n − 1. For k = 1, 2, ..., p, we throw m_k balls into urns (x_k + j) mod n for j = 0, 1, ..., m_k − 1, where the integers x_k are chosen at random. Let S_n(m_1, ..., m_p) be the number of balls in urn 0, and let E_n(m_1, ..., m_p) be the expected number of balls in the fullest urn.

a) Let m = m_1 + ... + m_p. Use the tail inequality, Eq. 1.2.10-(25), to prove that

    E_n(m_1, ..., m_p) ≤ Σ_{t≥1} min(1, n Pr(S_n(m_1, ..., m_p) ≥ t)).

b) Prove that

    E_n(m_1, ..., m_p) ≤ Σ_{t≥1} min(1, n(1 + a_t/n)^m / (1 + a_t)^t),

for any nonnegative real numbers a_1, ..., a_m. What values of a_1, ..., a_m give the best upper bound?

28. [HM47] Continuing exercise 27, is E_n(m_1, m_2, m_3, ..., m_p) ≥ E_n(m_1 + m_2, m_3, ..., m_p)?
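The cyclic occupancy process of exercise 27 is also easy to explore empirically. Here is a minimal C sketch (ours, with arbitrary illustrative parameters) that estimates E_n(m_1, ..., m_p) by Monte Carlo simulation; it merely simulates the definition and says nothing, of course, about the bounds that the exercise asks you to prove.

#include <stdio.h>
#include <stdlib.h>

/* Estimate E_n(m_1,...,m_p): p groups of m_k balls are thrown into n urns
   arranged in a circle, group k occupying urns (x_k + j) mod n for
   j = 0, ..., m_k - 1 with x_k chosen at random; report the average
   occupancy of the fullest urn over many trials.                          */
static double estimate_En(int n, const int m[], int p, int trials)
{
    int urn[64];                         /* assumes n <= 64 for simplicity */
    double total = 0.0;
    for (int t = 0; t < trials; t++) {
        for (int i = 0; i < n; i++) urn[i] = 0;
        for (int k = 0; k < p; k++) {
            int xk = rand() % n;         /* random starting offset x_k     */
            for (int j = 0; j < m[k]; j++) urn[(xk + j) % n]++;
        }
        int max = 0;
        for (int i = 0; i < n; i++) if (urn[i] > max) max = urn[i];
        total += max;
    }
    return total / trials;
}

int main(void)
{
    int m[] = {7, 5, 3, 3, 2};           /* illustrative group sizes m_k   */
    srand(314159);
    printf("estimated E_6(7,5,3,3,2) = %.3f\n",
           estimate_En(6, m, 5, 100000));
    return 0;
}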
29. [M30] The purpose of this exercise is to derive an upper bound on the average time needed to input any sequence of blocks in chronological order by the randomized striping procedure, when the blocks represent P runs on D disks. We say that the block being waited for at each time step as the algorithm proceeds (see (24)) is "marked"; thus the total input time is proportional to the number of marked blocks. Marking depends only on the chronological sequence of disk accesses (see (20)).

a) Prove that if Q + 1 consecutive blocks in chronological order have N_j blocks on disk j, then at most max(N_0, N_1, ..., N_{D−1}) of those blocks are marked.
b) Strengthen the result of (a) by showing that it holds also for Q + 2 consecutive blocks.
c) Now use the cyclic occupancy problem of exercise 27 to obtain an upper bound on the average running time, for any chronological order, in terms of a function r(D, Q + 2).

30. [HM30] Prove that the function r(d, m) of exercise 29, as given in Table 2, satisfies r(d, sd log d) = (1 + O(1/√s)) s log d as s → ∞, for fixed d.

31. [HM48] Analyze randomized striping to determine its true average behavior, not merely an upper bound, as a function of P, Q, and D. (Even the case Q = 0, which needs an average of Ω(L/√D) read cycles, is interesting.)

5.5. SUMMARY, HISTORY, AND BIBLIOGRAPHY

Now that we have nearly reached the end of this enormously long chapter, we had better "sort out" the most important facts that we have studied.

An algorithm for sorting is a procedure that rearranges a file of records so that the keys are in ascending order. This orderly arrangement is useful because it brings equal-key records together, it allows efficient processing of several files that are sorted on the same key, it leads to efficient retrieval algorithms, and it makes computer output look less chaotic.

Internal sorting is used when all of the records fit in the computer's high-speed internal memory. We have studied more than two dozen algorithms for internal sorting, in various degrees of detail; and perhaps we would be happier if we didn't know so many different approaches to the problem!
It was fun to learn all the techniques, but now we must face the horrible prospect of actually deciding which method ought to be used in a given situation. It would be nice if only one or two of the sorting methods would dominate all of the others, regardless of the application or the computer being used. But in fact, each method has its own peculiar virtues. For example, the bubble sort (Algorithm 5.2.2B) has no apparent redeeming features, since there is always a better way to do what it does; but even this technique, suitably generalized, turns out to be useful for two-tape sorting (see Section 5.4.8). Thus we find that nearly all of the algorithms deserve to be remembered, since there are some applications in which they turn out to be best.

The following brief survey gives the highlights of the most significant algorithms we have encountered for internal sorting. As usual, N stands for the number of records in the given file.

1. Distribution counting, Algorithm 5.2D, is very useful when the keys have a small range. It is stable (doesn't affect the order of records with equal keys), but requires memory space for counters and for 2N records. A modification that saves N of these record spaces at the cost of stability appears in exercise 5.2-13.

2. Straight insertion, Algorithm 5.2.1S, is the simplest method to program, requires no extra space, and is quite efficient for small N (say N ≤ 25). For large N it is unbearably slow unless the input is nearly in order.

3. Shellsort, Algorithm 5.2.1D, is also quite easy to program, and uses minimum memory space; and it is reasonably efficient for moderately large N (say N ≤ 1000).

4. List insertion, Algorithm 5.2.1L, uses the same basic idea as straight insertion, so it is suitable only for small N. Like the other list sorting methods described below, it saves the cost of moving long records by manipulating links; this is particularly advantageous when the records have variable length or are part of other data structures.
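As a concrete illustration of points 2 and 3, here is a short C sketch (ours, not the MIX programs analyzed in the text) of sorting by diminishing increments; when the increment h reaches 1 the final pass is exactly straight insertion. The increment sequence 1, 4, 13, 40, ... generated by h = 3h + 1 is just one common choice, not necessarily the sequence recommended in the text, and the sixteen sample keys are the example file used repeatedly in this chapter.

#include <stdio.h>

/* Shellsort: h-sort the file for a diminishing sequence of increments,
   finishing with h = 1 (straight insertion).                            */
static void shellsort(int a[], int n)
{
    int h = 1;
    while (h < n / 3) h = 3 * h + 1;     /* 1, 4, 13, 40, ...            */
    for (; h >= 1; h /= 3) {
        for (int i = h; i < n; i++) {    /* insert a[i] into its h-chain */
            int v = a[i], j = i;
            while (j >= h && a[j - h] > v) { a[j] = a[j - h]; j -= h; }
            a[j] = v;
        }
    }
}

int main(void)
{
    int a[] = {503,  87, 512,  61, 908, 170, 897, 275,
               653, 426, 154, 509, 612, 677, 765, 703};
    int n = sizeof a / sizeof a[0];
    shellsort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}

With large increments the records move long distances early; with h = 1 alone, the same inner loop is just straight insertion.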
5. Address calculation techniques are efficient when the keys have a known (usually uniform) distribution; the principal variants of this approach are multiple list insertion (Program 5.2.1M) and MacLaren's combined radix-insertion method (discussed at the close of Section 5.2.5). The latter can be done with only O(√N) cells of additional memory. A two-pass method that learns a nonuniform distribution is discussed in Theorem 5.2.5T.

6. Merge exchange, Algorithm 5.2.2M (Batcher's method), and its cousin the bitonic sort (exercise 5.3.4-10) are useful when a large number of comparisons can be made simultaneously.

7. Quicksort, Algorithm 5.2.2Q (Hoare's method), is probably the most useful general-purpose technique for internal sorting, because it requires very little memory space and its average running time on most computers beats that of its competitors when it is well implemented. It can run very slowly in its worst case, however, so a careful choice of the partitioning elements should be made whenever nonrandom data are likely. Choosing the median of three elements, as suggested in exercise 5.2.2-55, makes the worst-case behavior extremely unlikely and also improves the average running time slightly.

8. Straight selection, Algorithm 5.2.3S, is a simple method especially suitable when special hardware is available to find the smallest element of a list rapidly.

9. Heapsort, Algorithm 5.2.3H, requires minimum memory and is guaranteed to run pretty fast; its average time and its maximum time are both roughly twice the average running time of quicksort.

10. List merging, Algorithm 5.2.4L, is a list sort that, like heapsort, is guaranteed to be rather fast even in its worst case; moreover, it is stable with respect to equal keys.

11. Radix sorting, using Algorithm 5.2.5R, is a list sort especially appropriate for keys that are either rather short or that have an unusual lexicographic collating sequence. The method of distribution counting (point 1 above) can also be used, as an alternative to linking; such a procedure requires 2N record spaces, plus a table of counters, but the simple form of its inner loop makes it especially good for ultra-fast, "number-crunching" computers that have look-ahead control. Caution: Radix sorting should not be used for small N!

12. Merge insertion (see Section 5.3.1) is especially suitable for very small values of N, in a "straight-line-coded" routine; for example, it would be the appropriate method in an application that requires the sorting of numerous five- or six-record groups.

13. Hybrid methods, combining one or more of the techniques above, are also possible. For example, merge insertion could be used for sorting the short subfiles that arise in quicksort.

14. Finally, an unnamed method appearing in the answer to exercise 5.2.1-3 seems to require the shortest possible sorting program. But its average running time, proportional to N^3, makes it the slowest sorting routine in this book!
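To make points 7 and 13 concrete, here is a small C sketch (ours, not the book's MIX program for Algorithm Q) of quicksort with median-of-three partitioning, finishing small subfiles by straight insertion; the cutoff value 9 and the recurse-on-the-smaller-part policy are ordinary implementation choices, not prescriptions from the text.

#include <stdio.h>

#define CUTOFF 9                     /* subfiles this small go to insertion */

static void swap(int a[], int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

/* Sort a[lo..hi] (inclusive). */
static void quicksort(int a[], int lo, int hi)
{
    while (hi - lo > CUTOFF) {
        int mid = lo + (hi - lo) / 2;
        /* order a[lo], a[mid], a[hi]; the median of the three lands in a[mid] */
        if (a[mid] < a[lo]) swap(a, lo, mid);
        if (a[hi]  < a[lo]) swap(a, lo, hi);
        if (a[hi]  < a[mid]) swap(a, mid, hi);
        swap(a, mid, hi - 1);                 /* park the pivot next to the end */
        int pivot = a[hi - 1], i = lo, j = hi - 1;
        for (;;) {                            /* partition a[lo+1..hi-2]        */
            while (a[++i] < pivot) ;          /* a[hi-1] stops this scan        */
            while (a[--j] > pivot) ;          /* a[lo] stops this scan          */
            if (i >= j) break;
            swap(a, i, j);
        }
        swap(a, i, hi - 1);                   /* pivot into its final place     */
        if (i - lo < hi - i) { quicksort(a, lo, i - 1); lo = i + 1; }
        else                 { quicksort(a, i + 1, hi); hi = i - 1; }
    }
    for (int k = lo + 1; k <= hi; k++) {      /* straight insertion finish      */
        int v = a[k], m = k;
        while (m > lo && a[m - 1] > v) { a[m] = a[m - 1]; m--; }
        a[m] = v;
    }
}

int main(void)
{
    int a[] = {503,  87, 512,  61, 908, 170, 897, 275,
               653, 426, 154, 509, 612, 677, 765, 703};
    int n = sizeof a / sizeof a[0];
    quicksort(a, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}

The insertion-sort finish for short subfiles is exactly the kind of hybrid mentioned in point 13.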
Table 1 summarizes the speed and space characteristics of many of these methods, when programmed for MIX. It is important to realize that the figures in this table are only rough indications of the relative sorting times; they apply to one computer only, and the assumptions made about input data are not

[Table 1. A comparison of internal sorting methods using the MIX computer.]