The Art of Computer Programming, Volume 3: Sorting and Searching (Second Edition, 2011), Part 1

THE CLASSIC WORK, NEWLY UPDATED AND REVISED

The Art of Computer Programming, Volume 3: Sorting and Searching, Second Edition
DONALD E. KNUTH

Volume 1 / Fundamental Algorithms, Third Edition (0-201-89683-4)
This volume begins with basic programming concepts and techniques, then focuses on information structures: the representation of information inside a computer, the structural relationships between data elements, and how to deal with them efficiently. Elementary applications are given to simulation, numerical methods, symbolic computing, software and system design.

Volume 2 / Seminumerical Algorithms, Third Edition (0-201-89684-2)
The second volume offers a complete introduction to the field of seminumerical algorithms, with separate chapters on random numbers and arithmetic. The book summarizes the major paradigms and basic theory of such algorithms, thereby providing a comprehensive interface between computer programming and numerical analysis.

Volume 3 / Sorting and Searching, Second Edition (0-201-89685-0)
The third volume comprises the most comprehensive survey of classical computer techniques for sorting and searching. It extends the treatment of data structures in Volume 1 to consider both large and small databases and internal and external memories.

Volume 4A / Combinatorial Algorithms, Part 1 (0-201-03804-8)
This volume introduces techniques that allow computers to deal efficiently with gigantic problems. Its coverage begins with Boolean functions and bitwise tricks and techniques, then treats in depth the generation of all tuples and permutations, all combinations and partitions, and all trees.

THE ART OF COMPUTER PROGRAMMING
Volume 3 / Sorting and Searching
SECOND EDITION

DONALD E. KNUTH, Stanford University

ADDISON-WESLEY
Upper Saddle River, NJ • Boston • Indianapolis • San Francisco • New York • Toronto • Montreal • London • Munich • Paris • Madrid • Capetown • Sydney • Tokyo • Singapore • Mexico City

TeX is a trademark of the American Mathematical Society. METAFONT is a trademark of Addison-Wesley.

The author and publisher have taken care in the preparation of this book, but make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein.

The publisher offers excellent discounts on this book when ordered in quantity for bulk purposes or special sales, which may include electronic versions and/or custom covers and content particular to your business, training goals, marketing focus, and branding interests. For more information, please contact:

U.S. Corporate and Government Sales, (800) 382-3419, corpsales@pearsontechgroup.com
For sales outside the U.S., please contact: International Sales, international@pearsoned.com
Visit us on the Web: informit.com/aw

Library of Congress Cataloging-in-Publication Data
Knuth, Donald Ervin, 1938-
The art of computer programming / Donald Ervin Knuth.
xiv, 782 p. 24 cm.
Includes bibliographical references and index.
Contents: v. 1. Fundamental algorithms. -- v. 2. Seminumerical algorithms. -- v. 3. Sorting and searching. -- v. 4a. Combinatorial algorithms, part 1.
ISBN 978-0-201-89683-1 (v. 1, 3rd ed.)
ISBN 978-0-201-89684-8 (v. 2, 3rd ed.)
ISBN 978-0-201-89685-5 (v. 3, 2nd ed.)
ISBN 978-0-201-03804-0 (v. 4a)
1. Electronic digital computers--Programming. 2. Computer algorithms. I. Title.
QA76.6.K64 1997
005.1--DC21

Internet page http://www-cs-faculty.stanford.edu/~knuth/taocp.html contains current information about this book and related books.

Copyright (c) 1998 by Addison-Wesley

All rights reserved. Printed in the United States of America. This publication is protected by copyright, and permission must be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise. For information regarding permissions, write to:

Pearson Education, Inc.
Rights and Contracts Department
501 Boylston Street, Suite 900
Boston, MA 02116
Fax: (617) 671-3447

ISBN-13: 978-0-201-89685-5
ISBN-10: 0-201-89685-0

Text printed in the United States at Courier Westford in Westford, Massachusetts.
Twenty-eighth printing, March 2011

PREFACE

  Cookery is become an art, a noble science; cooks are gentlemen.
  -- TITUS LIVIUS, Ab Urbe Condita XXXIX.vi (Robert Burton, Anatomy of Melancholy 1.2.2.2)

This book forms a natural sequel to the material on information structures in Chapter 2 of Volume 1, because it adds the concept of linearly ordered data to the other basic structural ideas.

The title "Sorting and Searching" may sound as if this book is only for those systems programmers who are concerned with the preparation of general-purpose sorting routines or applications to information retrieval. But in fact the area of sorting and searching provides an ideal framework for discussing a wide variety of important general issues:

  • How are good algorithms discovered?
  • How can given algorithms and programs be improved?
  • How can the efficiency of algorithms be analyzed mathematically?
  • How can a person choose rationally between different algorithms for the same task?
  • In what senses can algorithms be proved "best possible"?
  • How does the theory of computing interact with practical considerations?
  • How can external memories like tapes, drums, or disks be used efficiently with large databases?

Indeed, I believe that virtually every important aspect of programming arises somewhere in the context of sorting or searching!

This volume comprises Chapters 5 and 6 of the complete series. Chapter 5 is concerned with sorting into order; this is a large subject that has been divided chiefly into two parts, internal sorting and external sorting. There also are supplementary sections, which develop auxiliary theories about permutations (Section 5.1) and about optimum techniques for sorting (Section 5.3). Chapter 6 deals with the problem of searching for specified items in tables or files; this is subdivided into methods that search sequentially, or by comparison of keys, or by digital properties, or by hashing, and then the more difficult problem of secondary key retrieval is considered. There is a surprising amount of interplay between both chapters, with strong analogies tying the topics together. Two important varieties of information structures are also discussed, in addition to those considered in Chapter 2, namely priority queues (Section 5.2.3) and linear lists represented as balanced trees (Section 6.2.3).

Like Volumes 1 and 2, this book includes a lot of material that does not appear in other publications. Many people have kindly written to me about their ideas, or spoken to me about them, and I hope that I have not distorted the material too badly when I have presented it in my own words.

I have not had time to search the patent literature systematically; indeed, I decry the current tendency to seek patents on algorithms (see Section 5.4.5). If somebody sends me a copy of a relevant patent not presently cited in this book, I will dutifully refer to it in future editions. However, I want to encourage people to continue the centuries-old mathematical tradition of putting newly discovered
algorithms into the public domain. There are better ways to earn a living than to prevent other people from making use of one's contributions to computer science.

Before I retired from teaching, I used this book as a text for a student's second course in data structures, at the junior-to-graduate level, omitting most of the mathematical material. I also used the mathematical portions of this book as the basis for graduate-level courses in the analysis of algorithms, emphasizing especially Sections 5.1, 5.2.2, 6.3, and 6.4. A graduate-level course on concrete computational complexity could also be based on Sections 5.3 and 5.4.4, together with Sections 4.3.3, 4.6.3, and 4.6.4 of Volume 2.

For the most part this book is self-contained, except for occasional discussions relating to the MIX computer explained in Volume 1. Appendix B contains a summary of the mathematical notations used, some of which are a little different from those found in traditional mathematics books.

Preface to the Second Edition

This new edition matches the third editions of Volumes 1 and 2, in which I have been able to celebrate the completion of TeX and METAFONT by applying those systems to the publications they were designed for.

The conversion to electronic format has given me the opportunity to go over every word of the text and every punctuation mark. I've tried to retain the youthful exuberance of my original sentences while perhaps adding some more mature judgment. Dozens of new exercises have been added; dozens of old exercises have been given new and improved answers. Changes appear everywhere, but most significantly in Sections 5.1.4 (about permutations and tableaux), 5.3 (about optimum sorting), 5.4.9 (about disk sorting), 6.2.2 (about entropy), 6.4 (about universal hashing), and 6.5 (about multidimensional trees and tries).

378 SORTING 5.4.9

17. [HM28] (R. W. Floyd, 1980.) Show that the lower bound of Theorem F can be improved to

    n(b ln n - ln n - 1) / (b(1 + ln(1 + m/b))),

in the sense that some initial configuration must require at least this many stops. [Hint: Count the configurations that can be obtained after s stops.]

18. [HM26] Let L be the lower bound of exercise 17. Show that the average number of elevator stops needed to take all people to their desired floors is at least L - 1, when the (bn)! possible permutations of people into bn desks are equally likely.

19. [25] (B. T. Bennett and A. C. McKellar.) Consider the following approach to keysorting, illustrated on an example file with 10 keys:

  i) Original file: (50,f0)(08,f1)(51,f2)(06,f3)(90,f4)(17,f5)(89,f6)(27,f7)(65,f8)(42,f9)
  ii) Key file: (50,0)(08,1)(51,2)(06,3)(90,4)(17,5)(89,6)(27,7)(65,8)(42,9)
  iii) Sorted (ii): (06,3)(08,1)(17,5)(27,7)(42,9)(50,0)(51,2)(65,8)(89,6)(90,4)
  iv) Bin assignments (see below): (2,1)(2,3)(2,5)(2,7)(2,8)(2,9)(1,0)(1,2)(1,4)(1,6)
  v) Sorted (iv): (1,0)(2,1)(1,2)(2,3)(1,4)(2,5)(1,6)(2,7)(2,8)(2,9)
  vi) (i) distributed into bins using (v):
      Bin 1: (50,f0)(51,f2)(90,f4)(89,f6)
      Bin 2: (08,f1)(06,f3)(17,f5)(27,f7)(65,f8)(42,f9)
  vii) The result of replacement selection, reading first bin 2, then bin 1:
      (06,f3)(08,f1)(17,f5)(27,f7)(42,f9)(50,f0)(51,f2)(65,f8)(89,f6)(90,f4)

The assignment of bin numbers in step (iv) is made by doing replacement selection from right to left, in decreasing order of the second component; the bin number is the run number. The example above uses replacement selection with only two elements in the selection tree; the same size tree should be used for replacement selection in both (iv) and (vii). Notice that the bin contents are not necessarily in sorted order!

Prove that this method will sort, namely that the replacement selection in (vii) will produce only one run. (This technique reduces the number of bins needed in a conventional keysort by distribution, especially if the input is largely in order already.)

20. [25] Modern hardware/software systems provide programmers with a virtual memory: Programs are written as if there were a very large internal memory, able to contain all of the data. This memory is divided into pages, only a few of which are in the actual internal memory at any one time; the others are on disks or drums. Programmers need not concern themselves with such details, since the system takes care of everything; new pages are automatically brought into memory when needed.

It would seem that the advent of virtual memory technology makes external sorting methods obsolete, since the job can simply be done using the techniques developed for internal sorting. Discuss this situation; in what ways might a hand-tailored external sorting method be better than the application of a general-purpose paging technique to an internal sorting method?

21. [22] How many blocks of an L-block file go on disk j when the file is striped on D disks?

22. [M15] If you are merging two files with the Gilbreath principle and you want to store the keys α_j with the α blocks and the keys β_j with the β blocks, in which block should α_j be placed in order to have the information available when it is needed?

23. [20] How much space is needed for input buffers to keep input going continuously when two-way merging is done by (a) superblock striping? (b) the Gilbreath principle?

379 DISKS AND DRUMS 5.4.9

24. [M36] Suppose P runs have been striped on D disks so that block j of run k appears on disk (x_k + j) mod D. A P-way merge will read those blocks in some chronological order such as (19). If groups of D blocks are to be input continuously, we will read at time t the chronologically tth block stored on each disk, as in (21). What is the minimum number of buffer records needed in memory to hold input data that has not yet been merged, regardless of the chronological order? Explain how to choose the offsets x_1, x_2, ..., x_P so that the fewest buffers are needed in the worst case.

25. [26] How many output buffers will guarantee that a P-way merge with randomized striping will never have to pause for lack of a place in internal memory to put newly merged output? Assume that the time to write a block equals the time to read a block.

26. [23] Rework the text's example of randomized striping for a different value of Q. What buffer contents would occur in place of (24)?

27. [HM27] (The cyclic occupancy problem.) Suppose n empty urns have been arranged in a circle and assigned the numbers 0, 1, ..., n - 1. For k = 1, 2, ..., p, we throw m_k balls into urns (x_k + j) mod n for j = 0, 1, ..., m_k - 1, where the integers x_k are chosen at random. Let S_n(m_1, ..., m_p) be the number of balls in urn 0, and let E_n(m_1, ..., m_p) be the expected number of balls in the fullest urn; write m = m_1 + ... + m_p.

  a) Prove that E_n(m_1, ..., m_p) >= E_n(m_1 + m_2, m_3, ..., m_p).
  b) Use the tail inequality, Eq. 1.2.10-(25), to prove that

      E_n(m_1, ..., m_p) <= sum_{t>=1} min(1, n Pr(S_n(m_1, ..., m_p) >= t))
                         <= sum_{t>=1} min(1, n (1 + a_t m_1/n) ... (1 + a_t m_p/n) / (1 + a_t)^t),

  for any nonnegative real numbers a_1, a_2, .... What values of a_1, a_2, ... give the best upper bound?

28. [HM47] Continuing exercise 27, determine the asymptotic behavior of E_n(m_1, ..., m_p).

29. [M30] The purpose of this exercise is to derive an upper bound on the average time needed to input any sequence of blocks in chronological order by the randomized striping procedure, when the blocks represent P runs on D disks. We say that the block being waited for at each time step as the algorithm proceeds (see (20)) is "marked"; thus the total input time is proportional to the number of marked blocks. Marking depends only on the chronological sequence of disk accesses (see (24)).

  a) Prove that if Q + 1 consecutive blocks in chronological order have N_j blocks on disk j, then at most max(N_0, N_1, ..., N_{D-1}) of those blocks are marked.
  b) Strengthen the result of (a) by showing that it holds also for Q + 2 consecutive blocks.
  c) Now use the cyclic occupancy problem of exercise 27 to obtain an upper bound on the average running time in terms of a function r(D, Q + 2), as in Table 2, given any chronological order.

30. [HM30] Prove that the function r(d, m) of exercise 29 satisfies r(d, sd log d) -> 1 for fixed d as s -> infinity.

31. [HM48] Analyze randomized striping to determine its true average behavior, not merely an upper bound, as a function of P, Q, and D. (Even the case Q = 0, which needs an average of Omega(L/sqrt(D)) read cycles, is interesting.)
380 SORTING 5.5

5.5. SUMMARY, HISTORY, AND BIBLIOGRAPHY

Now that we have nearly reached the end of this enormously long chapter, we had better "sort out" the most important facts that we have studied.

An algorithm for sorting is a procedure that rearranges a file of records so that the keys are in ascending order. This orderly arrangement is useful because it brings equal-key records together, it allows efficient processing of several files that are sorted on the same key, it leads to efficient retrieval algorithms, and it makes computer output look less chaotic.

Internal sorting is used when all of the records fit in the computer's high-speed internal memory. We have studied more than two dozen algorithms for internal sorting, in various degrees of detail; and perhaps we would be happier if we didn't know so many different approaches to the problem! It was fun to learn all the techniques, but now we must face the horrible prospect of actually deciding which method ought to be used in a given situation.

It would be nice if only one or two of the sorting methods would dominate all of the others, regardless of the application or the computer being used. But in fact, each method has its own peculiar virtues. For example, the bubble sort (Algorithm 5.2.2B) has no apparent redeeming features, since there is always a better way to do what it does; but even this technique, suitably generalized, turns out to be useful for two-tape sorting (see Section 5.4.8). Thus we find that nearly all of the algorithms deserve to be remembered, since there are some applications in which they turn out to be best.

The following brief survey gives the highlights of the most significant algorithms we have encountered for internal sorting. As usual, N stands for the number of records in the given file.

1. Distribution counting, Algorithm 5.2D, is very useful when the keys have a small range. It is stable (doesn't affect the order of records with equal keys), but requires memory space for counters and for 2N records. A modification that saves N of these record spaces at the cost of stability appears in exercise 5.2-13.

2. Straight insertion, Algorithm 5.2.1S, is the simplest method to program, requires no extra space, and is quite efficient for small N (say N <= 25). It is unbearably slow unless the input is nearly in order.

3. Shellsort, Algorithm 5.2.1D, is also quite easy to program, and uses minimum memory space; and it is reasonably efficient for moderately large N (say N <= 1000).

4. List insertion, Algorithm 5.2.1L, uses the same basic idea as straight insertion, so it is suitable only for small N. Like the other list sorting methods described below, it saves the cost of moving long records by manipulating links; this is particularly advantageous when the records have variable length or are part of other data structures.

5. Address calculation techniques are efficient when the keys have a known (usually uniform) distribution; the principal variants of this approach are multiple list insertion (Program 5.2.1M), and MacLaren's combined radix-insertion method (discussed at the close of Section 5.2.5). The latter can be done with only O(sqrt(N)) cells of additional memory. A two-pass method that learns a nonuniform distribution is discussed in Theorem 5.2.5T.

6. Merge exchange, Algorithm 5.2.2M (Batcher's method), and its cousin the bitonic sort (exercise 5.3.4-10), are useful when a large number of comparisons can be made simultaneously.

7. Quicksort, Algorithm 5.2.2Q (Hoare's method), is probably the most useful general-purpose technique for internal sorting, because it requires very little memory space and its average running time on most computers beats that of its competitors when it is well implemented. It can run very slowly in its worst case, however, so a careful choice of the partitioning elements should be made whenever nonrandom data are likely. Choosing the median of three elements, as suggested in exercise 5.2.2-55, makes the worst-case behavior extremely unlikely and also improves the average running time slightly.

8. Straight selection, Algorithm 5.2.3S, is a simple method especially suitable when special hardware is available to find the smallest element of a list rapidly.

9. Heapsort, Algorithm 5.2.3H, requires minimum memory and is guaranteed to run pretty fast; its average time and its maximum time are both roughly twice the average running time of quicksort.

10. List merging, Algorithm 5.2.4L, is a list sort that, like heapsort, is guaranteed to be rather fast even in its worst case; moreover, it is stable with respect to equal keys.

11. Radix sorting, using Algorithm 5.2.5R, is a list sort especially appropriate for keys that are either rather short or that have an unusual lexicographic collating sequence. The method of distribution counting (point 1 above) can also be used, as an alternative to linking; such a procedure requires 2N record spaces, plus a table of counters, but the simple form of its inner loop makes it especially good for ultra-fast, "number-crunching" computers that have look-ahead control. (Caution: Radix sorting should not be used for small N!)

12. Merge insertion (see Section 5.3.1) is especially suitable for very small values of N; for example, it would be the appropriate method in an application that requires the sorting of numerous five- or six-record groups in a "straight-line-coded" routine.

13. Hybrid methods, combining one or more of the techniques above, are also possible. For example, merge insertion could be used for sorting the short subfiles that arise in quicksort.

14. Finally, an unnamed method appearing in the answer to exercise 5.2.1-3 seems to require the shortest possible sorting program. But its average running time, proportional to N^3, makes it the slowest sorting routine in this book!
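Distribution counting is short enough to illustrate concretely. The following Python sketch (an illustration for this survey, not Knuth's MIX Algorithm 5.2D) shows why the method is stable and why it needs space for a second copy of the records plus a table of counters:

```python
def distribution_counting_sort(records, key, key_range):
    """Stable sort of records whose key(r) lies in range(key_range).

    Mirrors the idea of Algorithm 5.2D: count each key, turn the
    counts into running totals, then distribute the records into a
    second array (hence the extra N record spaces, plus key_range
    counters).
    """
    count = [0] * key_range
    for r in records:                      # pass 1: count each key value
        count[key(r)] += 1
    total = 0
    for k in range(key_range):             # count[k] = first slot for key k
        count[k], total = total, total + count[k]
    output = [None] * len(records)         # the extra N record spaces
    for r in records:                      # pass 2: distribute, left to right,
        output[count[key(r)]] = r          # so equal keys keep their order
        count[key(r)] += 1
    return output

pairs = [(3, 'a'), (1, 'b'), (3, 'c'), (0, 'd'), (1, 'e')]
print(distribution_counting_sort(pairs, key=lambda r: r[0], key_range=4))
# -> [(0, 'd'), (1, 'b'), (1, 'e'), (3, 'a'), (3, 'c')]
```

Note how the equal keys 1 and 3 come out in their original relative order, which is the stability property claimed in point 1.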
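Shellsort is almost as brief. Here is a minimal Python version of the diminishing-increment idea; the 3h+1 increment sequence used below is one common choice, not the only sequence analyzed in Section 5.2.1:

```python
def shellsort(a):
    """Diminishing-increment insertion sort (the idea of Algorithm
    5.2.1D): each pass h-sorts the file, and the final pass with
    h = 1 is plain straight insertion over an almost-ordered file."""
    h = 1
    while h < len(a) // 3:
        h = 3 * h + 1                      # increments 1, 4, 13, 40, ...
    while h >= 1:
        for i in range(h, len(a)):         # insertion sort with stride h
            v, j = a[i], i
            while j >= h and a[j - h] > v:
                a[j] = a[j - h]
                j -= h
            a[j] = v
        h //= 3
    return a

print(shellsort([50, 8, 51, 6, 90, 17, 89, 27, 65, 42]))
# -> [6, 8, 17, 27, 42, 50, 51, 65, 89, 90]
```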
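The median-of-three safeguard recommended for quicksort in point 7 can also be sketched. This illustrative Python version is not Algorithm 5.2.2Q itself, but it shows both the pivot choice and the recurse-on-the-smaller-half trick that keeps the stack depth logarithmic:

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort with median-of-three partitioning (cf.
    exercise 5.2.2-55): choosing the median of the first, middle,
    and last keys makes the quadratic worst case very unlikely on
    nonrandom (e.g., already ordered) input."""
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        # sort a[lo], a[mid], a[hi]; the median lands in a[mid]
        if a[mid] < a[lo]: a[lo], a[mid] = a[mid], a[lo]
        if a[hi] < a[lo]: a[lo], a[hi] = a[hi], a[lo]
        if a[hi] < a[mid]: a[mid], a[hi] = a[hi], a[mid]
        pivot = a[mid]
        i, j = lo, hi
        while i <= j:                      # standard two-pointer partition
            while a[i] < pivot: i += 1
            while a[j] > pivot: j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i, j = i + 1, j - 1
        if j - lo < hi - i:                # recurse on the smaller half
            quicksort(a, lo, j)            # first: stack depth O(log N)
            lo = i
        else:
            quicksort(a, i, hi)
            hi = j

data = [50, 8, 51, 6, 90, 17, 89, 27, 65, 42]
quicksort(data)
print(data)  # -> [6, 8, 17, 27, 42, 50, 51, 65, 89, 90]
```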
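Replacement selection, the workhorse of the external-sorting half of the chapter (and of the keysort scheme in exercise 19 above), also fits in a few lines. The heap-based Python sketch below is an illustration of the idea, not Knuth's algorithm; run with a two-element selection tree on the bin contents of exercise 19, bin 2 followed by bin 1, it reproduces step (vii) as a single run:

```python
import heapq

def replacement_selection(stream, tree_size):
    """Generate runs by replacement selection: keep tree_size records
    in a selection structure (here a heap keyed by (run, record));
    always output the smallest record usable in the current run, and
    hold any arrival smaller than the last output for the next run."""
    it = iter(stream)
    heap = []
    for r in it:                           # initial fill of the tree
        heapq.heappush(heap, (0, r))
        if len(heap) == tree_size:
            break
    runs = []
    while heap:
        run, r = heapq.heappop(heap)
        if len(runs) <= run:               # first record of a new run
            runs.append([])
        runs[run].append(r)
        nxt = next(it, None)
        if nxt is not None:
            # a record smaller than the last output must wait a run
            heapq.heappush(heap, (run + (nxt < r), nxt))
    return runs

# Bin 2 then bin 1 from exercise 19; one run comes out, i.e. sorted:
bins = [8, 6, 17, 27, 65, 42, 50, 51, 90, 89]
print(replacement_selection(bins, 2))
# -> [[6, 8, 17, 27, 42, 50, 51, 65, 89, 90]]
```

Reading bin 1 first instead would split the output into two runs, which is exactly why the order of reading the bins matters in exercise 19.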
Table 1 summarizes the speed and space characteristics of many of these methods, when programmed for MIX. It is important to realize that the figures in this table are only rough indications of the relative sorting times; they apply to one computer only, and the assumptions made about input data are not

382 SORTING 5.5

[Table 1, comparing internal sorting methods using the MIX computer, is not legible in this scan; only scattered characters of its columns survive.]
