Data Structures and Algorithms in Java
Fourth Edition

Michael T. Goodrich
Department of Computer Science
University of California, Irvine

Roberto Tamassia
Department of Computer Science
Brown University

ISBN 0-471-73884-0
John Wiley & Sons, Inc.

ASSOCIATE PUBLISHER Dan Sayre
MARKETING DIRECTOR Frank Lyman
EDITORIAL ASSISTANT Bridget Morrisey
SENIOR PRODUCTION EDITOR Ken Santor
COVER DESIGNER Hope Miller
COVER PHOTO RESEARCHER Lisa Gee
COVER PHOTO Ralph A. Clevenger/Corbis

This book was set by the authors and printed and bound by R.R. Donnelley, Crawfordsville. The cover was printed by Phoenix Color, Inc.

Front Matter

To Karen, Paul, Anna, and Jack
-Michael T. Goodrich

To Isabel
-Roberto Tamassia

Preface to the Fourth Edition

This fourth edition is designed to provide an introduction to data structures and algorithms, including their design, analysis, and implementation. In terms of curricula based on the IEEE/ACM 2001 Computing Curriculum, this book is appropriate for use in the courses CS102 (I/O/B versions), CS103 (I/O/B versions), CS111 (A version), and CS112 (A/I/O/F/H versions). We discuss its use for such courses in more detail later in this preface.

The major changes, with respect to the third edition, are the following:
• Added a new chapter on arrays, linked lists, and recursion
• Added a new chapter on memory management
• Full integration with Java 5.0
• Better integration with the Java Collections Framework
• Better coverage of iterators
• Increased coverage of array lists, including the replacement of uses of the class java.util.Vector with java.util.ArrayList (see the illustrative fragment below)
• Update of all Java APIs to use generic types
• Simplified list, binary tree, and priority queue ADTs
• Further streamlining of mathematics to the seven most used functions
• Expanded and revised exercises, bringing the total number of reinforcement, creativity, and project exercises to 670. Added exercises include new projects on maintaining a game's high-score list, evaluating postfix and infix expressions, minimax game-tree evaluation, processing stock buy and sell orders, scheduling CPU jobs, n-body simulation, computing DNA-strand edit distance, and creating and solving mazes.
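As a small illustration of the Java 5.0 features mentioned in the list above (generic types, and java.util.ArrayList in place of the older java.util.Vector), here is a fragment that is not taken from the book's code; the class name GenericsDemo is made up for this example:

import java.util.ArrayList;
import java.util.List;

public class GenericsDemo {
    public static void main(String[] args) {
        // The type parameter <String> lets the compiler check element types,
        // so no cast is needed when reading elements back out of the list.
        List<String> names = new ArrayList<String>();
        names.add("stack");
        names.add("queue");
        names.add("tree");

        // The enhanced for-loop is also a Java 5.0 feature.
        for (String name : names) {
            System.out.println(name.toUpperCase());
        }
    }
}

With the type parameter in place, the compiler rejects attempts to add non-String elements, which is the main practical benefit of the generics update described above.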
This book is related to the following books:
• M.T. Goodrich, R. Tamassia, and D.M. Mount, Data Structures and Algorithms in C++, John Wiley & Sons, Inc., 2004. This book has a similar overall structure to the present book, but uses C++ as the underlying language (with some modest, but necessary pedagogical differences required by this approach). Thus, it could make for a handy companion book in a curriculum that allows for either a Java or C++ track in the introductory courses.
• M.T. Goodrich and R. Tamassia, Algorithm Design: Foundations, Analysis, and Internet Examples, John Wiley & Sons, Inc., 2002. This is a textbook for a more advanced algorithms and data structures course, such as CS210 (T/W/C/S versions) in the IEEE/ACM 2001 curriculum.

Use as a Textbook

The design and analysis of efficient data structures has long been recognized as a vital subject in computing, for the study of data structures is part of the core of every collegiate computer science and computer engineering major program we are familiar with. Typically, the introductory courses are presented as a two- or three-course sequence. Elementary data structures are often briefly introduced in the first programming or introduction to computer science course, and this is followed by a more in-depth introduction to data structures in the following course(s). Furthermore, this course sequence is typically followed at a later point in the curriculum by a more in-depth study of data structures and algorithms. We feel that the central role of data structure design and analysis in the curriculum is fully justified, given the importance of efficient data structures in most software systems, including the Web, operating systems, databases, compilers, and scientific simulation systems.

With the emergence of the object-oriented paradigm as the framework of choice for building robust and reusable software, we have tried to take a consistent object-oriented viewpoint throughout this text. One of the main ideas of the object-oriented approach is that data should be presented as being encapsulated with the methods that access and modify them. That is, rather than simply viewing data as a collection of bytes and addresses, we think of data as instances of an abstract data type (ADT) that includes a repertory of methods for performing operations on the data. Likewise, object-oriented solutions are often organized utilizing common design patterns, which facilitate software reuse and robustness. Thus, we present each data structure using ADTs and their respective implementations, and we introduce important design patterns as a means to organize those implementations into classes, methods, and objects.

For each ADT presented in this book, we provide an associated Java interface. Also, concrete data structures realizing the ADTs are provided as Java classes implementing the interfaces above. We also give Java implementations of fundamental algorithms (such as sorting and graph traversals) and of sample applications of data structures (such as HTML tag matching and a photo album). Due to space limitations, we sometimes show only code fragments in the book and make additional source code available on the companion Web site, http://java.datastructures.net. The Java code implementing fundamental data structures in this book is organized in a single Java package, net.datastructures. This package forms a coherent library of data structures and algorithms in Java specifically designed for educational purposes in a way that is complementary with the Java Collections Framework.
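To make the interface-plus-implementation style concrete, here is a minimal sketch. The names SimpleStack and ArraySimpleStack are invented for this illustration and are not the actual net.datastructures interfaces, which are richer and use the book's own exception types:

// A minimal sketch of the ADT style discussed above: an interface describes
// the operations, and a concrete class implements it.
public interface SimpleStack<E> {
    int size();
    boolean isEmpty();
    E top();            // returns, but does not remove, the top element (null if empty)
    void push(E e);
    E pop();            // removes and returns the top element (null if empty)
}

class ArraySimpleStack<E> implements SimpleStack<E> {
    private E[] data;
    private int t = -1;                    // index of the top element

    @SuppressWarnings("unchecked")
    public ArraySimpleStack(int capacity) {
        data = (E[]) new Object[capacity]; // generic array creation via a cast
    }
    public int size()        { return t + 1; }
    public boolean isEmpty() { return t < 0; }
    public E top()           { return isEmpty() ? null : data[t]; }
    public void push(E e) {
        if (size() == data.length)
            throw new IllegalStateException("stack is full");
        data[++t] = e;
    }
    public E pop() {
        if (isEmpty()) return null;
        E answer = data[t];
        data[t--] = null;                  // help garbage collection
        return answer;
    }
}

In the book's own library, each ADT interface is paired in the same way with one or more concrete classes, so alternative implementations (for example, array-based versus linked) can be swapped behind the same interface.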
Web Added-Value Education

This book is accompanied by an extensive Web site: http://java.datastructures.net. Students are encouraged to use this site along with the book, to help with exercises and increase understanding of the subject. Instructors are likewise welcome to use the site to help plan, organize, and present their course materials.

For the Student

For all readers, and specifically for students, we include:
• All the Java source code presented in this book
• The student version of the net.datastructures package
• Slide handouts (four-per-page) in PDF format
• A database of hints to all exercises, indexed by problem number
• Java animations and interactive applets for data structures and algorithms
• Hyperlinks to other data structures and algorithms resources
We feel that the Java animations and interactive applets should be of particular interest, since they allow readers to interactively "play" with different data structures, which leads to better understanding of the different ADTs. In addition, the hints should be of considerable use to anyone needing a little help getting started on certain exercises.

For the Instructor

For instructors using this book, we include the following additional teaching aids:
• Solutions to over two hundred of the book's exercises
• A keyword-searchable database of additional exercises
• The complete net.datastructures package
• Additional Java source code
• Slides in PowerPoint and PDF (one-per-page) format
• Self-contained special-topic supplements, including discussions on convex hulls, range trees, and orthogonal segment intersection
The slides are fully editable, so as to allow an instructor using this book full freedom in customizing his or her presentations.

A Resource for Teaching Data Structures and Algorithms

This book contains many Java-code and pseudo-code fragments, and over 670 exercises, which are divided into roughly 40% reinforcement exercises, 40% creativity exercises, and 20% programming projects. This book can be used for courses CS102 (I/O/B versions), CS103 (I/O/B versions), CS111 (A version), and/or CS112 (A/I/O/F/H versions) in the IEEE/ACM 2001 Computing Curriculum, with instructional units as outlined in Table 0.1.

Table 0.1: Material for Units in the IEEE/ACM 2001 Computing Curriculum (instructional unit: relevant material)
PL1 Overview of Programming Languages: Chapters 1 & 2
PL2 Virtual Machines: Sections 14.1.1, 14.1.2, & 14.1.3
PL3 Introduction to Language Translation: Section 1.9
PL4 Declarations and Types: Sections 1.1, 2.4, & 2.5
PL5 Abstraction Mechanisms: Sections 2.4, 5.1, 5.2, 5.3, 6.1.1, 6.2, 6.4, 6.3, 7.1, 7.3.1, 8.1, 9.1, 9.3, 11.6, & 13.1
PL6 Object-Oriented Programming: Chapters 1 & 2 and Sections 6.2.2, 6.3, 7.3.7, 8.1.2, & 13.3.1
PF1 Fundamental Programming Constructs: Chapters 1 & 2
PF2 Algorithms and Problem-Solving: Sections 1.9 & 4.2
PF3 Fundamental Data Structures: Sections 3.1, 5.1–5.2, 5.3, 6.1–6.4, 7.1, 7.3, 8.1, 8.3, 9.1–9.4, 10.1, & 13.1
PF4 Recursion: Section 3.5
SE1 Software Design: Chapter 2 and Sections 6.2.2, 6.3, 7.3.7, 8.1.2, & 13.3.1
SE2 Using APIs: Sections 2.4, 5.1, 5.2, 5.3, 6.1.1, 6.2, 6.4, 6.3, 7.1, 7.3.1, 8.1, 9.1, 9.3, 11.6, & 13.1
AL1 Basic Algorithmic Analysis: Chapter 4
AL2 Algorithmic Strategies: Sections 11.1.1, 11.7.1, 12.2.1, 12.4.2, & 12.5.2
AL3 Fundamental Computing Algorithms: Sections 8.1.4, 8.2.3, 8.3.5, 9.2, & 9.3.3, and Chapters 11, 12, & 13
DS1 Functions, Relations, and Sets: Sections 4.1, 8.1, & 11.6
DS3 Proof Techniques: Sections 4.3, 6.1.4, 7.3.3, 8.3, 10.2, 10.3, 10.4, 10.5, 11.2.1, 11.3, 11.6.2, 13.1, 13.3.1, 13.4, & 13.5
DS4 Basics of Counting: Sections 2.2.3 & 11.1.5
DS5 Graphs and Trees: Chapters 7, 8, 10, & 13
DS6 Discrete Probability: Appendix A and Sections 9.2.2, 9.4.2, 11.2.1, & 11.7

Chapter Listing

The chapters for this course are organized to provide a pedagogical path that starts with the basics of Java programming and object-oriented design, moves to concrete structures like arrays and linked lists, adds foundational techniques like recursion and algorithm analysis, and then presents the fundamental data structures and algorithms, concluding with a discussion of memory management (that is, the architectural underpinnings of data structures). Specifically, the chapters for this book are organized as follows:
1. Java Programming Basics
2. Object-Oriented Design
3. Arrays, Linked Lists, and Recursion
4. Analysis Tools
5. Stacks and Queues
6. Lists and Iterators
7. Trees
8. Priority Queues
9. Maps and Dictionaries
10. Search Trees
11. Sorting, Sets, and Selection
12. Text Processing
13. Graphs
14. Memory
A. Useful Mathematical Facts

Prerequisites

We have written this book assuming that the reader comes to it with certain knowledge. That is, we assume that the reader is at least vaguely familiar with a high-level programming language, such as C, C++, or Java, and that he or she understands the main constructs from such a high-level language, including:
• Variables and expressions
• Methods (also known as functions or procedures)
• Decision structures (such as if-statements and switch-statements)
• Iteration structures (for-loops and while-loops), as illustrated in the short fragment below
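For readers who want a quick reminder, the following tiny fragment (written for this summary, not drawn from the book) uses each of the constructs just listed: a variable, a method, an if-statement, and a for-loop:

public class Prerequisites {
    // Returns the number of even values in the given array.
    // Illustrates a variable, a decision structure, and an iteration structure.
    public static int countEven(int[] values) {
        int count = 0;                               // variable and expression
        for (int i = 0; i < values.length; i++) {    // for-loop
            if (values[i] % 2 == 0) {                // if-statement
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[] data = {3, 8, 5, 12, 7, 6};
        System.out.println(countEven(data));         // prints 3
    }
}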
For readers who are familiar with these concepts, but not with how they are expressed in Java, we provide a primer on the Java language in Chapter 1. Still, this book is primarily a data structures book, not a Java book; hence, it does not provide a comprehensive treatment of Java. Nevertheless, we do not assume that the reader is necessarily familiar with object-oriented design or with linked structures, such as linked lists, for these topics are covered in the core chapters of this book.

In terms of mathematical background, we assume the reader is somewhat familiar with topics from high-school mathematics. Even so, in Chapter 4, we discuss the seven most important functions for algorithm analysis. In fact, sections that use something other than one of these seven functions are considered optional, and are indicated with a star (⋆). We give a summary of other useful mathematical facts, including elementary probability, in Appendix A.

About the Authors

Professors Goodrich and Tamassia are well-recognized researchers in algorithms and data structures, having published many papers in this field, with applications to Internet computing, information visualization, computer security, and geometric computing. They have served as principal investigators in several joint projects sponsored by the National Science Foundation, the Army Research Office, and the Defense Advanced Research Projects Agency. They are also active in educational technology research, with special emphasis on algorithm visualization systems.

Michael Goodrich received his Ph.D. in Computer Science from Purdue University in 1987. He is currently a professor in the Department of Computer Science at the University of California, Irvine. Previously, he was a professor at Johns Hopkins University. He is an editor for the International Journal of Computational Geometry & Applications and the Journal of Graph Algorithms and Applications.

Roberto Tamassia received his Ph.D. in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign in 1988. He is currently a professor in the Department of Computer Science at Brown University. He is editor-in-chief for the Journal of Graph Algorithms and Applications and an editor for Computational Geometry: Theory and Applications. He previously served on the editorial board of IEEE Transactions on Computers.

In addition to their research accomplishments, the authors also have extensive experience in the classroom. For example, Dr. Goodrich has taught data structures and algorithms courses, including Data Structures as a freshman-sophomore level course and Introduction to Algorithms as an upper-level course. He has earned several teaching awards in this capacity. His teaching style is to involve the students in lively interactive classroom sessions that bring out the intuition and insights behind data structuring and algorithmic techniques. Dr. Tamassia has taught Data Structures and Algorithms as an introductory freshman-level course since 1988. One thing that has set his teaching style apart is his effective use of interactive hypermedia presentations integrated with the Web. The instructional Web sites, datastructures.net and algorithmdesign.net, supported by Drs. Goodrich and Tamassia, are used as reference material by students, teachers, and professionals worldwide.

Acknowledgments

There are a number of individuals who have made contributions to this book. We are grateful to all our research collaborators and teaching assistants, who provided feedback on early drafts of chapters and have helped us in developing exercises, programming assignments, and algorithm animation systems.
In particular, we would like to thank Jeff Achter, Vesselin Arnaudov, James Baker, Ryan Baker, Benjamin Boer, Mike Boilen, Devin Borland, Lubomir Bourdev, Stina Bridgeman, Bryan Cantrill, Yi-Jen Chiang, Robert Cohen, David Ellis, David Emory, Jody Fanto, Ben Finkel, Ashim Garg, Natasha Gelfand, Mark Handy, Michael Horn, Benoît Hudson, Jovanna Ignatowicz, Seth Padowitz, James Piechota, Dan Polivy, Seth Proctor, Susannah Raub, Haru Sakai, Andy Schwerin, Michael Shapiro, Mike Shim, Michael Shin, Galina Shubina, Christian Straub, Ye …, for their contributions to the development of the Java code examples in this book and to the initial design, implementation, and testing of the net.datastructures library of data structures and algorithms in Java.

P-14.3: Implement an external-memory sorting algorithm and compare it experimentally to any of the internal-memory sorting algorithms described in this book.

Chapter Notes

Knuth [60] has very nice discussions about external-memory sorting and searching, and Ullman [93] discusses external memory structures for database systems. The reader interested in the study of the architecture of hierarchical memory systems is referred to the book chapter by Burger et al. [18] or the book by Hennessy and Patterson [48]. The handbook by Gonnet and Baeza-Yates [41] compares the performance of a number of different sorting algorithms, many of which are external-memory algorithms. B-trees were invented by Bayer and McCreight [11], and Comer [24] provides a very nice overview of this data structure. The books by Mehlhorn [74] and Samet [84] also have nice discussions about B-trees and their variants. Aggarwal and Vitter [2] study the I/O complexity of sorting and related problems, establishing upper and lower bounds, including the lower bound for sorting given in this chapter. Goodrich et al. [44] study the I/O complexity of several computational geometry problems. The reader interested in further study of I/O-efficient algorithms is encouraged to examine the survey paper of Vitter [95].

Appendix A: Useful Mathematical Facts

In this appendix we give several useful mathematical facts. We begin with some combinatorial definitions and facts.

Logarithms and Exponents

The logarithm function is defined as log_b a = c if a = b^c. The following identities hold for logarithms and exponents:
log_b(ac) = log_b a + log_b c
log_b(a/c) = log_b a − log_b c
log_b(a^c) = c log_b a
log_b a = (log_c a)/(log_c b)
b^(log_c a) = a^(log_c b)
(b^a)^c = b^(ac)
b^a b^c = b^(a+c)
b^a / b^c = b^(a−c)

In addition, we have the following:

Proposition A.1: If a > 0, b > 0, and c > a + b, then log a + log b ≤ 2 log c − 2.

Justification: It is enough to show that ab < c²/4. We can write
ab = ((a² + 2ab + b²) − (a² − 2ab + b²))/4 = ((a + b)² − (a − b)²)/4 ≤ (a + b)²/4 < c²/4.

The natural logarithm function ln x = log_e x, where e = 2.71828…, is the value of the following progression:
e = 1 + 1/1! + 1/2! + 1/3! + ···
In addition,
e^x = 1 + x/1! + x²/2! + x³/3! + ···
ln(1 + x) = x − x²/2 + x³/3 − x⁴/4 + ···
There are a number of useful inequalities relating to these functions (which derive from these definitions).

Proposition A.2: If x > −1, then x/(1 + x) ≤ ln(1 + x) ≤ x.

Proposition A.3: For 0 ≤ x < 1, 1 + x ≤ e^x ≤ 1/(1 − x).

Proposition A.4: For any two positive real numbers x and n, (1 + x/n)^n ≤ e^x ≤ (1 + x/n)^(n + x/2).

Integer Functions and Relations

The "floor" and "ceiling" functions are defined respectively as follows:
⌊x⌋ = the largest integer less than or equal to x
⌈x⌉ = the smallest integer greater than or equal to x
The modulo operator is defined for integers a ≥ 0 and b > 0 as
a mod b = a − ⌊a/b⌋ b.
The factorial function is defined as
n! = 1·2·3 ····· (n − 1)·n.
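To tie these definitions to Java, the following small fragment (illustrative only, not part of the book's net.datastructures code; the class name MathFacts is made up) evaluates a base-b logarithm through the change-of-base identity above and uses the standard java.lang.Math methods for floor and ceiling:

public class MathFacts {
    // log_b(a) computed with the change-of-base identity log_b a = ln(a) / ln(b).
    public static double log(double a, double b) {
        return Math.log(a) / Math.log(b);
    }

    public static void main(String[] args) {
        // log_2(1024) = 10, by the identity above.
        System.out.println(log(1024, 2));        // prints 10.0 (up to floating-point rounding)

        // Floor, ceiling, and modulo for a = 7, b = 2, matching the definitions above.
        System.out.println(Math.floor(7.0 / 2)); // 3.0, i.e., the floor of 3.5
        System.out.println(Math.ceil(7.0 / 2));  // 4.0, i.e., the ceiling of 3.5
        System.out.println(7 - (int) Math.floor(7.0 / 2) * 2); // 1, i.e., 7 mod 2
    }
}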
The binomial coefficient is
C(n, k) = n! / (k! (n − k)!),
which is equal to the number of different combinations one can define by choosing k different items from a collection of n items (where the order does not matter). The name "binomial coefficient" derives from the binomial expansion:
(a + b)^n = Σ_{k=0}^{n} C(n, k) a^k b^(n−k).
We also have the following relationships.

Proposition A.5: If 0 ≤ k ≤ n, then (n/k)^k ≤ C(n, k) ≤ n^k / k!.

Proposition A.6 (Stirling's Approximation): n! = √(2πn) (n/e)^n (1 + 1/(12n) + ε(n)), where ε(n) is O(1/n²).

The Fibonacci progression is a numeric progression such that F_0 = 0, F_1 = 1, and F_n = F_(n−1) + F_(n−2) for n ≥ 2.

Proposition A.7: If F_n is defined by the Fibonacci progression, then F_n is Θ(g^n), where g = (1 + √5)/2 is the so-called golden ratio.

Summations

There are a number of useful facts about summations.

Proposition A.8 (Factoring summations):
Σ_{i=1}^{n} a f(i) = a Σ_{i=1}^{n} f(i),
provided a does not depend upon i.

Proposition A.9 (Reversing the order):
Σ_{i=1}^{n} Σ_{j=1}^{m} f(i, j) = Σ_{j=1}^{m} Σ_{i=1}^{n} f(i, j).

One special form of summation is a telescoping sum:
Σ_{i=1}^{n} (f(i) − f(i − 1)) = f(n) − f(0),
which arises often in the amortized analysis of a data structure or algorithm.

The following are some other facts about summations that arise often in the analysis of data structures and algorithms.

Proposition A.10: Σ_{i=1}^{n} i = n(n + 1)/2.

Proposition A.11: Σ_{i=1}^{n} i² = n(n + 1)(2n + 1)/6.

Proposition A.12: If k ≥ 1 is an integer constant, then Σ_{i=1}^{n} i^k is Θ(n^(k+1)).

Another common summation is the geometric sum, Σ_{i=0}^{n} a^i, for any fixed real number 0 < a ≠ 1.

Proposition A.13: Σ_{i=0}^{n} a^i = (a^(n+1) − 1)/(a − 1), for any real number 0 < a ≠ 1.

Proposition A.14: Σ_{i=0}^{∞} a^i = 1/(1 − a), for any real number 0 < a < 1.

There is also a combination of the two common forms, called the linear exponential summation, which has the following expansion:

Proposition A.15: For 0 < a ≠ 1, and n ≥ 2,
Σ_{i=1}^{n} i a^i = (a − (n + 1) a^(n+1) + n a^(n+2)) / (1 − a)².

The nth Harmonic number H_n is defined as
H_n = Σ_{i=1}^{n} 1/i.

Proposition A.16: If H_n is the nth Harmonic number, then H_n is ln n + Θ(1).
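As an added worked example (not one of the book's): applying Proposition A.13 with a = 2 shows why a growable array that doubles its capacity up to a final size of n = 2^k performs only O(n) total copying work, and the telescoping identity above is the pattern such amortized arguments reduce to:
Σ_{i=0}^{k} 2^i = (2^(k+1) − 1)/(2 − 1) = 2^(k+1) − 1 < 2n,
Σ_{i=1}^{n} (f(i) − f(i − 1)) = f(n) − f(0),
where f(i) can be taken to be, for instance, the size of a structure after the i-th operation.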
Basic Probability

We review some basic facts from probability theory. The most basic is that any statement about a probability is defined upon a sample space S, which is defined as the set of all possible outcomes from some experiment. We leave the terms "outcomes" and "experiment" undefined in any formal sense.

Example A.17: Consider an experiment that consists of the outcome from flipping a coin five times. This sample space has 2^5 different outcomes, one for each different ordering of possible flips that can occur.

Sample spaces can also be infinite, as the following example illustrates.

Example A.18: Consider an experiment that consists of flipping a coin until it comes up heads. This sample space is infinite, with each outcome being a sequence of i tails followed by a single flip that comes up heads, for i = 1, 2, 3, …

A probability space is a sample space S together with a probability function Pr that maps subsets of S to real numbers in the interval [0,1]. It captures mathematically the notion of the probability of certain "events" occurring. Formally, each subset A of S is called an event, and the probability function Pr is assumed to possess the following basic properties with respect to events defined from S:
Pr(∅) = 0
Pr(S) = 1
0 ≤ Pr(A) ≤ 1, for any A ⊆ S
If A, B ⊆ S and A ∩ B = ∅, then Pr(A ∪ B) = Pr(A) + Pr(B).

Two events A and B are independent if Pr(A ∩ B) = Pr(A)·Pr(B). A collection of events {A1, A2, …, An} is mutually independent if
Pr(Ai1 ∩ Ai2 ∩ … ∩ Aik) = Pr(Ai1) Pr(Ai2) ··· Pr(Aik)
for any subset {Ai1, Ai2, …, Aik}. The conditional probability that an event A occurs, given an event B, is denoted as Pr(A|B), and is defined as the ratio Pr(A ∩ B)/Pr(B), assuming that Pr(B) > 0.

An elegant way for dealing with events is in terms of random variables. Intuitively, random variables are variables whose values depend upon the outcome of some experiment. Formally, a random variable is a function X that maps outcomes from some sample space S to real numbers. An indicator random variable is a random variable that maps outcomes to the set {0,1}. Often in data structure and algorithm analysis we use a random variable X to characterize the running time of a randomized algorithm. In this case, the sample space S is defined by all possible outcomes of the random sources used in the algorithm. We are most interested in the typical, average, or "expected" value of such a random variable. The expected value of a random variable X is defined as
E(X) = Σ_x x Pr(X = x),
where the summation is defined over the range of X (which in this case is assumed to be discrete).

Proposition A.19 (The Linearity of Expectation): Let X and Y be two random variables and let c be a number. Then E(X + Y) = E(X) + E(Y) and E(cX) = cE(X).

Example A.20: Let X be a random variable that assigns the outcome of the roll of two fair dice to the sum of the number of dots showing. Then E(X) = 7.

Justification: To justify this claim, let X1 and X2 be random variables corresponding to the number of dots on each die. Thus, X1 = X2 (i.e., they are two instances of the same function) and E(X) = E(X1 + X2) = E(X1) + E(X2). Each outcome of the roll of a fair die occurs with probability 1/6. Thus
E(Xi) = 1/6 + 2/6 + 3/6 + 4/6 + 5/6 + 6/6 = 7/2,
for i = 1, 2. Therefore, E(X) = 7.

Two random variables X and Y are independent if Pr(X = x | Y = y) = Pr(X = x), for all real numbers x and y.

Proposition A.21: If two random variables X and Y are independent, then E(XY) = E(X)E(Y).

Example A.22: Let X be a random variable that assigns the outcome of a roll of two fair dice to the product of the number of dots showing. Then E(X) = 49/4.

Justification: Let X1 and X2 be random variables denoting the number of dots on each die. The variables X1 and X2 are clearly independent; hence
E(X) = E(X1X2) = E(X1)E(X2) = (7/2)² = 49/4.

The following bound and the corollaries that follow from it are known as Chernoff bounds.

Proposition A.23: Let X be the sum of a finite number of independent 0/1 random variables and let μ > 0 be the expected value of X. Then, for δ > 0,
Pr(X > (1 + δ)μ) < (e^δ / (1 + δ)^(1+δ))^μ.

Useful Mathematical Techniques

To compare the growth rates of different functions, it is sometimes helpful to apply the following rule.

Proposition A.24 (L'Hôpital's Rule): If we have lim_{n→∞} f(n) = +∞ and lim_{n→∞} g(n) = +∞, then lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n), where f′(n) and g′(n) respectively denote the derivatives of f(n) and g(n).

In deriving an upper or lower bound for a summation, it is often useful to split a summation as follows:
Σ_{i=1}^{n} f(i) = Σ_{i=1}^{j} f(i) + Σ_{i=j+1}^{n} f(i).

Another useful technique is to bound a sum by an integral. If f is a nondecreasing function, then, assuming the following terms are defined,
∫_{s−1}^{t} f(x) dx ≤ Σ_{i=s}^{t} f(i) ≤ ∫_{s}^{t+1} f(x) dx.

There is a general form of recurrence relation that arises in the analysis of divide-and-conquer algorithms:
T(n) = aT(n/b) + f(n),
for constants a ≥ 1 and b > 1.

Proposition A.25: Let T(n) be defined as above. Then:
1. If f(n) is O(n^(log_b a − ε)) for some constant ε > 0, then T(n) is Θ(n^(log_b a)).
2. If f(n) is Θ(n^(log_b a) log^k n) for a fixed nonnegative integer k ≥ 0, then T(n) is Θ(n^(log_b a) log^(k+1) n).
3. If f(n) is Ω(n^(log_b a + ε)) for some constant ε > 0, and if a f(n/b) ≤ c f(n) for some constant c < 1, then T(n) is Θ(f(n)).

This proposition is known as the master method for characterizing divide-and-conquer recurrence relations asymptotically.
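As an added worked example of the master method (not taken from the book): the merge-sort recurrence T(n) = 2T(n/2) + n has a = 2 and b = 2, so n^(log_b a) = n, and f(n) = n is Θ(n log^0 n); case 2 with k = 0 gives T(n) is Θ(n log n). By contrast, T(n) = 4T(n/2) + n has n^(log_b a) = n², and f(n) = n is O(n^(2−ε)) with ε = 1, so case 1 gives T(n) is Θ(n²).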
Bibliography

[1] G. M. Adelson-Velskii and Y. M. Landis, An algorithm for the organization of information, Doklady Akademii Nauk SSSR, vol. 146, pp. 263–266, 1962. English translation in Soviet Math. Dokl., 3, 1259–1262.
[2] A. Aggarwal and J. S. Vitter, The input/output complexity of sorting and related problems, Commun. ACM, vol. 31, pp. 1116–1127, 1988.
[3] A. V. Aho, Algorithms for finding patterns in strings, in Handbook of Theoretical Computer Science (J. van Leeuwen, ed.), vol. A: Algorithms and Complexity, pp. 255–300, Amsterdam: Elsevier, 1990.
[4] A. V. Aho, J. E. Hopcroft, and J. D. Ullman, The Design and Analysis of Computer Algorithms. Reading, MA: Addison-Wesley, 1974.
[5] A. V. Aho, J. E. Hopcroft, and J. D. Ullman, Data Structures and Algorithms. Reading, MA: Addison-Wesley, 1983.
[6] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin, Network Flows: Theory, Algorithms, and Applications. Englewood Cliffs, NJ: Prentice Hall, 1993.
[7] K. Arnold and J. Gosling, The Java Programming Language. The Java Series, Reading, Mass.: Addison-Wesley, 1996.
[8] R. Baeza-Yates and B. Ribeiro-Neto, Modern Information Retrieval. Reading, Mass.: Addison-Wesley, 1999.
[9] O. Baruvka, O jistem problemu minimalnim, Praca Moravske Prirodovedecke Spolecnosti, vol. 3, pp. 37–58, 1926 (in Czech).
[10] R. Bayer, Symmetric binary B-trees: Data structure and maintenance, Acta Informatica, vol. 1, no. 4, pp. 290–306, 1972.
[11] R. Bayer and E. McCreight, Organization of large ordered indexes, Acta Inform., vol. 1, pp. 173–189, 1972.
[12] J. L. Bentley, Programming pearls: Writing correct programs, Communications of the ACM, vol. 26, pp. 1040–1045, 1983.
[13] J. L. Bentley, Programming pearls: Thanks, heaps, Communications of the ACM, vol. 28, pp. 245–250, 1985.
[14] G. Booch, Object-Oriented Analysis and Design with Applications. Redwood City, CA: Benjamin/Cummings, 1994.
[15] R. S. Boyer and J. S. Moore, A fast string searching algorithm, Communications of the ACM, vol. 20, no. 10, pp. 762–772, 1977.
[16] G. Brassard, Crusade for a better notation, SIGACT News, vol. 17, no. 1, pp. 60–64, 1985.
[17] T. Budd, An Introduction to Object-Oriented Programming. Reading, Mass.: Addison-Wesley, 1991.
[18] D. Burger, J. R. Goodman, and G. S. Sohi, Memory systems, in The Computer Science and Engineering Handbook (A. B. Tucker, Jr., ed.), ch. 18, pp. 447–461, CRC Press, 1997.
[19] M. Campione and H. Walrath, The Java Tutorial: Programming for the Internet. Reading, Mass.: Addison-Wesley, 1996.
[20] L. Cardelli and P. Wegner, On understanding types, data abstraction and polymorphism, ACM Computing Surveys, vol. 17, no. 4, pp. 471–522, 1985.
[21] S. Carlsson, Average case results on heapsort, BIT, vol. 27, pp. 2–17, 1987.
[22] K. L. Clarkson, Linear programming in O(n 3^(d²)) time, Inform. Process. Lett., vol. 22, pp. 21–24, 1986.
[23] R. Cole, Tight bounds on the complexity of the Boyer-Moore pattern matching algorithm, SIAM Journal on Computing, vol. 23, no. 5, pp. 1075–1091, 1994.
[24] D. Comer, The ubiquitous B-tree, ACM Comput. Surv., vol. 11, pp. 121–137, 1979.
[25] T. H. Cormen, C. E. Leiserson, and R. L. Rivest, Introduction to Algorithms. Cambridge, MA: MIT Press, 1990.
[26] G. Cornell and C. S. Horstmann, Core Java. Mountain View, CA: SunSoft Press, 1996.
[27] M. Crochemore and T. Lecroq, Pattern matching and text compression algorithms, in The Computer Science and Engineering Handbook (A. B. Tucker, Jr., ed.), ch. 8, pp. 162–202, CRC Press, 1997.
[28] S. A. Demurjian, Sr., Software design, in The Computer Science and Engineering Handbook (A. B. Tucker, Jr., ed.), ch. 108, pp. 2323–2351, CRC Press, 1997.
[29] G. Di Battista, P. Eades, R. Tamassia, and I. G. Tollis, Algorithms for drawing graphs: an annotated bibliography, Comput. Geom. Theory Appl., vol. 4, pp. 235–282, 1994.
[30] G. Di Battista, P. Eades, R. Tamassia, and I. G. Tollis, Graph Drawing. Upper Saddle River, NJ: Prentice Hall, 1999.
[31] E. W. Dijkstra, A note on two problems in connexion with graphs, Numerische Mathematik, vol. 1, pp. 269–271, 1959.
[32] J. R. Driscoll, H. N. Gabow, R. Shrairaman, and R. E. Tarjan, Relaxed heaps: An alternative to Fibonacci heaps with applications to parallel computation, Commun. ACM, vol. 31, pp. 1343–1354, 1988.
[33] S. Even, Graph Algorithms. Potomac, Maryland: Computer Science Press, 1979.
[34] D. Flanagan, Java in a Nutshell. O'Reilly, 4th ed., 2002.
[35] R. W. Floyd, Algorithm 97: Shortest path, Communications of the ACM, vol. 5, no. 6, p. 345, 1962.
[36] R. W. Floyd, Algorithm 245: Treesort 3, Communications of the ACM, vol. 7, no. 12, p. 701, 1964.
[37] M. L. Fredman and R. E. Tarjan, Fibonacci heaps and their uses in improved network optimization algorithms, J. ACM, vol. 34, pp. 596–615, 1987.
[38] E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software. Reading, Mass.: Addison-Wesley, 1995.
[39] A. M. Gibbons, Algorithmic Graph Theory. Cambridge, UK: Cambridge University Press, 1985.
[40] A. Goldberg and D. Robson, Smalltalk-80: The Language. Reading, Mass.: Addison-Wesley, 1989.
[41] G. H. Gonnet and R. Baeza-Yates, Handbook of Algorithms and Data Structures in Pascal and C. Reading, Mass.: Addison-Wesley, 1991.
[42] G. H. Gonnet and J. I. Munro, Heaps on heaps, SIAM Journal on Computing, vol. 15, no. 4, pp. 964–971, 1986.
[43] M. T. Goodrich, M. Handy, B. Hudson, and R. Tamassia, Accessing the internal organization of data structures in the JDSL library, in Proc. Workshop on Algorithm Engineering and Experimentation (M. T. Goodrich and C. C. McGeoch, eds.), vol. 1619 of Lecture Notes Comput. Sci., pp. 124–139, Springer-Verlag, 1999.
[44] M. T. Goodrich, J.-J. Tsay, D. E. Vengroff, and J. S. Vitter, External-memory computational geometry, in Proc. 34th Annu. IEEE Sympos. Found. Comput. Sci., pp. 714–723, 1993.
[45] R. L. Graham and P. Hell, On the history of the minimum spanning tree problem, Annals of the History of Computing, vol. 7, no. 1, pp. 43–57, 1985.
[46] L. J. Guibas and R. Sedgewick, A dichromatic framework for balanced trees, in Proc. 19th Annu. IEEE Sympos. Found. Comput. Sci., Lecture Notes Comput. Sci., pp. 8–21, Springer-Verlag, 1978.
[47] Y. Gurevich, What does O(n) mean?, SIGACT News, vol. 17, no. 4, pp. 61–63, 1986.
[48] J. Hennessy and D. Patterson, Computer Architecture: A Quantitative Approach. San Francisco: Morgan Kaufmann, 2nd ed., 1996.
[49] C. A. R. Hoare, Quicksort, The Computer Journal, vol. 5, pp. 10–15, 1962.
[50] J. E. Hopcroft and R. E. Tarjan, Efficient algorithms for graph manipulation, Communications of the ACM, vol. 16, no. 6, pp. 372–378, 1973.
[51] C. S. Horstmann, Computing Concepts in Java. New York: John Wiley and Sons, 1998.
[52] B. Huang and M. Langston, Practical in-place merging, Communications of the ACM, vol. 31, no. 3, pp. 348–352, 1988.
[53] J. JáJá, An Introduction to Parallel Algorithms. Reading, Mass.: Addison-Wesley, 1992.
[54] V. Jarnik, O jistem problemu minimalnim, Praca Moravske Prirodovedecke Spolecnosti, vol. 6, pp. 57–63, 1930 (in Czech).
[55] R. E. Jones, Garbage Collection: Algorithms for Automatic Dynamic Memory Management. John Wiley and Sons, 1996.
[56] D. R. Karger, P. Klein, and R. E. Tarjan, A randomized linear-time algorithm to find minimum spanning trees, Journal of the ACM, vol. 42, pp. 321–328, 1995.
[57] R. M. Karp and V. Ramachandran, Parallel algorithms for shared memory machines, in Handbook of Theoretical Computer Science (J. van Leeuwen, ed.), pp. 869–941, Amsterdam: Elsevier/The MIT Press, 1990.
[58] P. Kirschenhofer and H. Prodinger, The path length of random skip lists, Acta Informatica, vol. 31, pp. 775–792, 1994.
[59] J. Kleinberg and É. Tardos, Algorithm Design. Reading, MA: Addison-Wesley, 2006.
[60] D. E. Knuth, Sorting and Searching, vol. 3 of The Art of Computer Programming. Reading, MA: Addison-Wesley, 1973.
[61] D. E. Knuth, Big omicron and big omega and big theta, in SIGACT News, vol. 8, pp. 18–24, 1976.
[62] D. E. Knuth, Fundamental Algorithms, vol. 1 of The Art of Computer Programming. Reading, MA: Addison-Wesley, 3rd ed., 1997.
[63] D. E. Knuth, Sorting and Searching, vol. 3 of The Art of Computer Programming. Reading, MA: Addison-Wesley, 2nd ed., 1998.
[64] D. E. Knuth, J. H. Morris, Jr., and V. R. Pratt, Fast pattern matching in strings, SIAM Journal on Computing, vol. 6, no. 1, pp. 323–350, 1977.
[65] J. B. Kruskal, Jr., On the shortest spanning subtree of a graph and the traveling salesman problem, Proc. Amer. Math. Soc., vol. 7, pp. 48–50, 1956.
[66] N. G. Leveson and C. S. Turner, An investigation of the Therac-25 accidents, IEEE Computer, vol. 26, no. 7, pp. 18–41, 1993.
[67] R. Levisse, Some lessons drawn from the history of the binary search algorithm, The Computer Journal, vol. 26, pp. 154–163, 1983.
[68] A. Levitin, Do we teach the right algorithm design techniques?, in 30th ACM SIGCSE Symp. on Computer Science Education, pp. 179–183, 1999.
[69] B. Liskov and J. Guttag, Abstraction and Specification in Program Development. Cambridge, Mass./New York: The MIT Press/McGraw-Hill, 1986.
[70] E. M. McCreight, A space-economical suffix tree construction algorithm, Journal of Algorithms, vol. 23, no. 2, pp. 262–272, 1976.
[71] C. J. H. McDiarmid and B. A. Reed, Building heaps fast, Journal of Algorithms, vol. 10, no. 3, pp. 352–365, 1989.
[72] N. Megiddo, Linear-time algorithms for linear programming in R³ and related problems, SIAM J. Comput., vol. 12, pp. 759–776, 1983.
[73] N. Megiddo, Linear programming in linear time when the dimension is fixed, J. ACM, vol. 31, pp. 114–127, 1984.
[74] K. Mehlhorn, Data Structures and Algorithms 1: Sorting and Searching, vol. 1 of EATCS Monographs on Theoretical Computer Science. Heidelberg, Germany: Springer-Verlag, 1984.
[75] K. Mehlhorn, Data Structures and Algorithms 2: Graph Algorithms and NP-Completeness, vol. 2 of EATCS Monographs on Theoretical Computer Science. Heidelberg, Germany: Springer-Verlag, 1984.
[76] K. Mehlhorn and A. Tsakalidis, Data structures, in Handbook of Theoretical Computer Science (J. van Leeuwen, ed.), vol. A: Algorithms and Complexity, pp. 301–341, Amsterdam: Elsevier, 1990.
[77] M. H. Morgan, Vitruvius: The Ten Books on Architecture. New York: Dover Publications, Inc., 1960.
[78] D. R. Morrison, PATRICIA: practical algorithm to retrieve information coded in alphanumeric, Journal of the ACM, vol. 15, no. 4, pp. 514–534, 1968.
[79] R. Motwani and P. Raghavan, Randomized Algorithms. New York, NY: Cambridge University Press, 1995.
[80] T. Papadakis, J. I. Munro, and P. V. Poblete, Average search and update costs in skip lists, BIT, vol. 32, pp. 316–332, 1992.
[81] P. V. Poblete, J. I. Munro, and T. Papadakis, The binomial transform and its application to the analysis of skip lists, in Proceedings of the European Symposium on Algorithms (ESA), pp. 554–569, 1995.
[82] R. C. Prim, Shortest connection networks and some generalizations, Bell Syst. Tech. J., vol. 36, pp. 1389–1401, 1957.
[83] W. Pugh, Skip lists: a probabilistic alternative to balanced trees, Commun. ACM, vol. 33, no. 6, pp. 668–676, 1990.
[84] H. Samet, The Design and Analysis of Spatial Data Structures. Reading, MA: Addison-Wesley, 1990.
[85] R. Schaffer and R. Sedgewick, The analysis of heapsort, Journal of Algorithms, vol. 15, no. 1, pp. 76–100, 1993.
[86] D. D. Sleator and R. E. Tarjan, Self-adjusting binary search trees, J. ACM, vol. 32, no. 3, pp. 652–686, 1985.
[87] G. A. Stephen, String Searching Algorithms. World Scientific Press, 1994.
[88] R. Tamassia, Graph drawing, in Handbook of Discrete and Computational Geometry (J. E. Goodman and J. O'Rourke, eds.), ch. 44, pp. 815–832, Boca Raton, FL: CRC Press LLC, 1997.
[89] R. Tarjan and U. Vishkin, An efficient parallel biconnectivity algorithm, SIAM J. Comput., vol. 14, pp. 862–874, 1985.
[90] R. E. Tarjan, Depth first search and linear graph algorithms, SIAM Journal on Computing, vol. 1, no. 2, pp. 146–160, 1972.
[91] R. E. Tarjan, Data Structures and Network Algorithms, vol. 44 of CBMS-NSF Regional Conference Series in Applied Mathematics. Philadelphia, PA: Society for Industrial and Applied Mathematics, 1983.
[92] A. B. Tucker, Jr., The Computer Science and Engineering Handbook. CRC Press, 1997.
[93] J. D. Ullman, Principles of Database Systems. Potomac, MD: Computer Science Press, 1983.
[94] J. van Leeuwen, Graph algorithms, in Handbook of Theoretical Computer Science (J. van Leeuwen, ed.), vol. A: Algorithms and Complexity, pp. 525–632, Amsterdam: Elsevier, 1990.
[95] J. S. Vitter, Efficient memory access in large-scale computation, in Proc. 8th Sympos. Theoret. Aspects Comput. Sci., Lecture Notes Comput. Sci., Springer-Verlag, 1991.
[96] J. S. Vitter and W. C. Chen, Design and Analysis of Coalesced Hashing. New York: Oxford University Press, 1987.
[97] J. S. Vitter and P. Flajolet, Average-case analysis of algorithms and data structures, in Algorithms and Complexity (J. van Leeuwen, ed.), vol. A of Handbook of Theoretical Computer Science, pp. 431–524, Amsterdam: Elsevier, 1990.
[98] S. Warshall, A theorem on boolean matrices, Journal of the ACM, vol. 9, no. 1, pp. 11–12, 1962.
[99] J. W. J. Williams, Algorithm 232: Heapsort, Communications of the ACM, vol. 7, no. 6, pp. 347–348, 1964.
[100] D. Wood, Data Structures, Algorithms, and Performance. Reading, Mass.: Addison-Wesley, 1993.