Lecture Notes in Computer Science
Edited by G. Goos, J. Hartmanis, and J. van Leeuwen

1851

Magnús M. Halldórsson (Ed.)

Algorithm Theory – SWAT 2000
7th Scandinavian Workshop on Algorithm Theory
Bergen, Norway, July 5-7, 2000
Proceedings

Springer: Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Singapore Tokyo

Series Editors: Gerhard Goos, Karlsruhe University, Germany; Juris Hartmanis, Cornell University, NY, USA; Jan van Leeuwen, Utrecht University, The Netherlands

Volume Editor: Magnús M. Halldórsson, University of Iceland and University of Bergen, Taeknigardur, 107 Reykjavik, Iceland. E-mail: mmh@hi.is

Cataloging-in-Publication Data applied for. Die Deutsche Bibliothek - CIP-Einheitsaufnahme: Algorithm theory: proceedings / SWAT 2000, 7th Scandinavian Workshop on Algorithm Theory, Bergen, Norway, July 5-7, 2000. Magnús M. Halldórsson (ed.). Berlin; Heidelberg; New York; Barcelona; Hong Kong; London; Milan; Paris; Singapore; Tokyo: Springer, 2000. (Lecture Notes in Computer Science; Vol. 1851)

CR Subject Classification (1998): F.2, E.1, G.2, I.3.5, C.2
ISSN 0302-9743
ISBN 3-540-67690-2 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. Springer-Verlag is a company in the BertelsmannSpringer publishing group.

© Springer-Verlag Berlin Heidelberg 2000. Printed in Germany.

Typesetting: Camera-ready by author, data conversion by Steingräber Satztechnik GmbH,
Heidelberg. Printed on acid-free paper. SPIN: 10722125 06/3142 543210

Preface

The papers in this volume were presented at SWAT 2000, the Seventh Scandinavian Workshop on Algorithm Theory. The workshop, which is really a conference, has been held biennially since 1988, rotating between the five Nordic countries (Sweden, Norway, Finland, Denmark, and Iceland). It also has a loose association with the WADS (Workshop on Algorithms and Data Structures) conference that is held in odd-numbered years. SWAT is intended as a forum for researchers in the area of design and analysis of algorithms. The SWAT conferences are coordinated by the SWAT steering committee, which consists of B. Aspvall (Bergen), S. Carlsson (Luleå), H. Hafsteinsson (U. Iceland), R. Karlsson (Lund), A. Lingas (Lund), E. Schmidt (Aarhus), and E. Ukkonen (Helsinki).

The call for papers sought contributions in all areas of algorithms and data structures, including computational geometry, parallel and distributed computing, graph theory, and computational biology. A total of 105 papers were submitted, out of which the program committee selected 43 for presentation. In addition, invited lectures were presented by Uriel Feige (Weizmann), Mikkel Thorup (AT&T Labs-Research), and Esko Ukkonen (Helsinki).

SWAT 2000 was held in Bergen, July 5-7, 2000, and was locally organized by a committee consisting of Pinar Heggernes, Petter Kristiansen, Fredrik Manne, and Jan Arne Telle (chair), all from the Department of Informatics, University of Bergen. We wish to thank all the referees who aided in evaluating the papers. We also thank The Research Council of Norway (NFR) and the City of Bergen for financial support.

July 2000
Magnús M. Halldórsson

Organization

Program Committee

Amotz Bar-Noy, Tel-Aviv Univ.
Luisa Gargano, Univ. of Salerno
Jens Gustedt, LORIA and INRIA Lorraine
Magnús M. Halldórsson, chair, U. Iceland and U. Bergen
Kazuo Iwama, Kyoto Univ.
Klaus Jansen, Univ. of Kiel
Jan Kratochvíl, Charles Univ.
Andrzej Lingas, Lund Univ.
Jaikumar Radhakrishnan, Tata Institute
R. Ravi, Carnegie Mellon Univ.
Jörg-Rüdiger Sack, Carleton Univ.
Baruch Schieber, IBM Research
Sven Skyum, Univ. of Aarhus
Hisao Tamaki, Meiji Univ.
Jan Arne Telle, Univ. of Bergen
Esko Ukkonen, Univ. of Helsinki
Gerhard Woeginger, Technical Univ. Graz

Referees

Pankaj Agarwal, Helmut Alt, Lars Arge, David Avis, Luitpold Babel, Bruno Beauquier, Binay Bhattacharya, Therese Biedl, Andreas Björklund, Claudson Bornstein, Gerth Stølting Brodal, Eranda Çela, Timothy M. Chan, Joseph Cheriyan, Johanne Cohen, Artur Czumaj, Frank Dehne, Ingvar Eidhammer, Aleksei V. Fishkin, Éric Fleury, Fedor Fomin, Gudmund S. Frandsen, Leszek Gąsieniec, Laura Grigori, Joachim Gudmundsson, Bjarni V. Halldórsson, Mikael Hammar, Michael Houle, David Hutchinson, Rahul Jain, Jesper Jansson, Rolf Karlsson, T. Kavita, Jyrki Kivinen, Rolf Klein, Bettina Klinz, Jochen Könemann, Goran Konjevod, S. Krishnan, Christos Levcopoulos, Moshe Lewenstein, Alex Lopez-Ortiz, Ross McConnell, Ewa Malesinska, Monaldo Mastrolilli, Brian M. Mayoh, Jiří Matoušek, Peter Bro Miltersen, Gabriele Neyer, Anna Östlin, Rasmus Pagh, Jakob Pagter, Christian N. S. Pedersen, Marco Pellegrini, Cecilia M. Procopiuc, Wojtek Rytter, Fatma Sibel Salman, Sven Schuierer, Eike Seidel, Pranab Sen, Jop Sibeyn, Michiel Smid, Edyta Szymanska, Ariel Tamir, Santosh Vempala, S. Venkatesh, Alexander Wolff, Anders Yeo

Table of Contents

Invited Talks

Dynamic Graph Algorithms with Applications
Mikkel Thorup (AT&T Labs-Research) and David R. Karger (MIT)

Coping with the NP-Hardness of the Graph Bandwidth Problem ... 10
Uriel Feige (Weizmann Institute)

Toward Complete Genome Data Mining in Computational Biology ... 20
Esko Ukkonen (University of Helsinki)

Data Structures

A New Trade-Off for Deterministic Dictionaries ... 22
Rasmus Pagh (University of Aarhus)

Improved Upper Bounds for Pairing Heaps ... 32
John Iacono (Rutgers University)

Maintaining Center and Median in Dynamic Trees ... 46
Stephen Alstrup, Jacob Holm (IT University of Copenhagen), and Mikkel Thorup (AT&T Labs-Research)

Dynamic Planar
Convex Hull with Optimal Query Time and O(log n · log log n) Update Time ... 57
Gerth Stølting Brodal and Riko Jacob (University of Aarhus)

Dynamic Partitions

A Dynamic Algorithm for Maintaining Graph Partitions ... 71
Lyudmil G. Aleksandrov (Bulgarian Academy of Sciences) and Hristo N. Djidjev (University of Warwick)

Data Structures for Maintaining Set Partitions ... 83
Michael A. Bender, Saurabh Sethia, and Steven Skiena (SUNY Stony Brook)

Graph Algorithms

Fixed Parameter Algorithms for Planar Dominating Set and Related Problems ... 97
Jochen Alber (Universität Tübingen), Hans L. Bodlaender (Utrecht University), Henning Fernau, and Rolf Niedermeier (Universität Tübingen)

Embeddings of k-Connected Graphs of Pathwidth k ... 111
Arvind Gupta (Simon Fraser University), Naomi Nishimura (University of Waterloo), Andrzej Proskurowski (University of Oregon), and Prabhakar Ragde (University of Waterloo)

On Graph Powers for Leaf-Labeled Trees ... 125
Naomi Nishimura, Prabhakar Ragde (University of Waterloo), and Dimitrios M. Thilikos (Universitat Politècnica de Catalunya)

Recognizing Weakly Triangulated Graphs by Edge Separability ... 139
Anne Berry, Jean-Paul Bordat (LIRMM, Montpellier), and Pinar Heggernes (University of Bergen)

Online Algorithms

Caching for Web Searching ... 150
Bala Kalyanasundaram (Georgetown University), John Noga (Technical University of Graz), Kirk Pruhs (University of Pittsburgh), and Gerhard Woeginger (Technical University of Graz)

On-Line Scheduling with Precedence Constraints ... 164
Yossi Azar and Leah Epstein (Tel-Aviv University)

Scheduling Jobs Before Shut-Down ... 175
Vincenzo Liberatore (UMIACS)

Resource Augmentation in Load Balancing ... 189
Yossi Azar, Leah Epstein (Tel-Aviv University), and Rob van Stee (Centre for Mathematics and Computer Science, CWI)

Fair versus Unrestricted Bin Packing ... 200
Yossi Azar (Tel-Aviv University), Joan Boyar, Lene M. Favrholdt, Kim S. Larsen, and Morten N. Nielsen (University of Southern Denmark)

Approximation Algorithms

A d/2 Approximation for
Maximum Weight Independent Set in d-Claw Free Graphs ... 214
Piotr Berman (Pennsylvania State University)

Approximation Algorithms for the Label-Cover_MAX and Red-Blue Set Cover Problems ... 220
David Peleg (Weizmann Institute)

Approximation Algorithms for Maximum Linear Arrangement ... 231
Refael Hassin and Shlomi Rubinstein (Tel-Aviv University)

Approximation Algorithms for Clustering to Minimize the Sum of Diameters ... 237
Srinivas R. Doddi, Madhav V. Marathe (Los Alamos National Laboratory), S. S. Ravi (SUNY Albany), David Scot Taylor (UCLA), and Peter Widmayer (ETH)

Matchings

Robust Matchings and Maximum Clustering ... 251
Refael Hassin and Shlomi Rubinstein (Tel-Aviv University)

The Hospitals/Residents Problem with Ties ... 259
Robert W. Irving, David F. Manlove, and Sandy Scott (University of Glasgow)

Network Design

Incremental Maintenance of the 5-Edge-Connectivity Classes of a Graph ... 272
Yefim Dinitz (Ben-Gurion University) and Ronit Nossenson (Technion)

On the Minimum Augmentation of an ℓ-Connected Graph to a k-Connected Graph ... 286
Toshimasa Ishii and Hiroshi Nagamochi (Toyohashi University of Technology)

Locating Sources to Meet Flow Demands in Undirected Networks ... 300
Kouji Arata (Osaka University), Satoru Iwata (University of Tokyo), Kazuhisa Makino, and Satoru Fujishige (Osaka University)

Improved Greedy Algorithms for Constructing Sparse Geometric Spanners ... 314
Joachim Gudmundsson, Christos Levcopoulos (Lund University), and Giri Narasimhan (University of Memphis)

Computational Geometry

Computing the Penetration Depth of Two Convex Polytopes in 3D ... 328
Pankaj K. Agarwal (Duke University), Leonidas J. Guibas (Stanford University), Sariel Har-Peled (Duke University), Alexander Rabinovitch (Synopsys Inc.), and Micha Sharir (Tel Aviv University)

Compact Voronoi Diagrams for Moving Convex Polygons ... 339
Leonidas J. Guibas (Stanford University), Jack Snoeyink (University of North Carolina), and Li Zhang (Stanford University)

Efficient Expected-Case Algorithms for Planar Point Location ... 353
Sunil Arya, Siu-Wing Cheng (Hong Kong University of Science and Technology), David M. Mount (University of Maryland), and H. Ramesh (Indian Institute of Science)

A New Competitive Strategy for Reaching the Kernel of an Unknown Polygon ... 367
Leonidas Palios (University of Ioannina)

Strings and Algorithm Engineering

The Enhanced Double Digest Problem for DNA Physical Mapping ... 383
Ming-Yang Kao, Jared Samet (Yale University), and Wing-Kin Sung (University of Hong Kong)

Generalization of a Suffix Tree for RNA Structural Pattern Matching ... 393
Tetsuo Shibuya (IBM Tokyo Research Laboratory)

Efficient Computation of All Longest Common Subsequences ... 407
Claus Rick (Universität Bonn)

A Blocked All-Pairs Shortest-Paths Algorithm ... 419
Gayathri Venkataraman, Sartaj Sahni, and Srabani Mukhopadhyaya (University of Florida)

External Memory Algorithms

On External-Memory MST, SSSP, and Multi-way Planar Graph Separation ... 433
Lars Arge (Duke University), Gerth Stølting Brodal (University of Aarhus), and Laura Toma (Duke University)

I/O-Space Trade-Offs ... 448
Lars Arge (Duke University) and Jakob Pagter (University of Aarhus)

Optimization

Optimal Flow Aggregation ... 462
Subhash Suri, Tuomas Sandholm, and Priyank Ramesh Warkhede (Washington University)

On the Complexities of the Optimal Rounding Problems of Sequences and Matrices ... 476
Tetsuo Asano (JAIST), Tomomi Matsui (University of Tokyo), and Takeshi Tokuyama (Tohoku University)

On the Complexity of the Sub-permutation Problem ... 490
Shlomo Ahal (Ben-Gurion University) and Yuri Rabinovich (Haifa University)

Parallel Attribute-Efficient Learning of Monotone Boolean Functions ... 504
Peter Damaschke (FernUniversität Hagen)

Fibonacci Correction Networks (G. Stachowiak, p. 541)

Proof. We prove that its destination column index does not change or decreases by 1. We trace a 1 that is stopped: for 2s+1 layers after the stop it repairs all the faults it can encounter, changing all other active 1's to 0's. Even if the destination column index u of this 1 before it is stopped is equal to the label
of its column, after it is stopped the index is smaller. A single displaced 1 that has such a smaller u is, after layer L_q, in register r_{2i+q+1}, and treated by layer C_q it moves to the next column (having label smaller by 1). It is easy to see that during these computations this displaced 1 goes to the next column into a register on a higher level than the one at which it was stopped. Because of its delay, the next column is, at the moment the considered 1 gets to it, at the same or an earlier phase of its computations as the column in which the stop occurred was at the moment of the stop. This proves that the index u does not decrease by more than one.

Fact. Assume an active 1 is stopped by another active 1 and its value decreases. In such a case its value becomes not smaller than the value of the 1 causing the delay, decreased by 1.

Proof. There is no difference for a displaced 1 between being stopped by a passive fault and by another displaced 1. Just before one active 1 stops another, they must have the same destination column index.

Lemma. Network T(s, N, t) reduces the dirty area of any x-partially-disturbed input (x ≤ t) to at most t^2 + t registers if it has at most t − x passive faults.

Proof. Putting together the facts, one can see that at the end of the computations the 1's that were displaced at the beginning of the second part of the network have values v_1, v_2, ..., v_x. Without loss of generality we can assume that they form a non-increasing sequence. Because of the Facts, the difference v_i − v_{i+1} is not bigger than the number of faults the 1 with value v_{i+1} encountered, increased by one. Since v_1 = t, we have that v_x ≥ 1, and the 1 having value v_x is not active at the end of the second part. This gives a dirty area of size not bigger than t^2 + t, since b is decreased x times during the computations.

Partial Correction Network

Now we define a (t, ct(log N)^{c_s log t})-partial-correction network C(s, N, t), where s is an integer constant and c_s depends on s. This network has depth α(1 + 1/s) log N + c_s(1 + o(1)) log t log log N. We show later in this paper how
from this network we can obtain a t-correction network of almost the same depth. For this section we change the notation ψ_i to ψ(i) (and likewise for ϕ and ϑ). Before we begin to construct the network C(·), we prove a lemma about the network F_N on which the construction is based. As we know, the network F_N successfully corrects one displaced 1. The lemma describes its behavior if the number of displaced 1's is bigger.

Lemma. Assume that at a given moment of the computations not less than t displaced 1's are in the correction area of F_N. In such a case, after the next layer of F_N at least t/2 displaced 1's are in the correction area. Consequently, after s layers at least t/2^s displaced 1's remain in the correction area.

Proof. The reason a displaced 1 can drop off the correction area is that a comparison between this 1 and another displaced 1 is made. In such a case the other 1 remains in the correction area.

The main idea of the construction of C(·) is to have a number of disjoint networks F_N. At the beginning all displaced 1's are moved to a few networks F_N (the others become free from displaced 1's). Every s steps, the displaced 1's that drop out of the correction area in one F_N are moved to another F_N not previously containing any displaced 1's. These moved 1's are in the correction area of their new network F_N, because the new F_N is delayed by s + 1 steps. In the delayed F_N there is at most a fraction 1 − 1/2^s of the displaced 1's from the previous F_N. Thus the total delay cannot grow very much, because in the subsequent networks F_N the maximal numbers of displaced 1's go down exponentially. In fact this idea is similar to that applied in [3]. The changes consist in applying the network F_N and putting the C_q layers not every second step, but less frequently.

The following simple combinatorial fact tells us that in our construction the number of networks F_N is small.

Fact. The number of nondecreasing sequences j_1, j_2, ..., j_k for 0 ≤ k ≤ K and 1 ≤ j_l ≤ J is equal to (J+K choose K) = O(J^K).

Proof. The number is the same as the number of nondecreasing sequences of
integers 0 ≤ j_l ≤ J of length exactly K, which in turn is the same as the number of increasing sequences of integers 1 ≤ j_l ≤ J + K.

Now we define the network C(·) in a more formal way. Let K = ⌈−log_{1−1/2^s} t⌉ and J = ⌈LG(N)⌉ + K. In the network C(s, N, t) the indexes of registers have the form (n_0) ◦ j ◦ (J + τ). In this notation τ ∈ {1, ..., t}, j = (j_1, ..., j_k) is a nondecreasing sequence of integers j_l ∈ {1, ..., J} of length at most K, and n_0 ∈ {1, ..., N_0}, where N_0 = N / ((K+J choose J) t). As in the case of T(·), we can change all displaced 1's into 0's. There are at most two (differing by 1) levels of the highest 0's in a column. We treat the level just below these levels as the border between 0's and 1's for the needs of this algorithm. We exclude from the considerations all displaced 1's which are moved to registers above the border level.

The network C(s, N, t) consists of two parts. The first part applies a selector for the t largest entries [6] to each row of registers. After the first part, the displaced 1's in all rows below the border get to registers R(n_0, τ + J). The indexes of these registers are lexicographically biggest in each row. This first part has depth ∼ c_s log t log log N. The constant c_s grows as ∼ s/(−ln(1 − 1/2^s)).

Let d = LG(N_0). The second part consists of the sequence of layers

L_1, L_2, ..., L_s, C_s, L_{s+1}, ..., L_{2s}, C_{2s}, L_{2s+1}, ..., L_{3s}, C_{3s}, L_{3s+1}, ...

where

L_p = {[(2i + p) ◦ j ◦ (τ + J) : (2i + p + ϕ(d + (s + 1)|j| − p)) ◦ j ◦ (τ + J)]},

C_q = {[(2i + q + 1) ◦ j ◦ (τ + J) : (2i + q + 1 + ϑ(d + (s + 1)|j| − q + 1)) ◦ (j ◦ (q/s − |j|)) ◦ (τ + J)]}
    ∪ {[(2i + q) ◦ j ◦ (τ + J) : (2i + q + ϕ(d + (s + 1)|j| − q)) ◦ (j ◦ (q/s − |j|)) ◦ (τ + J)]}.

Altogether we have (1 + 1/s)(LG(N) + K) + (s + 1)K layers in the second part. In this network the layers L_p again represent layers of F_N inside the columns (similarly to T(·)). The layers C_q represent transfers of disturbed 1's beyond the correction area to columns not containing displaced 1's. From the way the layers C_q are defined, we see that only one transfer
to a given column can occur during the whole time of the computations. Displaced 1's are transferred from column j ◦ (τ + J) to column j′ ◦ (τ + J) with |j′| = |j| + 1 (since j′ = j ◦ (q/s − |j|)). All transferred 1's are, after the transfer, on a level at distance not bigger than ϕ(d + (s + 1)|j′| − q − 1) from the border level. Thus they are in the correction area of their new column. At most a fraction 1 − 1/2^s of the displaced 1's is transferred. Because of this, in general we have the following fact:

Fact. A column with index j ◦ (τ + J) contains not more than t·(1 − 1/2^s)^{|j|} displaced 1's.

The fact above is the reason we do not need columns for |j| > K. Even if they were present, no displaced 1's would get to them. Because all displaced 1's are at the end of the computations on or above the border level, the following fact also holds:

Fact. The second part of the network reduces the dirty area to at most three rows.

This way the network C(·) reduces the dirty area to at most 3t·(J+K choose K) = ct(log N)^{c_s log t} registers. This proves the following lemma:

Lemma. The comparator network C(s, N, t) is a (t, ct(log N)^{c_s log t})-partial-correction network for some constant c_s depending on s. This network has depth α(1 + 1/s) log N + c_s(1 + o(1)) log t log log N.

Fault Tolerant and Correction Networks

Now we show how, having partial-fault-tolerant and partial-correction networks, we can obtain fault-tolerant and correction networks of almost the same depth. The solutions presented in this section are intended to be as simple as possible, and the author believes the reader can find solutions with slightly better constants. A problem often encountered in the construction of comparator networks is sorting inputs with dirty areas of small size. Assume we can reduce the dirty area of a t-disturbed sequence of 0's and 1's to size ∆. The question is how many layers and comparators a comparator network needs to 'clean' this dirty area. We have two versions of this question: one if we require fault-tolerance, the other if we do not.
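The two-pass "cleaning" idea that answers this question can be sketched in a few lines. This is an illustrative sketch, not the paper's construction itself: Python's sorted() stands in for the sorting subnetwork X applied to each block of 2∆ registers, and the function name is ours.

```python
# Sketch of cleaning a small dirty area with two shifted families of
# blocks of size 2*delta.  sorted() stands in for the subnetwork X_{2*delta};
# the function name is illustrative, not from the paper.

def clean_dirty_area(regs, delta):
    """Sort a 0-1 sequence whose dirty area spans at most `delta`
    consecutive registers, by sorting two shifted families of blocks."""
    regs = list(regs)
    n = len(regs)
    # First pass: blocks S_{2i} of size 2*delta starting at 0, 2*delta, ...
    for start in range(0, n, 2 * delta):
        regs[start:start + 2 * delta] = sorted(regs[start:start + 2 * delta])
    # Second pass: blocks S_{2i+1} shifted by delta; any dirty window of
    # size <= delta left after pass one lies inside one of these blocks.
    for start in range(delta, n, 2 * delta):
        regs[start:start + 2 * delta] = sorted(regs[start:start + 2 * delta])
    return regs

# A disturbed 0-1 input: sorted except for a dirty window of size 4.
x = [0] * 10 + [1, 0, 0, 1] + [1] * 10
assert clean_dirty_area(x, delta=4) == sorted(x)
```

Each pass applies one sorting subnetwork per block, so the whole cleaner costs two subnetwork depths and about N/∆ subnetwork copies, which is the shape of the bounds in the lemmas below.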
This question is answered by two easy-to-prove lemmas:

Lemma. Assume that there exists a t-fault-tolerant network X_N that for input size N has depth δ(N, t) and γ(N, t) comparators. Then there exists a comparator network that sorts any x-disturbed input with a dirty area of size at most ∆ if it has not more than t − x faulty comparators. This network has depth 2δ(2∆, t) and (N/∆)·γ(2∆, t) comparators.

Lemma. Assume that there exists a t-correction network X_N that for input of size N has depth δ(N, t) and γ(N, t) comparators. Then there exists a comparator network that sorts any t-disturbed input with a dirty area of size at most ∆. This network has depth 2δ(2∆, t) and (N/∆)·γ(2∆, t) comparators.

Proof of both lemmas. We index the registers with integers 1, ..., N. The network consists of two parts of δ(2∆, t) layers each. The first part consists of networks X_{2∆} on each set of registers S_{2i} = {r_{2i∆+1}, r_{2i∆+2}, ..., r_{2i∆+2∆}}. The second part consists of the networks X_{2∆} on each set of registers S_{2i+1} = {r_{(2i+1)∆+1}, r_{(2i+1)∆+2}, ..., r_{(2i+1)∆+2∆}}. This network cleans the dirty area because this area is contained in at least one S_i.

Having these cleaning networks, we now formulate the main result of this section. We are going to prove that, in order to produce a good t-fault-tolerant(-correction) network, it is enough to construct a (t, ∆)-partial-fault-tolerant(-correction) network Y_N having small depth and a reasonably small function ∆, together with a t-fault-tolerant(-correction) network X_N of not too big depth and a small number of comparators. In such a case we can construct a t-fault-tolerant(-correction) network of almost the same depth as Y_N and having, roughly speaking, twice as many comparators as X_N. We call these reductions Refinement Lemmas. We formulate and prove them at once for fault-tolerant and correction networks.

Lemma 10 (Refinement Lemma). Assume we have a comparator network Y_N which is a (t, ∆)-partial-fault-tolerant(-correction) network of depth δ′(N, t) (∆ = ∆(N, t)). We have also a
t-fault-tolerant(-correction) network X_N of depth δ(N, t) having γ(N, t) comparators. Then for any M there exists a t-fault-tolerant(-correction) network for any input size N of depth

δ(M, t) + δ′(Nt/M, t) + 2δ(4M∆/t + 2M, t),   where ∆ = ∆(Nt/M, t),

with the number of comparators not bigger than

(N/M)·γ(M, t) + (Nt/M)·δ′(Nt/M, t) + (N/(2M∆/t + M))·γ(4M∆/t + 2M, t).

Proof. Let the indexes of the registers be pairs (i, j) (i ∈ {1, ..., N/M}, j ∈ {1, ..., M}). Our network consists of three main parts. In the first part we apply X_M in each row separately. This requires δ(M, t) layers and (N/M)·γ(M, t) comparators. The result of this part is that displaced 0's are moved to the first t columns, and displaced 1's are moved to the last t columns (except maybe in one row). In the second part we use two copies of Y_{Nt/M}. The first copy is reversed upside-down to deal with the displaced 0's and is applied to all registers of the first t columns. The second copy is applied to all registers of the last t columns to deal with the displaced 1's. This requires δ′(Nt/M, t) layers and at most (Nt/M)·δ′(Nt/M, t) comparators. The result of this part is that the dirty area is reduced to at most 2M∆/t + M registers. The third part is a cleaning network (based on X_N) for a dirty area of size 2M∆/t + M, which requires 2δ(4M∆/t + 2M, t) layers and (N/(2M∆/t + M))·γ(4M∆/t + 2M, t) comparators.

Now we show how we can use the Refinement Lemmas to construct fault-tolerant and correction networks of small depth. In the construction of the t-fault-tolerant network we apply Piotrów's network [4].

Theorem. There exists a constant c such that for an arbitrary s there exists a t-fault-tolerant network of depth

α(1 + 1/s) log N + c log log N + (2s + c)t

having O(Nt) comparators.

Proof. We defined a (t, t^2 + t)-partial-fault-tolerant network T(s, N, t), which has depth α(1 + 1/s) log N + (2s + c_0)t. The theorem follows from the Refinement Lemma applied to X_N being Piotrów's network, Y_N = T(s, N, t), and M = t log N.

The above network is practical for small t. If
we fix t and take s = √(log N), then we get a t-fault-tolerant network of depth α log N + O(√(log N)).

Similarly as for fault-tolerant networks, we can now construct a t-correction network applying the Refinement Lemma.

Theorem. For any integer s there exists a t-correction network of depth

α(1 + 1/s) log N + c′_s (log t log log N)^2

for some constant c′_s depending on s.

Proof. We apply the Refinement Lemma taking Y_N = C(·), X_N the Batcher network, and M = t log N.

This network has depth α(1 + 1/s) log N + o(log N) if t = o(√(log N)/log log N). We can take s = log log log N, and since c′_s = Ω(c_s^2) we obtain the following corollary:

Corollary. For any t there exists a t-correction network of depth

α(1 + 1/(log log log N)) log N + c log^2 t log log^4 N ∼ α log N.

We can also apply the Refinement Lemma once again, taking the network from the previous theorem for s = 1 as X_N. We put Y_N = C(s, N, t), M = t log N, and get the following corollary.

Corollary. For any integers s, t there exists a t-correction network of depth

α(1 + 1/s) log N + c″_s log t log log N + o(log log N)

for some constant c″_s depending on s.

Unfortunately it is not clear whether this corollary improves the bound on t for which we can make a correction network of depth ∼ α(1 + 1/s) log N, because the construction works well only for t ≪ N.

Minimizing Number of Comparators

First we should know what the minimal numbers of comparators for t-fault-tolerant and t-correction networks are. Any t-fault-tolerant network has at least t comparators going from any register different from the highest one to registers with higher indexes (to make it impossible to have them all faulty for a 1-disturbed input). So it has at least (N − 1)t = Ω(Nt) comparators. Any t-correction network has to be a t-selector, which forces it to have Ω(N log t) comparators [6]. These asymptotic lower bounds on the numbers of comparators in correction networks prove to be achievable.

A t-fault-tolerant network having an asymptotically optimal number of comparators is the t-fault-tolerant network from the
previous section. It has depth α(1 + 1/s) log N + O(log log N + st). An optimal t-correction network we construct using the Refinement Lemma. Similar techniques to those we use in this section can be applied to reduce the numbers of comparators of practical correction networks, but in order not to make this paper too long we do not describe how to do it. The simplest way to make these practical constructions is to use the Batcher network instead of AKS in what follows. Unfortunately, we were not able to find a t-correction network with an asymptotically optimal number of comparators without using the AKS network, so our further constructions are not practical. First we construct a network that is asymptotically optimal in the sense of the number of comparators, but is not in the sense of depth.

Lemma 11. There exists a t-correction network that for some constant c and any input size N has depth cN log t/t and at most cN log t comparators.

Proof. Let AKS denote a sorting network which has depth 2c log t for input of size 2t [1]. It has at most 2ct log t comparators. We index the registers with integers 1, ..., N. We define sets of registers S_i = {r_{(i−1)t+1}, r_{(i−1)t+2}, ..., r_{(i−1)t+2t}}. Our network consists of 2N/t − 1 parts of 2c log t layers each. Each part consists of AKS networks on register sets S_i. Thus we apply AKS subsequently to S_1, S_2, ..., S_{N/t}, S_{(N/t)−1}, ..., S_1. It is easy to see that what we constructed is really a t-correction network.

When we have the t-correction network from the last lemma, we can put it as X_N into the Refinement Lemma, taking M = t log N/log t. As Y_N we can use AKS, which is a (t, 0)-partial-correction network. This way we obtain the following corollary:

Corollary. There exists a t-correction network of depth O(log N) having O(N log t) comparators.

Further on, we can take the correction network from the last lemma as X_N, C(·) as Y_N, and M = t log N. As a result, by the Refinement Lemma we obtain the following corollary:

Corollary. For any integer s there exists a t-correction network
of depth α(1 + 1/s) log N + c′_s log t log log N, for some constant c′_s depending on s, which has O(N log t) comparators.

Conclusions

We constructed t-fault-tolerant and t-correction networks of depths ∼ α log N for fixed t. This is less than the depth of the 1-correction network found by Schimmler and Starke [5]. The network T(·) seems to be better for practical purposes, although it is worse than C(·) for combinations of N and t where N is big and t ≥ log N. Some considerations we did not include in this paper seem to indicate that the following conjecture is true. This conjecture was originally posed by Mirek Kutylowski; the author's only contribution is the constant α.

Conjecture. The lower bound for the depth of a 1-correction network is α log N − c for some small constant c.

Because the author was unable to find 2-correction networks of depth asymptotically better than T(·), he dares to pose another conjecture, concerning 2-correction networks.

Conjecture. The lower bound for the depth of a 2-correction network is α log N + c√(log N) for some constant c > 0.

Acknowledgments. The author wishes to thank Mirek Kutylowski, Krzysiek Loryś and Marek Piotrów for presenting the problems, helpful discussions, and their encouragement to write this paper. The author also thanks Mirek Kutylowski for many valuable remarks that improved the presentation.

References

1. M. Ajtai, J. Komlós, E. Szemerédi, Sorting in c log n parallel steps, Combinatorica 3 (1983), 1-19.
2. K.E. Batcher, Sorting networks and their applications, in: AFIPS Conf. Proc. 32 (1968), 307-314.
3. M. Kik, M. Kutylowski, M. Piotrów, Correction networks, in: Proc. of 1999 ICPP, 40-47.
4. M. Piotrów, Depth optimal sorting networks resistant to k passive faults, in: Proc. 7th SIAM Symposium on Discrete Algorithms (1996), 242-251 (also accepted for SIAM J. Comput.).
5. M. Schimmler, C. Starke, A correction network for N-sorters, SIAM J. Comput. 18 (1989), 1179-1197.
6. A.C. Yao, Bounds on selection networks, SIAM J. Comput. 9 (1980), 566-582.
7. A.C. Yao, F.F. Yao, On fault-tolerant networks for sorting, SIAM J. Comput. 14 (1985), 120-128.

Least Adaptive Optimal Search with Unreliable Tests

Ferdinando Cicalese¹,*, Daniele Mundici², and Ugo Vaccaro¹

¹ Dipartimento di Informatica ed Applicazioni, University of Salerno, 84081 Baronissi (SA), Italy. {cicalese,uv}@dia.unisa.it, http://www.dia.unisa.it/{˜cicalese,˜uv}
² Dipartimento Scienze Informazione, University of Milan, Via Comelico 39-41, 20135 Milan, Italy. mundici@mailserver.unimi.it

Abstract. We consider the basic problem of searching for an unknown m-bit number by asking the minimum possible number of yes-no questions, when up to a finite number e of the answers may be erroneous. In case the (i + 1)th question is adaptively asked after receiving the answer to the ith question, the problem was posed by Ulam and Rényi and is strictly related to Berlekamp's theory of error-correcting communication with noiseless feedback. Conversely, in the fully non-adaptive model, when all questions are asked before knowing any answer, the problem amounts to finding a shortest e-error-correcting code. Let q_e(m) be the smallest integer q satisfying Berlekamp's bound Σ_{i=0}^{e} (q choose i) ≤ 2^{q−m}. Then at least q_e(m) questions are necessary, in the adaptive as well as in the non-adaptive model. In the fully adaptive case, optimal searching strategies using exactly q_e(m) questions always exist, up to finitely many exceptional m's. In the opposite, non-adaptive case, searching strategies with exactly q_e(m) questions—or equivalently, perfect e-error-correcting codes with 2^m codewords of length q_e(m)—are rather the exception, already for e = 2, and do not exist for e > 2. In this paper we show that for each e and all sufficiently large m, optimal—indeed, perfect—strategies exist using a first batch of m non-adaptive questions and then, only
depending on the answers to these m questions, a second batch of q_e(m) − m non-adaptive questions. Since even in the fully adaptive case q_e(m) − 1 questions do not suffice to find the unknown number, and q_e(m) questions generally do not suffice in the non-adaptive case, the results of our paper provide e-fault-tolerant searching strategies with minimum adaptiveness and minimum number of tests.

1 Introduction

We consider the following scenario: Two players, called Questioner and Responder, first agree on fixing an integer m and a search space S = {0, ..., 2^m − 1}.

* Partially supported by ENEA.

M.M. Halldórsson (Ed.): SWAT 2000, LNCS 1851, pp. 549-562, 2000. © Springer-Verlag Berlin Heidelberg 2000

Then the Responder thinks of a number x ∈ S, and the Questioner must find out x by asking questions to which the Responder can only answer yes or no. It is agreed that the Responder is allowed to lie (or just to be inaccurate) at most e times, where the integer e is fixed and known to the Questioner. We are interested in the problem of determining the minimum number of questions the Questioner has to ask in order to infallibly guess the number x. When the questions are asked adaptively, i.e., the ith question is asked knowing the answer to the (i−1)th question, the problem is generally referred to as the Ulam-Rényi game [29, p. 281], [24, p. 47], and is strictly related to Berlekamp's theory of error-correcting communication with noiseless feedback [6]. At the other, non-adaptive extreme, when the totality of questions is asked at the outset, before knowing any answer, the problem amounts to finding a shortest e-error-correcting binary code with 2^m codewords. It is known that at least q_e(m) questions are necessary in the adaptive and, a fortiori, in the non-adaptive case—where q_e(m) is the smallest integer q satisfying Berlekamp's bound Σ_{i=0}^{e} (q choose i) ≤ 2^{q−m}. In the fully adaptive case, an important result of Spencer [26] shows that q_e(m) questions are
always sufficient, up to finitely many exceptional m's. Optimal searching strategies had been previously exhibited in [22], [11], and [21], for the cases e = 1, e = 2, and e = 3, respectively. Thus, fully adaptive fault-tolerant search can be performed in a very satisfactory manner. However, in many practical situations it is desirable to have searching strategies with a "small degree" of adaptiveness, that is, searching strategies in which all questions (or at least, many of them) can be prepared in advance and asked in parallel. This is the case, e.g., when the Questioner and the Responder are far away from each other and can interact only over a slow channel, or in all situations where formulating the queries is a costly process, so that the Questioner finds it more convenient and time-saving to prepare them in advance. We refer to the monographs [3,13] for a discussion of the power of adaptive and non-adaptive searching strategies and their possible uses in different contexts. Unfortunately, in the totally non-adaptive case, a series of negative results culminating in the celebrated paper by Tietäväinen [28] (also see [17]) shows that searching strategies with exactly q_e(m) questions—or equivalently, perfect binary e-error-correcting codes with 2^m codewords of length q_e(m)—are sporadic exceptions already for e = 2, and do not exist for e > 2, except in trivial cases. Thus, adaptiveness in Ulam-Rényi games can be completely eliminated only by significantly increasing the number of questions in the solution strategy.¹ Our purpose in this paper is to investigate the minimum amount of adaptiveness required by all successful searching strategies with exactly q_e(m) questions.

¹ The situation is completely different in the case of no lies: here an optimal, totally non-adaptive searching strategy with ⌈log |S|⌉ questions simply amounts to asking ⌈log |S|⌉ queries about the bits in the binary expansion of the unknown number x ∈ S.

1.1 Our Results

We exactly quantify the minimum amount of adaptiveness needed to solve the Ulam-Rényi problem, while still constraining the total number of questions to Berlekamp's minimum q_e(m). Our main result is that for each e, and for all sufficiently large m, there exist searching strategies of shortest length (using exactly the minimum number q_e(m) of questions) in which questions can be submitted to the Responder in only two rounds. Specifically, for the Questioner to infallibly guess the Responder's secret number x ∈ S it is sufficient to ask a first batch of m non-adaptive questions, and then, only depending on the m-tuple of answers, ask a second mini-batch of n non-adaptive questions. Our strategies are perfect, in that m + n coincides with Berlekamp's minimum q_e(m), the number of questions that are a priori necessary to accommodate all possible answering strategies of the Responder—once he is allowed to lie up to e times. Since the Questioner can adapt his strategy only once, our paper yields e-fault-tolerant search strategies with minimum adaptiveness and the least possible number of tests. Our main tool is the discovery of a close relation between searching strategies tolerating e lies and certain special families of error-correcting codes, which will be described in Section 3. In the last section we specialize our analysis to the case e = 3; we shall give an explicit description of our searching strategies for the Ulam-Rényi game, for all m ≥ 99.

1.2 Related Work

The general issue of coping with unreliable information (and/or unreliable components) in computing is an important problem in computer science, and its study goes back to the work of von Neumann [30]. The problem of dealing with erroneous information in search strategies (what we call here the Ulam-Rényi game) has received considerable attention in the last decades, beginning with [25] (see [2,4,5,9,11,12,20,22,26] and references therein). The survey paper [14] gives a detailed
account of the relevant literature on the subject. In the paper [15] the Ulam-Rényi game is embedded in a broader context. We have already mentioned the connections between Ulam-Rényi games and Berlekamp's theory of error-correcting communication with noiseless feedback [6]. Other interesting connections between Ulam-Rényi games and different areas of computer science and logic have also been found (see for instance [8,18]). For the sake of conciseness, we shall limit ourselves to mentioning here only those results which are directly related to our present issue of adaptive vs non-adaptive search. It is well known that for e = 1, Hamming codes yield non-adaptive searching strategies (i.e., one-round strategies) with the smallest possible number q_1(m) of questions—indeed, Pelc [23] showed that adaptiveness in this case is irrelevant even under the stronger assumption that repetition of the same question is forbidden. The first significant case where the dichotomy between adaptive and non-adaptive search makes its appearance is e = 2. Two-round optimal strategies for the case e = 2 were given in [10]. Our paper extends the result of [10] to the case of an arbitrary number e of errors/lies. Other results related to the issue of fully adaptive vs totally non-adaptive searching strategies are contained in [12,27].

2 The Ulam-Rényi Game

For some fixed integer m, let S = {0, 1, ..., 2^m − 1} be the search space. By a yes-no question we simply mean an arbitrary subset T of S. If the answer to the question T is "yes", numbers in T are said to satisfy the answer, while numbers in S \ T falsify it. A negative answer to question T has the same effect as a positive answer to the opposite question S \ T. At any stage of the game, a number y ∈ S must be rejected from consideration if, and only if, it falsifies more than e answers. The remaining numbers of S still are possible candidates for the unknown x. At any time the Questioner's state of knowledge
is represented by an (e + 1)-tuple σ = (A_0, A_1, A_2, ..., A_e) of pairwise disjoint subsets of S, where A_i is the set of numbers falsifying exactly i answers, i = 0, 1, 2, ..., e. The initial state is naturally given by (S, ∅, ∅, ..., ∅). A state (A_0, A_1, A_2, ..., A_e) is final iff A_0 ∪ A_1 ∪ A_2 ∪ · · · ∪ A_e either has exactly one element, or is empty. In this latter case, evidently, more than e lies have been told. For any state σ = (A_0, A_1, A_2, ..., A_e) and question T ⊆ S, the two states σ^yes and σ^no respectively resulting from a positive or a negative answer are given by

σ^yes = (A_0^yes, A_1^yes, ..., A_e^yes)  and  σ^no = (A_0^no, A_1^no, ..., A_e^no),   (1)

where, for the sake of definiteness, we let A_{−1} = ∅, and

A_i^yes = (A_i ∩ T) ∪ (A_{i−1} \ T)  and  A_i^no = (A_i \ T) ∪ (A_{i−1} ∩ T)   (2)

for each i = 0, 1, ..., e. Given a state σ, suppose questions T_1, ..., T_t have been asked and answers b = b_1, ..., b_t have been received (with b_i ∈ {yes, no}). Iterated application of the above formulas yields a sequence of states

σ_0 = σ,  σ_1 = σ_0^{b_1},  σ_2 = σ_1^{b_2},  ...,  σ_t = σ_{t−1}^{b_t}.   (3)

By a strategy S with q questions we mean a binary tree of depth q, where each node ν is mapped into a question T_ν, and the two edges η_left, η_right generated by ν are respectively labelled yes and no. Let η = η_1, ..., η_q be a path in S, from the root to a leaf, with respective labels b_1, ..., b_q, generating nodes ν_1, ..., ν_q and associated questions T_{ν_1}, ..., T_{ν_q}. Fix an arbitrary state σ. Then, according to (3), iterated application of (1)-(2) naturally transforms σ into σ^η (where the dependence on the b_j and T_j is understood). We say that strategy S is winning for σ iff for every path η the state σ^η is final. A strategy is said to be non-adaptive iff all nodes at the same depth of the tree are mapped into the same question. Let σ = (A_0, A_1, A_2, ..., A_e) be a state. For each i = 0, 1, 2, ..., e let a_i = |A_i| be the number of elements of A_i. Then the (e + 1)-tuple (a_0, a_1, a_2, ..., a_e) is called the type
of σ. The Berlekamp weight of σ before q questions, q = 0, 1, 2, ..., is given by

w_q(σ) = ∑_{i=0}^{e} a_i ∑_{j=0}^{e−i} C(q, j).   (4)

The character ch(σ) of a state σ is the smallest integer q ≥ 0 such that w_q(σ) ≤ 2^q. By abuse of notation, the weight of any state σ of type (a_0, a_1, a_2, ..., a_e) before q questions will be denoted w_q(a_0, a_1, a_2, ..., a_e). Similarly, its character will also be denoted ch(a_0, a_1, a_2, ..., a_e). As an immediate consequence of the above definition we have the following monotonicity properties: for any two states σ′ = (A′_0, A′_1, A′_2, ..., A′_e) and σ″ = (A″_0, A″_1, A″_2, ..., A″_e), respectively of type (a′_0, a′_1, a′_2, ..., a′_e) and (a″_0, a″_1, a″_2, ..., a″_e), if a′_i ≤ a″_i for all i = 0, 1, 2, ..., e, then

ch(σ′) ≤ ch(σ″)  and  w_q(σ′) ≤ w_q(σ″)   (5)

for each q ≥ 0. Moreover, if there exists a winning strategy for σ″ with q questions, then there also exists a winning strategy for σ′ with q questions [6]. Note that ch(σ) = 0 iff σ is a final state.

Lemma 1 [6]. Let σ be an arbitrary state, and T ⊆ S a question. Let σ^yes and σ^no be as in (1)-(2).
(i) (Conservation Law) For any integer q ≥ 1 we have w_q(σ) = w_{q−1}(σ^yes) + w_{q−1}(σ^no).
(ii) (Berlekamp's lower bound) If σ has a winning strategy with q questions then q ≥ ch(σ). ⊓⊔

In complete analogy with the notion of a perfect error-correcting code [17], we say that a winning strategy for σ with q questions is perfect iff q = ch(σ). In agreement with the above notation, we shall write q_e(m) instead of ch(2^m, 0, ..., 0). Let σ = (A_0, A_1, A_2, ..., A_e) be a state and let T ⊆ S be a question. We say that T is balanced for σ iff for each j = 0, 1, 2, ..., e we have |A_j ∩ T| = |A_j \ T|. The following is easy to prove.

Lemma 2. Let T be a balanced question for a state σ = (A_0, A_1, A_2, ..., A_e). Let n = ch(σ). Let σ^yes and σ^no be as in (1)-(2) above. Then
(i) w_q(σ^yes) = w_q(σ^no), for each integer q ≥ 0;
(ii) ch(σ^yes) = ch(σ^no) = n − 1.

3 Strategies vs Codes

Let us first recall some notation from coding theory; for more see [17]. Fix an integer n > 0 and let x, y ∈ {0, 1}^n. The Hamming distance d_H(x, y) is defined by d_H(x, y) = |{i ∈ {1, ..., n} | x_i ≠ y_i}|, where, as above, |A| denotes the number of elements of A, and x_i (resp. y_i) denotes the ith component of x (resp. y). The Hamming sphere B_r(x) with radius r and center x is the set of elements of {0, 1}^n whose Hamming distance from x is at most r; in symbols, B_r(x) = {y ∈ {0, 1}^n | d_H(x, y) ≤ r}. Notice that for any x ∈ {0, 1}^n and r ≥ 0 we have |B_r(x)| = ∑_{i=0}^{r} C(n, i). The Hamming weight w_H(x) of x is the number of non-zero digits of x. Throughout this paper, by a code we shall mean a binary code, in the following sense:

Definition 1. A (binary) code C of length n is a non-empty subset of {0, 1}^n. Its elements are called codewords. The minimum distance of C is given by δ(C) = min{d_H(x, y) | x, y ∈ C, x ≠ y}. We say that C is an (n, m, d) code iff C has length n, |C| = m and δ(C) = d. The minimum weight of C is the minimum of the Hamming weights of its codewords; in symbols, µ(C) = min{w_H(x) | x ∈ C}. Let C_1 and C_2 be two codes of length n. The minimum distance between C_1 and C_2 is defined by ∆(C_1, C_2) = min{d_H(x, y) | x ∈ C_1, y ∈ C_2}.

We now describe a correspondence between non-adaptive winning strategies and certain special codes. This will be a key tool to prove the main results of our paper.

Lemma 3. Let σ = (A_0, A_1, A_2, ..., A_e) be a state of type (a_0, a_1, a_2, ..., a_e). Let n ≥ ch(σ). Then a non-adaptive winning strategy for σ with n questions exists if and only if for all i = 0, 1, 2, ..., e−1 there are integers d_i ≥ 2(e−i)+1, together with an e-tuple of codes Γ = {C_0, C_1, C_2, ..., C_{e−1}}, such that each C_i is an (n, a_i, d_i) code, and ∆(C_i, C_j) ≥ 2e−(i+j)+1 whenever 0 ≤ i < j ≤ e−1.

Proof. We first prove the implication strategy ⇒ codes. Assume σ = (A_0, A_1, A_2, ..., A_e) to be a state of type (a_0, a_1, a_2, ..., a_e) having a non-adaptive winning strategy S with n questions T_1, ..., T_n, n ≥ ch(σ). Let the map z ∈
A_0 ∪ A_1 ∪ A_2 ∪ · · · ∪ A_e ↦ z^S ∈ {0, 1}^n send each z ∈ A_0 ∪ A_1 ∪ A_2 ∪ · · · ∪ A_e into the n-tuple of bits z^S = z^S_1 · · · z^S_n arising from the sequence of "true" answers to the questions "does z belong to T_1?", "does z belong to T_2?", ..., "does z belong to T_n?", via the identifications 1 = yes, 0 = no. More precisely, for each j = 1, ..., n, z^S_j = 1 iff z ∈ T_j. Let C ⊆ {0, 1}^n be the range of the map z ↦ z^S. We shall first prove that, for every i = 0, ..., e − 1, there exists an integer d_i ≥ 2(e − i) + 1 such that the set C_i = {y^S ∈ C | y ∈ A_i} is an (n, a_i, d_i) code. Since S is winning, the map z ↦ z^S is one-to-one, whence in particular |C_i| = a_i, for any i = 0, 1, 2, ..., e − 1. Moreover, by definition, the C_i's are subsets of {0, 1}^n.

Claim 1. δ(C_i) ≥ 2(e − i) + 1, for i = 0, ..., e − 1.

For otherwise (absurdum hypothesis), assuming c and d to be two distinct elements of A_i such that d_H(c^S, d^S) ≤ 2(e − i), we will prove that S is not a winning strategy. We can safely assume c^S_j = d^S_j for each j = 1, ..., n − 2(e − i). Suppose the answer to question T_j is "yes" or "no" according as c^S_j = 1 or c^S_j = 0, respectively. Then after n − 2(e − i) answers, the resulting state has the form σ′ = (A′_0, ..., A′_i, ..., A′_e), with {c, d} ⊆ A′_i, whence the type of σ′ is (a′_0, ..., a′_i, ..., a′_e) with a′_i ≥ 2. Since, by [6, Lemma 2.5], ch(σ′) ≥ ch(0, 0, ..., 0, 2, 0, ..., 0) = 2(e − i) + 1, it follows from Lemma 1(ii) that the remaining 2(e − i) questions/answers do not suffice to reach a final state, thus contradicting the assumption that S is winning.

Claim 2. For any 0 ≤ i < j ≤ e − 1 and for each y ∈ A_i and h ∈ A_j we have the inequality d_H(y^S, h^S) ≥ 2e − (i + j) + 1.

For otherwise (absurdum hypothesis), let y ∈ A_i, h ∈ A_j be a counterexample, with d_H(y^S, h^S) ≤ 2e − (i + j). Writing y^S = y^S_1 · · · y^S_n and h^S = h^S_1 · · · h^S_n, it is no loss of generality to assume h^S_k = y^S_k for all k = 1, ..., n − (2e − (i + j)). Suppose that the answer to question T_k is "yes" or "no" according as h^S_k = 1 or h^S_k =
0, respectively. Then the state resulting from these answers has the form σ″ = (A″_0, A″_1, A″_2, ..., A″_e), where y ∈ A″_i and h ∈ A″_j. Since, by [6, Lemma 2.5], ch(σ″) ≥ ch(0, ..., 0, 1, 0, ..., 0, 1, 0, ..., 0) = 2e − (i + j) + 1, Lemma 1(ii) again shows that 2e − (i + j) additional questions will not suffice to find the unknown number. This contradicts the assumption that S is a winning strategy.

In conclusion, for all i = 0, 1, ..., e − 1, C_i is an (n, a_i, d_i) code with d_i ≥ 2(e − i) + 1, and for all j = 0, ..., i − 1, i + 1, ..., e − 1 we have the desired inequality ∆(C_i, C_j) ≥ 2e − (i + j) + 1.

Now we prove the converse implication: strategy ⇐ codes. Let Γ = {C_0, C_1, C_2, ..., C_{e−1}} be a family of codes satisfying the hypothesis. Let

H = ⋃_{i=0}^{e−1} ⋃_{x ∈ C_i} B_{e−i}(x).

By hypothesis, for any i, j ∈ {0, 1, ..., e − 1} and x ∈ C_i, y ∈ C_j, we have d_H(x, y) ≥ 2e − (i + j) + 1. It follows that the Hamming spheres B_{e−i}(x), B_{e−j}(y) are pairwise disjoint, and hence

|H| = ∑_{i=0}^{e−1} a_i ∑_{j=0}^{e−i} C(n, j).   (6)
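The weight and character computations used throughout this section are straightforward to reproduce. The following short Python sketch (our own illustration, not part of the paper; the function names are ours) implements the Berlekamp weight (4), the character ch, and hence the bound q_e(m) = ch(2^m, 0, ..., 0):

```python
from math import comb

def berlekamp_weight(a, q):
    """Weight before q questions of a state of type a = (a_0, ..., a_e),
    as in (4): w_q(a) = sum_{i=0}^{e} a_i * sum_{j=0}^{e-i} C(q, j)."""
    e = len(a) - 1
    return sum(a[i] * sum(comb(q, j) for j in range(e - i + 1))
               for i in range(e + 1))

def character(a):
    """ch(a): the smallest integer q >= 0 with w_q(a) <= 2**q."""
    q = 0
    while berlekamp_weight(a, q) > 2 ** q:
        q += 1
    return q

def q_e(m, e):
    """Berlekamp's lower bound q_e(m) = ch(2^m, 0, ..., 0)."""
    return character((2 ** m,) + (0,) * e)

# ch(sigma) = 0 exactly for final states, e.g. type (1, 0, 0) with e = 2:
print(character((1, 0, 0)))   # 0
# ch(0, ..., 0, 2, 0, ..., 0) = 2(e - i) + 1; here e = 2, i = 1:
print(character((0, 2, 0)))   # 3
# For e = 1, m = 4 the bound (1 + q) <= 2^(q - 4) first holds at q = 7:
print(q_e(4, 1))              # 7
```

One can also check the Conservation Law of Lemma 1(i) numerically: a balanced question splits a type (a_0, ..., a_e) into two states of equal weight, e.g. berlekamp_weight((4, 2), q) equals 2 * berlekamp_weight((2, 3), q - 1) for every q ≥ 1.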