Introduction to Algorithms, Second Edition — Instructor's Manual (part 4)


9-6 Lecture Notes for Chapter 9: Medians and Order Statistics

To complete this proof, we choose c such that

    cn/4 − c/2 − an ≥ 0
    cn/4 − an ≥ c/2
    n(c/4 − a) ≥ c/2
    n ≥ (c/2)/(c/4 − a)
    n ≥ 2c/(c − 4a) .

Thus, as long as we assume that T(n) = O(1) for n < 2c/(c − 4a), we have E[T(n)] = O(n).

Therefore, we can determine any order statistic in linear time on average.

Selection in worst-case linear time

We can find the ith smallest element in O(n) time in the worst case. We'll describe a procedure SELECT that does so. SELECT recursively partitions the input array.

• Idea: Guarantee a good split when the array is partitioned.
• Will use the deterministic procedure PARTITION, but with a small modification. Instead of assuming that the last element of the subarray is the pivot, the modified PARTITION procedure is told which element to use as the pivot.

SELECT works on an array of n > 1 elements. It executes the following steps:

1. Divide the n elements into groups of 5. Get ⌈n/5⌉ groups: ⌊n/5⌋ groups with exactly 5 elements and, if 5 does not divide n, one group with the remaining n mod 5 elements.
2. Find the median of each of the ⌈n/5⌉ groups:
   • Run insertion sort on each group. Takes O(1) time per group since each group has ≤ 5 elements.
   • Then just pick the median from each group, in O(1) time.
3. Find the median x of the ⌈n/5⌉ medians by a recursive call to SELECT. (If ⌈n/5⌉ is even, then follow our convention and find the lower median.)
4. Using the modified version of PARTITION that takes the pivot element as input, partition the input array around x. Let x be the kth element of the array after partitioning, so that there are k − 1 elements on the low side of the partition and n − k elements on the high side.
5. Now there are three possibilities:
   • If i = k, just return x.
   • If i < k, return the ith smallest element on the low side of the partition by making a recursive call to SELECT.
   • If i > k, return the (i − k)th smallest element on the high side of the partition by making a recursive call to SELECT.

Analysis

Start by getting a lower bound on the number of elements that are greater than the partitioning element x.

[Figure omitted: each group is a column; each white circle is the median of a group, as found in step 2. Arrows go from larger elements to smaller elements, based on what we know after step 4. Elements in the region on the lower right are known to be greater than x.]

• At least half of the medians found in step 2 are ≥ x.
• Look at the groups containing these medians that are ≥ x. All of them contribute 3 elements that are > x (the median of the group and the 2 elements in the group greater than the group's median), except for 2 of the groups: the group containing x (which has only 2 elements > x) and the group with < 5 elements.
• Forget about these 2 groups. That leaves ≥ ⌈(1/2)⌈n/5⌉⌉ − 2 groups with 3 elements known to be > x.
• Thus, we know that at least

      3(⌈(1/2)⌈n/5⌉⌉ − 2) ≥ 3n/10 − 6

  elements are > x.
• Symmetrically, the number of elements that are < x is at least 3n/10 − 6.

Therefore, when we call SELECT recursively in step 5, it's on ≤ 7n/10 + 6 elements.

Develop a recurrence for the worst-case running time of SELECT:

• Steps 1, 2, and 4 each take O(n) time:
  • Step 1: making groups of 5 elements takes O(n) time.
  • Step 2: sorting ⌈n/5⌉ groups in O(1) time each.
  • Step 4: partitioning the n-element array around x takes O(n) time.
• Step 3 takes time T(⌈n/5⌉).
• Step 5 takes time ≤ T(7n/10 + 6), assuming that T
(n) is monotonically increasing.

• Assume that T(n) = O(1) for small enough n. We'll use n < 140 as "small enough." Why 140? We'll see why later.
• Thus, we get the recurrence

      T(n) ≤ O(1)                             if n < 140 ,
      T(n) ≤ T(⌈n/5⌉) + T(7n/10 + 6) + O(n)   if n ≥ 140 .

Solve this recurrence by substitution:

• Inductive hypothesis: T(n) ≤ cn for some constant c and all n > 0.
• Assume that c is large enough that T(n) ≤ cn for all n < 140. So we are concerned only with the case n ≥ 140.
• Pick a constant a such that the function described by the O(n) term in the recurrence is ≤ an for all n > 0.
• Substitute the inductive hypothesis in the right-hand side of the recurrence:

      T(n) ≤ c⌈n/5⌉ + c(7n/10 + 6) + an
           ≤ cn/5 + c + 7cn/10 + 6c + an
           = 9cn/10 + 7c + an
           = cn + (−cn/10 + 7c + an) .

• This last quantity is ≤ cn if

      −cn/10 + 7c + an ≤ 0
      cn/10 − 7c ≥ an
      cn − 70c ≥ 10an
      c(n − 70) ≥ 10an
      c ≥ 10a(n/(n − 70)) .

• Because we assumed that n ≥ 140, we have n/(n − 70) ≤ 2. Thus, 20a ≥ 10a(n/(n − 70)), so choosing c ≥ 20a gives c ≥ 10a(n/(n − 70)), which in turn gives us the condition we need to show that T(n) ≤ cn.

We conclude that T(n) = O(n), so that SELECT runs in linear time in all cases.

Why 140?
We could have used any integer strictly greater than 70.

• Observe that for n > 70, the fraction n/(n − 70) decreases as n increases.
• We picked n ≥ 140 so that the fraction would be ≤ 2, which is an easy constant to work with.
• We could have picked, say, n ≥ 71, so that for all n ≥ 71, the fraction would be ≤ 71/(71 − 70) = 71. Then we'd have needed to choose c ≥ 710a.

Notice that SELECT and RANDOMIZED-SELECT determine information about the relative order of elements only by comparing elements.

• Sorting requires Ω(n lg n) time in the comparison model.
• Sorting algorithms that run in linear time need to make assumptions about their input.
• Linear-time selection algorithms do not require any assumptions about their input.
• Linear-time selection algorithms solve the selection problem without sorting and therefore are not subject to the Ω(n lg n) lower bound.

Solutions for Chapter 9: Medians and Order Statistics

Solution to Exercise 9.1-1

The smallest of n numbers can be found with n − 1 comparisons by conducting a tournament as follows: Compare all the numbers in pairs. Only the smaller of each pair could possibly be the smallest of all n, so the problem has been reduced to that of finding the smallest of ⌈n/2⌉ numbers. Compare those numbers in pairs, and so on, until there's just one number left, which is the answer.

To see that this algorithm does exactly n − 1 comparisons, notice that each number except the smallest loses exactly once. To show this more formally, draw a binary tree of the comparisons the algorithm does. The n numbers are the leaves, and each number that came out smaller in a comparison is the parent of the two numbers that were compared. Each non-leaf node of the tree represents a comparison, and there are n − 1 internal nodes in an n-leaf full binary tree (see Exercise B.5-3), so exactly n − 1 comparisons are made.

In the search for the smallest number, the second smallest number must have come out smallest in every comparison
made with it until it was eventually compared with the smallest. So the second smallest is among the elements that were compared with the smallest during the tournament. To find it, conduct another tournament (as above) to find the smallest of these numbers. At most ⌈lg n⌉ (the height of the tree of comparisons) elements were compared with the smallest, so finding the smallest of these takes ⌈lg n⌉ − 1 comparisons in the worst case.

The total number of comparisons made in the two tournaments was

    n − 1 + ⌈lg n⌉ − 1 = n + ⌈lg n⌉ − 2

in the worst case.

Solution to Exercise 9.3-1

For groups of 7, the algorithm still works in linear time. The number of elements greater than x (and similarly, the number less than x) is at least

    4(⌈(1/2)⌈n/7⌉⌉ − 2) ≥ 2n/7 − 8 ,

and the recurrence becomes

    T(n) ≤ T(⌈n/7⌉) + T(5n/7 + 8) + O(n) ,

which can be shown to be O(n) by substitution, as for the groups-of-5 case in the text.

For groups of 3, however, the algorithm no longer works in linear time. The number of elements greater than x, and the number of elements less than x, is at least

    2(⌈(1/2)⌈n/3⌉⌉ − 2) ≥ n/3 − 4 ,

and the recurrence becomes

    T(n) ≤ T(⌈n/3⌉) + T(2n/3 + 4) + O(n) ,

which does not have a linear solution.

We can prove that the worst-case time for groups of 3 is Ω(n lg n). We do so by deriving a recurrence for a particular case that takes Ω(n lg n) time.

In counting up the number of elements greater than x (and similarly, the number less than x), consider the particular case in which there are exactly ⌈(1/2)⌈n/3⌉⌉ groups with medians ≥ x and in which the "leftover" group does contribute 2 elements greater than x. Then the number of elements greater than x is exactly

    2(⌈(1/2)⌈n/3⌉⌉ − 1) + 1

(the −1 discounts x's group, as usual, and the +1 is contributed by x's group) = 2⌈n/6⌉ − 1, and the recursive step for elements ≤ x has n − (2⌈n/6⌉ − 1) ≥ n − (2(n/6 + 1) − 1) = 2n/3 − 1 elements. Observe also that the O(n) term in the recurrence is really Θ(n), since the partitioning in step 4 takes Θ(n) (not just O(n)) time. Thus,
we get the recurrence

    T(n) ≥ T(⌈n/3⌉) + T(2n/3 − 1) + Θ(n)
         ≥ T(n/3) + T(2n/3 − 1) + Θ(n) ,

from which you can show that T(n) ≥ cn lg n by substitution. You can also see that T(n) is nonlinear by noticing that each level of the recursion tree sums to n.

[In fact, any odd group size ≥ 5 works in linear time.]

Solution to Exercise 9.3-3

A modification to quicksort that allows it to run in O(n lg n) time in the worst case uses the deterministic PARTITION algorithm that was modified to take an element to partition around as an input parameter.

SELECT takes an array A, the bounds p and r of the subarray in A, and the rank i of an order statistic, and in time linear in the size of the subarray A[p..r] it returns the ith smallest element in A[p..r].

BEST-CASE-QUICKSORT(A, p, r)
  if p < r
     then i ← ⌊(r − p + 1)/2⌋
          x ← SELECT(A, p, r, i)
          q ← PARTITION(x)
          BEST-CASE-QUICKSORT(A, p, q − 1)
          BEST-CASE-QUICKSORT(A, q + 1, r)

For an n-element array, the largest subarray that BEST-CASE-QUICKSORT recurses on has ⌈n/2⌉ elements. This situation occurs when n = r − p + 1 is even; then the subarray A[q + 1..r] has n/2 elements, and the subarray A[p..q − 1] has n/2 − 1 elements.

Because BEST-CASE-QUICKSORT always recurses on subarrays that are at most half the size of the original array, the recurrence for the worst-case running time is T(n) ≤ 2T(n/2) + Θ(n) = O(n lg n).

Solution to Exercise 9.3-5

We assume that we are given a procedure MEDIAN that takes as parameters an array A and subarray indices p and r, and returns the value of the median element of A[p..r] in O(n) time in the worst case.

Given MEDIAN, here is a linear-time algorithm SELECT for finding the ith smallest element in A[p..r]. This algorithm uses the deterministic PARTITION algorithm that was modified to take an element to partition around as an input parameter.

SELECT(A, p, r, i)
  if p = r
     then return A[p]
  x ← MEDIAN(A, p, r)
  q ← PARTITION(x)
  k ← q
 − p + 1
  if i = k
     then return A[q]
  elseif i < k
     then return SELECT(A, p, q − 1, i)
  else return SELECT(A, q + 1, r, i − k)

Because x is the median of A[p..r], each of the subarrays A[p..q − 1] and A[q + 1..r] has at most half the number of elements of A[p..r]. The recurrence for the worst-case running time of SELECT is T(n) ≤ T(n/2) + O(n) = O(n).

Solution to Exercise 9.3-8

Let's start out by supposing that the median (the lower median, since we know we have an even number of elements) is in X. Let's call the median value m, and let's suppose that it's in X[k]. Then k elements of X are less than or equal to m and n − k elements of X are greater than or equal to m. We know that in the two arrays combined, there must be n elements less than or equal to m and n elements greater than or equal to m, and so there must be n − k elements of Y that are less than or equal to m and n − (n − k) = k elements of Y that are greater than or equal to m.

Thus, we can check that X[k] is the lower median by checking whether Y[n − k] ≤ X[k] ≤ Y[n − k + 1]. A boundary case occurs for k = n. Then n − k = 0, and there is no array entry Y[0]; we only need to check that X[n] ≤ Y[1].

Now, if the median is in X but is not in X[k], then the above condition will not hold. If the median is in X[k′], where k′ < k, then X[k] is above the median, and Y[n − k + 1] < X[k]. Conversely, if the median is in X[k′], where k′ > k, then X[k] is below the median, and X[k] < Y[n − k].

Thus, we can use a binary search to determine whether there is an X[k] such that either k < n and Y[n − k] ≤ X[k] ≤ Y[n − k + 1] or k = n and X[k] ≤ Y[n − k + 1]; if we find such an X[k], then it is the median. Otherwise, we know that the median is in Y, and we use a binary search to find a Y[k] such that either k < n and X[n − k] ≤ Y[k] ≤ X[n − k + 1] or k = n and Y[k] ≤ X[n − k + 1]; such a Y[k] is the median. Since each binary search takes O(lg n) time, we spend a total of O(lg
n) time.

Here's how we write the algorithm in pseudocode:

TWO-ARRAY-MEDIAN(X, Y)
  n ← length[X]    ▷ n also equals length[Y]
  median ← FIND-MEDIAN(X, Y, n, 1, n)
  if median = NOT-FOUND
     then median ← FIND-MEDIAN(Y, X, n, 1, n)
  return median

FIND-MEDIAN(A, B, n, low, high)
  if low > high
     then return NOT-FOUND
     else k ← ⌊(low + high)/2⌋
          if k = n and A[n] ≤ B[1]
             then return A[n]
          elseif k < n and B[n − k] ≤ A[k] ≤ B[n − k + 1]
             then return A[k]
          elseif A[k] > B[n − k + 1]
             then return FIND-MEDIAN(A, B, n, low, k − 1)
          else return FIND-MEDIAN(A, B, n, k + 1, high)

Solution to Exercise 9.3-9

In order to find the optimal placement for Professor Olay's pipeline, we need only find the median(s) of the y-coordinates of his oil wells, as the following proof explains.

Claim
The optimal y-coordinate for Professor Olay's east-west oil pipeline is as follows:

• If n is even, then on either the oil well whose y-coordinate is the lower median or the one whose y-coordinate is the upper median, or anywhere between them.
• If n is odd, then on the oil well whose y-coordinate is the median.

Proof We examine various cases. In each case, we will start out with the pipeline at a particular y-coordinate and see what happens when we move it. We'll denote by s the sum of the north-south spurs with the pipeline at the starting location, and s′ will denote the sum after moving the pipeline.

We start with the case in which n is even. Let us start with the pipeline somewhere on or between the two oil wells whose y-coordinates are the lower and upper medians. If we move the pipeline by a vertical distance d without crossing either of the median wells, then n/2 of the wells become d farther from the pipeline and n/2 become d closer, and so s′ = s + dn/2 − dn/2 = s; thus, all locations on or between the two medians are equally good.

Now suppose that the pipeline goes through the oil well whose y-coordinate is the upper median. What
happens when we increase the y-coordinate of the pipeline by d > 0 units, so that it moves above the oil well that achieves the upper median? All oil wells whose y-coordinates are at or below the upper median become d units farther from the pipeline, and there are at least n/2 + 1 such oil wells (the upper median, and every well at or below the lower median). There are at most n/2 − 1 oil wells whose y-coordinates are above the upper median, and each of these oil wells becomes at most d units closer to the pipeline when it moves up. Thus, we have a lower bound on s′ of

    s′ ≥ s + d(n/2 + 1) − d(n/2 − 1) = s + 2d > s .

We conclude that moving the pipeline up from the oil well at the upper median increases the total spur length. A symmetric argument shows that if we start with the pipeline going through the oil well whose y-coordinate is the lower median and move it down, then the total spur length increases.

We see, therefore, that when n is even, an optimal placement of the pipeline is anywhere on or between the two medians.

Now we consider the case when n is odd. We start with the pipeline going through the oil well whose y-coordinate is the median, and we consider what happens when we move it up by d > 0 units. All oil wells at or below the median become d units farther from the pipeline, and there are at least (n + 1)/2 such wells (the one at the median and the (n − 1)/2 at or below it). There are at most (n − 1)/2 oil wells above the median, and each of these becomes at most d units closer to the pipeline. We get a lower bound on s′ of

    s′ ≥ s + d(n + 1)/2 − d(n − 1)/2 = s + d > s ,

and we conclude that moving the pipeline up from the oil well at the median increases the total spur length. A symmetric argument shows that moving the pipeline down from the median also increases the total spur length, and so the optimal placement of the pipeline is on the median.    (claim)

Since we know we are looking for the median, we can use the linear-time median-finding algorithm.

Solution to Problem
9-1

We assume that the numbers start out in an array.

a. Sort the numbers using merge sort or heapsort, which take Θ(n lg n) worst-case time. (Don't use quicksort or insertion sort, which can take Θ(n²) time.) Put the i largest elements (directly accessible in the sorted array) into the output array, taking Θ(i) time.

   Total worst-case running time: Θ(n lg n + i) = Θ(n lg n) (because i ≤ n).

b. Implement the priority queue as a heap. Build the heap using BUILD-HEAP, which takes Θ(n) time, then call HEAP-EXTRACT-MAX i times to get the i largest elements, in Θ(i lg n) worst-case time, and store them in reverse order of extraction in the output array. The worst-case extraction time is Θ(i lg n) because

   • i extractions from a heap with O(n) elements takes i · O(lg n) = O(i lg n) time, and
   • half of the i extractions are from a heap with ≥ n/2 elements, so those i/2 extractions take (i/2) Ω(lg(n/2)) = Ω(i lg n) time in the worst case.

   Total worst-case running time: Θ(n + i lg n).

c. Use the SELECT algorithm of Section 9.3 to find the ith largest number in Θ(n) time. Partition around that number in Θ(n) time. Sort the i largest numbers in Θ(i lg i) worst-case time (with merge sort or heapsort).

   Total worst-case running time: Θ(n + i lg i).

Note that method (c) is always asymptotically at least as good as the other two methods, and that method (b) is asymptotically at least as good as (a). (Comparing (c) to (b) is easy, but it is less obvious how to compare (c) and (b) to (a). (c) and (b) are asymptotically at least as good as (a) because n, i lg i, and i lg n are all O(n lg n). The sum of two things that are O(n lg n) is also O(n lg n).)
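The three methods above can be sketched in Python. This is a minimal illustration, not the manual's code: `heapq.heapify` stands in for BUILD-HEAP, `heapq.heappop` for HEAP-EXTRACT-MAX (negating keys to simulate a max-heap), and a plain sort stands in for linear-time SELECT in method (c); the function names are made up.

```python
import heapq
import random

def i_largest_by_sorting(a, i):
    # Method (a): sort everything, Theta(n lg n), then take the i largest.
    return sorted(a, reverse=True)[:i]

def i_largest_by_heap(a, i):
    # Method (b): build a max-heap in Theta(n), then extract the max i times.
    # heapq is a min-heap, so negate keys to simulate a max-heap.
    h = [-x for x in a]
    heapq.heapify(h)                                  # BUILD-HEAP, Theta(n)
    return [-heapq.heappop(h) for _ in range(i)]      # i extractions, O(i lg n)

def i_largest_by_select(a, i):
    # Method (c): find the ith largest (sorting here is only a stand-in for
    # the linear-time SELECT of Section 9.3), partition around it, then sort
    # just the i largest.
    kth = sorted(a)[len(a) - i]           # stand-in for SELECT
    top = [x for x in a if x > kth]       # partition: strictly greater side
    top += [kth] * (i - len(top))         # pad with copies of the pivot value
    return sorted(top, reverse=True)

random.seed(1)
data = [random.randrange(1000) for _ in range(50)]
assert (i_largest_by_sorting(data, 5)
        == i_largest_by_heap(data, 5)
        == i_largest_by_select(data, 5))
```

All three return the same i elements in decreasing order; only the asymptotic costs differ, as analyzed above.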
Solution to Problem 9-2

a. The median x of the elements x_1, x_2, ..., x_n is an element x = x_k satisfying |{x_i : 1 ≤ i ≤ n and x_i < x}| ≤ n/2 and |{x_i : 1 ≤ i ≤ n and x_i > x}| ≤ n/2. If each element x_i is assigned a weight w_i = 1/n, then we get

       Σ_{x_i < x} w_i = (1/n) · |{x_i : 1 ≤ i ≤ n and x_i < x}| ≤ (1/n) · (n/2) = 1/2

   and

       Σ_{x_i > x} w_i = (1/n) · |{x_i : 1 ≤ i ≤ n and x_i > x}| ≤ (1/n) · (n/2) = 1/2 ,

   which proves that x is also the weighted median of x_1, x_2, ..., x_n with weights w_i = 1/n, for i = 1, 2, ..., n.

b. We first sort the n elements into increasing order by x_i values. Then we scan the array of sorted x_i's, starting with the smallest element and accumulating weights as we scan, until the total exceeds 1/2. The last element, say x_k, whose weight caused the total to exceed 1/2, is the weighted median. Notice that the total weight of all elements smaller than x_k is less than 1/2, because x_k was the first element that caused the total weight to exceed 1/2. Similarly, the total weight of all elements larger than x_k is also less than 1/2, because the total weight of all the other elements exceeds 1/2.

   The sorting phase can be done in O(n lg n) worst-case time (using merge sort or heapsort), and the scanning phase takes O(n) time. The total running time in the worst case, therefore, is O(n lg n).

c. We find the weighted median in Θ(n) worst-case time using the Θ(n) worst-case median algorithm in Section 9.3. (Although the first paragraph of the section only claims an O(n) upper bound, it is easy to see that the more precise running time of Θ(n) applies as well, since steps 1, 2, and 4 of SELECT actually take Θ(n) time.)
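The sort-and-scan method of part (b) is easy to realize directly. A minimal Python sketch, with illustrative data and a made-up function name, not the manual's pseudocode:

```python
def weighted_median_by_sorting(xs, ws):
    # Part (b): sort by value, then scan, accumulating weight until the
    # running total exceeds 1/2; the element that pushes the total past
    # 1/2 is the weighted median.  O(n lg n) to sort, O(n) to scan.
    pairs = sorted(zip(xs, ws))
    total = 0.0
    for x, w in pairs:
        total += w
        if total > 0.5:
            return x
    return pairs[-1][0]

# With equal weights 1/n, the weighted median is the ordinary (lower) median,
# as part (a) proves.
xs = [7, 1, 5, 3, 9]
assert weighted_median_by_sorting(xs, [1/5] * 5) == 5
# An element carrying most of the weight dominates.
assert weighted_median_by_sorting([1, 2, 100], [0.1, 0.2, 0.7]) == 100
```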
   The weighted-median algorithm works as follows. If n ≤ 2, we just return the brute-force solution. Otherwise, we proceed as follows. We find the actual median x_k of the n elements and then partition around it. We then compute the total weights of the two halves. If the weights of the two halves are each strictly less than 1/2, then the weighted median is x_k. Otherwise, the weighted median should be in the half with total weight exceeding 1/2. The total weight of the "light" half is lumped into the weight of x_k, and the search continues within the half that weighs more than 1/2. Here's pseudocode, which takes as input a set X = {x_1, x_2, ..., x_n}:

11-14 Lecture Notes for Chapter 11: Hash Tables

    Pr{X ≥ i} = (n/m) · ((n − 1)/(m − 1)) · ((n − 2)/(m − 2)) ⋯ ((n − i + 2)/(m − i + 2))
                [i − 1 factors]

n < m ⇒ (n − j)/(m − j) ≤ n/m for j ≥ 0, which implies

    Pr{X ≥ i} ≤ (n/m)^(i−1) = α^(i−1) .

By equation (C.24),

    E[X] = Σ_{i=1}^{∞} Pr{X ≥ i}
         ≤ Σ_{i=1}^{∞} α^(i−1)
         = Σ_{i=0}^{∞} α^i
         = 1/(1 − α)    (equation (A.6)) .    (theorem)

Interpretation: If α is constant, an unsuccessful search takes O(1) time.

• If α = 0.5, then an unsuccessful search takes an average of 1/(1 − 0.5) = 2 probes.
• If α = 0.9, it takes an average of 1/(1 − 0.9) = 10 probes.

Corollary
The expected number of probes to insert is at most 1/(1 − α).

Proof Since there is no deletion, insertion uses the same probe sequence as an unsuccessful search.

Theorem
The expected number of probes in a successful search is at most

    (1/α) ln(1/(1 − α)) .

Proof A successful search for key k follows the same probe sequence as when key k was inserted. By the previous corollary, if k was the (i + 1)st key inserted, then α equaled i/m at the time. Thus, the expected number of probes made in a search for k is at most 1/(1 − i/m) = m/(m − i).

That was assuming that k was the (i + 1)st key inserted. We need to average over all n keys:

    (1/n) Σ_{i=0}^{n−1} m/(m − i) = (m/n) Σ_{i=0}^{n−1} 1/(m − i)
                                  = (1/α)(H_m − H_{m−n}) ,

where H_i = Σ_{j=1}^{i} 1/j is the ith harmonic number. Simplify by using the technique of bounding a
summation by an integral:

    (1/α)(H_m − H_{m−n}) = (1/α) Σ_{k=m−n+1}^{m} 1/k
                         ≤ (1/α) ∫_{m−n}^{m} (1/x) dx    (inequality (A.12))
                         = (1/α) ln(m/(m − n))
                         = (1/α) ln(1/(1 − α)) .    (theorem)

Solutions for Chapter 11: Hash Tables

Solution to Exercise 11.1-4

We denote the huge array by T and, taking the hint from the book, we also have a stack implemented by an array S. The size of S equals the number of keys actually stored, so that S should be allocated at the dictionary's maximum size. The stack has an attribute top[S], so that only entries S[1..top[S]] are valid.

The idea of this scheme is that entries of T and S validate each other. If key k is actually stored in T, then T[k] contains the index, say j, of a valid entry in S, and S[j] contains the value k. Let us call this situation, in which 1 ≤ T[k] ≤ top[S], S[T[k]] = k, and T[S[j]] = j, a validating cycle.

Assuming that we also need to store pointers to objects in our direct-address table, we can store them in an array that is parallel to either T or S. Since S is smaller than T, we'll use an array S′, allocated to be the same size as S, for these pointers. Thus, if the dictionary contains an object x with key k, then there is a validating cycle and S′[T[k]] points to x.

The operations on the dictionary work as follows:

• Initialization: Simply set top[S] = 0, so that there are no valid entries in the stack.
• SEARCH: Given key k, we check whether we have a validating cycle, i.e., whether 1 ≤ T[k] ≤ top[S] and S[T[k]] = k. If so, we return S′[T[k]], and otherwise we return NIL.
• INSERT: To insert object x with key k, assuming that this object is not already in the dictionary, we increment top[S], set S[top[S]] ← k, set S′[top[S]] ← x, and set T[k] ← top[S].
• DELETE: To delete object x with key k, assuming that this object is in the dictionary, we need to break the validating cycle. The trick is to also ensure that we don't leave a "hole" in the stack, and we solve this problem by moving the top entry of the stack into the position that we are vacating, and then
fixing up that entry's validating cycle. That is, we execute the following sequence of assignments:

      S[T[k]] ← S[top[S]]
      S′[T[k]] ← S′[top[S]]
      T[S[T[k]]] ← T[k]
      top[S] ← top[S] − 1

Each of these operations (initialization, SEARCH, INSERT, and DELETE) takes O(1) time.

Solution to Exercise 11.2-1

For each pair of keys k, l, where k ≠ l, define the indicator random variable X_kl = I{h(k) = h(l)}. Since we assume simple uniform hashing, Pr{X_kl = 1} = Pr{h(k) = h(l)} = 1/m, and so E[X_kl] = 1/m.

Now define the random variable Y to be the total number of collisions, so that Y = Σ_{k≠l} X_kl. The expected number of collisions is

    E[Y] = E[Σ_{k≠l} X_kl]
         = Σ_{k≠l} E[X_kl]    (linearity of expectation)
         = (n choose 2) · (1/m)
         = (n(n − 1)/2) · (1/m)
         = n(n − 1)/(2m) .

Solution to Exercise 11.2-4

The flag in each slot will indicate whether the slot is free.

• A free slot is in the free list, a doubly linked list of all free slots in the table. The slot thus contains two pointers.
• A used slot contains an element and a pointer (possibly NIL) to the next element that hashes to this slot. (Of course, that pointer points to another slot in the table.)
Operations

• Insertion:
  • If the element hashes to a free slot, just remove the slot from the free list and store the element there (with a NIL pointer). The free list must be doubly linked in order for this deletion to run in O(1) time.
  • If the element hashes to a used slot j, check whether the element x already there "belongs" there (its key also hashes to slot j).
    • If so, add the new element to the chain of elements in this slot. To do so, allocate a free slot (e.g., take the head of the free list) for the new element and put this new slot at the head of the list pointed to by the hashed-to slot (j).
    • If not, x is part of another slot's chain. Move it to a new slot by allocating one from the free list, copying the old slot's (j's) contents (element x and pointer) to the new slot, and updating the pointer in the slot that pointed to j to point to the new slot. Then insert the new element in the now-empty slot as usual. To update the pointer to j, it is necessary to find it by searching the chain of elements starting in the slot x hashes to.
• Deletion: Let j be the slot the element x to be deleted hashes to.
  • If x is the only element in j (j doesn't point to any other entries), just free the slot, returning it to the head of the free list.
  • If x is in j but there's a pointer to a chain of other elements, move the first pointed-to entry to slot j and free the slot it was in.
  • If x is found by following a pointer from j, just free x's slot and splice it out of the chain (i.e., update the slot that pointed to x to point to x's successor).
• Searching: Check the slot the key hashes to, and if that is not the desired element, follow the chain of pointers from the slot.

All the operations take expected O(1) times for the same reason they do with the version in the book: The expected time to search the chains is O(1 + α) regardless of where the chains are stored, and the fact that all the elements are stored in the table means that α
≤ 1.

If the free list were singly linked, then operations that involved removing an arbitrary slot from the free list would not run in O(1) time.

Solution to Exercise 11.3-3

First, we observe that we can generate any permutation by a sequence of interchanges of pairs of characters. One can prove this property formally, but informally, consider that both heapsort and quicksort work by interchanging pairs of elements and that they have to be able to produce any permutation of their input array. Thus, it suffices to show that if string x can be derived from string y by interchanging a single pair of characters, then x and y hash to the same value.

Let us denote the ith character in x by x_i, and similarly for y. The interpretation of x in radix 2^p is Σ_{i=0}^{n−1} x_i 2^(ip), and so h(x) = (Σ_{i=0}^{n−1} x_i 2^(ip)) mod (2^p − 1). Similarly, h(y) = (Σ_{i=0}^{n−1} y_i 2^(ip)) mod (2^p − 1).

Suppose that x and y are identical strings of n characters except that the characters in positions a and b are interchanged: x_a = y_b and y_a = x_b. Without loss of generality, let a > b. We have

    h(x) − h(y) = (Σ_{i=0}^{n−1} x_i 2^(ip)) mod (2^p − 1) − (Σ_{i=0}^{n−1} y_i 2^(ip)) mod (2^p − 1) .

Since 0 ≤ h(x), h(y) < 2^p − 1, we have that −(2^p − 1) < h(x) − h(y) < 2^p − 1. If we show that (h(x) − h(y)) mod (2^p − 1) = 0, then h(x) = h(y).

Since the sums in the hash functions are the same except for indices a and b, we have

    (h(x) − h(y)) mod (2^p − 1)
      = ((x_a 2^(ap) + x_b 2^(bp)) − (y_a 2^(ap) + y_b 2^(bp))) mod (2^p − 1)
      = ((x_a 2^(ap) + x_b 2^(bp)) − (x_b 2^(ap) + x_a 2^(bp))) mod (2^p − 1)
      = ((x_a − x_b)2^(ap) − (x_a − x_b)2^(bp)) mod (2^p − 1)
      = ((x_a − x_b)(2^(ap) − 2^(bp))) mod (2^p − 1)
      = ((x_a − x_b)2^(bp)(2^((a−b)p) − 1)) mod (2^p − 1) .

By equation (A.5),

    Σ_{i=0}^{a−b−1} 2^(pi) = (2^((a−b)p) − 1)/(2^p − 1) ,

and multiplying both sides by 2^p − 1, we get

    2^((a−b)p) − 1 = (Σ_{i=0}^{a−b−1} 2^(pi))(2^p − 1) .

Thus,

    (h(x) − h(y)) mod (2^p − 1)
      = ((x_a − x_b)2^(bp)(Σ_{i=0}^{a−b−1} 2^(pi))(2^p − 1)) mod (2^p − 1)
      = 0 ,

since one of the factors is 2^p − 1.

We have shown that (h(x) − h(y)) mod (2^p − 1) = 0, and so h(x) = h(y).
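The claim is easy to check numerically. Below is a small Python sketch of this hash function (interpreting a character string in radix 2^p and reducing mod 2^p − 1), showing that permuting a string leaves h unchanged; the parameter p = 7 is an arbitrary choice for the example.

```python
def h(chars, p):
    # Interpret the string in radix 2**p and reduce mod 2**p - 1.
    # chars[i] plays the role of x_i; each is assumed to lie in [0, 2**p).
    m = (1 << p) - 1
    return sum(c << (i * p) for i, c in enumerate(chars)) % m

p = 7                               # 7-bit characters, so m = 2**7 - 1 = 127
x = [ord(ch) for ch in "string"]
y = [ord(ch) for ch in "trings"]    # a permutation of the same characters
assert h(x, p) == h(y, p)

# Interchanging any single pair of characters also leaves h unchanged,
# which is the step the proof above actually establishes.
z = x[:]
z[0], z[4] = z[4], z[0]
assert h(z, p) == h(x, p)
```

The underlying reason is visible in the proof: 2^p ≡ 1 (mod 2^p − 1), so every character contributes its own value regardless of its position.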
Solution to Exercise 11.3-5

Let b = |B| and u = |U|. We start by showing that the total number of collisions is minimized by a hash function that maps u/b elements of U to each of the b values in B. For a given hash function, let u_j be the number of elements that map to j ∈ B. We have u = Σ_{j∈B} u_j. We also have that the number of collisions for a given value of j ∈ B is (u_j choose 2) = u_j(u_j − 1)/2.

Lemma
The total number of collisions is minimized when u_j = u/b for each j ∈ B.

Proof If u_j ≤ u/b, let us call j underloaded, and if u_j ≥ u/b, let us call j overloaded. Consider an unbalanced situation in which u_j ≠ u/b for at least one value j ∈ B. We can think of converting a balanced situation in which all u_j equal u/b into the unbalanced situation by repeatedly moving an element that maps to an underloaded value to map instead to an overloaded value. (If you think
We now show that each such move increases the number of collisions, so that all the moves together must increase the number of collisions Suppose that we move an element from an underloaded value j to an overloaded value k, and we leave all other elements alone Because j is underloaded and k is overloaded, u j ≤ u/b ≤ u k Considering just the collisions for values j and k, we have u j (u j − 1)/2 + u k (u k − 1)/2 collisions before the move and (u j − 1)(u j − 2)/2 + (u k + 1)u k /2 collisions afterward We wish to show that u j (u j − 1)/2 + u k (u k − 1)/2 < (u j − 1)(u j − 2)/2 + (u k + 1)u k /2 We have the following sequence of equivalent inequalities: u j < uk + 2u j < 2u k + −u k < u k − 2u j + 2 u j − u j + u k − u k < u − 3u j + + u + u k j k u j (u j − 1) + u k (u k − 1) < (u j − 1)(u j − 2) + (u k + 1)u k u j (u j − 1)/2 + u k (u k − 1)/2 < (u j − 1)(u j − 2)/2 + (u k + 1)u k /2 Thus, each move increases the number of collisions We conclude that the number of collisions is minimized when u j = u/b for each j ∈ B By the above lemma, for any hash function, the total number of collisions must be at least b(u/b)(u/b − 1)/2 The number of pairs of distinct elements is u = u(u − 1)/2 Thus, the number of collisions per pair of distinct elements must be at least u/b − b(u/b)(u/b − 1)/2 = u(u − 1)/2 u−1 u/b − > u 1 − = b u Thus, the bound on the probability of a collision for any pair of distinct elements can be no less than 1/b − 1/u = 1/|B| − 1/ |U | Solution to Problem 11-1 a Since we assume uniform hashing, we can use the same observation as is used in Corollary 11.7: that inserting a key entails an unsuccessful search followed by placing the key into the Þrst empty slot found As in the proof of Theorem 11.6, if we let X be the random variable denoting the number of probes in an unsuccessful search, then Pr {X ≥ i } ≤ α i−1 Since n ≤ m/2, we have α ≤ 1/2 Letting i = k + 1, we have Pr {X > k} = Pr {X ≥ k + 1} ≤ (1/2)(k+1)−1 = 2−k Solutions for Chapter 11: Hash 
b. Substituting k = 2 lg n into the statement of part (a) yields that the probability that the ith insertion requires more than k = 2 lg n probes is at most 2^{−2 lg n} = (2^{lg n})^{−2} = n^{−2} = 1/n².

c. Let the event A be X > 2 lg n, and for i = 1, 2, ..., n, let the event A_i be X_i > 2 lg n. In part (b), we showed that Pr{A_i} ≤ 1/n² for i = 1, 2, ..., n. From how we defined these events, A = A_1 ∪ A_2 ∪ ··· ∪ A_n. Using Boole's inequality, (C.18), we have

Pr{A} ≤ Pr{A_1} + Pr{A_2} + ··· + Pr{A_n}
      ≤ n · (1/n²)
      = 1/n .

d. We use the definition of expectation and break the sum into two parts:

E[X] = Σ_{k=1}^{n} k · Pr{X = k}
     = Σ_{k=1}^{⌈2 lg n⌉} k · Pr{X = k} + Σ_{k=⌈2 lg n⌉+1}^{n} k · Pr{X = k}
     ≤ Σ_{k=1}^{⌈2 lg n⌉} ⌈2 lg n⌉ · Pr{X = k} + Σ_{k=⌈2 lg n⌉+1}^{n} n · Pr{X = k}
     = ⌈2 lg n⌉ Σ_{k=1}^{⌈2 lg n⌉} Pr{X = k} + n Σ_{k=⌈2 lg n⌉+1}^{n} Pr{X = k} .

Since X takes on exactly one value, we have that Σ_{k=1}^{⌈2 lg n⌉} Pr{X = k} = Pr{X ≤ ⌈2 lg n⌉} ≤ 1 and Σ_{k=⌈2 lg n⌉+1}^{n} Pr{X = k} ≤ Pr{X > 2 lg n} ≤ 1/n, by part (c). Therefore,

E[X] ≤ ⌈2 lg n⌉ · 1 + n · (1/n) = ⌈2 lg n⌉ + 1 = O(lg n) .

Solution to Problem 11-2

a. A particular key is hashed to a particular slot with probability 1/n. Suppose we select a specific set of k keys. The probability that these k keys are inserted into the slot in question and that all other keys are inserted elsewhere is

(1/n)^k (1 − 1/n)^{n−k} .

Since there are (n choose k) ways to choose our k keys, we get

Q_k = (n choose k) (1/n)^k (1 − 1/n)^{n−k} .

b. For i = 1, 2, ..., n, let X_i be a random variable denoting the number of keys that hash to slot i, and let A_i be the event that X_i = k, i.e., that exactly k keys hash to slot i. From part (a), we have Pr{A_i} = Q_k. Then,

P_k = Pr{M = k}
    = Pr{max_{1≤i≤n} X_i = k}
    = Pr{there exists i such that X_i = k and that X_i ≤ k for i = 1, 2, ..., n}
    ≤ Pr{there exists i such that X_i = k}
    = Pr{A_1 ∪ A_2 ∪ ··· ∪ A_n}
    ≤ Pr{A_1} + Pr{A_2} + ··· + Pr{A_n}   (by inequality (C.18))
    = n Q_k .

c. We start by showing two facts. First, 1 − 1/n < 1, which implies (1 − 1/n)^{n−k} < 1. Second, n!/(n−k)!
= n · (n−1) · (n−2) ··· (n−k+1) < n^k. Using these facts, along with the simplification k! > (k/e)^k of equation (3.17), we have

Q_k = (n choose k) (1/n)^k (1 − 1/n)^{n−k}
    = (n!/(k!(n−k)!)) (1/n)^k (1 − 1/n)^{n−k}
    < (n!/(k!(n−k)!)) (1/n)^k        ((1 − 1/n)^{n−k} < 1)
    < 1/k!                           (n!/(n−k)! < n^k)
    < e^k/k^k                        (k! > (k/e)^k) .

d. Notice that when n = 2, lg lg n = 0, so to be precise, we need to assume that n ≥ 3. In part (c), we showed that Q_k < e^k/k^k for any k; in particular, this inequality holds for k_0. Thus, it suffices to show that e^{k_0}/k_0^{k_0} < 1/n³ or, equivalently, that n³ < k_0^{k_0}/e^{k_0}.

Taking logarithms of both sides gives an equivalent condition:

3 lg n < k_0 (lg k_0 − lg e)
       = (c lg n / lg lg n)(lg c + lg lg n − lg lg lg n − lg e) .

Dividing both sides by lg n gives the condition

3 < (c / lg lg n)(lg c + lg lg n − lg lg lg n − lg e)
  = c (1 + (lg c − lg e)/lg lg n − (lg lg lg n)/lg lg n) .

Let x be the last expression in parentheses:

x = 1 + (lg c − lg e)/lg lg n − (lg lg lg n)/lg lg n .

We need to show that there exists a constant c > 1 such that 3 < cx. Noting that lim_{n→∞} x = 1, we see that there exists n_0 such that x ≥ 1/2 for all n ≥ n_0. Thus, any constant c > 6 works for n ≥ n_0.

We handle smaller values of n (in particular, 3 ≤ n < n_0) as follows. Since n is constrained to be an integer, there are a finite number of n in the range 3 ≤ n < n_0. We can evaluate the expression x for each such value of n and determine a value of c for which 3 < cx for all values of n. The final value of c that we use is the larger of

• 6, which works for all n ≥ n_0, and
• the largest value of c determined over the finitely many n in the range 3 ≤ n < n_0.

For k ≥ k_0, we will show that we can pick the constant c such that Q_k < 1/n³ for all k ≥ k_0, and thus conclude, by part (b), that P_k ≤ n Q_k < 1/n² for all k ≥ k_0. To pick c as required, we let c be large enough that k_0 > 3 > e. Then e/k < 1 for all k ≥ k_0, and so e^k/k^k decreases as k increases. Thus,

Q_k < e^k/k^k ≤ e^{k_0}/k_0^{k_0} < 1/n³   for k ≥ k_0 .

e. The expectation of M is

E[M] = Σ_{k=0}^{n} k · Pr{M = k}
     = Σ_{k=0}^{k_0} k · Pr{M = k} + Σ_{k=k_0+1}^{n} k · Pr{M = k}
     ≤ Σ_{k=0}^{k_0} k_0 · Pr{M = k} + Σ_{k=k_0+1}^{n} n · Pr{M = k}
     = k_0 Σ_{k=0}^{k_0} Pr{M = k} + n Σ_{k=k_0+1}^{n} Pr{M = k}
= k_0 · Pr{M ≤ k_0} + n · Pr{M > k_0} ,

which is what we needed to show, since k_0 = c lg n / lg lg n.

To show that E[M] = O(lg n / lg lg n), note that Pr{M ≤ k_0} ≤ 1 and

Pr{M > k_0} = Σ_{k=k_0+1}^{n} Pr{M = k}
            = Σ_{k=k_0+1}^{n} P_k
            < Σ_{k=k_0+1}^{n} 1/n²   (by part (d))
            < n · (1/n²)
            = 1/n .

We conclude that E[M] ≤ k_0 · 1 + n · (1/n) = k_0 + 1 = O(lg n / lg lg n).

Solution to Problem 11-3

a. From how the probe-sequence computation is specified, it is easy to see that the probe sequence is

h(k), h(k) + 1, h(k) + 1 + 2, h(k) + 1 + 2 + 3, ..., h(k) + 1 + 2 + 3 + ··· + i, ... ,

where all the arithmetic is modulo m. Starting the probe numbers from 0, the ith probe is offset (modulo m) from h(k) by

Σ_{j=0}^{i} j = i(i + 1)/2 .

Thus, we can write the probe sequence as

h′(k, i) = (h(k) + (1/2)i + (1/2)i²) mod m ,

which demonstrates that this scheme is a special case of quadratic probing, with c_1 = c_2 = 1/2.

b. Let h′(k, i) denote the ith probe of our scheme. We saw in part (a) that h′(k, i) = (h(k) + i(i + 1)/2) mod m. To show that our algorithm examines every table position in the worst case, we show that for a given key, each of the first m probes hashes to a distinct value. That is, for any key k and for any probe numbers i and j such that 0 ≤ i < j < m, we have h′(k, i) ≠ h′(k, j). We do so by showing that h′(k, i) = h′(k, j) yields a contradiction.

Let us assume that there exists a key k and probe numbers i and j satisfying 0 ≤ i < j < m for which h′(k, i) = h′(k, j). Then

h(k) + i(i + 1)/2 ≡ h(k) + j(j + 1)/2 (mod m) ,

which in turn implies that

i(i + 1)/2 ≡ j(j + 1)/2 (mod m) ,

or

j(j + 1)/2 − i(i + 1)/2 ≡ 0 (mod m) .

Since j(j + 1)/2 − i(i + 1)/2 = (j − i)(j + i + 1)/2, we have

(j − i)(j + i + 1)/2 ≡ 0 (mod m) .

The factors j − i and j + i + 1 must have different parities, i.e., j − i is even if and only if j + i + 1 is odd. (Work out the various cases in which i and j are even and odd.)
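The probe sequence of Problem 11-3 can be sketched directly. This small illustration (our own; the name `h_prime` is made up) shows that for a power-of-two table size the first m probes visit all m slots, as the proof of part (b) establishes, while a non-power-of-two size such as m = 12 revisits a slot.

```python
def h_prime(h_k, i, m):
    # i-th probe: home slot plus the triangular-number offset i(i+1)/2,
    # i.e., quadratic probing with c1 = c2 = 1/2.
    return (h_k + i * (i + 1) // 2) % m

# Powers of 2: the first m probes form a permutation of the m slots.
for m in (2, 4, 8, 16, 32, 64):
    probes = {h_prime(0, i, m) for i in range(m)}
    assert len(probes) == m

m = 16
seq = [h_prime(3, i, m) for i in range(m)]      # a power of 2: all slots hit

m_bad = 12                                      # not a power of 2
seq_bad = [h_prime(3, i, m_bad) for i in range(m_bad)]
# probes 2 and 5 both land on slot (3 + 3) % 12 = (3 + 15) % 12 = 6
```

For m = 12 the offsets 3 and 15 coincide modulo 12, so the sequence repeats a slot before covering the table, which is why the power-of-2 assumption matters.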
Since (j − i)(j + i + 1)/2 ≡ 0 (mod m), we have (j − i)(j + i + 1)/2 = rm for some integer r or, equivalently, (j − i)(j + i + 1) = r · 2m. Using the assumption that m is a power of 2, let m = 2^p for some nonnegative integer p, so that now we have (j − i)(j + i + 1) = r · 2^{p+1}. Because exactly one of the factors j − i and j + i + 1 is even, 2^{p+1} must divide one of the factors. It cannot be j − i, since j − i < m < 2^{p+1}. But it also cannot be j + i + 1, since j + i + 1 ≤ (m − 1) + (m − 2) + 1 = 2m − 2 < 2^{p+1}. Thus we have derived the contradiction that 2^{p+1} divides neither of the factors j − i and j + i + 1. We conclude that h′(k, i) ≠ h′(k, j).

Lecture Notes for Chapter 12: Binary Search Trees

Chapter 12 overview

Search trees
• Data structures that support many dynamic-set operations.
• Can be used both as a dictionary and as a priority queue.
• Basic operations take time proportional to the height of the tree:
  • for a complete binary tree with n nodes, worst case Θ(lg n);
  • for a linear chain of n nodes, worst case Θ(n).

Different types of search trees include binary search trees, red-black trees (covered in Chapter 13), and B-trees (covered in Chapter 18). We will cover binary search trees, tree walks, and operations on binary search trees.

Binary search trees

Binary search trees are an important data structure for dynamic sets.
• Accomplish many dynamic-set operations in O(h) time, where h = height of tree.
• As in Section 10.4, we represent a binary tree by a linked data structure in which each node is an object.
• root[T] points to the root of tree T.
• Each node contains the fields
  • key (and possibly other satellite data),
  • left: points to left child,
  • right: points to right child,
  • p: points to parent. p[root[T]] = NIL.

Stored keys must satisfy the binary-search-tree property:
• If y is in left subtree of x, then key[y] ≤ key[x].
• If y is in right subtree of x, then key[y] ≥ key[x].

Draw
a sample tree. [This is Figure 12.1(a) from the text, using A, B, D, F, H, K in place of 2, 3, 5, 5, 7, 8, with alphabetic comparisons. It's OK to have duplicate keys, though there are none in this example. Show that the binary-search-tree property holds.]

        F
      /   \
     B     H
    / \     \
   A   D     K

The binary-search-tree property allows us to print keys in a binary search tree in order, recursively, using an algorithm called an inorder tree walk. Elements are printed in monotonically increasing order.

How INORDER-TREE-WALK works:
• Check to make sure that x is not NIL.
• Recursively, print the keys of the nodes in x's left subtree.
• Print x's key.
• Recursively, print the keys of the nodes in x's right subtree.

INORDER-TREE-WALK(x)
  if x ≠ NIL
    then INORDER-TREE-WALK(left[x])
         print key[x]
         INORDER-TREE-WALK(right[x])

Example: Do the inorder tree walk on the example above, getting the output A B D F H K.

Correctness: Follows by induction directly from the binary-search-tree property.

Time: Intuitively, the walk takes Θ(n) time for a tree with n nodes, because we visit and print each node once. [Book has formal proof.]

Querying a binary search tree

Searching

TREE-SEARCH(x, k)
  if x = NIL or k = key[x]
    then return x
  if k < key[x]
    then return TREE-SEARCH(left[x], k)
    else return TREE-SEARCH(right[x], k)

Initial call is TREE-SEARCH(root[T], k).
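The two procedures above translate almost line for line into a short runnable sketch (an illustration with our own class and function names, not code from the text):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder(x, out):
    # INORDER-TREE-WALK: left subtree, then the node, then right subtree
    if x is not None:
        inorder(x.left, out)
        out.append(x.key)
        inorder(x.right, out)

def tree_search(x, k):
    # TREE-SEARCH: follow a single root-to-leaf path, O(h) time
    if x is None or k == x.key:
        return x
    if k < x.key:
        return tree_search(x.left, k)
    return tree_search(x.right, k)

# The sample tree from the figure: F at the root, B and H below, then A, D, K.
root = Node("F", Node("B", Node("A"), Node("D")), Node("H", None, Node("K")))
keys = []
inorder(root, keys)   # collects the keys in sorted order
```

The walk produces A, B, D, F, H, K, in monotonically increasing order, and a search for an absent key falls off the tree and returns the NIL analogue `None`.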
