A Fast Backtracking Algorithm to Test Directed Graphs for Isomorphism Using Distance Matrices

DOUGLAS C. SCHMIDT, Vanderbilt University, Nashville, Tennessee
AND
LARRY E. DRUFFEL, United States Air Force Academy, Colorado

ABSTRACT. A backtracking algorithm for testing a pair of digraphs for isomorphism is presented. The information contained in the distance matrix representation of a graph is used to establish an initial partition of the graph's vertices. This distance matrix information is then applied in a backtracking procedure to reduce the search tree of possible mappings. While the algorithm is not guaranteed to run in polynomial time, it performs efficiently for a large class of graphs.

KEY WORDS AND PHRASES: backtrack programming, digraph, distance matrix, isomorphism, partitioning

CR CATEGORIES: 5.32

Copyright © 1976, Association for Computing Machinery, Inc. General permission to republish, but not for profit, all or part of this material is granted provided that ACM's copyright notice is given and that reference is made to the publication, to its date of issue, and to the fact that reprinting privileges were granted by permission of the Association for Computing Machinery. This work was done at the Systems and Information Science Department of the Vanderbilt Engineering School, Vanderbilt University, Nashville, Tennessee, and was supported in part by a Vanderbilt University Research Council Grant. Authors' addresses: D.C. Schmidt, Systems and Information Science Department, Vanderbilt University, Box 6146, Station B, Nashville, TN 37235; L.E. Druffel, Department of Astronautics and Computer Science, United States Air Force Academy, CO 80840.

Journal of the Association for Computing Machinery, Vol. 23, No. 3, July 1976, pp. 433-445.

A new backtracking algorithm for testing a pair of directed graphs for isomorphism is presented in this paper. The algorithm is efficient for a large class of graphs and terminates either providing the isomorphism or indicating that there is no isomorphism for the given pair of graphs.

Given two graphs, G¹ and G², the problem of determining an isomorphism, if it exists, is important. It has application in a variety of fields including chemistry, network theory, and information retrieval. The problem can be solved by examining all N! permutations of N vertices; however, this approach is practical only for graphs with very small N. Although there have been attempts to find a mathematical function which will identify isomorphic graphs [1, 2], no such function has been found which can be computed in polynomial time. The problem is known to be in the set of NP (nondeterministic polynomial) problems, but it is not known whether it is NP-complete (see Karp [3]).

Since heuristics have been employed successfully on a variety of problems which have exhaustive search solutions of factorial or exponential order, they were applied to the graph isomorphism problem by Unger [4], who reported processing times for some 24-node graphs at 2 min on an IBM 7090. For some 12-node graphs, however, the time was up to 7½ min. Other heuristic algorithms have been proposed [5-7].

Shah, Davida, and McCarthy [8] have proposed a backtracking algorithm for uniquely coding graphs. If the unique codes for a pair of graphs are the same, then the coding defines an isomorphism. Another type of graph coding has recently been applied by Berztiss [9], using a backtrack procedure to determine all isomorphisms. The processing times range from 36 to 241 sec for 12-vertex graphs on an IBM 360/50.
Corneil and Gotlieb [10, 11] proposed a very powerful algorithm for generating the automorphism partition which leads to a polynomial time isomorphism algorithm. The method was based on a conjecture which R. Mathon has since shown is not true [12]; therefore their algorithm becomes a very useful backtracking method. They report times as low as .033 min for a 60-vertex graph, and up to 38.7 min for a 60-vertex polygon, using an IBM 7094. Hopcroft and Tarjan [13], Hopcroft and Wong [14], and Weinberg [15] have all developed polynomial time methods for testing planar graphs for isomorphism. Unfortunately, each of these algorithms relies on properties of planar graphs which do not extend to the general case.

Fundamental Definitions and Relationships

Before describing the algorithm, the isomorphism problem will be formally defined. Many of the terms and concepts used here are well known in graph theory; the terminology used in this paper is adapted from Behzad and Chartrand [16].

Given a pair of graphs, G¹ and G², an isomorphism is a one-to-one mapping φ from the vertices of G¹ onto the vertices of G² such that φ preserves adjacency and nonadjacency of the vertices. (A superscript will be appended to all variables describing properties of a graph when a specific graph is being discussed; thus v_i¹ is the ith vertex of G¹, etc.) In terms of the adjacency matrix, two graphs G¹ and G² are isomorphic if a permutation of the rows and corresponding columns of adjacency matrix A¹ will produce the adjacency matrix A².

In this paper the concept of partitioning by the composition of vectors will be used. For illustrative purposes, the composition of a pair of vectors will be defined to be the termwise juxtaposition of their elements. In practice this process can be performed in linear time using a list technique developed by Hopcroft and Wong [14].

A degree sequence of a graph is merely a listing of the degrees. Indegree and outdegree sequences can similarly be defined. In terms of the adjacency matrix, the degree sequence can be generated by summing the rows and columns corresponding to each vertex. The outdegree sequence, the indegree sequence, and the degree sequence for Graph 1 (Figure 1) are [3,2,2,2,2,2], [2,2,2,3,2,2], and [5,4,4,5,4,4], respectively. For any graph G² to be isomorphic to G¹, G² must exhibit the same degree sequences as G¹. This is a necessary but not sufficient condition for isomorphism.

[FIG. 1. Graph 1]

Most existing algorithms use this necessary condition to partition vertices according to their degrees. Thus existing heuristic algorithms consist of two phases: (1) generate an initial partitioning of vertices based on the degrees of the vertices, and (2) employ heuristics which attempt to consider the neighborhood of each respective vertex. These heuristics represent attempts to characterize a vertex by its relationship with ever increasing numbers of vertices which are more distantly connected. What seems to be needed is a way of evaluating the relationship of a vertex with all other vertices, or ideally, some way to evaluate its relationship with the rest of the graph.

The Distance Matrix

The distance matrix is a characterization of a graph which offers information on the relationship between all vertices in the given graph. The distance matrix D is an N×N matrix in which the element d_ij represents the length of the shortest path between the vertices v_i and v_j. For every pair of vertices v_i and v_j there is a unique minimum distance. If i = j, then d_ij = 0.
If no path exists between the two vertices, the length is defined to be infinite. Theorem 1 is a restatement of a theorem introduced by Hakimi and Yau [17].

THEOREM 1. If G is an N-vertex realization of D, then G is unique.

Given a graph, generation of the distance matrix is a matter of finding the shortest distance between every pair of vertices. A number of algorithms have been developed to construct the distance matrix, many of which are summarized by Dreyfus [18]. Floyd's algorithm [19], while of order O(N³), is simple and convenient to implement. Given the adjacency matrix of Graph 1 (Table I), any of these algorithms will yield the distance matrix shown in Table II.

TABLE I. ADJACENCY MATRIX FOR GRAPH 1

      1  2  3  4  5  6
  1   0  1  0  0  1  1
  2   0  0  0  1  1  0
  3   0  1  0  1  0  0
  4   1  0  1  0  0  0
  5   0  0  0  1  0  1
  6   1  0  1  0  0  0

TABLE II. DISTANCE MATRIX FOR GRAPH 1

      1  2  3  4  5  6
  1   0  1  2  2  1  1
  2   2  0  2  1  1  2
  3   2  1  0  1  2  3
  4   1  2  1  0  2  2
  5   2  3  2  1  0  1
  6   1  2  1  2  2  0

An Improved Initial Partitioning

Since the distance matrix is a unique representation of a graph and since it contains information concerning the relationship between vertices in the graph, it offers a means of finding an initial partition which may be finer than that obtained by using the degrees of the vertices.

Define a row characteristic matrix XR to be an N × (N−1) matrix such that the element xr_im is the number of vertices which are a distance m away from v_i. Similarly define a column characteristic matrix XC such that each element xc_im is the number of vertices from which v_i is a distance m. A characteristic matrix X is formed by composing the corresponding rows of XR and XC. To illustrate the formation of these matrices, consider Graph 1, which is defined by the adjacency matrix in Table I and the distance matrix shown in Table II. The characteristic matrices XR, XC, and X for Graph 1 are shown in Tables III-V. The characteristic matrix will be represented by the termwise juxtaposition of the appropriate elements in the array.

Information from the characteristic matrix may be used to form an initial partition: v_i¹ will map to v_j² in an isomorphism only if x_im¹ = x_jm² for all m. Vertices which exhibit identical rows of the characteristic matrix will be assigned the same class. In order to describe the partitioning process, a class vector C will be defined such that the element c_i is the class of vertex v_i. All vertices with the same class are then partitioned into the same cell. The partition so obtained from the characteristic matrix is often superior to that obtained from the degree sequence. Algorithm 1 (Appendix A) is a formal definition of the initial partitioning process.

For example, let G¹ be Graph 1 and G² be Graph 2 (Figure 2). The distance matrix for Graph 2 is given in Table VI and the characteristic matrix X in Table VII. The initial partition, defined by C¹ = [1,2,3,4,3,2] and C² = [2,3,4,3,2,1], is [(1)(2,6)(3,5)(4)] and [(6)(1,5)(2,4)(3)]. The initial partitioning using the vertex degrees is [(1)(2,3,5,6)(4)] and [(6)(1,2,4,5)(3)]. The initial partition based on the distance matrix may be more refined than that based on the adjacency matrix and can never be less so.
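The preprocessing just described is straightforward to express in code. Below is a minimal sketch in Python (the paper's own implementation, discussed later, was in Fortran IV): Floyd's algorithm [19] produces the distance matrix, the characteristic rows are the termwise juxtaposition of XR and XC, and the initial partition follows the idea of Algorithm 1. The function names are illustrative rather than the authors', and class labels are assigned from one shared ordering, so they may differ from the numbering used in the example above while inducing the same cells.

```python
# Sketch of the distance-matrix preprocessing and an Algorithm 1-style initial
# partition (Python stand-in for the paper's Fortran IV; names are illustrative).
INF = float("inf")

def distance_matrix(A):
    """Floyd's algorithm [19]: all-pairs shortest path lengths in O(N^3)."""
    n = len(A)
    D = [[0 if i == j else (1 if A[i][j] else INF) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

def characteristic_rows(D):
    """Row i of X: termwise juxtaposition of XR and XC, where xr[i][m] counts
    vertices at distance m+1 from v_i and xc[i][m] counts vertices from which
    v_i is at distance m+1."""
    n = len(D)
    rows = []
    for i in range(n):
        xr = [sum(1 for j in range(n) if D[i][j] == m) for m in range(1, n)]
        xc = [sum(1 for j in range(n) if D[j][i] == m) for m in range(1, n)]
        rows.append(tuple(zip(xr, xc)))
    return rows

def initial_partitions(Da, Db):
    """Class vectors for both graphs: vertices with identical rows of X share a
    class; a single label map covers both graphs so the labels are comparable."""
    ra, rb = characteristic_rows(Da), characteristic_rows(Db)
    label = {r: c + 1 for c, r in enumerate(sorted(set(ra) | set(rb)))}
    return [label[r] for r in ra], [label[r] for r in rb]

# Graph 1 (Table I) and Graph 2 (arcs read off Table VI: an arc wherever the
# distance is 1); vertices 1..6 of the text are indices 0..5 here.
A1 = [[0,1,0,0,1,1], [0,0,0,1,1,0], [0,1,0,1,0,0],
      [1,0,1,0,0,0], [0,0,0,1,0,1], [1,0,1,0,0,0]]
A2 = [[0,0,0,1,0,1], [1,0,1,0,0,0], [0,0,0,1,0,1],
      [0,0,1,0,1,0], [0,1,1,0,0,0], [1,1,0,0,1,0]]
D1, D2 = distance_matrix(A1), distance_matrix(A2)   # reproduce Tables II and VI
C1, C2 = initial_partitions(D1, D2)
# C1 places v1 and v4 in cells of their own and pairs {v2,v6} and {v3,v5},
# i.e. the partition [(1)(2,6)(3,5)(4)] of the text; C2 gives [(6)(1,5)(2,4)(3)].
```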
TABLE III. ROW CHARACTERISTIC MATRIX FOR GRAPH 1

      1  2  3  4  5
  1   3  2  0  0  0
  2   2  3  0  0  0
  3   2  2  1  0  0
  4   2  3  0  0  0
  5   2  2  1  0  0
  6   2  3  0  0  0

TABLE IV. COLUMN CHARACTERISTIC MATRIX FOR GRAPH 1

      1  2  3  4  5
  1   2  3  0  0  0
  2   2  2  1  0  0
  3   2  3  0  0  0
  4   3  2  0  0  0
  5   2  3  0  0  0
  6   2  2  1  0  0

TABLE V. CHARACTERISTIC MATRIX FOR GRAPH 1

      1   2   3   4   5
  1   32  23  00  00  00
  2   22  32  01  00  00
  3   22  23  10  00  00
  4   23  32  00  00  00
  5   22  23  10  00  00
  6   22  32  01  00  00

TABLE VI. DISTANCE MATRIX FOR GRAPH 2

      1  2  3  4  5  6
  1   0  2  2  1  2  1
  2   1  0  1  2  3  2
  3   2  2  0  1  2  1
  4   3  2  1  0  1  2
  5   2  1  1  2  0  2
  6   1  1  2  2  1  0

[FIG. 2. Graph 2]

TABLE VII. CHARACTERISTIC MATRIX FOR GRAPH 2

      1   2   3   4   5
  1   22  32  01  00  00
  2   22  23  10  00  00
  3   23  32  00  00  00
  4   22  23  10  00  00
  5   22  32  01  00  00
  6   32  23  00  00  00

THEOREM 2. If two vertices v_i and v_j are partitioned into separate classes by the degree sequence, they will also be partitioned into separate classes by the characteristic vector.

PROOF. The elements xr_i1 of the row characteristic matrix are the elements of the outdegree sequence. A similar result follows for the column characteristic matrix elements xc_i1.

Since the initial partition can be used to limit the size of the search tree, it may reduce the amount of computation that the backtracking algorithm must do. In some cases it will provide a means of immediately determining that no isomorphism exists for a given pair of graphs. Define a class count vector K such that the element k_i is the number of vertices in class i. It follows that if G¹ is isomorphic to G², then k_i¹ = k_i² for all i. Pairs of graphs which do not have identical class count vectors may be immediately rejected when the initial partition is computed.

The Backtracking Algorithm

While the initial partition generated by Algorithm 1 usually reduces the size of the search tree, in general it does not uniquely characterize the vertices of a graph. For instance, the initial partitioning of vertices, where G¹ is Graph 1 and G² is Graph 2, requires that if G¹ and G² are isomorphic, then v₁¹ will be mapped to v₆² and v₄¹ will be mapped to v₃². But this leaves some uncertainty about the mappings for vertices v₂¹, v₆¹ and v₃¹, v₅¹. Even though the initial partition using the distance matrix may be an improvement over that of the degree sequence, it does not uniquely characterize the vertices.

A second algorithm which selects possible vertex mappings between the two graphs and checks these mappings for consistency is used to resolve ambiguities in the mappings. The mapping of v_i¹ to v_r² is consistent if: (1) every element d_ij¹ = d_rs² and d_ji¹ = d_sr², for all j, s such that v_j¹ has been mapped to v_s²; and (2) every element d_ik¹ (v_k¹ not previously mapped) has a corresponding d_rp² (v_p² not previously mapped) such that c_k¹ = c_p². Thus a consistent mapping implies that row i and column i of D¹ have corresponding elements in row r and column r of D², at least for all previously mapped vertices, and that the remaining elements of those rows and columns do not preclude further consistent vertex mappings if any mappings remain.

The algorithm will choose pairs of vertices v_i¹ and v_r² which are in the same class and will investigate the consistency of mapping v_i¹ to v_r². Such a mapping is asserted by composing the class vector for the appropriate graph with the respective rows and columns of the distance matrices. The composition generates a new class vector, which defines a refined partition. If the mapping is consistent, another pair of vertices can be chosen for mapping.
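A compact sketch of this consistency test, again in Python with illustrative names (the paper's formal statement is Algorithm 2 and its appendix, which this preview does not reproduce): condition (1) is checked directly against the already-mapped pairs, and condition (2) is checked by performing the composition for both graphs under one shared relabeling and comparing the resulting class counts.

```python
# Composition (refinement) step and consistency test for a proposed mapping of
# v_i of G1 to v_r of G2; a sketch, not the authors' exact procedure.
from collections import Counter

def refine_pair(C1, D1, i, C2, D2, r):
    """Compose C1 with row/column i of D1 and C2 with row/column r of D2.
    Each vertex u receives the composed key (class, distance from, distance to);
    one label map covers both graphs so equal keys get equal class numbers."""
    k1 = [(C1[u], D1[i][u], D1[u][i]) for u in range(len(C1))]
    k2 = [(C2[u], D2[r][u], D2[u][r]) for u in range(len(C2))]
    label = {k: n + 1 for n, k in enumerate(sorted(set(k1) | set(k2)))}
    return [label[k] for k in k1], [label[k] for k in k2]

def consistent(i, r, mapping, C1, D1, C2, D2):
    """Return the refined class vectors if mapping i -> r is consistent, else None.
    mapping holds the previously asserted pairs (vertex of G1 -> vertex of G2)."""
    for j, s in mapping.items():                      # condition (1)
        if D1[i][j] != D2[r][s] or D1[j][i] != D2[s][r]:
            return None
    newC1, newC2 = refine_pair(C1, D1, i, C2, D2, r)  # condition (2): the refined
    if Counter(newC1) != Counter(newC2):              # class count vectors must agree
        return None
    return newC1, newC2
```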
If a partition is reached such that there are no consistent mappings, then a mapping between two vertices for which the mapping is not an isomorphism has been attempted, and it is necessary to backtrack to try another mapping. A recursive algorithm which describes this process is given in Algorithm 2 (Appendix B). Since a new class vector and a new class count vector will be generated at each level of the recursive process, an index will be added to these vectors. Thus C_t¹ will refer to the class vector for Graph 1 generated at level t, and an element c_it¹ of the vector refers to the class of vertex v_i¹ in Graph 1 generated at level t. Similarly, an index will be added to the vector notation for the class count vector K.

Taking G¹ as Graph 1 and G² as Graph 2, the example begun in the last section will be continued to illustrate the selection and refinement process by applying Algorithm 2 (the corresponding step number is indicated in parentheses).

(1) t = 0. P = 0.
(2) C₀¹ = [1,2,3,4,3,2], C₀² = [2,3,4,3,2,1]; K₀¹ = [1,2,2,1,0,0], K₀² = [1,2,2,1,0,0].
(3) t < N.
(4) t = P₀ = 0.
(5) t = 0.
(6) Choose i = 1, Pb = 1.
(7) Choose r = 6.
(8) r exists; go to 10.
(10) Compose C₀¹ with row 1 and column 1 of D¹, and C₀² with row 6 and column 6 of D², yielding C₁¹ = [1,2,3,4,5,6], C₁² = [6,5,4,3,2,1].
(11) K₁¹ = [1,1,1,1,1,1], K₁² = [1,1,1,1,1,1].
(12) K₁¹ = K₁².
(13) t = 1.
(14) Go to 3.

Note that, although the algorithm must continue verifying the consistency of the mappings, each of the vertices has a unique class assignment, so that the mapping at each level is determined. Since there was no arbitrary choice made at any level, either the mapping φ = [1-6, 2-5, 3-4, 4-3, 5-2, 6-1] is an isomorphism or no isomorphism exists for the two graphs. In this case, it is easy to see that the mapping is an isomorphism. However, the algorithm must continue through all six levels to verify consistency.

Complexity of the Algorithm

Since O(N³) time is required to generate the distance matrix, at least for dense graphs, the order of the combined algorithms cannot be less than N³.

Algorithm 1 is reasonably easy to analyze. Formation of the row and column characteristic matrices requires O(N²) time for each matrix. Composition of the row and column characteristic matrices for the two graphs requires O(N²) time. Thus steps 1 through 3 require O(N²) time. The iteration steps 5 through 9 are bounded by O(N) time. Thus, the initial partition may be realized in O(N²) time, given the distance matrix.

Algorithm 2 is not as easily analyzed. Clearly the algorithm must reach level N of the tree to find an isomorphism. However, because it may backtrack, the number of branches is not easily determined. For each branch, a pair of vectors must be composed, which requires O(N) time, and consistency must be verified, which also requires O(N) time. Thus each branch requires O(N) time. Since there must be at least N branches, the lower bound of Algorithm 2 is O(N²) time. The upper bound is O(N·N!).
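To make the recursive structure concrete, here is a compact driver in the spirit of Algorithm 2, built on initial_partitions, refine_pair, and consistent from the sketches above. Since Appendix B is not part of this preview, the control flow is a reconstruction from the text rather than the authors' exact procedure; it makes its arbitrary choices from the smallest cell first, which is strategy 1 of a later section.

```python
# Recursive backtracking driver in the spirit of Algorithm 2 (reconstruction).
from collections import Counter

def isomorphism(D1, D2):
    """Return a mapping (dict, 0-based vertex of G1 -> vertex of G2) or None."""
    n = len(D1)
    if len(D2) != n:
        return None
    C1, C2 = initial_partitions(D1, D2)
    if Counter(C1) != Counter(C2):         # class count vectors differ: reject at once
        return None

    def search(C1, C2, mapping):
        if len(mapping) == n:
            return dict(mapping)
        unmapped = [v for v in range(n) if v not in mapping]
        cell = Counter(C1[v] for v in unmapped)
        i = min(unmapped, key=lambda v: (cell[C1[v]], v))   # smallest cell first
        used = set(mapping.values())
        for r in range(n):                 # candidate partners: same class, unused
            if r in used or C2[r] != C1[i]:
                continue
            res = consistent(i, r, mapping, C1, D1, C2, D2)
            if res is None:
                continue                   # inconsistent: try the next candidate
            mapping[i] = r
            found = search(res[0], res[1], mapping)
            if found is not None:
                return found
            del mapping[i]                 # backtrack
        return None

    return search(C1, C2, {})

# For the distance matrices of Graphs 1 and 2 built earlier, isomorphism(D1, D2)
# returns the reversal {0: 5, 1: 4, 2: 3, 3: 2, 4: 1, 5: 0}, i.e. the mapping
# phi = [1-6, 2-5, 3-4, 4-3, 5-2, 6-1] of the worked example.
```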
Although the distance matrix is powerful in obtaining an initial partitioning, there are some classes of graphs for which the information in the distance matrix is no better than that of the adjacency matrix. If, for every pair of nonadjacent vertices [v_i, v_j], there exists at least one vertex v_k such that the edges [v_i, v_k] and [v_k, v_j] exist, then the maximum distance will be two. For example, the information in the distance matrix may be suppressed by adding a single vertex which is adjacent to every vertex in the original graph. The distance matrix for Graph 3 (Figure 3), excluding vertex v₁₁ and shown in Table VIII, presents a characterization of its vertices in which differences are more obvious than in the distance matrix of Table IX, which reflects the addition of vertex v₁₁.

[FIG. 3. Graph 3]

TABLE VIII. DISTANCE MATRIX FOR GRAPH 3 WITHOUT v₁₁

       1  2  3  4  5  6  7  8  9 10
   1   0  2  3  2  2  1  3  2  1  3
   2   2  0  2  3  1  2  2  3  1  3
   3   3  2  0  2  1  2  2  2  3  1
   4   2  3  2  0  2  1  2  2  3  1
   5   2  1  1  2  0  1  1  2  2  2
   6   1  2  2  1  1  0  2  1  2  2
   7   3  2  2  2  1  2  0  1  3  1
   8   2  3  2  2  2  1  1  0  1  1
   9   1  1  3  3  2  2  3  1  0  2
  10   3  3  1  1  2  2  1  1  2  0

TABLE IX. DISTANCE MATRIX FOR GRAPH 3 INCLUDING v₁₁

       1  2  3  4  5  6  7  8  9 10 11
   1   0  2  2  2  2  1  2  2  1  2  1
   2   2  0  2  2  1  2  2  2  1  2  1
   3   2  2  0  2  1  2  2  2  2  1  1
   4   2  2  2  0  2  1  2  2  2  1  1
   5   2  1  1  2  0  1  1  2  2  2  1
   6   1  2  2  1  1  0  2  1  2  2  1
   7   2  2  2  2  1  2  0  1  2  1  1
   8   2  2  2  2  2  1  1  0  1  1  1
   9   1  1  2  2  2  2  2  1  0  2  1
  10   2  2  1  1  2  2  1  1  2  0  1
  11   1  1  1  1  1  1  1  1  1  1  0

However, the isomorphism of a pair of graphs is unchanged by the addition of an equal number of vertices adjacent to all other vertices, just as it is unchanged by the addition of an equal number of vertices with degree zero. Note that this is not known to be true in the general case (see Ulam's conjecture [20]), but it is certainly true for vertices of degree zero or degree N − 1. It is trivial to detect the presence of such vertices because the row and column will be all ones except for the diagonal element. These vertices may be removed from the graphs. In this case, the information may be retrieved as easily as it was suppressed, and the original graphs may be compared for isomorphism.

However, there are graphs for which the maximum distance is two and for which there is no simple way to distinguish one vertex from another. Graph 4 (Figure 4) is such a graph. The distance matrix for Graph 4 is shown in Table X. Clearly the distance matrix yields the same initial partition as the adjacency matrix for this class of graph.

[FIG. 4. Graph 4]

TABLE X. DISTANCE MATRIX FOR GRAPH 4

      1  2  3  4  5  6  7
  1   0  1  2  1  2  1  2
  2   1  0  1  2  2  2  2
  3   2  1  0  1  1  2  1
  4   1  2  1  0  2  1  1
  5   2  2  1  2  0  1  2
  6   1  2  2  1  1  0  1
  7   2  2  1  1  2  1  0
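Returning to the detection of such vertices: a row and column of all ones off the diagonal identifies a vertex adjacent to and from every other vertex, so it can be stripped before partitioning and the distance matrix recomputed. A small sketch using the distance_matrix helper above (illustrative only; as the text notes, both graphs must shed an equal number of such vertices, and A3 below stands for an adjacency matrix of Graph 3, which the preview does not list explicitly):

```python
def strip_universal_vertices(A, D):
    """Drop every vertex whose distance-matrix row and column are all ones off
    the diagonal, returning the reduced adjacency matrix; recomputing D on the
    result restores the information such vertices suppress (for Graph 3 this
    takes the situation of Table IX back to that of Table VIII)."""
    n = len(A)
    keep = [i for i in range(n)
            if any(D[i][j] != 1 or D[j][i] != 1 for j in range(n) if j != i)]
    return [[A[i][j] for j in keep] for i in keep]

# A3r = strip_universal_vertices(A3, distance_matrix(A3))   # A3: hypothetical input
# D3r = distance_matrix(A3r)
```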
A Dynamic Bound

Since the upper bound of Algorithm 2 may reach order factorial, it seems prudent to investigate the feasibility of predicting the difficulty of testing the isomorphism of a given pair of graphs. If there are L_i vertices in the ith cell of the initial partition, then an upper bound to the number of possible mappings which would have to be investigated is ∏_{i=1}^{k} (L_i!), where k is the number of cells in the initial partition. Typically this is a pessimistic bound. The amount of work required of the algorithm depends on the number of branches explored. Predicting the performance of the algorithm for a given pair of graphs therefore requires estimating the number of branches that must be investigated.

If any vertices are uniquely defined after the initial partition is obtained using Algorithm 1, then the mapping for these vertices presents no choice. If a mapping of v_i¹ to v_r² refines the partition so that another vertex mapping is uniquely determined, then that mapping is also made. All uniquely defined vertices are thereby mapped. For many graphs, these successive assertions will, in fact, lead to a termination of the algorithm. For others, a level will be encountered at which a choice must be made; designate this level by p.

To estimate the worst possible case, assume that G¹ has no automorphism other than the trivial automorphism and that G² is isomorphic to G¹ with the vertices arranged in such a way that the algorithm will be required to investigate every choice at each level. At this point, only the vertices of G¹ will be considered. Since all vertices which have identical adjacencies are interchangeable, clearly any decision concerning one is equivalent to a like decision concerning the other. A pair of vertices v_i, v_j are identical if d_ik = d_jk and d_ki = d_kj for all k ≠ i, j, and d_ij = d_ji. Identical vertices are temporarily put into separate classes. A vertex is then marked and the composition performed so that the next level is reached. This process is repeated at each level until the composition with the row of the characteristic matrix corresponding to some vertex generates a partitioning such that each remaining vertex is in a class by itself. Designate this level q. The algorithm returns to level p, but the vertices chosen at each level are still marked and will always be chosen at that level by step 6 of Algorithm 2.

The bound then is a count of the number of branches that might possibly be required to complete the algorithm. Let s_t be the size of the class from which the vertex was marked at level t; thus there are at most s_t branches from level t. The maximum number of branchings required is

    Σ_{t=p}^{q-1} ( ∏_{j=p}^{t} s_j )  +  (N − q) ∏_{j=p}^{q} s_j .

Note that this is a bound on the number of branches which would be required to map G¹ onto any other graph.

When all cells of a partition contain more than one vertex, some arbitrary choice must be made and the dynamic bounding is used. The vertices selected at each level of the bounding algorithm will always be selected at the respective level by step 6 of Algorithm 2. Rather than make arbitrary choices, several strategies were developed which "shape" the backtrack tree by making heuristically determined choices.

Strategy 1 chooses a vertex in the smallest cell, i.e. chooses some cell j such that L_j is smallest, then chooses some vertex v_i from cell j. With this strategy, refinement realized by asserting a pair of vertices in the smallest class may reduce the size of a larger cell and hence reduce the searching required at the next level. This strategy has the effect of reducing the breadth of search, but possibly permits a greater depth. It is easily and efficiently implemented.

Strategy 2 reduces the depth of searching by getting the most refined partition at the next level. This requires the composition of the respective row of the characteristic matrix for every vertex with the current partition, to determine which vertex would yield the most cells of the next partition. Such a strategy may decrease the depth of search but can also increase its breadth.

The consequence of choosing a vertex from cell j is that, at most, L_j ∏_i (L_i'!) mappings need be checked, where the primed components are from the partition at the next level. In strategy 3 the product is computed for each vertex, and that vertex which produces the minimum product is chosen. Such a strategy attempts to minimize the breadth and depth of the search at the same time. It also requires the intersection for each vertex to determine the next partition, and calculation of the term for each vertex.
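As a quick check of the bound as reconstructed above, the following few lines evaluate it for the choice sequence that the Graph 4 example below derives (p = 0, q = 2, s₀ = 2, s₁ = s₂ = 1, N = 7); the helper name is an illustrative choice, not the paper's.

```python
from math import prod

def dynamic_bound(s, p, q, N):
    """s[t] = size of the class from which a vertex was marked at level t;
    bound = sum_{t=p}^{q-1} prod_{j=p}^{t} s_j + (N - q) * prod_{j=p}^{q} s_j."""
    return sum(prod(s[p:t + 1]) for t in range(p, q)) + (N - q) * prod(s[p:q + 1])

print(dynamic_bound([2, 1, 1], p=0, q=2, N=7))   # 2 + 2*1 + (2*1*1)*(7 - 2) = 14
```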
As an example of the bounding, let Graph 4 be G¹. The distance matrix for Graph 4 is given in Table X. The initial partitioning is defined by C₀¹ = [1,2,3,3,2,3,1]. An arbitrary choice must be made, since no vertex is in a class by itself; therefore p = 0 and s₀ = 2. Using strategy 1, choose a vertex from the smallest cell, which leaves an arbitrary choice among v₁¹, v₂¹, v₅¹, and v₇¹. Arbitrarily choose v₁¹ and compose C₀¹ with row 1 and column 1 of D¹, which yields C₁¹ = [1,2,3,4,5,4,6]. Vertices v₁¹, v₂¹, v₃¹, v₅¹, and v₇¹ are then in cells of size 1, so that s₁ = 1. Arbitrarily choose v₂¹ and compose C₁¹ with row 2 and column 2 of D¹, yielding C₂¹ = [1,2,3,4,5,4,6]. No improvement has been realized, so s₂ = 1. Arbitrarily choose v₃¹ and compose C₂¹ with row 3 and column 3 of D¹, yielding C₃¹ = [1,2,3,4,5,6,7]. Since this composition drives all vertices into single cells, q = 2. Thus the maximum number of branches which could possibly be investigated in trying to match some graph G² to Graph 4, by making a fixed set of choices from the same cells in the corresponding partition of G² from which v₁¹, v₂¹, and v₃¹ were chosen, is (2 + (2·1) + (2·1·1)·(7 − 2)) = 14.

Implementation

Algorithms 1 and 2 were implemented using Fortran IV on a Xerox Data Systems Sigma-7 with 46K 32-bit words of available core storage. Without providing for swapping data, the practical limit for this implementation is N = 100. Since previously reported implementations, except that of Corneil, appeared to be time bound rather than core bound for graphs greater than 25 vertices, this seemed to be a reasonable limit. As it turned out, the implementation is core bound rather than time bound for most graphs. Using swapping techniques, with only one row of each matrix in core at a time, storage requirements can be reduced to O(N); however, severe penalties are paid in execution time for this.

Performance of the Algorithm

In evaluating performance, it is important to distinguish between the performance of the implementation and the performance of the algorithm. Performance of the implementation is measured by the total time required for the program to process a pair of graphs. The performance of the backtracking algorithm will be measured by the number of branches required to process a pair of graphs.

To test performance, a random graph generating program was used. All randomly generated nonisomorphic pairs of graphs were rejected at the initial partition. A second program was implemented to permute the adjacency matrix of a graph to create a second graph isomorphic to the first. No such pair of isomorphic graphs required Algorithm 2 to backtrack. Performance of Algorithm 2 in these cases was N times the work required for each branch, which is simply the composing of N-element vectors. This suggests an O(N²) performance in these cases.

The performance of the program is shown in Table XI. A difference analysis on this data indicates O(N²) performance for random graphs. However, random graphs do not generally contain interesting properties. Randomly generated regular graphs were tested and either were rejected as nonisomorphic at the initial partition or required no backtracking to prove the graphs were isomorphic.

TABLE XI. RUN TIMES IN MILLIMINUTES FOR RANDOM GRAPHS

    N    Distance matrix    Algorithm
   30          29               41
   40          88               54
   50         187               67
   60         336               80
   70         533              114
   80         798              148
   90        1122              191
  100        1551              224
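One way to reproduce this kind of experiment with the sketches above: generate a random digraph, permute its adjacency matrix to obtain an isomorphic copy (as the paper's second test program did), and run the search on the two distance matrices. The graph size, edge probability, and seeds below are arbitrary illustrative choices.

```python
import random

def random_digraph(n, p=0.3, seed=None):
    """Random digraph on n vertices: each arc present independently with probability p."""
    rng = random.Random(seed)
    return [[1 if i != j and rng.random() < p else 0 for j in range(n)] for i in range(n)]

def permuted_copy(A, seed=None):
    """Permute the rows and corresponding columns of A, giving an isomorphic graph."""
    n = len(A)
    perm = list(range(n))
    random.Random(seed).shuffle(perm)
    return [[A[perm[i]][perm[j]] for j in range(n)] for i in range(n)]

G = random_digraph(30, seed=1)
H = permuted_copy(G, seed=2)
phi = isomorphism(distance_matrix(G), distance_matrix(H))
assert phi is not None                                 # isomorphic by construction
assert all(G[i][j] == H[phi[i]][phi[j]] for i in range(30) for j in range(30))
```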
Since the randomly generated graphs did not require backtracking, tests were made on simple polygons. Performance for these graphs did not differ from that of randomly generated graphs. This is predictable, since all simple polygons with the same number of vertices are isomorphic and every choice at any level is consistent. Starred polygons of varying symbols ... with no backtracking. Similarly, complete k-partite graphs with k < 5 did not require backtracking. Graphs with special properties, such as Petersen's graph, when compared with a random permutation of their vertices, did not require backtracking.

Hoffman [22] suggested applying the algorithm to a set of strongly regular graphs (see Bose [23]) produced by Paulus [24] and, independently, by Corneil and Mathon ... There are fifteen nonisomorphic graphs of order 25 with d(v_i) = 12 for all i and p₁₁ = 5. Since these graphs required the algorithm to backtrack, the three strategies for making choices at step 6 of Algorithm 2 were studied. Table XII shows the predicted number of branches and the actual number taken using each of the three strategies on various combinations of graphs. From this sampling, it appears that strategy ... provides a somewhat more pessimistic dynamic bound than do strategies 2 and 3. However, the performance seems to be comparable. Since strategy 1 is the simplest and most efficiently computed, it is an attractive strategy. Although strategies 2 and 3 provide slightly more realistic estimates, they are less efficient and do not necessarily perform better. Table XII illustrates that the bound is based on ...

The worst case observed was mapping S¹ onto S⁵. The average time for these graphs using strategy 1 was less than 1 min. A similar collection of strongly regular graphs of order 36 was developed by Bussemaker and Seidel [26], and one of order 26 by Corneil and Mathon [25].

TABLE XII. DYNAMIC BOUND AND ACTUAL NUMBER OF BRANCHES FOR STRONGLY REGULAR GRAPHS OF ORDER 25

  G¹-G²    Bound    Strategy 1    Strategy 2    Strategy 3
  ...

Conclusion

The backtracking algorithm described in this paper appears to be very powerful for a large class of graphs. An upper bound is presented if the pair of graphs may cause backtracking. It is capable of handling problems of realistic size. For graphs encountered in practice, the algorithm appears extremely efficient.
REFERENCES

9. BERZTISS, A.T. A backtrack procedure for isomorphism of directed graphs. J. ACM 20, 3 (July 1973), 365-377.
10. CORNEIL, D.G., AND GOTLIEB, C.C. An efficient algorithm for graph isomorphism. J. ACM 17, 1 (Jan. 1970), 51-64.
11. CORNEIL, D.G. Graph isomorphism. Ph.D. Diss., Dep. of Comput. Sci., U. of Toronto, Toronto, Ontario, Canada, 1968.
12. CORNEIL, D.G. The analysis of graph theoretic algorithms. ... Toronto, Toronto, Ontario, Canada, 1974.
13. HOPCROFT, J.E., AND TARJAN, R.E. Isomorphism of planar graphs. In Complexity of Computer Computations, R.E. Miller and J.W. Thatcher, Eds., Plenum Press, New York, 1972, pp. 143-150.
14. HOPCROFT, J.E., AND WONG, J.K. Linear time algorithm for isomorphism of planar graphs. Proc. of the 6th Annual ACM Symp. on the Theory of Computing, Seattle, Wash., Apr. 30, 1974, ...
15. WEINBERG, L. A simple and efficient algorithm for determining isomorphism of planar triply connected graphs. IEEE Trans. on Circuit Theory CT-13 (1966), 142-148.
16. BEHZAD, M., AND CHARTRAND, G. Introduction to the Theory of Graphs. Allyn and Bacon, Boston, 1971.
17. HAKIMI, S.L., AND YAU, S.S. Distance matrix of a graph and its realizability. Quart. of Appl. Math. XXII, 4 (1965), 305-317.
18. DREYFUS, S.E. An appraisal of ... distance algorithms. Oper. Res. 17 (1969), 395-412.
19. FLOYD, R.W. Algorithm 97, Shortest path. Comm. ACM 5, 6 (June 1962), 345.
20. ULAM, S.M. A Collection of Mathematical Problems. Wiley, New York, 1960.
21. TURNER, J. Point-symmetric graphs with a prime number of points. J. Comb. Theory 3 (1967), 136-145.
22. HOFFMAN, A.J. Private communication.
23. BOSE, R.C. Strongly regular graphs, partial geometries and partially balanced ...
