Path-based depth-first search for strong and biconnected components
Information Processing Letters 74 (2000) 107–114

Harold N. Gabow
Department of Computer Science, University of Colorado at Boulder, Boulder, CO 80309, USA
Email: hal@cs.colorado.edu

Received 19 October 1999; received in revised form 1 March 2000
Communicated by S.E. Hambrusch

Keywords: Graph; Depth-first search; Strongly connected component; Biconnected component; Stack; Algorithms

1. Introduction

Depth-first search, as developed by Tarjan and co-authors, is a fundamental technique of efficient algorithm design for graphs [23]. This note presents depth-first search algorithms for two basic problems, strong and biconnected components. Previous algorithms either compute auxiliary quantities based on the depth-first search tree (e.g., LOWPOINT values) or require two passes. We present one-pass algorithms that only maintain a representation of the depth-first search path. This gives a simplified view of depth-first search without sacrificing efficiency.

In greater detail, most depth-first search algorithms (e.g., [23,10,11]) compute so-called LOWPOINT values that are defined in terms of the depth-first search tree. Because of the success of this method LOWPOINT values have become almost synonymous with depth-first search. LOWPOINT values are regarded as crucial in the strong and biconnected component algorithms, e.g., [14, pp. 94, 514]. Tarjan's LOWPOINT method for strong components is presented in texts [1,7,14,16,17,21]. The strong component algorithm of Kosaraju and Sharir [22] is often viewed as conceptually simpler, but it requires two passes over the graph. It is presented in texts [2,4,6,25]. Tarjan's LOWPOINT biconnected component algorithm is presented in texts [1,2,4,5,7,13,14,16,17,21,25]. A two-pass biconnected component algorithm of Micali that avoids LOWPOINT values is sketched in [7, pp. 67–68].

This paper presents strong and biconnected component algorithms that are based on the depth-first search path. This natural approach appears to have first been proposed by Purdom [19] and Munro [18] for strong components. It is regarded as requiring an extra data structure for set merging in order to be asymptotically efficient, and hence unlikely to be efficient in practice [23]. We present linear-time implementations of this approach for both strong and biconnected components. Our implementations use only stacks and arrays as data structures. A line-by-line pseudocode comparison of our algorithms with the tree-based algorithms of [23] shows the two approaches are similar in terms of lower-level resource usage; performance differences are likely to be small or platform-dependent. Our algorithms show that the simpler path-based view of depth-first search suffices for these properties.

One can design other path-based depth-first search algorithms for properties such as ear decomposition [15], st-numbering [7], topological numbering, etc. The complete version of this paper [8] includes an algorithm to find the bridges of an undirected graph, leading to an immediate proof of Robbins' Theorem [20]. It also includes a simple articulation points algorithm, and a previously unpublished strong component algorithm of Tarjan that can be interpreted as path-based.

Section 2 presents our strong component algorithm and Section 3 presents the biconnected component algorithm. Appendix A proves a simple property of biconnected components.

We conclude this section with some terminology. Singleton sets are usually denoted by omitting set braces, e.g., for a set S and element x, S − x denotes S − {x}. We assume all input graphs contain n vertices and m edges. We use the following operations to manipulate a stack S: PUSH(x, S) adds x to S at the (new) top of S. POP(S) removes the value at the top of the stack and returns that value. TOP(S) is the index of the value at the top of the stack. Hence S[TOP(S)] is the value at the top of the stack.
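These conventions transfer directly to Python lists; the one detail to keep in mind when transcribing the pseudocode below is that TOP(S) is an index (1-based here), not the value on top. A minimal illustration, with names of our own choosing:

    # Stack conventions on a plain Python list, mirroring PUSH/POP/TOP above.
    def push(x, S):
        S.append(x)          # PUSH(x, S): x becomes the new top of S
    def pop(S):
        return S.pop()       # POP(S): remove and return the top value
    def top(S):
        return len(S)        # TOP(S): the 1-based index of the top, so S[top(S) - 1] is the top value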
2. Strong components

Consider a digraph G = (V, E). Two vertices are in the same strong component of G if and only if they are mutually reachable, i.e., there is a path from each vertex to the other. The strong component graph is formed by contracting the vertices of each strong component. Equivalently the strong component graph is the acyclic digraph, formed by contracting vertices of G, that has as many vertices as possible. In short we say the strong component graph is the finest acyclic contraction of G.

This characterization suggests the following high-level algorithm to find the strong component graph of G = (V, E). See Fig. 1.

Fig. 1. (a) Digraph G. (b)–(e) Path P (solid edges) in the first several steps of the algorithm. Strong component {3} is output in (d).

The algorithm maintains a graph H that is a contraction of G with some vertices deleted. It also maintains a path P in H. Initially H is the given graph G. If H has no vertices, stop. Otherwise start a new path P by choosing a vertex v and setting P = (v). Continue by growing P as follows.

To grow the path P = (v1, ..., vk), choose an edge (vk, w) directed from the last vertex of P and do the following:
• If w ∉ P, add w to P, making it the new last vertex of P. Continue growing P.
• If w ∈ P, say w = vi, contract the cycle vi, vi+1, ..., vk, both in H and in P. P is now a path in the new graph H. Continue growing P.
• If no edge leaves vk, output vk as a vertex of the strong component graph. Delete vk from both H and P. If P is now nonempty continue growing P. Otherwise try to start a new path P.

It is easy to see that this algorithm forms the finest acyclic contraction of G. (For instance, if no edge leaves vk then vk is a vertex of the finest acyclic contraction.) Thus the algorithm correctly computes the strong components.

This high-level algorithm was originally proposed by Purdom [19] and Munro [18]. The time for an efficient implementation is dominated by the time to keep track of the new vertices formed by contraction operations. Any data structure for disjoint set merging [6] can be used for this purpose. Purdom [19] and Munro [18] use simple set-merging data structures, achieving total time O(n²) and O(m + n log n), respectively. Tarjan has shown set merging can be done more efficiently, giving total time O(mα(m, n) + n) [24]. In fact the incremental tree set merging algorithm of [9] can be used. This reduces the time to O(m + n), giving a linear-time algorithm to find strong components. However the overhead of using incremental tree set merging may be significant in practice. Also the incremental tree algorithm requires a RAM machine and does not apply to a pointer machine.
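To make the role of set merging concrete, the following is a rough Python rendering of the high-level algorithm on top of a simple union-find structure. The data layout and names are ours, and this is deliberately the straightforward (non-linear-time) variant discussed above, not the stack-based implementation given next.

    def strong_components_by_contraction(n, edges):
        # High-level path-based scheme: grow a path P of contracted vertices, contract
        # cycles when an edge re-enters P, and output a contracted vertex once no
        # unexplored edge leaves it.  Contraction uses a simple union-find, so the
        # running time is not linear; it merely illustrates where set merging enters.
        from collections import defaultdict
        adj = defaultdict(list)
        for u, v in edges:
            adj[u].append(v)

        parent = list(range(n))
        def find(x):                             # representative of the contracted vertex containing x
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        out = [list(adj[v]) for v in range(n)]   # unexplored edges, kept at the representative
        members = [[v] for v in range(n)]        # original vertices inside each contracted vertex
        state = [0] * n                          # 0 = never on P, 1 = on P, 2 = deleted (output)
        comp = [-1] * n
        ncomp = 0

        for s in range(n):
            if state[s] != 0:
                continue
            path = [s]                           # P, as a list of representatives
            state[s] = 1
            while path:
                r = path[-1]
                if out[r]:                       # grow P along an edge leaving its last vertex
                    w = out[r].pop()
                    if state[w] == 0:
                        state[w] = 1
                        path.append(w)
                    elif state[w] == 1:          # w is on P: contract the cycle back to w
                        while find(path[-1]) != find(w):
                            a = path.pop()
                            b = find(path[-1])
                            ra = find(a)
                            parent[ra] = b       # merge the two contracted vertices
                            out[b] += out[ra]
                            members[b] += members[ra]
                        path[-1] = find(path[-1])
                    # state[w] == 2: edge into an already-output component; ignore it
                else:                            # no edge leaves the last vertex of P: output it
                    path.pop()
                    for x in members[r]:         # number the component (0-based here)
                        state[x] = 2
                        comp[x] = ncomp
                    ncomp += 1
        return comp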
Now we give a simple list-based implementation that achieves linear time. The data structure is illustrated in Fig. 2.

Fig. 2. (a)–(d) show the data structure for Fig. 1(b)–(e), respectively. Stack S is shown. Arrows to the left of S represent stack B. Arrows to the right of S represent the entries of I that are used in contract steps. For example, in (a) the algorithm reads I[2] = 2 and then contracts cycle 2, 4, 5 to get (b). In (c) I[3] changes from 6 to 7, the latter being the strong component number of vertex 3.

Assume the vertices of the given graph G are numbered by consecutive integers from 1 to n. The algorithm numbers the strong components of G by consecutive integers starting at n + 1. It records the strong component number for each vertex (see (2) below).

Two stacks are used to represent the path P. Stack S contains the sequence of (original) vertices in P and stack B contains the boundaries between contracted vertices. More specifically S and B correspond to P = (v1, ..., vk), where k = TOP(B) and for i = 1, ..., k,

    vi = { S[j] : B[i] ≤ j < B[i + 1] }.                                  (1)

When k > 0 we have B[1] = 1. Also when i = k in (1) we interpret B[k + 1] to be TOP(S) + 1.

An array I[1..n] is used to store stack indices. It also stores the strong component number of a vertex when that number is known. More precisely, for a given vertex v at any point in time,

    I[v] = 0   if v has never been in P;
           j   if v is currently in P and S[j] = v;                       (2)
           c   if the strong component containing v has been deleted
               and numbered as c.

Since there are only n vertices, there can be no confusion between an index j and a component number c in (2). A variable c is used to keep track of the component numbers.

The algorithm consists of a main routine STRONG and a recursive procedure DFS:

procedure STRONG(G)
1. empty stacks S and B;
2. for v ∈ V do I[v] = 0;
3. c = n;
4. for v ∈ V do if I[v] = 0 then DFS(v);

procedure DFS(v)
1. PUSH(v, S); I[v] = TOP(S); PUSH(I[v], B);        /* add v to the end of P */
2. for edges (v, w) ∈ E do
3.    if I[w] = 0 then DFS(w)
4.    else /* contract if necessary */
         while I[w] < B[TOP(B)] do POP(B);
5. if I[v] = B[TOP(B)] then {
      /* number the vertices of the next strong component */
6.    POP(B); increase c by 1;
7.    while I[v] ≤ TOP(S) do I[POP(S)] = c }

Theorem 2.1. When STRONG(G) halts, any vertex v ∈ V belongs to the strong component numbered I[v]. The time and space are both O(m + n).

Proof. We first show that STRONG is a valid implementation of the high-level algorithm. Say a vertex becomes active the first time it is added to P, and inactive once all its edges have been examined; the most active vertex is the currently active vertex that was activated most recently. The high-level algorithm grows P by choosing edges from the most active vertex, which is exactly the order in which DFS examines them. The stacks S and B maintain the representation (1) of P, D4 performs the contract steps, and D5–D7 output and number a strong component exactly when no unexamined edge leaves the last vertex of P. Note that in D4, if I[w] > n then the component containing w has been deleted and D4 does nothing.

We turn to showing that the time and space are O(m + n). We assume the given graph G is stored as a collection of adjacency lists. Observe that every vertex is pushed onto and popped from each stack S, B exactly once. Hence it is easy to see that the algorithm spends O(1) time on each vertex or edge. □
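For concreteness, here is a direct Python transliteration of STRONG and DFS (a sketch: the recursion, the 1-based positions and the numbering of components from n + 1 follow the pseudocode; adj is assumed to be a 0-based adjacency-list representation):

    import sys

    def strong(n, adj):
        # Path-based strong components with the two stacks S and B and the array I.
        # Returns I, where every vertex v ends up with I[v] = its component number (> n).
        sys.setrecursionlimit(max(10 * n, 10000))       # DFS is recursive, as in the pseudocode
        S, B = [], []                 # S: vertices of P in order; B: boundary indices into S
        I = [0] * n                   # 0 = never on P, 1..n = 1-based position in S, > n = component number
        c = n                         # component counter; components are numbered n+1, n+2, ...

        def dfs(v):
            nonlocal c
            S.append(v); I[v] = len(S); B.append(I[v])  # D1: add v to the end of P
            for w in adj[v]:                            # D2
                if I[w] == 0:                           # D3: w is new, extend P
                    dfs(w)
                else:                                   # D4: contract if necessary
                    while I[w] < B[-1]:                 # (if I[w] > n, w's component is gone and
                        B.pop()                         #  this loop does nothing)
            if I[v] == B[-1]:                           # D5: v's block is a finished component
                B.pop(); c += 1                         # D6
                while I[v] <= len(S):                   # D7: number and delete its vertices
                    I[S.pop()] = c

        for v in range(n):                              # S4
            if I[v] == 0:
                dfs(v)
        return I

For example, strong(3, [[1], [2], [0]]) returns [4, 4, 4]: the 3-cycle forms a single strong component, numbered n + 1 = 4.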
Comparing our code to the algorithm of [23]: both methods use stack S; our size-n array I corresponds to a similar array that holds depth-first discovery numbers; our stack B corresponds to a size-n array that holds LOWLINK values. Both S and B contain at most n entries at any time.

An algorithm almost identical to STRONG finds the bridges of an undirected graph. The high-level algorithm is based on the fact that contracting the vertices of a cycle does not change the bridges of a graph. The details are given in [8].
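As a rough illustration of that remark (the algorithm of [8] itself is not reproduced here), one plausible STRONG-like pass over an undirected graph is sketched below. It skips a single copy of the edge back to the DFS parent, so parallel edges still count as cycles; an edge {u, v} is then a bridge exactly when its endpoints receive different numbers.

    import sys

    def two_edge_connected_components(n, adj):
        # STRONG-like pass on an undirected graph: contracting cycles leaves the bridges
        # unchanged, so each surviving contracted vertex is a 2-edge-connected component.
        # adj[v] lists the neighbours of v, with every undirected edge present in both lists.
        sys.setrecursionlimit(max(10 * n, 10000))
        S, B = [], []
        I = [0] * n
        c = n

        def dfs(v, parent):
            nonlocal c
            S.append(v); I[v] = len(S); B.append(I[v])
            skipped = False
            for w in adj[v]:
                if w == parent and not skipped:      # ignore one copy of the tree edge only,
                    skipped = True                   # so a parallel edge still causes a contraction
                    continue
                if I[w] == 0:
                    dfs(w, v)
                else:
                    while I[w] < B[-1]:              # a cycle through w: contract
                        B.pop()
            if I[v] == B[-1]:                        # v completes a 2-edge-connected component
                B.pop(); c += 1
                while I[v] <= len(S):
                    I[S.pop()] = c

        for v in range(n):
            if I[v] == 0:
                dfs(v, -1)
        return I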
3. Biconnected components

We present our algorithm for biconnected components in the language of hypergraphs. This is not logically necessary but it brings out the similarity to the strong components algorithm.

We start by reviewing basic definitions about hypergraphs [3,15]. A hypergraph H = (V, E) consists of a finite set V of vertices and a finite set E of edges, each edge a subset of V. A path is a sequence (v1, e1, ..., vk, ek) of distinct vertices vi and distinct edges ei, 1 ≤ i ≤ k, with v1 ∈ e1 and vi ∈ ei−1 ∩ ei for every 1 < i ≤ k. The set of all vertices in edges of P is denoted V(P) = e1 ∪ ... ∪ ek. A cycle is a path with the additional properties that k > 1 and v1 ∈ ek. A hypergraph is acyclic if it contains no cycle. Notice that in a path P each vertex vi+1, 1 ≤ i < k, belongs to ei − vi. For this reason the sets ei − vi figure prominently in our algorithm (e.g., see (4) below). The algorithm also uses this operation on hypergraphs: to merge a collection of edges ei, i = 1, ..., k, add a new edge e1 ∪ ... ∪ ek and delete every edge of E contained in it (e.g., ei). A merging of hypergraph H is a hypergraph formed by doing zero or more merges on H.

Now consider an undirected graph G = (V, E). Two distinct edges are in the same biconnected component of G if and only if some simple cycle contains both of them. This relation is easily seen to be an equivalence relation over the edges, so the biconnected components are well-defined. The "block-cutpoint tree" of a graph represents the biconnected components and cutpoints [12]. We will use a hypergraph variant of this notion: the block hypergraph H of G is the hypergraph formed by merging the edges of each biconnected component of G. H is an acyclic hypergraph. In fact H can be characterized as the finest acyclic merging of G, i.e., it is the acyclic hypergraph formed by merging edges of G that has as many hyperedges as possible. For completeness this characterization is proved in Appendix A.

The characterization suggests the following high-level algorithm to find the block hypergraph of G = (V, E). See Fig. 3.

Fig. 3. (a) Undirected graph G. (b)–(e) Path P (solid edges) in the first several steps of the algorithm. Biconnected component {5, 6, 7} is output in (e).

The algorithm maintains a hypergraph H that is a merging of G with some edges deleted, and a path P in H. Initially H is the given graph G. If H has no edges, stop. Otherwise start a new path P by choosing an edge {v, w} and setting P = (v, {v, w}) (choose the end v arbitrarily). Continue by growing P as follows.

To grow the path P = (v1, e1, ..., vk, ek), choose an edge {v, w} ≠ ek with v ∈ ek − vk and do the following:
• If w ∉ V(P), add v, {v, w} to the end of P. Continue growing P.
• If w ∈ V(P), say w ∈ ei − vi+1, merge the edges of the cycle w, ei, vi+1, ei+1, ..., vk, ek, v, {v, w} to a new edge e = ei ∪ ei+1 ∪ ... ∪ ek, both in H and in P. P is now a path ending with e (i.e., (vi, e) has replaced (vi, ei, ..., vk, ek)). Continue growing P.
• If no edge leaves ek − vk, output ek as an edge of the block hypergraph. Delete ek from H and delete (vk, ek) from P. If P is now nonempty continue growing P. Otherwise try to start a new path P.

Correctness of this algorithm is based on two simple observations: when v, {v, w} is added to P the result is a valid path, by the condition v ∈ ek − vk. When edges are merged they form a valid cycle, by the condition {v, w} ≠ ek. Now a straightforward inductive argument proves the algorithm correctly forms the finest acyclic merging of G, i.e., it finds the block hypergraph as desired.

As in Section 2 we give a list-based implementation that achieves linear time. The data structure is illustrated in Fig. 4.

Fig. 4. (a)–(d) illustrate the data structure for Fig. 3(b)–(e), respectively. S, B and I are represented as in Fig. 2. Every other arrowhead of B is drawn filled. For example, in (a) the algorithm reads I[2] = 2 and then merges cycle 2, 3, 5, 4 to get (b). In (d) I[6] and I[7] change to 8, the number of the first biconnected component.

As before assume the vertices of G are numbered by consecutive integers from 1 to n. The algorithm numbers the biconnected components of G by consecutive integers starting at n + 1. The biconnected components are represented by assigning a number I[v] to each vertex v in such a way that each edge {v, w} belongs to the biconnected component with number min{I[v], I[w]} (see (5) below).

Two stacks are used to represent the path P. Stack S contains the vertices V(P) and stack B represents the boundaries between edges of P, two vertices per boundary. More specifically S and B correspond to P = (v1, e1, ..., vk, ek), where TOP(B) = 2k and for i = 1, ..., k,

    vi = S[B[2i − 1]];                                                    (3)
    ei − vi = { S[j] : B[2i] ≤ j < B[2i + 2] }.                           (4)

Thus in Fig. 4 the open arrows of B point to the vertices vi of P. The filled arrows demarcate the sets ei − vi; these sets are the "nonfirst" vertices of edges ei of P. When P is nonempty we have B[i] = i for i = 1, 2. Also when i = k in (4) we interpret B[2k + 2] to be TOP(S) + 1.

As in the strong components algorithm an array I stores stack indices as well as biconnected component numbers. More precisely, for a given vertex v at any point in time,

    I[v] = 0   if v has never been in P;
           j   if v is currently in P and S[j] = v;                       (5)
           c   if the last biconnected component containing v has been
               output and numbered as c.

As before there can be no confusion between an index j and a component number c in (5). A variable c is used to keep track of the component numbers.

The algorithm consists of a main routine BICONN and a recursive procedure DFS:

procedure BICONN(G)
1. empty stacks S and B;
2. for v ∈ V do I[v] = 0;
3. c = n;
4. for v ∈ V do if I[v] = 0 and v is not isolated then DFS(v);

procedure DFS(v)
1. PUSH(v, S); I[v] = TOP(S); if I[v] > 1 then PUSH(I[v], B);   /* create a filled arrow on B */
2. for edges {v, w} ∈ E do
3.    if I[w] = 0 then { PUSH(I[v], B); DFS(w) }                /* create an open arrow on B */
4.    else /* possible merge */
         while I[v] > 1 and I[w] < B[TOP(B) − 1] do { POP(B); POP(B) };
5. if I[v] > 1 and I[v] = B[TOP(B)] then {
      /* number the vertices of the next biconnected component */
6.    POP(B); POP(B); increase c by 1;
7.    while I[v] ≤ TOP(S) do I[POP(S)] = c }
8. else if I[v] = 1 then I[POP(S)] = c          /* v started this search; it takes the number of the last component output */

Moving the test I[v] > 1 into B4 allows DFS itself to be simplified.

Theorem 3.1. When BICONN(G) halts, any edge {v, w} ∈ E belongs to the biconnected component numbered min{I[v], I[w]}. The time and space are both O(m + n).

Proof. The argument is similar to Theorem 2.1 and uses the conventions introduced in that proof. We prove the first assertion of the theorem by showing that BICONN is a valid implementation of the high-level algorithm.

We first specify how the high-level algorithm chooses the pair v, {v, w} to grow P. Say a vertex w ∈ V becomes active the first time it gets added to P. As before the most active vertex is the currently active vertex that was activated most recently. To choose v, {v, w}, let v be the most active vertex. Choose a previously unchosen edge {v, w}. If all edges incident to v have been chosen then deactivate v. If P is nonempty and this makes all vertices of ek − vk inactive then ou
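Pulling the pieces of this section together, here is a Python sketch of the two-stack biconnected component routine. It is a reconstruction from the description above rather than verbatim code from the paper; in particular, numbering the start vertex of each search with the last component output from it is our reading of definition (5) and Fig. 4.

    import sys

    def biconn(n, adj):
        # Path-based biconnected components: S holds the vertices of P, and B holds two
        # boundary indices ("open" and "filled" arrows) per edge of P, as in (3) and (4).
        # Returns I; by Theorem 3.1 an edge {v, w} belongs to component min(I[v], I[w]).
        # adj[v] lists the neighbours of v, with every undirected edge present in both lists.
        sys.setrecursionlimit(max(10 * n, 10000))
        S, B = [], []
        I = [0] * n
        c = n                                        # components are numbered n+1, n+2, ...

        def dfs(v):
            nonlocal c
            S.append(v); I[v] = len(S)
            if I[v] > 1:
                B.append(I[v])                       # filled arrow: v begins the set ek - vk
            for w in adj[v]:
                if I[w] == 0:
                    B.append(I[v])                   # open arrow: v is the vertex vk of a new path edge
                    dfs(w)
                else:                                # possible merge
                    while I[v] > 1 and I[w] < B[-2]: # w lies in an earlier edge of P: merge the cycle
                        B.pop(); B.pop()
            if I[v] > 1 and I[v] == B[-1]:           # v's set completes a biconnected component
                B.pop(); B.pop(); c += 1
                while I[v] <= len(S):                # number and remove ek - vk; vk itself stays on S
                    I[S.pop()] = c
            elif I[v] == 1:                          # v started this search (assumption: it takes the
                I[S.pop()] = c                       # number of the last component output from it)

        for v in range(n):
            if I[v] == 0 and adj[v]:                 # skip isolated vertices, as in B4
                dfs(v)
        return I

On two triangles sharing a single cut vertex, for instance, this sketch numbers the two components n + 1 and n + 2 and leaves the cut vertex with the larger number, so every edge still reports its own component through the min rule.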