Computing 30, 359-371 (1983)
© by Springer-Verlag 1983
Short Communications / Kurze Mitteilungen

An Improved Transitive Closure Algorithm

L. Schmitz, Neubiberg

Received October 25, 1982
Abstract - Zusammenfassung
An Improved Transitive Closure Algorithm. Several efficient transitive closure algorithms operate on the strongly connected components of a digraph, some of them using Tarjan's algorithm [17]. Exploiting facts from graph theory and the special properties of Tarjan's algorithm, we develop a new, improved algorithm. The transitive reduction of a digraph as defined in [1] may be obtained as a byproduct.
AMS Subject Classification: 68E10.

Key words: Digraphs, strongly connected components, transitive closure, transitive reduction.

Ein verbesserter Transitive-Hülle-Algorithmus. Several efficient transitive closure algorithms operate on the strongly connected components of a directed graph; some of them use Tarjan's algorithm [17]. We exploit facts from graph theory and the special properties of Tarjan's algorithm to develop a new, improved algorithm. The transitive reduction of a directed graph, as defined in [1], can be obtained as a byproduct.
1. Introduction

The problem of computing the transitive closure of a digraph has stimulated considerable interest in the literature ([2]-[5], [7]-[10], [12]-[16], [18]). Among the algorithms proposed there are two major groups: The first group ([2], [10], [12], [13]) reduces the problem of finding the transitive closure to that of multiplying Boolean matrices. The second group ([3], [7]-[9], [14]) uses the fact that vertices belonging to the same strongly connected component need not be distinguished, because in the transitive closure they have identical successors. For finding the strongly connected components of a digraph, Tarjan [17] has developed an elegant O(n²) algorithm. In this paper we propose an algorithm which, like [8] and [9], is an extension of Tarjan's algorithm. Using graph-theoretic concepts, we take advantage of the special properties of Tarjan's algorithm in a new way.

The material on graph theory in section 2 has been adapted from [11] (the main difference being that we allow a digraph to contain loops). More background and the proofs omitted here may be found in [11] and [1]. In section 3 we briefly describe Tarjan's algorithm. For more details and proofs the reader is referred to
[17] and [9]. The different versions of our algorithm, as developed in section 4, are compared with other transitive closure algorithms in section 5. Finally, we modify our algorithm to yield the transitive reduction of a digraph.
2. Digraphs: Definitions and Facts
A directed graph (digraph for short) D = (V, E) consists of a finite set V of vertices (nodes) and a set E ⊆ V × V of edges (arcs, lines). A path from vertex a to vertex b is a finite, nonempty sequence e_1, ..., e_m of edges satisfying

(1) ∃ x ∈ V: e_1 = (a, x) ∧ ∃ y ∈ V: e_m = (y, b)

(2) ∀ 1 ≤ i < m: ∃ x, y, z ∈ V: [e_i = (x, y) ∧ e_{i+1} = (y, z)]

(3) ∀ 1 ≤ i, j ≤ m ∀ x, y, z ∈ V: [e_i = (x, y) ∧ e_j = (x, z)] ∨ [e_i = (y, x) ∧ e_j = (z, x)] → i = j.

A path from a to a of length m = 1 is called a loop. A path from a to a of length m > 1 is called a cycle.

The transitive closure of a digraph D = (V, E) is the digraph D^t = (V, E^t), where E^t = {(a, b) ∈ V × V | there is a path from a to b in D}. Vertex b is said to be reachable from vertex a if a = b or (a, b) ∈ E^t. The strongly connected component V_x of vertex x is defined by

V_x = {x} ∪ {a ∈ V | (a, x) ∈ E^t ∧ (x, a) ∈ E^t}.

The condensation digraph D̄ = (V̄, Ē) of a digraph D = (V, E) with respect to strongly connected components is defined by

V̄ = {V_x | x ∈ V}   and   Ē = {(A, B) ∈ V̄ × V̄ | ∃ a ∈ A, b ∈ B: (a, b) ∈ E}.

Fact 1: Let A, B ∈ V̄ and a ∈ A and b ∈ B. Then the following equivalence holds: (a, b) ∈ E^t ↔ (A, B) ∈ Ē^t. □
Let D = (V, E) be a digraph and let F be a subset of E. Then F generates the subgraph D_F = (V, F) of D. An edge basis B is a minimal subset of E satisfying D^t = (D_B)^t.

Fact 2: The condensation digraph D̄ of a digraph D contains no cycles. If a digraph contains no cycles, then it has a unique edge basis. Hence, every condensation digraph D̄ has a unique edge basis. □

A set of edges may be represented by adjacency lists or by an adjacency matrix: The adjacency list Γ(x) of vertex x is defined by Γ(x) = {a | (x, a) ∈ E}.
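The definitions above can be made concrete with a small sketch. The following Python fragment (our own illustration, not part of the paper) stores a digraph as adjacency lists Γ(x) and computes E^t naively by one depth-first search per start vertex; the algorithms developed below improve on exactly this brute-force scheme.

```python
# Illustrative sketch only: a digraph as a dict of adjacency lists, and
# its transitive closure E^t computed by one DFS per start vertex.

def transitive_closure(gamma):
    """Return E^t = {(a, b) | there is a path from a to b in D}."""
    closure = set()
    for a in gamma:
        stack = list(gamma[a])     # vertices reachable by at least one edge
        seen = set()
        while stack:
            b = stack.pop()
            if b in seen:
                continue
            seen.add(b)
            closure.add((a, b))
            stack.extend(gamma[b])
    return closure

# A 3-cycle plus an isolated vertex: on the cycle every vertex reaches
# every vertex (itself included); the isolated vertex reaches nothing.
gamma = {1: [2], 2: [3], 3: [1], 4: []}
et = transitive_closure(gamma)
```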
The entries of the Boolean adjacency matrix R are defined by

R(x, y) = true if (x, y) ∈ E, and false if (x, y) ∉ E.

For Γ and R, dependence on D is denoted in the obvious way: e.g. Γ^t belongs to D^t and R_B belongs to D_B. By R(x, *) we denote the x-row of R, which consists of the elements R(x, y), where y ∈ V.
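The row operation R(x, *) := R(x, *) ∨ R(w, *) that the later subalgorithms rely on maps directly onto word-parallel bit operations. A sketch of one possible encoding (ours, not the paper's):

```python
# Boolean matrix rows as Python integer bitsets: the "vector operation"
# R(x, *) := R(x, *) v R(w, *) becomes a single bitwise OR.

V = ['a', 'b', 'c']
index = {v: i for i, v in enumerate(V)}

def row_from_list(successors):
    """Encode one adjacency row as an integer bitset."""
    bits = 0
    for v in successors:
        bits |= 1 << index[v]
    return bits

row_x = row_from_list(['a'])        # R(x, *): only a is marked
row_w = row_from_list(['b', 'c'])   # R(w, *): b and c are marked
row_x |= row_w                      # one vector operation
reachable = [v for v in V if row_x >> index[v] & 1]
```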
An example: Let D = (V, E) be given by V = {a, b, c, d, e, f, g, h, i, j} and by the adjacency lists

Γ(a) = ∅          Γ(f) = {c, d, h}
Γ(b) = {a, c}     Γ(g) = {d, f, g}
Γ(c) = {a, b, d}  Γ(h) = {e, g}
Γ(d) = ∅          Γ(i) = {a, e, h, i, j}
Γ(e) = {a, c, f}  Γ(j) = {g, h, i}

Fig. 2.1 shows a picture of D. Its strongly connected components (enclosed by circles in the picture) are:

V_a = {a}   V_b = {b, c}   V_d = {d}   V_e = {e, f, g, h}   V_i = {i, j}.
Fig. 2.1
The condensation digraph D̄ of D is shown in Fig. 2.2. It contains three loops but no cycles.
Fig. 2.2
The unique edge basis B of D̄ generates the digraph D_B shown in Fig. 2.3.
Fig. 2.3
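The component structure of the worked example can be checked mechanically. The sketch below is ours: it uses brute-force mutual reachability rather than the efficient method of the next section, and the adjacency entry for g is our reading of the example. It recovers the five components V_a, V_b, V_d, V_e, V_i:

```python
# Brute-force check of the example of Figs. 2.1-2.2: V_x = {x} plus all
# vertices a with (a, x) and (x, a) both in E^t.

gamma = {
    'a': [], 'b': ['a', 'c'], 'c': ['a', 'b', 'd'], 'd': [],
    'e': ['a', 'c', 'f'], 'f': ['c', 'd', 'h'], 'g': ['d', 'f', 'g'],
    'h': ['e', 'g'], 'i': ['a', 'e', 'h', 'i', 'j'], 'j': ['g', 'h', 'i'],
}

def reach(gamma, a):
    """All vertices reachable from a by a path of length >= 1."""
    seen, stack = set(), list(gamma[a])
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(gamma[b])
    return seen

closure = {a: reach(gamma, a) for a in gamma}
components = {x: frozenset({x} | {a for a in gamma
                                  if x in closure[a] and a in closure[x]})
              for x in gamma}
```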
3. Tarjan's Algorithm

Tarjan's algorithm computes the strongly connected components of a digraph which is given by its adjacency lists. The algorithm consists mainly of the recursive procedure VISIT shown in Fig. 3.1. s is a stack of at most |V| vertices. For each x ∈ V the vector element h(x) contains a number from 0, 1, 2, ..., |V|, ∞ and the vector element rep(x) contains a vertex. If initially

PRE: top = 0 ∧ ∀ x ∈ V: h(x) = 0

is satisfied, then the "calling loop"

for x ∈ V do if h(x) = 0 then VISIT(x) fi od

will establish

POST: ∀ x ∈ V: [rep(x) ∈ V_x ∧ (y, z ∈ V_x → rep(y) = rep(z))].
We use Tarjan's algorithm as a host algorithm into which we insert transitive closure subalgorithms at the place indicated by the box in Fig. 3.1. All we need to know about the VISIT procedure for this purpose is the following.
(It is not the purpose of this section to explain how Tarjan's algorithm works. For a detailed description see [9].)

Fact 3: Whenever the box is reached, the stack elements s(xpos), ..., s(top) contain the vertices of V_x. Moreover, if in D̄ there is a non-loop path from V_x to V_y, then V_y has already been processed by an earlier incarnation of VISIT. For all vertices y reachable from x the representative rep(y) has been found. Finally, for all y ∈ V_x we have rep(y) = x. □

In other words: We may view Tarjan's algorithm as a means for enumerating the strongly connected components of D in such a way that for all V_x all the V_y reachable from V_x will be enumerated prior to V_x.

Remark: If a condition equivalent to fact 3 is met by some other algorithm, then this algorithm may be used instead of Tarjan's. An example is the iterative (!) program developed by Dijkstra in chapter 25 of [6]: Immediately after completing the "inner loop" the positions cc.high through nvc of the vector knar contain the vertices of the strongly connected component that has just been found. As in Tarjan's algorithm the V_x's are enumerated in topological order with respect to reachability.

procedure VISIT (x : vertex);
begin
  var w : vertex; i, xpos : integer;
  h(x) := xpos := top := top + 1;
  s(top) := x;  /* push x */
  for w ∈ Γ(x) do
    if h(w) = 0 then VISIT(w) fi;
    h(x) := min(h(x), h(w))
  od;
  if h(x) ≠ xpos then return fi;
  for i := xpos to top do
    rep(s(i)) := x;
    h(s(i)) := ∞
  od;
  [ insert transitive closure subalgorithms here ]
  top := xpos - 1;  /* pop V_x */
end VISIT;

Fig. 3.1
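For readers who prefer executable code, here is a Python transcription of the host algorithm of Fig. 3.1 (a sketch under our naming; recursion depth limits and the figure's explicit array layout are glossed over). The point where a component is popped corresponds to the box, so the list `order` records the components exactly in the sequence fact 3 describes:

```python
def tarjan_components(gamma):
    """Return (rep, order): representatives and the components in the
    order in which the box of Fig. 3.1 would see them."""
    INF = float('inf')
    h = {x: 0 for x in gamma}   # 0 = unvisited, INF = done
    rep, s, order = {}, [], []

    def visit(x):
        s.append(x)
        h[x] = xpos = len(s)    # DFS number = stack position, as in Fig. 3.1
        for w in gamma[x]:
            if h[w] == 0:
                visit(w)
            h[x] = min(h[x], h[w])
        if h[x] != xpos:
            return
        # s(xpos), ..., s(top) now hold exactly V_x  (fact 3)
        component = []
        while len(s) >= xpos:
            y = s.pop()
            rep[y], h[y] = x, INF
            component.append(y)
        order.append(frozenset(component))   # the paper's "box"

    for x in gamma:
        if h[x] == 0:
            visit(x)
    return rep, order

# 3-cycle {1,2,3} with an exit edge to 4: the component {4} is emitted
# first, i.e. components come out before the components that reach them.
rep, order = tarjan_components({1: [2], 2: [3], 3: [1, 4], 4: []})
```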
4. Transitive Closure Subalgorithms

Our aim is to compute the adjacency matrix R^t of D^t. In terms of adjacency matrices, fact 1 may be expressed as

∀ a, b ∈ V: [R^t(a, b) ↔ R̄^t(V_a, V_b)].
Therefore, and since rep(x) uniquely represents V_x for all x, it suffices to compute a Boolean array R⁺ satisfying the condition

∀ a, b ∈ V: [R̄^t(V_a, V_b) ↔ R⁺(rep(a), rep(b))].

Defining the predicate P(M) for all subsets M of V by

P(M) ↔ ∀ a, b ∉ M: ¬R⁺(rep(a), rep(b)) ∧ ∀ a, b ∈ M: [R̄^t(V_a, V_b) ↔ R⁺(rep(a), rep(b))],

the above condition may be written as P(V). We assume all entries of R⁺ to be initialized with false. This initialization obviously satisfies P(∅).

The set of strongly connected components reachable from V_x, but different from V_x, is

Ō_x = Γ̄^t(V_x) \ {V_x}.

The elements of Ō_x cover the set

U_x = ⋃_{V_y ∈ Ō_x} V_y.

Fact 3 suggests the following mode of operation for our subalgorithm: Upon entering the box the precondition P(M) is fulfilled for some M satisfying U_x ⊆ M. When leaving the box the postcondition P(M ∪ V_x) has been established. The following lemmata indicate how this may be accomplished.

Lemma 1:

Γ̄^t(V_x) = Γ̄(V_x) ∪ ⋃_{V_w ∈ Γ̄(V_x) \ {V_x}} Γ̄^t(V_w).

Proof: Immediate from the definitions. □

Lemma 2:

V_w ∈ Γ̄^t(V_v) → Γ̄^t(V_w) ⊆ Γ̄^t(V_v).

Proof: Follows from the transitivity of D̄^t. □
The subalgorithm shown in Fig. 4.1 essentially is a transcription of the equation from lemma 1: The loops and the assignment "w := rep(v)" serve to enumerate the

begin
  var i : integer; v, w : vertex;
  for i := xpos to top do
    for v ∈ Γ(s(i)) do
      w := rep(v);
      if w ≠ x ∧ ¬R⁺(x, w) then
        R⁺(x, *) := R⁺(x, *) ∨ R⁺(w, *)
      fi;
      R⁺(x, w) := true
    od
  od
end;

Fig. 4.1
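Grafting the subalgorithm of Fig. 4.1 onto a recursive Tarjan host gives the following Python sketch (ours; Python sets stand in for the Boolean rows R⁺(x, *), and the final comprehension expands the component-level answer back to vertices):

```python
def closure_rows(gamma):
    """Version 1 as a sketch: for each vertex a, return the set of
    vertices b with (a, b) in E^t."""
    INF = float('inf')
    h = {x: 0 for x in gamma}
    rep, s, reach = {}, [], {}

    def visit(x):
        s.append(x)
        h[x] = xpos = len(s)
        for w in gamma[x]:
            if h[w] == 0:
                visit(w)
            h[x] = min(h[x], h[w])
        if h[x] != xpos:
            return
        component = s[xpos - 1:]
        for y in component:
            rep[y], h[y] = x, INF
        row = set()                     # R+(x, *), initially all false
        for y in component:             # the two for-loops of Fig. 4.1
            for v in gamma[y]:
                w = rep[v]
                if w != x and w not in row:
                    row |= reach[w]     # the vector operation
                row.add(w)              # R+(x, w) := true
        reach[x] = row
        del s[xpos - 1:]

    for x in gamma:
        if h[x] == 0:
            visit(x)
    # a reaches b by a path iff rep(b) lies in the row of rep(a)
    return {a: {b for b in gamma if rep[b] in reach[rep[a]]} for a in gamma}

# 2-cycle {1,2} with an exit to 3, plus a self-loop at 4
rows = closure_rows({1: [2], 2: [1, 3], 3: [], 4: [4]})
```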
set Γ̄(V_x), which appears twice in the equation (some elements of Γ̄(V_x) may be enumerated more than once). Within the loops the row R⁺(x, *) corresponding to Γ̄^t(V_x) (remember x = rep(x)) is built up successively. The assignment "R⁺(x, w) := true" includes the elements of Γ̄(V_x) into Γ̄^t(V_x). A vector operation "R⁺(x, *) := R⁺(x, *) ∨ R⁺(w, *)" augments Γ̄^t(V_x) by Γ̄^t(V_w), which because of the precondition P(M) has already been computed. The first part "w ≠ x" of the if-condition corresponds to "\{V_x}" in lemma 1. The second part "¬R⁺(x, w)" of the if-condition is justified by lemma 2; it suppresses the vector operation for elements of Γ̄(V_x) that have been enumerated before or are reachable from such elements.

The number of vector operations carried out by our algorithm depends on the order in which the elements of the adjacency lists are enumerated. Consider, for example, the graph shown in Fig. 4.2. If each adjacency list is enumerated in lexicographical order, then our algorithm will carry out three vector operations. Six vector operations are required if each adjacency list is enumerated in inverse lexicographical order.
Fig. 4.2
Fact 2 states that any condensation digraph D̄ has a unique edge basis, say B. By definition B is the minimal subset of Ē which satisfies (D̄_B)^t = D̄^t. Hence, if in lemma 1 we replace both occurrences of Γ̄(V_x) by Γ̄_B(V_x), the equation will remain correct. We characterize Γ̄_B(V_x) in terms of D̄ and D̄^t:

Γ̄_B(V_x) = {V_w ∈ Γ̄(V_x) | ∀ V_z ∈ Γ̄(V_x) \ {V_x, V_w}: ¬R̄^t(V_z, V_w)}.

Since Γ̄(V_x) \ {V_x, V_w} is a subset of Ō_x, and because of the precondition P(M) upon entering the box, all the R̄^t(V_z, V_w)'s needed here have already been computed. Hence, to enumerate Γ̄_B(V_x) we may proceed as follows: First enumerate Γ̄(V_x), leaving out duplicates and elements V_w with R̄^t(V_z, V_w) for some V_z enumerated prior to V_w. Then enumerate the resulting set in the opposite order, again leaving out elements V_w with R̄^t(V_z, V_w) for some V_z enumerated prior to V_w. The subalgorithm shown in Fig. 4.3 works this way. To reflect the order of enumeration, the result of the first enumeration (for-loops) is stored as a sequence q of vertices. The assignment "q := wq" appends the element w to the left end of the sequence q. The functions head and tail are defined for non-empty sequences and satisfy head(wq) = w and tail(wq) = q.
begin
  var i : integer; v, w : vertex; q : sequence of vertex;
  q := ε;
  for i := xpos to top do
    for v ∈ Γ(s(i)) do
      w := rep(v);
      if w = x then
        R⁺(x, w) := true
      else
        if ¬∃ y ∈ V, λ, μ: q = λyμ ∧ [R⁺(y, w) ∨ y = w] then
          q := wq
        fi
      fi
    od
  od;
  while q ≠ ε do
    w := head(q); q := tail(q);
    if ¬R⁺(x, w) then
      R⁺(x, *) := R⁺(x, *) ∨ R⁺(w, *);
      R⁺(x, w) := true
    fi
  od
end;

Fig. 4.3
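The double scan that version 2 uses to isolate the edge-basis successors can be studied in isolation. In this Python sketch (ours), `succ` is the sequence of successor representatives of V_x in enumeration order and `reach[w]` plays the role of the already-computed row R⁺(w, *):

```python
def basis_successors(succ, reach):
    """Two scans as in Fig. 4.3: drop duplicates and every element that
    is reachable from another enumerated element."""
    q = []                          # first scan; q := wq left-appends
    for w in succ:
        if not any(w == y or w in reach[y] for y in q):
            q.insert(0, w)
    kept = []                       # second scan, in the opposite order
    for w in q:
        if not any(w in reach[y] for y in kept):
            kept.append(w)
    return kept

# chain u -> v -> z: if V_x has edges to all three, only the basis
# edge to u survives, whatever the enumeration order.
reach = {'u': {'v', 'z'}, 'v': {'z'}, 'z': set()}
```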
ε denotes the empty sequence. Loop-edges from B are handled separately during the first enumeration. In the second enumeration (while-loop) lemma 2 is used to decide which elements can be left out. Version 2 performs exactly one vector operation per non-loop edge from B.

The digraph D shown in Fig. 4.2 is isomorphic to its condensation digraph D̄. We find E_B = {(V_a, V_b), (V_b, V_c), (V_c, V_d)}. Since E_B contains no loop, version 2 carries out three vector operations.

A useful improvement ensues from the following trivial observation: The vector operation may be skipped if all entries in R⁺(w, *) are false or, equivalently, if Γ̄^t(V_w) = ∅. Replacing in version 1 the if-condition "w ≠ x ∧ ¬R⁺(x, w)" by "Γ̄^t(V_w) ≠ ∅ ∧ w ≠ x ∧ ¬R⁺(x, w)" and in version 2 the if-condition "w = x" by "Γ̄^t(V_w) = ∅ ∨ w = x" we obtain versions 1' and 2', respectively.
5. Comparison

The behaviour of our algorithm may be characterized as follows: The VISIT procedure is called exactly once for each vertex of the digraph. Each adjacency list Γ(x) is enumerated twice: once in VISIT(x) and once in VISIT(rep(x)). Within VISIT(rep(x)) the set V_x is enumerated twice: first, to establish the rep- and h-values for V_x, and then as part of the enumeration of Γ̄(V_x). Version 1 performs one vector operation for each element of some set H satisfying E_B ⊆ H ⊆ Ē, where E_B generates the edge basis of D̄. Version 2 performs one vector operation for each element of E_B. The overhead required by version 2 to determine E_B is bounded by

O( Σ_{V_x ∈ V̄} |Γ̄(V_x)|² ).

Counting one vector operation as |V| elementary operations, the maximum number of operations needed by version 1 is
(1) c₀ + c₁|V| + c₂|E| + c₃|V||H|

for some constants c_i, 0 ≤ i ≤ 3. The corresponding formula for version 2 is

(2) c₀ + c₁|V| + c₂|E| + c₃|V||E_B| + c₄ Σ_{V_x ∈ V̄} |Γ̄(V_x)|².

For the algorithm developed by Eve and Kurki-Suonio in [9] we obtain

(3) c₀ + c₁|V| + c₂|E| + c₃|V| (|E_S| + |V| − |V̄|),

where E_S = {(x, y) ∈ E | V_x ≠ V_y}. (The c_i's in (1)-(3) are presumably not identical.)
Table 5.1

 n      d     [18]    [9]    V1    V2    V1'   V2'
30   0.03     212    124   111   108    57    57
30   0.05     572    191   127   108    77    67
30   0.07    1205    192    57    52    40    36
30   0.09    1582    193    23    23    11    11
30   0.11    2247    174    12    12     6     6
30   0.13    2521    161     6     6     4     4
30   0.15    2675    165     4     4     2     2
30   0.17    3124    145     0     0     0     0
30   0.19    3195    145     0     0     0     0
30   0.21    3487    145     0     0     0     0
50   0.01     159    124   123   123    39    39
50   0.03    1165    319   209   178   139   120
50   0.05    3950    300    47    40    32    27
50   0.07    5984    278    18    16    15    13
50   0.09    7393    247     1     1     0     0
50   0.11    8474    259     3     3     2     2
50   0.13    9427    245     0     0     0     0
50   0.15    9842    245     0     0     0     0
50   0.17   10148    245     0     0     0     0
50   0.19   10263    245     0     0     0     0
70   0.005    140    120   120   119    30    30
70   0.010    317    238   231   229    91    91
70   0.015    832    350   314   308   198   195
70   0.020   2145    435   265   239   173   160
70   0.025   3655    480   225   188   167   140
70   0.030   4866    503   181   149   136   110
70   0.035   7542    466   100    85    68    56
70   0.040   8900    433    55    48    39    32
70   0.045  10489    433    42    40    25    23
70   0.050  11488    402    31    26    21    17
90   0.005    252    198   197   197    70    70
90   0.010    702    394   379   373   210   208
90   0.015   2147    568   445   415   315   294
90   0.020   5780    610   255   225   157   140
90   0.025   9069    605   161   133   101    87
90   0.030  14917    539    64    58    39    36
90   0.035  16377    551    53    45    36    31
90   0.040  19117    497    23    22    11    10
90   0.045  21229    478    10    10     3     3
Table 5.2

 n      d     [18]    [9]    V1    V2    V1'   V2'
30   0.03      58     32    30    27    15    18
30   0.05     148     49    34    29    22    19
30   0.07     299     49    17    14    14    13
30   0.09     390     50    11     9     7     7
30   0.11     557     46     8    10     6     7
30   0.13     628     47     7     9     8     7
30   0.15     655     46     8     8     7     7
30   0.17     763     41     9     5    10     7
30   0.19     788     48     7     8    10     9
30   0.21     854     45    10     8    10     9
50   0.01      88     53    51    53    19    18
50   0.03     493    135    89    75    62    53
50   0.05    1607    128    26    23    19    18
50   0.07    2426    122    16    15    16    14
50   0.09    2978    110    13    12    12    11
50   0.11    3450    118    14    14    15    14
50   0.13    3796    114    17    14    16    16
50   0.15    3945    115    18    17    19    16
50   0.17    4073    117    21    18    22    20
50   0.19    4132    121    23    23    24    24
70   0.005    128     76    73    73    21    20
70   0.010    232    139   135   136    56    58
70   0.015    509    203   180   176   115   114
70   0.020   1260    258   157   142   103    96
70   0.025   2100    283   132   111   102    84
70   0.030   2788    303   109    92    84    69
70   0.035   4265    273    64    57    47    40
70   0.040   5064    260    41    39    33    28
70   0.045   6003    266    37    35    27    24
70   0.050   6520    242    30    25    26    21
90   0.005    264    155   152   148    56    53
90   0.010    572    301   276   269   155   154
90   0.015   1613    421   325   304   235   216
90   0.020   4188    452   195   172   123   111
90   0.025   6554    443   124   107    85    72
90   0.030  10685    396    59    53    42    36
90   0.035  11795    424    55    50    43    37
90   0.040  13776    375    33    33    25    24
90   0.045  15634    371    24    25    20    20
Now, if |H| (or |E_B|) is smaller than |E_S| + |V| − |V̄|, then version 1 (or version 2) of our algorithm should perform better than the algorithm from [9]. This is substantiated by the empirical data given in Tables 5.1 and 5.2, where the different versions of our algorithm are compared with [9] and the well-known Warshall algorithm [18]. PL/1 implementations of the algorithms were run on digraphs of orders n = 10 (10) 100 and of different densities d ∈ [0, 1]. For any fixed order and fixed density five digraphs were randomly generated, each with n nodes and d·n² edges. From all the data thus obtained Tables 5.1 and 5.2 display a few typical
samples. Each entry in Table 5.1 shows the total number of vector operations required for five digraphs. Table 5.2 gives the corresponding run times.

It may seem surprising that for increasing densities d the number of vector operations very rapidly decreases towards zero. This is explained by the following observation from [4] concerning randomly generated graphs: For large n, a graph containing n nodes and more than n·log_e(n) edges has a high probability of being strongly connected.

All the algorithms compared can exploit Boolean vector operations, if available. As observed in [8] and [9], Warshall's algorithm does not take advantage of the presence of strongly connected components. Therefore, it performs considerably more vector operations than the other algorithms and, consequently, it gains most from the use of efficient Boolean vector operations. For sparse graphs, the algorithms can easily be adapted to operate on adjacency lists instead of adjacency matrices (for Warshall's algorithm both the successors and the predecessors of every node are needed).

6. Transitive Reduction
Let F be a minimal subset of V × V satisfying F^t = E^t. The digraph U = (V, F) generated by F is called a transitive reduction of D = (V, E).
A cyclic expansion of the edge basis D̄_B = (V̄, E_B) of the condensation digraph D̄ is a digraph S = (V, H) satisfying the following conditions:

(1) If |V_x| > 1, then the edges of H connecting vertices from V_x form a cycle through all vertices of V_x. If |V_x| = 1, i.e. V_x = {x}, then (x, x) ∈ H ↔ (x, x) ∈ E.
(2) For each edge (V_x, V_y) ∈ E_B, V_x ≠ V_y, there is exactly one edge (a, b) ∈ H, where a ∈ V_x and b ∈ V_y.
(3) H contains no edges other than those defined in (1) and (2).

Theorem 2 of [1] states that any cyclic expansion of D̄_B is a transitive reduction and vice versa. Fig. 6.1 shows a transitive reduction of the example digraph D from section 2. Notice that U is not an edge basis of D.
Fig. 6.1
Now version 2 of our subalgorithm is easily extended to compute a transitive reduction U of D: The if-statement shown in Fig. 6.2a is to be inserted in version 2 right after the declarations. Its purpose is to establish condition (1) of the definition of a cyclic expansion. The assignment shown in Fig. 6.2b corresponds to condition (2) and is to be inserted just before or after the vector operation.

if xpos < top ∨ x ∈ Γ(x) then
  i := xpos;
  while i < top do
    Γ_U(s(i)) := {s(i + 1)};
    i := i + 1
  od;
  Γ_U(s(top)) := {x}
else
  Γ_U(x) := ∅
fi;

Fig. 6.2 a

Γ_U(x) := Γ_U(x) ∪ {w};

Fig. 6.2 b
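The cyclic-expansion construction itself is easy to state in code. This Python sketch (ours) assumes the components, the basis edges between them, and the loop vertices are already known, e.g. from the modified version 2; which representative vertex carries each basis edge is an arbitrary choice, as the definition permits:

```python
def cyclic_expansion(components, basis_edges, loops):
    """components: list of vertex lists; basis_edges: pairs of indices
    into that list; loops: vertices x with (x, x) in E."""
    U = set()
    for comp in components:
        if len(comp) > 1:           # condition (1): one cycle through V_x
            for a, b in zip(comp, comp[1:] + comp[:1]):
                U.add((a, b))
        elif comp[0] in loops:      # a trivial component keeps its loop
            U.add((comp[0], comp[0]))
    for ci, cj in basis_edges:      # condition (2): one edge per basis edge
        U.add((components[ci][0], components[cj][0]))
    return U

# component {b, c} with basis edges to the trivial components {a} and {d}
comps = [['b', 'c'], ['a'], ['d']]
U = cyclic_expansion(comps, [(0, 1), (0, 2)], loops=set())
```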
References
[1] Aho, A. V., Garey, M. R., Ullman, J. D.: The transitive reduction of a directed graph. SIAM J. Comput. 1, 131-137 (1972).
[2] Arlazarov, V. L., Dinic, E. A., Kronrod, M. A., Faradzev, I. A.: On economical construction of the transitive closure of an oriented graph. Soviet Math. Dokl. 11, 1209-1210 (1970).
[3] Bayer, R.: Aggregates: A software design method and its application to a family of transitive closure algorithms. Technische Universität München, 1974.
[4] Bloniarz, P. A., Fischer, M. J., Meyer, A. R.: A note on the average time to compute transitive closures. In: Automata, Languages and Programming (Michaelson, Milner, eds.). Edinburgh: Edinburgh University Press 1976.
[5] Coffy, J.: On computing the time complexity of transitive closure algorithms. Inf. Proc. Letters 2, 39-42 (1973).
[6] Dijkstra, E. W.: A discipline of programming. Englewood Cliffs, N. J.: Prentice-Hall 1976.
[7] Dzikiewicz, J.: An algorithm for finding the transitive closure of a digraph. Computing 15, 75-79 (1975).
[8] Ebert, J.: A sensitive transitive closure algorithm. Inf. Proc. Letters 12, 255-258 (1981).
[9] Eve, J., Kurki-Suonio, R.: On computing the transitive closure of a relation. Acta Informatica 8, 303-314 (1977).
[10] Furman, M. E.: Applications of a method of fast multiplication of matrices in the problem of finding the transitive closure of a graph. Soviet Math. Dokl. 11, 1252 (1970).
[11] Harary, F., Norman, R. Z., Cartwright, D.: Structural models: An introduction to the theory of directed graphs. New York: Wiley 1965.
[12] Munro, I.: Efficient determination of the transitive closure of a directed graph. Inf. Proc. Letters 1, 56-58 (1971).
[13] O'Neil, P. E., O'Neil, E. J.: A fast expected time algorithm for Boolean matrix multiplication and transitive closure. In: Courant Computer Science Symp. 9: Combinatorial algorithms (Rustin, ed.). New York: Algorithmics Press 1973.
[14] Purdom, P.: A transitive closure algorithm. BIT 10, 76-94 (1970).
[15] Schnorr, C. P.: An algorithm for transitive closure with linear expected time. SIAM J. Comput. 7, 127-133 (1978).
[16] Syslo, M. M., Dzikiewicz, J.: Computational experiences with some transitive closure algorithms. Computing 15, 33-39 (1975).
[17] Tarjan, R.: Depth-first search and linear graph algorithms. SIAM J. Comput. 1, 146-160 (1972).
[18] Warshall, S.: A theorem on Boolean matrices. J. ACM 9, 11-12 (1962).

Dr. L. Schmitz
Fachbereich Informatik
Hochschule der Bundeswehr München
Werner-Heisenberg-Weg 39
D-8014 Neubiberg
Federal Republic of Germany