Algorithmica (1988) 3:549-560
Algorithmica
© 1988 Springer-Verlag New York Inc.
On Finding Optimal and Near-Optimal Lineal Spanning Trees

Michael R. Fellows,1 Donald K. Friesen,2 and Michael A. Langston3

Abstract. The search for good lineal, or depth-first, spanning trees is an important aspect in the implementation of a wide assortment of graph algorithms. We consider the complexity of finding optimal lineal spanning trees under various notions of optimality. In particular, we show that several natural problems, such as constructing a shortest or a tallest lineal tree, are NP-hard. We also address the issue of polynomial-time, near-optimization strategies for these difficult problems, showing that efficient absolute approximation algorithms cannot exist unless P = NP.

Key Words. Lineal spanning trees, Constraint satisfaction problems, Depth-first search, NP-completeness, Approximation algorithms.
1. Introduction. Lineal or depth-first spanning trees are a fundamental tool in a great variety of graph algorithms, including planarity testing [HT], identifying triconnected components [Vo], and subgraph homeomorphism [LG]. That such a spanning tree can be found in time linear in the number of edges of a graph contributes significantly to the efficiency of these and other algorithms. An important open question in the complexity of parallel algorithms is whether the problem of finding such a spanning tree is in NC [An], [Sm], [Ti]. Depth-first search is the basis for numerous algorithms studied in artificial intelligence, such as the interpreter in logic programming [Ko]. An application which provided some impetus for the work we report herein is the use of lineal spanning trees to structure the search space for backtrack algorithms used in constraint satisfaction problems [HE], [Na], such as scene labeling in computer vision [Ma]. An instance of the binary constraint satisfaction problem consists of a collection of variables, each with a finite domain of possible values, and a constraint (list of forbidden values) for every pair of variables. The goal is to produce all variable assignments which satisfy all constraints. An instance of this problem can be represented by a labeled graph which has a vertex for each variable and an edge for each nonempty constraint. (This is the version of the
1 Department of Computer Science, University of Idaho, Moscow, ID 83843, USA. This author's research was supported in part by the Sandia University Research Program and by the National Science Foundation under Grant MIP-8603879.
2 Department of Computer Science, Texas A&M University, College Station, TX 77840, USA.
3 Department of Computer Science, Washington State University, Pullman, WA 99164-1210, USA. This author's research was supported in part by the National Science Foundation under Grants ECS-8403859 and MIP-8603879.

Received September 2, 1986; revised June 29, 1987. Communicated by C. K. Wong.
problem that has to date received the most attention. It can be generalized by allowing constraints over three or more variables and thus be represented by a hypergraph.) An implementation of a backtracking algorithm which employs lineal spanning trees to structure the search space for binary constraint satisfaction problems has demonstrated substantial speedups over unstructured backtracking for sparse constraint graphs, which are realistic in a number of applications [FQ1]. (The extension to nonbinary constraints requires only trivial modification of the algorithm.) The key idea in this approach is to partition the variables into a sequence of independent sets. The vertices at each level of a lineal spanning tree constitute just such an independent set. This approach can be profitably extended to parallel algorithms as well [FQ2]. Under reasonable assumptions, these algorithms have running times sensitive to the height of the lineal spanning tree which structures the search. In this and other applications, there is a need to find certain kinds of lineal spanning trees (for example, short ones). Surprisingly, given the many applications of depth-first search and lineal spanning trees, the complexity of finding optimal spanning trees of this sort has not previously been addressed. While it has been acknowledged that these problems appear to be difficult, no formal justification for this sentiment has until now been supplied. In this paper we prove that a number of these problems are NP-hard. We also investigate the performance of fast, heuristic procedures for finding near-optimal lineal spanning trees and show that (unless P = NP) no efficient absolute approximation algorithm is possible. Depth-first traversal of a graph is usually implemented by means of a stack.
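The level-structure idea above can be made concrete. The following Python sketch (our illustration, not from [FQ1]; the example graph and function names are invented) builds a lineal spanning tree by depth-first search and checks that each depth level of the tree is an independent set of the underlying graph, which is the property the structured backtrack algorithms exploit.

```python
def dfs_depths(adj, root):
    """Recursive depth-first search; returns each vertex's depth in the
    resulting lineal (depth-first) spanning tree rooted at root."""
    depth = {root: 0}

    def visit(u):
        for w in adj[u]:
            if w not in depth:
                depth[w] = depth[u] + 1
                visit(w)

    visit(root)
    return depth

def levels_are_independent(adj, depth):
    """Each level of a lineal spanning tree is an independent set: every
    non-tree edge is a back edge, so no edge joins two equal depths."""
    return all(depth[u] != depth[w] for u in adj for w in adj[u])

# Invented example: a star-shaped constraint graph.
star = {'A': ['B', 'C', 'D'], 'B': ['A'], 'C': ['A'], 'D': ['A']}
depth = dfs_depths(star, 'A')
```

Here level 1 of the tree is {B, C, D}, an independent set a backtrack algorithm could instantiate as a block.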
Many of the problems we address can be interpreted as measuring the quantity of stack resource required to permit a depth-first traversal, or as measuring the maximum or minimum number of stack operations in such a traversal. As with most graph problems, other reasonable interpretations exist, some of which we will suggest later. In what follows we will assume an undirected graph G is connected, so that we can simply refer to spanning trees rather than spanning forests. A spanning tree T with root r in G is said to be lineal if, for every edge uv in E(G) − E(T), either u lies on the unique path in T from r to v, or v lies on the unique path in T from r to u. That is, edges not in T are "back edges," not "cross edges." It is elementary [Wi] that a spanning tree T with root r in a graph G is lineal if and only if there is a depth-first traversal of G beginning at r which traverses only edges of T. Let max(G, T) denote the maximum length of any path from the root to a leaf in the spanning tree T of G. Let min(G, T) denote the minimum length of any such path. In the sequel, we consider the difficulty of finding lineal spanning trees that optimize or near-optimize these values. In Section 2 we investigate the complexity of minimizing max(G, T) and min(G, T). Section 3 contains corresponding complexity analyses for maximizing max(G, T) and min(G, T). In Section 4 we focus our attention on efficient heuristic algorithms for these difficult problems. The final section contains a few remarks pertinent to the future study of this topic.
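As a concrete companion to these definitions, the sketch below (ours; the example graph is invented) computes a lineal spanning tree by depth-first search, verifies the back-edge characterization, and evaluates max(G, T) and min(G, T) as the longest and shortest root-to-leaf path lengths.

```python
def dfs_tree(adj, root):
    """Depth-first spanning tree of a connected graph: parent and depth maps."""
    parent, depth = {root: None}, {root: 0}

    def visit(u):
        for w in adj[u]:
            if w not in parent:
                parent[w], depth[w] = u, depth[u] + 1
                visit(w)

    visit(root)
    return parent, depth

def is_lineal(adj, parent):
    """T is lineal iff every non-tree edge joins an ancestor-descendant pair."""
    def ancestors(v):
        while v is not None:
            yield v
            v = parent[v]
    return all(
        u in ancestors(w) or w in ancestors(u)
        for u in adj for w in adj[u]
        if parent[u] != w and parent[w] != u
    )

def max_min(parent, depth):
    """max(G, T) and min(G, T): extreme root-to-leaf path lengths in T."""
    internal = set(parent.values())
    leaves = [v for v in parent if v not in internal]
    return max(depth[v] for v in leaves), min(depth[v] for v in leaves)

# Invented example: a 5-cycle with a chord; its DFS tree is a path of height 4.
g = {0: [1, 4], 1: [0, 2, 4], 2: [1, 3], 3: [2, 4], 4: [0, 1, 3]}
parent, depth = dfs_tree(g, 0)
```

On this example the tree rooted at 0 is a Hamiltonian path, so max(G, T) = min(G, T) = 4, and both non-tree edges are back edges.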
2. Minimization Problems. Minimizing max(G, T) corresponds to finding a shortest lineal spanning tree. Under the aforementioned stack interpretation, the parameter min_T{max(G, T)} measures the minimum bound on the stack resource which allows a depth-first traversal to succeed. The decision problem corresponding to this optimization problem is defined as follows:

MIN MAX
Instance: A graph G and an integer k.
Question: Is there a lineal spanning tree T of G with max(G, T) ≤ k?

Let T_partial denote the depth-first subtree of G, rooted at some vertex r, which corresponds to nodes already visited in a partially completed depth-first traversal.

LEMMA 1. If y is the current vertex of the traversal and x is any vertex not in T_partial which is reachable from y in G − T_partial, then in any complete depth-first spanning tree of G resulting from the completion of T_partial, the unique path from r to x must include y and depth(x) ≥ depth(y) + d(x, y), where d(x, y) is the shortest distance from y to x in G − T_partial.

PROOF. Observe that any depth-first search from y must visit every reachable vertex in G − T_partial, including x, before backing up above y. The lemma follows. □

THEOREM 1.
MIN MAX is NP-complete.
PROOF. Clearly the problem is in NP. The NP-hardness reduction is from 3-SAT [GJ]. Given an expression E, representing an arbitrary instance of 3-SAT, we construct a graph G_E and define an associated integer k_E such that G_E has a lineal spanning tree of height less than or equal to k_E if and only if E is satisfiable. The general plan of G_E is shown in Figure 1. The dotted edges are used to identify a variable, or its negation, with the clauses which contain it. To the vertex labeled r we attach a pendant path of length k_E. We will attach shorter pendant paths to other vertices as we build G_E. The effect of these, as we will
Fig. 1
argue below, is to force a "successful" depth-first search (that is, one for which the height of the corresponding spanning tree does not exceed k_E) to traverse the graph in a constrained way. If the points of attachment of the pendant paths are not reached in the proper manner, then the pendant path (by its length) will force the height of the spanning tree to exceed k_E.

CLAIM 1. If G_E has a lineal spanning tree of height less than or equal to k_E, then it has one rooted at r.

PROOF OF CLAIM 1. If a depth-first traversal begins at any vertex v ≠ r which does not belong to the pendant path attached to r, then necessarily the resulting lineal spanning tree will have height exceeding k_E. If G_E permits a lineal spanning tree of height not exceeding k_E rooted at some vertex v on the pendant path, then it clearly permits one rooted at r. □

Figure 2 depicts a variable component of G_E. In this figure, m is the maximum of the number of clauses in which the variable occurs as a positive literal and the number of clauses in which it occurs as a negative literal. There is one such variable component for each variable of E. To simplify the discussion, let us assume for the moment that Figure 2 represents the first variable component in the string of components which begin at node r. Thus r = x_1, and the attached pendant path has length k_E. Corresponding to the fact that the minimum distance from x_1 to x_2 is 2m + 3, there is a pendant path attached to x_2 which has length k_E − (2m + 3).

CLAIM 2. If G_E has a lineal spanning tree T of height less than or equal to k_E, then T must include a path from x_1 to x_2 within one of the two symmetric subgraphs indicated in Figure 3.

PROOF OF CLAIM 2.
If a depth-first traversal deviates from the possibilities within the indicated subgraphs, then an application of Lemma 1 shows that, since x_1 has been visited, x_2 must (with one proviso) occur in the resulting lineal spanning tree T at a depth exceeding 2m + 3 and therefore the height of T exceeds k_E. The proviso is that there is no shortcut to x_2 passing through the rest of G_E. This can be ensured by making the perimeter paths of the clause components sufficiently long. A length greater than 2m + 3 will suffice. □
Fig. 2
Fig. 3
We now describe the lengths of the pendant paths attached to the vertices x_3, …, x_{m+3} = y_1. These lengths are, respectively, k_E − (2m + 3) − 4, k_E − (2m + 3) − 6, …, k_E − (2m + 3) − 2(m + 2). Without loss of generality, assume a partial depth-first traversal of the variable component of Figure 2 follows the route x_1 a_1 s_1 a_2 s_2 … a_m s_m a_{m+1} a_{m+2} x_2. (This corresponds to a "true" assignment to the first variable.)
CLAIM 3. Under this assumption, the resulting spanning tree T must include the subgraph of the variable component which is shown in Figure 4.

PROOF OF CLAIM 3. Repeatedly apply Lemma 1. □
Fig. 4

A "false" assignment uses b_i and t_i vertices in place of a_i and s_i, respectively. By symmetry, we know from Claim 3 that, in such a case, c_i must be visited the way d_i would be if the variable were "true." Thus we ensure that a variable cannot be both "true" and "false." A vertex s_i or t_i is also in general on the perimeter of a clause component, according to the identifications suggested in Figure 1. The main ideas of the proof should now be evident. For each variable, its truth assignment corresponds to which of the two routes is used in the variable component. If "true" ("false"), the effect is to cut exactly those clause components which represent clauses containing the (complement of the) variable. The integer k_E is chosen so that, when the depth-first search reaches vertex s (see Figure 1), the resulting spanning tree can have a height less than or equal to k_E only if each clause component has been cut at least once. The calculation of the lengths of the pendant paths for vertices in each subsequent variable component is done in a fashion analogous to that of the first component.

Consideration of the clause components will motivate our calculation of k_E. So far, we have only assumed that it is large enough for the arguments above to hold. A clause component, C, is shown in Figure 5. C consists of a ring of 6L vertices, with L an integer greater than 2M + 3, where M denotes the maximum number of occurrences of any variable or its complement. The vertices a, b, and c, which represent C's variables, are identified with the corresponding variable components as sketched in Figure 1. Let S denote a subset of {a, b, c}.

Fig. 5

CLAIM 4. A shortest lineal spanning tree (of C − S) rooted at vertex v has height 6L if S is empty and height 3L or less if S is nonempty.

PROOF OF CLAIM 4. Simply consider the various cases. □
We can now describe the calculation of k_E. Let t be the shortest distance between r and s in the variable components of G_E (where each component is traversed as described in Claim 2). Then we set k_E = 3L + t + 1.

CLAIM 5. If G_E has a lineal spanning tree of height less than or equal to k_E, then for each clause component, and for the depth-first traversal of G_E corresponding to the spanning tree, the set of vertices of the component which are visited before v in the traversal must contain a nonempty subset S of {a, b, c}.

PROOF OF CLAIM 5. Use Claims 2, 3, and 4. □
By Claim 5, G_E has a lineal spanning tree of height not exceeding k_E only if, in the corresponding depth-first traversal, at least one of the vertices a, b, or c is visited prior to the vertex v for every clause component C. From this and the construction of G_E, we know that for any such tree there is a corresponding truth assignment satisfying E. Conversely, if E is satisfiable, then by choosing the traversal of each variable component which corresponds to the variable's truth
assignment, at least one of a, b, or c can be visited prior to v for each clause component, and the resulting spanning tree has height bounded above by k_E. This completes the proof of Theorem 1. □

Minimizing min(G, T) could be thought of as a way to force a tour of G to take as long as possible since, by visiting the leaf nearest the root last, we maximize the total number of vertex visits in a depth-first traversal. We define the decision problem:

MIN MIN
Instance: A graph G and an integer k.
Question: Is there a lineal spanning tree T of G with min(G, T) ≤ k?

THEOREM 2.
MIN MIN is NP-complete.
PROOF. That the problem is in NP is clear. To see that MIN MIN is NP-hard, we reduce from HAMILTONIAN PATH [GJ]. Given a graph G on n vertices, we construct a graph G' so that G has a Hamiltonian path if and only if min_T{min(G', T)} ≤ n. G' is obtained from G by adding a new vertex, v, with an edge to every vertex in G and by attaching a cycle (with n + 1 new vertices) of length n + 2 to each vertex of G. This construction is depicted in Figure 6. If G has a Hamiltonian path p with end vertices x and y, then there is a lineal spanning tree T of G' rooted at x, including the edges of p, and with the edge yv. Since v is not then adjacent to any unvisited vertex in G, min(G', T) ≤ n. Conversely, if min(G', T) ≤ n for some lineal spanning tree T of G', then T must have a leaf l at a distance no more than n from the root r of T. It is easy to see that only v can be l. A simple consideration of the possible candidates for r reveals that r must be a vertex in G. Moreover, every vertex in G must be visited before v is, forcing the existence of a Hamiltonian path. This completes the proof of Theorem 2. □
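The construction in this reduction is simple enough to realize directly. The following sketch (our illustration; the function name and the labels of the new vertices are invented, and it assumes no vertex of G is already named 'v') builds G' from an adjacency map of G.

```python
def min_min_gadget(adj):
    """Given connected G with n vertices, build G' as in the MIN MIN reduction:
    a new apex vertex v adjacent to every vertex of G, plus a cycle through
    each original vertex made of n + 1 new vertices, so each attached cycle
    has length n + 2."""
    n = len(adj)
    g = {u: set(ws) for u, ws in adj.items()}
    apex = 'v'                                  # assumes 'v' is not a vertex of G
    g[apex] = set(adj)
    for u in list(adj):
        g[u].add(apex)
        ring = [('cycle', u, i) for i in range(n + 1)]   # n + 1 new vertices
        walk = [u] + ring + [u]                          # closes a cycle of length n + 2
        for a, b in zip(walk, walk[1:]):
            g.setdefault(a, set()).add(b)
            g.setdefault(b, set()).add(a)
    return g, apex

# Invented example: G is a path on 3 vertices.
path3 = {0: {1}, 1: {0, 2}, 2: {1}}
g, v = min_min_gadget(path3)
```

For the 3-vertex path, G' has 1 + 3 + 3·4 = 16 vertices, the apex is adjacent to all of G, and every cycle vertex has degree 2, as the proof requires.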
Fig. 6
3. Maximization Problems. In contrast to the last problem discussed in Section 2, maximizing max(G, T) can be viewed as a means for minimizing the total number of vertex visits in a depth-first traversal of G, as long as the vertex farthest from the root is visited last. Also, the largest integer k for which there is such a lineal spanning tree bounds the depth of a stack used to implement any depth-first traversal of G. The decision problem can be stated as:

MAX MAX
Instance: A graph G and an integer k.
Question: Is there a lineal spanning tree T of G with max(G, T) ≥ k?

THEOREM 3.
MAX MAX is NP-complete.
PROOF. It is immediate that max_T{max(G, T)} ≥ n − 1 for a graph G on n vertices if and only if G has a Hamiltonian path. Membership of the problem in NP is equally clear. □

The parameter max_T{min(G, T)} measures the minimum stack resource required for a searcher to succeed. For example, if T is regarded as a puzzle (or cryptographic) tree, then this becomes a measure of the strength or resistance of the puzzle to attack. Naturally, the decision problem is:

MAX MIN
Instance: A graph G and an integer k.
Question: Is there a lineal spanning tree T of G with min(G, T) ≥ k?

THEOREM 4.
MAX MIN is NP-complete.
PROOF. It is easy to see that max_T{min(G, T)} = n − 1 for a graph G on n vertices implies that T is a Hamiltonian path in G, and conversely a Hamiltonian path is a lineal spanning tree when rooted at one of its end vertices. The problem is plainly in NP. □
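Both Hamiltonian-path characterizations (Theorems 3 and 4) can be confirmed by exhaustive search on tiny graphs. The sketch below (ours, exponential-time, purely illustrative) enumerates every depth-first traversal and records the (max(G, T), min(G, T)) pair of each resulting lineal spanning tree.

```python
def traversal_profiles(adj, root):
    """All (max(G, T), min(G, T)) pairs over lineal spanning trees of G rooted
    at root, found by branching on every choice a depth-first search can make."""
    out = set()

    def rec(path, parent, depth):
        if not path:                       # traversal complete; collect leaf depths
            kids = set(parent.values())
            leaves = [v for v in depth if v not in kids]
            out.add((max(depth[v] for v in leaves),
                     min(depth[v] for v in leaves)))
            return
        u = path[-1]
        fresh = [w for w in adj[u] if w not in depth]
        if not fresh:                      # nothing new reachable from u: back up
            rec(path[:-1], parent, depth)
        for w in fresh:                    # descend, one branch per choice
            rec(path + [w], {**parent, w: u}, {**depth, w: depth[u] + 1})

    rec([root], {root: None}, {root: 0})
    return out

def best(adj, key):
    """Optimize key over all roots and all lineal spanning trees."""
    return max(key(p) for r in adj for p in traversal_profiles(adj, r))
```

On the 4-cycle (which has a Hamiltonian path) both max_T{max(G, T)} and max_T{min(G, T)} equal n − 1 = 3; on the star K_{1,3} (which has none) both equal 2.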
4. The Issue of Near-Optimization. Once a problem has been shown to be NP-complete or harder, researchers and practitioners frequently turn to the study of fast heuristic rules in the hope of guaranteeing solutions always close to the optimum. For a heuristic algorithm ALG and an optimization routine OPT, let ALG(G) and OPT(G) denote their respective solution values for the graph G. If the problem at hand is one of minimization, then we seek to discover the least real number R_ALG ≥ 1 such that ALG(G) ≤ R_ALG · OPT(G). Analogous, reversed inequalities are used for maximization problems, with R_ALG ∈ (0, 1]. Note that R_ALG is not a function of G; it must hold for every problem instance. R_ALG is usually referred to as algorithm ALG's worst-case performance ratio. In some cases we can improve the worst-case ratio by focusing on asymptotic behavior and sacrificing an additive constant C_ALG to obtain results of the form ALG(G) ≤ R_ALG · OPT(G) + C_ALG. If we can derive a positive, constant value for R_ALG,
then we say that ALG is a relative approximation algorithm. If we can achieve R_ALG = 1 (which, of course, incurs a nonzero additive constant for an NP-complete problem, unless P = NP), then we say that ALG is an absolute approximation algorithm. Note that if a problem permits an absolute approximation algorithm, then by definition it has a relative approximation algorithm, although the reverse is clearly not true. For the four specific lineal spanning tree problems we have addressed in the last two sections, we believe the rich combinatorial nature of graphs makes finding good approximation algorithms exceedingly difficult. In fact, we will now prove that, unless P = NP, for any polynomial-time algorithm ALG, R_ALG ≠ 1.
THEOREM 5. Unless P = NP, none of the problems MIN MAX, MIN MIN, MAX MAX, or MAX MIN possesses a polynomial-time absolute approximation algorithm.
PROOF. We first tackle MIN MAX, for which we assume the contrary for some heuristic ALG and additive integer constant c. That is, we assume that whenever ALG is applied to the optimization version of MIN MAX on a graph G, ALG(G) ≤ OPT(G) + c. We will proceed to derive a contradiction by demonstrating that we can employ ALG to solve (in polynomial time) the optimization version of MIN MAX exactly, a problem which is NP-hard since we have already proved its decision version to be NP-complete. Let n denote the number of vertices in G. Figure 7 depicts the essential features of the construction we use to connect a series of copies of G, giving rise to G'. Notice that our construction ensures that no tree for G' can contain a branch which exits a copy of G, enters another copy, and later returns. We actually attach r to two copies of G, each of which is then expanded as shown in Figure 7. This guarantees that an optimal solution to MIN MAX is rooted at vertex r, since any solution rooted elsewhere could be shortened. Therefore, the longest path in an optimal solution for G' begins at r, then includes the longest path in an optimal solution to MIN MAX (this time on G), then on to the longest path in another (possibly distinct) optimal solution (again on G), and so on for c + 1 iterations. Hence we find that OPT(G') = (c + 1) · OPT(G) + 2c + 1. Thus, by our assumption, ALG(G') ≤ OPT(G') + c = (c + 1) · OPT(G) + 3c + 1 = (c + 1) · (OPT(G) + 1) + 2c. However, this means that even if ALG used r as the root in its solution and thereby needed only the minimum 2c + 1 connecting edges in its longest path, it must have found an optimal solution to MIN MAX for at least one copy of G. Therefore, we simply ignore the connecting edges and select the shortest tree from among the O(n^(c+1)) produced in ALG's solution, returning this as our solution to MIN MAX on G.
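The bookkeeping in the chain of (in)equalities above is easy to verify mechanically; the following check (ours, a sanity test only, not part of the proof) confirms the identity (c + 1) · OPT(G) + 3c + 1 = (c + 1) · (OPT(G) + 1) + 2c.

```python
# For every additive constant c and optimum value opt, expanding
# ALG(G') <= OPT(G') + c = (c + 1) * opt + 2c + 1 + c gives both forms.
for c in range(6):
    for opt in range(1, 20):
        assert (c + 1) * opt + 3 * c + 1 == (c + 1) * (opt + 1) + 2 * c
```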
Since the construction we have employed takes only polynomial time in n (recall that c is a constant), and since we have used ALG to solve an NP-hard problem, we conclude that (unless P = NP) ALG does not run in polynomial time, contradicting our original assumption.

We next consider MIN MIN. Assuming there exists an algorithm ALG with constant c which violates the statement of the theorem, we employ the construction described in the proof of Theorem 2 and depicted in Figure 6. We modify G' in
Fig. 7

that the attached cycles are given n + c + 1 new vertices and hence have length n + c + 2. If min(G', T) ≤ n for some lineal spanning tree T of G', then ALG must visit every vertex in G before v. Therefore, although ALG may choose as a root any vertex in G or any vertex as far as c nodes out on an attached cycle, it can guarantee that ALG(G) ≤ OPT(G) + c only by finding a Hamiltonian path if G has one.

We move on to MAX MAX, this time building G' from G as depicted in Figure 8. If a polynomial-time absolute approximation algorithm ALG with constant c is at hand, then we use it on G', deriving ALG(G') ≥ OPT(G') − c = ((c + 1) · OPT(G) + 2c + 2) − c = (c + 1) · (OPT(G) − 1) + 2c + 3. Hence, even if ALG roots its solution at r1 or r2, thereby using the maximum 2c + 2 connecting
Fig. 8
edges, it must have found an optimal solution to MAX MAX for at least one copy of G. Finally, for MAX MIN, we return to the construction described in Figure 7 (this time attaching r to one copy of G, not two as with MIN MAX). Arguments analogous to those above demonstrate that the only way to guarantee ALG(G') ≥ OPT(G') − c is to find an optimal solution to MAX MIN on G. This completes the proof of Theorem 5. □

Relative approximation algorithms are not necessarily easier to provide. As of this writing, we have neither devised algorithms with bounded worst-case behavior for these problems nor have we proved that (unless P = NP) no such algorithms are possible. We note that our problems are much akin to the graph-bisection and longest-path problems [GJ], both of which have resisted attempts either to produce relative approximation algorithms or to prove that none exist.
5. Remarks. While a lineal spanning tree can be constructed in linear time, finding one which is optimal in several natural ways with respect to the distances from the root to the leaves is NP-hard. Even near-optimization seems extremely difficult (in terms of a worst-case scenario). All of the preceding results hold as well for directed graphs. This can be seen by replacing every edge in an undirected graph with two directed edges, one in each direction. In some sense, a few of our results are counterintuitive. That is, it is well known that there are many allied problem "pairs," where one problem is easy (solvable in low-order polynomial time) and the other is hard (NP-complete). For example, shortest path versus longest path and edge cover versus vertex cover are two such pairs from graph theory. Hence, we might suspect that finding short lineal spanning trees would be easier than finding tall lineal spanning trees. Theorems 1-4, however, show this not to be the case, although our proofs were much more complicated for short trees (Section 2) than for tall ones (Section 3). We mention some related, open problems we think are of interest. What can be said of the behavior of fast heuristic algorithms on the average, as opposed to simple worst-case ratios? Given certain a priori assumptions on graph connectedness, edge density, and so on, it is imaginable that randomly generated lineal spanning trees may even be acceptable. What happens when attention is restricted to certain classes of graphs (e.g., planar, directed, biconnected, bipartite, series-parallel, etc.)? Even if finding optimal trees remains NP-complete, can more be done to evaluate near-optimization strategies on these families of graphs? We note that counting the number of distinct spanning trees in a graph G can be performed in polynomial time using matrix techniques [ST]. What is the complexity of counting the number of distinct lineal spanning trees in G?
Finally, we might seek to optimize the total weight of a lineal spanning tree in a weighted graph. For the minimization version, for example, suppose we take any graph G with unit-weight edges, add enough edges of weight 2 to make a complete graph G', and ask for an optimal solution. Since any lineal spanning tree of a complete graph is a Hamiltonian path, G' has a solution of cost n - 1
if and only if G has a Hamiltonian path. Moreover, in a spirit similar to that used in the non-Euclidean traveling salesman problem (see, for example, [PS]), we could instead add edges of arbitrarily large weight. Hence we know that a problem such as this is not only NP-hard to solve exactly, but cannot even be approximated unless P = NP.
References

[An] R. Anderson, A Parallel Algorithm for the Maximal Path Problem, Proceedings of the Seventeenth Annual ACM Symposium on Theory of Computing (1985), 33-37.
[FQ1] E. C. Freuder and M. J. Quinn, Taking Advantage of Stable Sets of Variables in Constraint Satisfaction Problems, Proceedings of the Ninth International Joint Conference on Artificial Intelligence (1985), 1076-1078.
[FQ2] E. C. Freuder and M. J. Quinn, Parallelism in an Algorithm that Takes Advantage of Stable Sets of Variables to Solve Constraint Satisfaction Problems, Computer Science Technical Report, University of New Hampshire, 1985.
[GJ] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, San Francisco, 1979.
[HE] R. M. Haralick and G. L. Elliot, Increasing Tree Search Efficiency for Constraint Satisfaction Problems, Artificial Intelligence 14 (1980), 263-313.
[HT] J. Hopcroft and R. Tarjan, Efficient Planarity Testing, Journal of the Association for Computing Machinery 21 (1974), 549-568.
[Ko] R. Kowalski, Logic for Problem Solving, North-Holland, Amsterdam, 1979.
[LG] P. C. Liu and R. C. Geldmacher, An O(max(m, n)) Algorithm for Finding a Subgraph Homeomorphic to K4, Congressus Numerantium 29 (1980), 597-609.
[Ma] A. Mackworth, Constraint Satisfaction, Computer Science Technical Report, University of British Columbia, 1985.
[Na] B. A. Nadel, Theory-Based Search-Order Selection for Constraint Satisfaction Problems, Computer Science Technical Report, University of Michigan, 1986.
[PS] C. H. Papadimitriou and K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Prentice-Hall, Englewood Cliffs, NJ, 1982.
[Sm] J. R. Smith, Parallel Algorithms for Depth-First Searches I: Planar Graphs, SIAM Journal on Computing 15 (1986), 814-830.
[ST] M. N. S. Swamy and K. Thulasiraman, Graphs, Networks, and Algorithms, Wiley, New York, 1981.
[Ti] P. Tiwari, An Efficient Parallel Algorithm for Shifting the Root of a Depth-First Spanning Tree, Journal of Algorithms 7 (1986), 105-119.
[Vo] K. P. Vo, Finding Triconnected Components of Graphs, Linear and Multilinear Algebra 13 (1983), 143-165.
[Wi] S. G. Williamson, Combinatorics for Computer Science, Computer Science Press, Rockville, MD, 1985.