Space-Efficient DFS and Applications:
Simpler, Leaner, Faster
Abstract
The problem of space-efficient depth-first search (DFS) is reconsidered. A particularly simple and fast algorithm is presented that, on a directed or undirected input graph with $n$ vertices and $m$ edges, carries out a DFS in $O(n+m)$ time with bits of working memory, where $d_v$ is the (total) degree of $v$ for each vertex $v$. A slightly more complicated variant of the algorithm works in the same time with at most bits. It is also shown that a DFS can be carried out in a graph with $n$ vertices and $m$ edges in time with bits or in time with either bits or, for arbitrary integer $k$, bits. These results among them subsume or improve most earlier results on space-efficient DFS. Some of the new time and space bounds are shown to extend to applications of DFS such as the computation of cut vertices, bridges, biconnected components and 2-edge-connected components in undirected graphs.
Keywords: Graph algorithms, space efficiency, depth-first search, DFS.
1 Introduction and Related Work
Depth-first search or DFS is a very well-known method for visiting the vertices and edges of a directed or undirected graph [7, 19]. DFS is set off from other ways of traversing the graph, such as breadth-first search, by the DFS rule: Whenever two or more vertices have been discovered by the search and have unexplored incident (out)edges, an (out)edge incident on the most recently discovered such vertex is explored first. The DFS rule confers a number of structural properties on the resulting graph traversal that cause DFS to have a large number of applications. The rule can be implemented with the aid of a stack that contains those vertices discovered by the search that still have unexplored incident (out)edges, with more recently discovered vertices being located closer to the top of the stack. The stack is the main obstacle to a space-efficient implementation of DFS.
In the following discussion, let $n$ and $m$ denote the number of vertices and of edges, respectively, of an input graph. Let us also use the common picture according to which every vertex is initially white, becomes gray when it is discovered and pushed on the stack, and turns black when all its incident (out)edges have been explored and it leaves the stack. The study of space-efficient DFS was initiated by Asano et al. [2]. Besides a number of DFS algorithms whose running times were characterized only as polynomial in $n$ or worse, they described an algorithm that uses time and bits and another algorithm that uses time and at most bits, for arbitrary fixed $\epsilon > 0$, where "log", here and in the remainder of the paper, denotes the binary logarithm function $\log_2$. Their basic idea was, since the stack of gray vertices cannot be kept in full (it might occupy $\Omega(n \log n)$ bits), to drop (forget) stack entries and to restore them in smaller or bigger chunks when they are later needed. Using the same idea, Elmasry, Hagerup and Kammer [9] observed that one can obtain the best of both algorithms, namely a running time of with bits. Assuming a slightly stronger representation of the input graph as a set of adjacency arrays rather than adjacency lists, they also devised an algorithm that runs in time with bits or in time with bits, or anything in between with the same time-space product. The new idea necessary to obtain this result was, rather than to forget stack entries entirely, to keep for each gray vertex a little information about its entry on the stack and a little information about the position of that stack entry.
The space bounds cited so far may be characterized as density-independent in that they depend only on $n$ and not on $m$. If one is willing to settle for density-dependent space bounds that depend on $m$ or perhaps on the multiset of vertex degrees, it becomes feasible to store with each gray vertex $v$ an indication of the vertex immediately above it on the stack, which is necessarily a neighbor of $v$ and therefore expressible in $O(\log d_v)$ bits, where $d_v$ is the degree of $v$. Since $\sum_{v} d_v = 2m$ in an undirected graph, this yields a DFS algorithm that works in $O(n+m)$ time with $O(n+m)$ bits, as observed in [3, 15]. One can also use Jensen's inequality to bound the space requirements of the pointers to neighboring vertices by $O(n \log(2m/n))$ bits. This was done in [9] for problems for which the authors were unable to obtain density-independent bounds. In the context of DFS, it was mentioned by Chakraborty, Raman and Satti [5].
Several applications of DFS relevant to the present paper can be characterized by means of equivalence relations on vertices or edges. Let $G = (V, E)$ be a graph. If $G$ is directed and $u, v \in V$, let us write $u \sim v$ if $G$ contains a path from $u$ to $v$ and one from $v$ to $u$. If $G$ is undirected and $e_1, e_2 \in E$, write $e_1 \sim_{\mathrm{B}} e_2$ ($e_1 \sim_{\mathrm{2E}} e_2$, respectively) if $e_1 = e_2$ or $e_1$ and $e_2$ belong to a common simple cycle (a not necessarily simple cycle, respectively) in $G$. Then $\sim$ is an equivalence relation on $V$, and $\sim_{\mathrm{B}}$ and $\sim_{\mathrm{2E}}$ are equivalence relations on $E$. Each subgraph induced by an equivalence class of one of these relations is called a strongly connected component (SCC) in the case of $\sim$, a biconnected component (BCC) or block in the case of $\sim_{\mathrm{B}}$, and a 2-edge-connected component (which we shall abbreviate to 2ECC) in the case of $\sim_{\mathrm{2E}}$. Sometimes a single edge with its endpoints is not considered a biconnected or 2-edge-connected component; adapting our algorithms to alternative definitions that differ in this respect is a trivial matter. Suppose that $G$ is undirected. A cut vertex (also known as an articulation point) in $G$ is a vertex that belongs to more than one BCC in $G$; equivalently, it is a vertex whose removal from $G$ increases the number of connected components. A bridge in $G$ is an edge that belongs to no cycle in $G$; equivalently, it is an edge whose removal from $G$ increases the number of connected components.
For each of the three kinds of components introduced above, we may want the components of an input graph to be output one by one. Correspondingly, we will speak of the SCC, the BCC and the 2ECC problems. Outputting a component may mean outputting its vertices or edges or both. Correspondingly, we may describe an algorithm as, e.g., computing the strongly connected components of a graph with their vertices. We may either output special separator symbols between consecutive components or number the components consecutively and output vertices and edges together with their component numbers; for our purposes, these two conventions are equivalent. Topologically sorting a directed acyclic graph $G = (V, E)$ means outputting the vertices of $G$ in an order such that for each edge $(u, v) \in E$, $u$ is output before $v$.
Elmasry et al. [9] gave algorithms for the SCC problem and for topological sorting that work in time using bits. Their main tool was a method for "coarse-grained reversal" of a DFS computation that makes it possible to output the vertices of the input graph in reverse postorder, i.e., in the reverse of the order in which the vertices turn black in the course of the DFS. Various bounds for these problems were claimed without proof by Banerjee, Chakraborty and Raman [3]: time with bits for the SCC problem and time with bits as well as time with bits for topological sorting. For the BCC problem and the computation of cut vertices, Kammer, Kratsch and Laudahn [15] described an algorithm that works in time using bits and can be seen as an implementation of an algorithm of Schmidt [18]. Essentially the same algorithm was sketched by Banerjee et al. [3], who also applied it to the 2ECC problem and the computation of bridges. Space bounds of the form for the same problems were mentioned by Chakraborty et al. [5]. Essentially reinventing an algorithm of Gabow [10] and combining it with machinery from [9] and with new ideas, Kammer et al. [15] also demonstrated how to compute the cut vertices in time with bits. Finally, decomposing the input graph into subtrees and processing the subtrees one by one, Chakraborty et al. [5] were able to solve the BCC problem and compute the cut vertices in time with bits.
2 New Results and Techniques
The main thrust of this work is to establish new density-dependent and density-independent space bounds for fast DFS algorithms. Let us begin by developing simple notation that allows the results to be stated conveniently.
When $G = (V, E)$ is a directed or undirected graph, $d_v$ is the (total) degree of $v$ for each $v \in V$ and $q$ is an integer, let
When $G$ is directed, we use and to denote quantities defined in the same way, but now with $d_v$ taken to mean the in-degree and the out-degree of $v$, respectively.
Let $G$ be a directed or undirected graph with $n$ vertices and $m$ edges. Then

;

If is directed, then and are both bounded by .
Let $V$ be the vertex set of $G$ and, for each $v \in V$, denote by $d_v$ the (total) degree of $v$. To prove part (a), observe first that for all integers . Since the function is concave on and , the result follows from Jensen's inequality. Part (b) is proved in the same way, noting that the relevant vertex degrees now sum to $m$.
Our most accurate space bounds involve terms of the form . More convenient bounds can be derived from them with Lemma 2. Note that for all , the quantity can also be written as . The latter form will be preferred here.
Our first algorithm carries out a DFS of a graph with $n$ vertices and $m$ edges in $O(n+m)$ time using at most bits. The number of bits needed, which can also be bounded by and by , is noteworthy only for the constant factors involved. Comparable earlier space bounds were indicated only as or bits, and no argument offered in their support points to as small constant factors as ours. Moreover, all of the earlier algorithms make use of rank-select structures [6], namely to store variable-length information indexed by vertex numbers. Whereas asymptotically space-efficient and fast rank-select structures are known, it is generally accepted that in practice they come at a considerable price in terms of time and especially space (see, e.g., [20]) and a certain coding complexity. In contrast, we view the algorithm presented here as the first truly practical space-efficient DFS algorithm.
The simple but novel idea that enables us to make do without rank-select structures is a different organization of the DFS stack. The vertices on the stack, in the order from the bottom to the top of the stack, always form a directed path, in $G$ itself if $G$ is directed and in the directed version of $G$ if not, that we call the gray path. Assume that $G$ is undirected. Instead of having a table that maps each vertex to how far it has progressed in the exploration of its incident edges, which in some sense distributes the stack over the single vertices and is what necessitates a rank-select structure, we return to using a stack implemented in contiguous memory locations and store there for each internal vertex $v$ on the gray path the distance in its adjacency array, considered as a cyclic structure, from the predecessor of $v$ to the successor of $v$ on the gray path. More intuitively, one can think of the stack entry as describing the "turn" that the gray path makes at $v$, namely from its predecessor $u$ via $v$ to its successor $w$. Knowing $u$, $v$ and the "turn value", one can compute $w$. Provided that outside of the stack we always remember the current vertex of the DFS, the vertex $t$ on top of the DFS stack and at the end of the gray path, and the position in $t$'s adjacency array of the predecessor of $t$ on the gray path, if any, this allows us to pop $t$ from the stack in constant time, and pushing is equally easy. In the course of the processing of $v$, the "turn value" can be stepped from 1 ("after entering from $u$, take the next exit") to $d_v$, where $d_v$ is the (total) degree of $v$ (directed edges that enter $v$ are simply ignored). Aside from the somewhat unusual stack, the DFS can proceed as a usual DFS and complete in linear time. Handling vertices of small degree specially, we can lower the space bound to bits and solve the SCC problem in time with bits.
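As a concrete illustration, the following Python sketch implements the turn-value stack for an undirected graph given as adjacency arrays. It is a simplification, not the paper's Fig. 1: it assumes a simple graph and recovers cross links by searching the neighbor's adjacency array (`adj[u].index(v)`), where the paper assumes constant-time cross links, and it uses an ordinary Python list in place of the bit stack.

```python
def dfs_turn_stack(adj, visit=lambda v: None):
    """DFS storing, per internal gray-path vertex, only the cyclic 'turn'
    from its predecessor to its successor in its adjacency array."""
    n = len(adj)
    white = [True] * n
    for r in range(n):
        if not white[r]:
            continue
        white[r] = False
        visit(r)
        if not adj[r]:
            continue
        stack = []                 # turn values of internal gray-path vertices
        v, p, t = r, len(adj[r]) - 1, 0   # current vertex, predecessor pos, turn
        while True:
            # a non-root skips the turn leading back to its predecessor
            limit = len(adj[v]) if v == r else len(adj[v]) - 1
            if t < limit:
                t += 1
                w = adj[v][(p + t) % len(adj[v])]
                if white[w]:       # tree edge: descend to w
                    white[w] = False
                    visit(w)
                    stack.append(t)            # freeze v's turn value
                    p = adj[w].index(v)        # cross link, here by search
                    v, t = w, 0
            else:                   # withdraw from v
                if v == r:
                    break           # DFS tree finished
                u = adj[v][p]
                pos_v_in_u = adj[u].index(v)
                t = stack.pop()
                p = (pos_v_in_u - t) % len(adj[u])  # recover u's predecessor pos
                v = u
```

Popping recomputes the predecessor position of the parent from the cross-link position and the saved turn value, which is exactly the constant-time pop described above.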
Resorting to using rank-select structures, we describe linear-time algorithms for the SCC, BCC and 2ECC problems and for the computation of topological sortings, cut vertices and bridges with space bounds of the form or bits, where the coefficients involved are positive constants. Apart from minor tricks to reduce the values of these constants, no new techniques are involved here.
Turning to space bounds that are independent of $m$ or almost so, we first describe a DFS algorithm that works in time with bits. The algorithm is similar to an algorithm of Elmasry et al. [9] that uses bits. Our superior space bound is made possible by two new elements: First, the algorithm is changed to use a stack of "turn values" rather than of "progress counters", as discussed above. And second, when stack entries have to be dropped to save space, we keep approximations of the lost entries that turn out to work better than those employed in [9]. Our space bound is attractive because it unifies the earlier bounds of the forms , and bits, being at least as good as all of them for every graph density and better than each of them for some densities.
Subsequently we show how to carry out a DFS in time with bits or, with a slight variation, in time with bits for arbitrary fixed $k$. Here $\log^{(k)}$ denotes the $k$-fold repeated application of $\log$, e.g., $\log^{(1)} n = \log n$, $\log^{(2)} n = \log \log n$, and $\log^{(3)} n = \log \log \log n$. The main new idea instrumental in obtaining this result is to let each vertex $v$ dropped from the stack record, instead of a fixed approximation of its stack position as in earlier algorithms, an approximation of that position that changes dynamically to become coarser when $v$ is farther removed from the top of the stack. Adapting an algorithm of Kammer et al. [15] for computing cut vertices, we show that the time and space bounds indicated in this paragraph extend to the problems of computing biconnected and 2-edge-connected components, cut vertices and bridges of undirected graphs.
3 Preliminaries
We assume a representation of an undirected input graph $G = (V, E)$ that is practically identical to the one used in [14]: For some known integer $n$, $V = \{1, \ldots, n\}$, the degree $d_v$ of each $v \in V$ can be obtained as $deg(v)$, and for each $v \in V$ and each $k \in \{0, \ldots, d_v - 1\}$, $head(v, k)$ and $mate(v, k)$ yield the $(k+1)$st neighbor $u$ of $v$ and the integer $j$ with $head(u, j) = v$, respectively, for some arbitrary numbering, starting at 0, of the neighbors of each vertex. The access functions $deg$, $head$ and $mate$ run in constant time. The representation of a directed graph $G$ is similar in spirit: $G$ is represented with in/out adjacency arrays, i.e., we can access the in-neighbors as well as the out-neighbors of a given vertex one by one, and there are cross links, i.e., the function $mate$ now, for each edge $(u, v)$, maps the position of $v$ in the adjacency array of $u$ to that of $u$ in the adjacency array of $v$ and vice versa.
A DFS of a graph $G$ is associated with a spanning forest $F$ of $G$ in an obvious way: If a vertex $v$ is discovered by the DFS when the current vertex is $u$, $v$ becomes a child of $u$. $F$ is called the DFS forest corresponding to the DFS, and its edges are called tree edges, whereas the other edges of $G$ may be called nontree edges. At the outermost level, the DFS steps through the vertices of $G$ in a particular order, called its root order, and every vertex found not to have been discovered at that time becomes the root of the next DFS tree in $F$. The parent pointer of a given vertex $v$ is an indication of the parent of $v$ in $F$, if any. If the root order of a DFS is simply $1, \ldots, n$ and the DFS always explores the edges incident on the current vertex $u$ in the order in which their endpoints occur in the adjacency array of $u$, the corresponding DFS forest is the lexicographic DFS forest of the adjacency-array representation.
The following lemmas describe two auxiliary data structures that we use repeatedly: the choice dictionary of Kammer and Hagerup [12, 13] and the ternary array of Dodis, Pǎtraşcu and Thorup [8, Theorem 1].
There is a data structure that, for every $n \in \mathbb{N}$, can be initialized for universe size $n$ in constant time and subsequently occupies $n + o(n)$ bits and maintains an initially empty subset $S$ of $\{1, \ldots, n\}$ under insertion, deletion, membership queries and the operation choice (return an arbitrary element of $S$) in constant time as well as iteration over $S$ in $O(|S| + 1)$ time.
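A minimal stand-in with the same interface, but a trivial $O(n \log n)$-bit representation rather than the $n + o(n)$ bits of the lemma, can be sketched as follows; the class and method names are illustrative, not taken from [12, 13].

```python
class SimpleChoiceDict:
    """Set over {0, ..., n-1} with O(1) insert, delete, membership and
    choice, and iteration in time proportional to the set size: a dense
    array of members plus a position index."""
    def __init__(self, universe_size):
        self.members = []                   # dense array of current members
        self.pos = [-1] * universe_size     # pos[x] = index of x in members
    def insert(self, x):
        if self.pos[x] == -1:
            self.pos[x] = len(self.members)
            self.members.append(x)
    def delete(self, x):
        i = self.pos[x]
        if i != -1:
            last = self.members[-1]         # swap x with the last member
            self.members[i] = last
            self.pos[last] = i
            self.members.pop()
            self.pos[x] = -1
    def contains(self, x):
        return self.pos[x] != -1
    def choice(self):
        return self.members[-1] if self.members else None
    def __iter__(self):
        return iter(self.members)
```

The swap-with-last trick is what makes deletion constant-time while keeping the member array dense for fast iteration.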
There is a data structure that can be initialized with an arbitrary $n \in \mathbb{N}$ in time and subsequently occupies $n \log 3 + o(n)$ bits and maintains a sequence of $n$ elements drawn from $\{0, 1, 2\}$ under constant-time reading and writing of individual elements of the sequence.
4 DensityDependent Bounds
4.1 DepthFirst Search
A DFS of a directed or undirected graph with $n$ vertices and $m$ edges can be carried out in $O(n+m)$ time with at most any of the following numbers of bits of working memory:

;

;

.
We first show part (a) for the case in which $G$ is undirected. The algorithm was described in Section 2, and it was argued there that it works in $O(n+m)$ time. What remains is to bound the number of bits needed.
If an internal vertex $v$ on the gray path has degree $d$, its stack entry can be taken to be an integer that indicates the number of edges incident on $v$ that were explored with $v$ as the current vertex. The stack entry can therefore be represented in bits, so that the entire stack never occupies more than bits. In addition to the information on the stack, the DFS must know for each vertex $v$ whether $v$ is white; this takes $n$ bits. (Unless an application calls for it, the DFS has no need to distinguish between gray and black vertices.) Finally the DFS must store a few simple variables in $O(\log n)$ bits, for the grand total claimed in part (a). This concludes the proof of part (a) for undirected graphs.
If $G$ is directed, we can pretend that the in-neighbors and the out-neighbors of each vertex are stored in the same adjacency array (whether or not this is the case in the actual representation of $G$). We can then use the same algorithm, except that an edge $(u, v)$ should not be explored in the wrong direction, i.e., when $v$ is the current vertex of the DFS.
To show part (b) of the theorem, let $d_v$ be the (total) degree of $v$ for each vertex $v$ and observe that for all integers , so that . Part (c) follows immediately from part (a) by an application of Lemma 2(a).
At the price of introducing a slight complication in the algorithm, we can obtain another space bound of the same form with a smaller constant factor on $m$. If the coefficient of $n$ is allowed to increase, it is also possible (but of little interest) to lower the coefficient of $m$ as far as desired towards 0 by treating vertices of small degree separately in the analysis.
A DFS of a directed or undirected graph with $n$ vertices and $m$ edges can be carried out in $O(n+m)$ time with at most bits of working memory.
The relation above is satisfied for all integer degrees except 4, 6 and 7. To handle the stack entries of vertices of degree 4, we divide these into groups of 5 and represent each group on the stack through a single combined entry of bits instead of 5 individual entries of bits each. Since , the combined entry is small enough for the bound of the theorem. At all times, an incomplete group of up to 4 individual entries is kept outside of the stack in a constant number of bits. Similarly, groups of 3 entries for vertices of degree 6 are represented in bits, and groups of 3 entries for vertices of degree 7 are represented in bits. Since and , this altogether yields a space bound of bits.
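The grouping argument rests only on elementary radix arithmetic: $k$ entries, each drawn from a domain of size $s$, fit in $\lceil k \log s \rceil$ bits when encoded as a single base-$s$ number. The following sketch demonstrates this; the concrete domain sizes used in the corollary are not reproduced here.

```python
def pack(group, s):
    """Encode a group of small stack entries, each from {0, ..., s-1},
    as one integer in base s.  For example, five entries with s = 5 fit
    in 12 bits (5**5 = 3125 <= 2**12), not 5 * 3 = 15 bits."""
    code = 0
    for x in group:
        code = code * s + x
    return code

def unpack(code, s, k):
    """Inverse of pack: recover the k entries of a combined entry."""
    group = []
    for _ in range(k):
        group.append(code % s)
        code //= s
    group.reverse()
    return group
```

An incomplete group is simply kept as a short Python list (a constant number of bits in the algorithm) until it fills up and can be packed.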
The simplicity of the algorithm of Theorem 4.1 is demonstrated in Fig. 1, which shows an implementation of it for an undirected input graph. The description is given in complete detail except for items like the declaration of variables and for the specification of a bit stack $S$ with the following two operations in addition to an appropriate initialization to being empty: $push(x, b)$, where $x$ and $b$ are integers with $b \ge 0$ and $0 \le x < 2^b$, pushes on $S$ the $b$-bit binary representation of $x$, and $pop(b)$, where again $b$ is an integer with $b \ge 0$, correspondingly pops $b$ bits from $S$, interprets these as the binary representation of an integer $x$ and returns $x$. The task of the DFS is assumed to be the execution of certain user procedures at the appropriate times: $preorder(v)$ and $postorder(v)$, for each $v \in V$, when $v$ turns gray and when it turns black, respectively, $tree(u, v)$, for $\{u, v\} \in E$, when the edge $\{u, v\}$ is explored with $u$ as the current vertex and becomes a tree edge, $retreat(u, v)$ when the DFS later withdraws from $v$ to $u$, and $nontree(u, v)$, for $\{u, v\} \in E$, when $\{u, v\}$ is explored with $u$ as the current vertex but does not lead to a new vertex. The code is made slightly more involved by a special handling of the first and last vertices of the gray path and by the fact that no stack entries are stored for vertices of degree 2. Timing experiments with an implementation of the algorithm of Fig. 1 showed it to be sometimes faster and sometimes slower than an alternative algorithm that also manages its own stack but makes no attempt at being space-efficient.
DFS:
  { width(d) denotes the number of bits used for the stack entry of a vertex of degree d }
  for v := 1 to n do white[v] := true;              { initially all vertices are undiscovered }
  for r := 1 to n do if white[r] then               { if r has not yet been discovered }
    white[r] := false; preorder(r);                 { begin a new DFS tree rooted at r }
    v := r; p := deg(r) - 1; t := 0; t_r := 0;
    repeat                                          { until breaking out of the loop with break below }
      { Invariant: v is the current vertex, with data on v stored in t and p }
      t := t + 1;                                   { advance in v's adjacency array }
      if (v = r and t <= deg(v)) or (v <> r and t <= deg(v) - 1) then { v still has unexplored incident edges }
        k := (p + t) mod deg(v);
        u := head(v, k);                            { the next neighbor of v }
        if white[u] then
          white[u] := false; preorder(u); tree(v, u);
          if v = r then t_r := t                    { save t in t_r rather than on S }
          else if deg(v) > 2 then push(t, width(deg(v))); { push in nontrivial cases }
          p := mate(v, k);                          { index at u of v }
          t := 0;                                   { prepare to take the first turn out of u }
          v := u;                                   { make u the current vertex }
        else nontree(v, u);
      else                                          { v has no more unexplored incident edges }
        if v = r then break;                        { done at the root - a DFS tree is finished }
        u := head(v, p);                            { the parent of v in the DFS tree }
        postorder(v); retreat(u, v);
        if u = r then t := t_r                      { retrieve t from t_r rather than from S }
        else if deg(u) = 2 then t := 1              { trivial case - nothing stored on S }
        else t := pop(width(deg(u)));               { pop in nontrivial cases }
        j := mate(v, p); p := (j - t) mod deg(u);   { index at u of u's parent }
        v := u;                                     { make u the current vertex }
    forever;
    postorder(r);
4.2 Strongly Connected Components and Topological Sorting
The strongly connected components of a directed graph $G$ with $n$ vertices and $m$ edges can be computed with their vertices and/or edges in $O(n+m)$ time with bits of working memory.
Let $G$ be the input graph and let $G^T$ be the directed graph obtained from $G$ by replacing each edge $(u, v)$ by the antiparallel edge $(v, u)$. We use an algorithm attributed to Kosaraju and Sharir in [1] that identifies the vertex set of each SCC as that of a DFS tree constructed by a standard DFS of $G^T$ that, however, employs as its root order the reverse postorder defined by an (arbitrary) DFS of $G$.
Consider each vertex $u$ in $G$ to have a circular incidence array that contains all edges entering $u$ as well as all edges leaving $u$. A DFS of $G$ can be viewed as entering each nonroot vertex $u$ at a particular (tree) edge and each root at a fixed position in its incidence array and eventually traversing $u$'s incidence array exactly once from that entry point, classifying certain edges out of $u$ as tree edges and skipping over the remaining edges, either because they lead to vertices that were already discovered or because they enter $u$, before finally, if $u$ is a nonroot, retreating over the tree edge to $u$'s parent. During such a DFS of $G$ that uses the root order $1, \ldots, n$, we construct a bit sequence $B$ by appending a 1 to an initially empty sequence whenever the DFS discovers a new vertex or withdraws over a tree edge and by appending a 0 whenever the DFS skips over an edge. The total number of bits in $B$ is exactly , and $B$ can be seen to represent an Euler tour of each tree in the forest $F$ defined by the DFS in a natural way. We also use an array $R$ of $n$ bits to mark those vertices that are roots in $F$. Observe that the pair $(B, R)$ supports an Euler traversal that, in $O(n+m)$ time and using only $O(\log n)$ additional bits, enumerates the vertices in $V$ in reverse postorder with respect to the DFS. In particular, whenever an Euler tour of a tree in $F$ with root $r$ has been followed backwards completely from end to start, $R$ is used to find the end vertex of the next Euler tour, if any, as the largest root smaller than $r$.
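The construction of $B$ and $R$ can be sketched as follows for an undirected graph; the function is a recursive simplification (the space-efficient algorithm would of course not use recursion), and the backward Euler traversal itself is omitted.

```python
def euler_bits(adj):
    """During a DFS, append 1 when a vertex is discovered or when the
    search withdraws over a tree edge, and 0 when an edge is skipped
    because it leads to an already discovered vertex.  Returns the bit
    sequence B (as a list) and the root marks R."""
    n = len(adj)
    white = [True] * n
    bits, roots = [], [False] * n
    def explore(v):
        white[v] = False
        bits.append(1)              # v is discovered
        for w in adj[v]:
            if white[w]:
                explore(w)
                bits.append(1)      # withdraw over the tree edge {v, w}
            else:
                bits.append(0)      # skip the edge
    for r in range(n):
        if white[r]:
            roots[r] = True
            explore(r)
    return bits, roots
```

In this undirected sketch every edge is encountered from both endpoints and contributes one bit per encounter, so the sequence has n + 2m bits; the 1s count the n discoveries plus one withdrawal per tree edge.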
We carry out a DFS of $G^T$, interleaved with an execution of the Euler traversal that supplies new root vertices as needed, and output the vertex set of each resulting tree as an SCC. The total time spent is $O(n+m)$. We could execute the algorithm using $n$ bits for $R$, bits for $B$ and, according to Theorem 4.1, bits for the depth-first searches. Recall, however, that the space bound of Theorem 4.1 is obtained as the sum of $n$ bits for an array $white$ and bits for the DFS stack and related variables. It turns out that we can realize $R$ and $white$ together through a single ternary array with $n$ entries. To see this, it suffices in the case of the DFS of $G$ to note that a vertex classified as a root certainly is not white. For the DFS of $G^T$, assume first that we want to output only the vertices of the strongly connected components, as is standard. Then even a binary array would suffice—we could use the same binary value to denote both "root" and "not white". The reason for this is, on the one hand, that when the Euler traversal has entered a tree $T$ with root $r$, it will never again need to inspect the root mark of any vertex in $T$ and, on the other hand, that every vertex reachable in $G^T$ from a vertex in $T$ must satisfy —otherwise it would belong to an earlier tree (with respect to the DFS of $G$) and not to $T$.
If we want to output not only the vertices, but also the edges of each SCC and perhaps to highlight those edges whose endpoints belong to different strongly connected components (the "intercomponent" edges), we need to know for each edge $(u, v)$ explored during the DFS of $G^T$ whether $v$ belongs to the DFS tree under construction at that time (then $(u, v)$ is an edge of the current SCC) or to an older DFS tree (then $(u, v)$ is an "intercomponent" edge). We solve this problem again resorting to a ternary array, splitting the value "not white" into "not white, but in the current tree" and "in an older tree". Whenever the DFS of $G^T$ completes a tree, we repeat the DFS of that tree, treating the color "not white, but in the current tree" as "white" and replacing all its occurrences by "in an older tree". The space bound follows from the ternary-array lemma of Section 3.
When $m$ is larger relative to $n$, it is advantageous, instead of storing the bit vector $B$, to store for each vertex $v$ a parent pointer of bits, where is the in-degree of $v$, that indicates $v$'s parent in the DFS forest of $G$ or no parent at all (i.e., $v$ is a root). For this we need the standard static space allocation:
There is a data structure that can be initialized for a positive integer $n$ and nonnegative integers $c_1, \ldots, c_n$ in time and subsequently occupies bits and realizes an array of $n$ entries, the $v$th of $c_v$ bits, under constant-time reading and writing of individual entries.
Maintain the entries in an array of $\sum_{v=1}^{n} c_v$ bits and store the sequence of prefix sums. For $v = 1, \ldots, n$, entry $v$ is located in positions $f(v), \ldots, f(v+1) - 1$, where $f(v) = \sum_{u < v} c_u$, and $f$ can be evaluated in constant time given bits of bookkeeping information [11, 17] that can be computed in time.
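The lemma's allocation scheme can be pictured as follows; the sketch stores all fields in a single Python integer used as a bit vector and replaces the sublinear-space constant-time prefix-sum evaluation of [11, 17] with an explicit offset table.

```python
class VarWidthArray:
    """Array of n entries where entry v occupies widths[v] bits, all
    packed contiguously; the prefix sums f(v) locate each field."""
    def __init__(self, widths):
        self.widths = widths
        self.offset = [0] * len(widths)      # offset[v] = widths[0] + ... + widths[v-1]
        for v in range(1, len(widths)):
            self.offset[v] = self.offset[v - 1] + widths[v - 1]
        self.bits = 0                        # one big bit vector
    def read(self, v):
        return (self.bits >> self.offset[v]) & ((1 << self.widths[v]) - 1)
    def write(self, v, x):
        mask = (1 << self.widths[v]) - 1
        self.bits &= ~(mask << self.offset[v])   # clear the field
        self.bits |= (x & mask) << self.offset[v]
```

This is exactly how parent pointers of varying widths can share one contiguous memory area.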
A representation of the parent pointers of the lexicographic DFS forest of an adjacency-array representation of a graph $G$ with $n$ vertices and $m$ edges that allows constant-time access to the parent of a given vertex can be stored in bits and computed in time with additional bits, where if $G$ is undirected and if $G$ is directed.
The parent pointers themselves can be stored in bits, and constant-time access to them can be provided according to Lemma 4.2. To compute the parent pointers, carry out a DFS of $G$, using the additional bits to store for each vertex whether it is still white. When the DFS ends the processing at a vertex $v$, it follows the parent pointer of $v$ to withdraw to $v$'s parent $u$ in the DFS forest, and from there proceeds to explore the edge that follows $\{u, v\}$ in $u$'s incidence array, if any, and to store the appropriate new parent pointer if this edge leads to a white vertex. The procedure to follow at the first exploration of an edge from a newly discovered vertex is analogous.
The strongly connected components of a directed graph with $n$ vertices and $m$ edges can be computed in $O(n+m)$ time with at most bits.
The parent pointers of a DFS of $G$ by themselves support the Euler traversal of the proof of Theorem 4.2 in $O(n+m)$ time, using $O(\log n)$ additional bits. To see this, observe that one can visit the children of a vertex $u$ by inspecting the out-neighbors of $u$ one by one to see which of them indicate $u$ as their parent and that the array $R$ is superfluous since a vertex is a root in the DFS forest if and only if its parent pointer does not point to one of its neighbors—a value was reserved for this purpose. Thus first compute the parent pointers (Lemma 4.2) and then carry out a DFS of $G^T$, interleaved with the Euler traversal. The time needed is $O(n+m)$, and the number of bits is at most the sum of the bounds of Theorem 4.1 and Lemma 4.2. To prove the second bound, use Lemma 2.
If the input graph $G$ happens to be acyclic, the algorithms of Theorems 4.2 and 4.2 output the vertices of $G$ in the order of a topological sorting. In the case of Theorem 4.2 this may yield the most practical algorithm. Better space bounds for topological sorting can, however, be obtained by implementing an alternative standard algorithm, due to Knuth [16], that repeatedly removes a vertex of in-degree 0 while keeping track only of the in-degrees of all vertices. This was also suggested by Banerjee et al. [3]. As mentioned in the discussion of related work, they indicated a space bound of bits; it is not clear to this author, however, how such a bound is to be proved.
A topological sorting of a directed acyclic input graph with $n$ vertices and $m$ edges can be computed in $O(n+m)$ time with at most any of the following numbers of bits:

;

;

.
Maintain the current set of vertices of in-degree 0 in an instance of the choice dictionary of Lemma 3, which needs bits. Also maintain the current in-degrees according to Lemma 4.2. Since we can store an arbitrary value or nothing for vertices of current in-degree 0, we need only distinguish between different values for a vertex of original in-degree , so that bits suffice. With these data structures, the algorithm of Knuth [16] can be executed in $O(n+m)$ time. This proves part (a). Part (b) follows from part (a) since for all integers , and part (c) follows from part (a) with Lemma 2(b).
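A transparent (but not space-efficient) rendering of this procedure, with ordinary Python lists standing in for the choice dictionary and the in-degree array:

```python
def topological_sort(adj):
    """Knuth's algorithm: repeatedly remove a vertex of current
    in-degree 0.  adj[v] lists the out-neighbors of v."""
    n = len(adj)
    indeg = [0] * n
    for v in range(n):
        for w in adj[v]:
            indeg[w] += 1
    ready = [v for v in range(n) if indeg[v] == 0]  # choice-dictionary stand-in
    order = []
    while ready:
        v = ready.pop()          # an arbitrary vertex of in-degree 0
        order.append(v)
        for w in adj[v]:         # "remove" v: decrement successors' in-degrees
            indeg[w] -= 1
            if indeg[w] == 0:
                ready.append(w)
    return order                 # shorter than n exactly if the graph has a cycle
```

Each edge is inspected once when its tail is removed, giving the linear running time; the space savings of the theorem come entirely from the two data structures it substitutes for the plain lists used here.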
4.3 Biconnected and 2-Edge-Connected Components
In this subsection we will see that closely related algorithms can be used to compute the cut vertices, the bridges and the biconnected and 2-edge-connected components of an undirected graph. Our algorithms are similar to those of [3, 5, 15], but whereas the earlier authors indicated the space bounds only as or bits, we will strive to obtain small constant factors and indicate these explicitly.
A simple but crucial fact is that for every DFS forest $F$ of an undirected graph $G$, every edge in $G$ joins an ancestor to a descendant within a tree in $F$. DFS is also known to interact harmoniously with the graph structures of interest in this subsection, as exemplified, e.g., in the following lemma.
Let $F$ be a DFS forest of an undirected graph $G = (V, E)$ and let $e \in E$. Then the subgraph $T_e$ of $F$ induced by the edges in $F$ equivalent to $e$ under the biconnectivity relation introduced above is a subtree of $F$ whose root has degree 1 in $T_e$.
Every edge in $E$ is equivalent under the relation to an edge in $F$, so $T_e$ is not the empty graph. Let us first prove that $T_e$ is connected. Suppose for this that a simple path in $F$ contains the edges $e_1$, $e_2$ and $e_3$ in that order and that $e_1$ and $e_3$ belong to $T_e$. To show that $e_2$ also belongs to $T_e$, let $C$ be a simple cycle in $G$ that contains $e_1$ and $e_3$ and let $P$ be the maximal subpath of $F$ that contains $e_2$ and whose internal vertices do not belong to $C$. The endpoints of $P$ lie on $C$, so $P$ and a suitably chosen subpath of $C$ together form a simple cycle that contains $e_2$ and at least one of $e_1$ and $e_3$. Thus $T_e$ is indeed a subtree of $F$ with a root $r$. A simple cycle in $G$ that contains two edges in $T_e$ incident on $r$ must necessarily also contain a proper ancestor of $r$, contradicting the fact that the edge between $r$ and its parent in $F$, if any, does not belong to $T_e$. Thus the degree of $r$ in $T_e$ is 1.
Let $F$ be a DFS forest of an undirected graph $G$ with $n$ vertices and $m$ edges. Let us call a subtree of $F$ as in Lemma 4.3 a BCC subtree and its root a BCC root. A vertex common to two edge-disjoint subtrees of a rooted tree is a root in at least one of the subtrees. Therefore every cut vertex in $G$ is a BCC root. Conversely, a BCC root $r$ is also a cut vertex in $G$ unless $r$ is a root in $F$ with only one child. Every BCC of $G$ consists precisely of the vertices in a particular BCC subtree $T$ and the edges in $G$ that join two such vertices, i.e., whose lower endpoint (with respect to $F$) lies in $T$ but is not the root of $T$. An edge is a bridge exactly if, together with its endpoints, it constitutes a full BCC subtree. A 2ECC, finally, is either such a 1-edge BCC subtree or a maximal connected subgraph of $G$ with at least one edge and without bridges.
For each $v \in V$, denote by $b(v)$ the assertion that $v$ has a parent in $F$ and $G$ contains at least one edge between a descendant of $v$ and a proper ancestor of the parent of $v$. If $(u, v)$ is an edge in $F$ and $u$ is the parent of $v$, $b(v)$ is false exactly if $u$ is the root of the BCC subtree that contains $(u, v)$. We can compute $b(v)$ for all $v \in V$ by initializing all entries in a Boolean array $b$ to false and processing all nontree edges as follows: To process a nontree edge $\{u, w\}$, where $u$ is a descendant of $w$, start at $u$ and follow the path in $F$ from $u$ to $w$, setting $b(x)$ to true for every vertex $x$ visited, but omitting this action for the last two vertices (namely $w$ and the child of $w$ on the path). Suppose that we process each nontree edge $\{u, w\}$, where $u$ is a descendant of $w$, when a preorder traversal of $F$ reaches $u$ and before it proceeds to children of $u$. Then we can stop the processing of $\{u, w\}$ once we reach a vertex $x$ for which $b(x)$ already has the value true—the same will be true for all outstanding vertices. Therefore the processing of all nontree edges can be carried out in $O(n+m)$ time, after which $b(v)$ has the correct value for all $v \in V$. To solve one of the problems considered in this subsection, compute the DFS forest $F$ and traverse it to compute $b$, as just described, while executing the following additional problem-specific steps:
Cut vertices: Output each vertex in that has a child with and that either is not a root in or has two or more children.
Bridges: Output each tree edge , where is the parent of , for which and for every child of .
Biconnected components: Specialize the traversal of to always visit a vertex with before a sibling of with . To compute the biconnected components of with their vertices and edges, when the traversal withdraws over a tree edge from a vertex to its parent , output the edges in that have as their lower endpoint (including ), output itself and, if , also output and wrap up the current BCC, i.e., except in the case of the very last component, output a component separator or increment the component counter. Visiting the children of a vertex with after those with ensures that the vertices and edges of the BCC that contains are output together for each without intervening vertices and edges of other biconnected components.
2-edge-connected components: Specialize the traversal of so that for each vertex , a child of for which is a bridge is always visited before a child of for which is not a bridge. Suppose that the traversal withdraws from a vertex to its parent . If has at least one incident edge that is not a bridge, output and, if is a bridge, wrap up the current 2ECC. If is a bridge, output , and and wrap up the current 2ECC. If is not a bridge, output all non-bridge edges of which is the lower endpoint (including ). Finally, when the traversal withdraws from a root with at least one incident edge that is not a bridge, output and wrap up the current 2ECC. As above, visiting those children of a given vertex that are adjacent to via bridges before the other children of ensures that the vertices and edges of the 2ECC that contains several edges incident on , if any, are output together without intervening vertices and edges of other 2-edge-connected components.
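The computation of the Boolean array together with the cut-vertex and bridge rules above can be sketched as follows. This is only an illustrative Python sketch, not the paper's space-efficient implementation: it stores parent pointers, children lists and the recursion stack explicitly, writes the Boolean array as `beta`, and processes each non-tree edge at its lower endpoint with the early stopping described above. All identifiers are ours.

```python
import sys

def cut_vertices_and_bridges(n, adj):
    """Compute cut vertices and bridges of an undirected graph given as
    adjacency lists, using the beta-array technique sketched above."""
    parent = [-1] * n
    beta = [False] * n          # beta[v] as described in the text
    color = [0] * n             # 0 = white, 1 = gray, 2 = black
    children = [[] for _ in range(n)]
    sys.setrecursionlimit(max(1000, 2 * n))

    def dfs(v):
        color[v] = 1
        for w in adj[v]:
            if w == parent[v] or w == v:
                continue
            if color[w] == 0:                 # tree edge
                parent[w] = v
                children[v].append(w)
                dfs(w)
            elif color[w] == 1:               # back edge; w is a proper ancestor
                # Mark the path from v toward w, omitting w and its child,
                # stopping early at an already-marked vertex.
                x = v
                while parent[x] != w and not beta[x]:
                    beta[x] = True
                    x = parent[x]
        color[v] = 2

    for r in range(n):
        if color[r] == 0:
            dfs(r)

    cuts = [u for u in range(n)
            if any(not beta[c] for c in children[u])
            and (parent[u] != -1 or len(children[u]) >= 2)]
    bridges = [(parent[v], v) for v in range(n)
               if parent[v] != -1 and not beta[v]
               and all(not beta[w] for w in children[v])]
    return cuts, bridges
```

On a path 0–1–2 attached to a triangle 2–3–4, the sketch reports 1 and 2 as cut vertices and the two path edges as bridges.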
When the traversal of reaches a vertex with a child , will have reached its final value, , and will never again be written to. It is now obvious that we can test at that time whether is a cut vertex and whether the edge between and its parent in , if any, is a bridge in time, where is the degree of . It follows that each of the four problems considered above can be solved in time. In the most complicated case, that of 2-edge-connected components, in order to test during the processing of a vertex whether an edge is a bridge, where is a child of in , carry out a “preliminary visit” of the children of in .
We can compute the DFS forest with the algorithm of Lemma 4.2, which needs bits plus bits that can be reused. The subsequent traversal needs bits for the array . In addition, when the computation of described above processes a non-tree edge , it needs to know whether is an ancestor or a descendant of . We can use another Boolean array to handle this issue, ensuring for each that at all times if and only if is an ancestor of the current vertex of the traversal of (i.e., if is gray). In some cases, however, we can make do with less space. Observe that in the computation of cut vertices and bridges, the value of is never again used after the arrival of the traversal of at . When the traversal reaches and has been inspected, we can therefore set without detriment to the use of . Suppose that when processing a non-tree edge in the computation of , we consult instead of to know whether is an ancestor of the current vertex . If , the artificial change to introduced above ensures that we necessarily also have , so that the algorithm proceeds correctly. If , we may have , in which case the processing of stops immediately, but then ( has not yet been reached by the traversal, and so was not set artificially to true) and it is correct to do nothing.
If our goal is to compute the biconnected or 2-edge-connected components of with their vertices, but not with their edges, we are in an intermediate situation: We need to distinguish between three different combinations of and (now with the original ), but if the value of is immaterial as above, and never changes from true to false. We can therefore represent and together through a ternary array with entries. Altogether, we have proved the following result.
Given an undirected graph with vertices and edges, we can compute the following in time and with the number of bits indicated:

The cut vertices and bridges of with bits;

The biconnected and 2-edge-connected components of with their vertices with bits;

The biconnected and 2-edge-connected components of with their edges and possibly vertices with bits.
Kammer et al. [15] consider the problem of preprocessing an undirected graph so as later to be able to output the vertices and/or edges of a single BCC, identified via one of its edges, in time at most proportional to the number of items output. Having available the parent pointers of a DFS forest and the array corresponding to , we can solve the problem in the following way, which is the translation of the procedure of Kammer et al. to our setting: Given a request to output the BCC that contains an edge , first follow parent pointers in parallel from and until one of the searches hits the other endpoint or a root in . This allows us to determine which of and is an ancestor of the other vertex in time at most proportional to the number of items to be output. Then traverse the subtree of reachable from the lower endpoint of without crossing any edge between a BCC root and its single child, producing the same output at each vertex as described above for the output of all biconnected components. In addition, at the uniquely defined vertex with visited by the search, also output the parent of (but without continuing the traversal from ). In order to carry out this procedure efficiently, we need a way to iterate over the edges in incident on a given vertex and, if the edges of are to be output, over the non-tree edges that have as their lower endpoint. To this end we can equip each vertex of degree with a choice dictionary (Lemma 3) for a universe size of that allows us to iterate over the relevant edges in time at most proportional to their number. This needs another bits. Very similar constructions allow us to output the vertices and/or the edges of a single 2-edge-connected component.
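The parallel walk up the parent pointers can be sketched compactly. The following Python sketch assumes a parent array with -1 at roots and relies only on the fact that the two endpoints of a tree or non-tree edge are related as ancestor and descendant in a DFS forest of an undirected graph; the function name is ours. It advances both walks in lockstep, so its running time is proportional to the shorter of the two distances.

```python
def lower_endpoint(parent, u, v):
    """Return the endpoint of the edge (u, v) that is the descendant of
    the other one, by following parent pointers from u and v in parallel
    until one walk meets the other endpoint or runs past a root."""
    x, y = u, v
    while True:
        if x == v:
            return u      # v is a proper ancestor of u
        if y == u:
            return v      # u is a proper ancestor of v
        if x == -1:
            return v      # u's walk hit a root, so u is the ancestor
        if y == -1:
            return u
        x, y = parent[x], parent[y]   # one step of each walk
```

For a path rooted at vertex 0 with parent array `[-1, 0, 1, 2]`, the lower endpoint of the edge between 1 and 3 is 3.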
There is a data structure that can be initialized for an undirected graph with vertices and edges in time, subsequently allows the vertices and/or the edges of the biconnected or 2-edge-connected component that contains a given edge to be output in time at most proportional to the number of items output, and uses bits.
5 The Density-Independent Case
5.1 Depth-First Search
Some aspects of the following proof are similar to those of [9, Lemma 3.2].
A DFS of a directed or undirected graph with vertices and edges can be carried out in time with bits of working memory.
Assume without loss of generality that . We simulate the algorithm of Theorem 4.1, but using asymptotically less space (unless ). Recall that the algorithm employs a stack whose size is always bounded by , where . When a vertex is discovered by the DFS and enters , it is permanently assigned an integer hue. The first vertices to be discovered are given hue 1, the next ones receive hue 2, etc., and the vertices on with a common hue are said to form a segment. In general, a new segment is begun whenever the current segment for the first time occupies more than bits on . Thus no hue larger than is ever assigned.
As in [9], the algorithm does not actually store , which is too large, but only a part of consisting of the one or two segments at the top of . When a new segment is begun and already contains two segments, the older of these is first dropped to make room for the new segment. By construction, always occupies bits.
The algorithm operates as that of Theorem 4.1, using in place of , except when a pop causes but not to become empty. Whenever this happens the top segment of is restored on , as explained below, after which the DFS can resume. Between two stack restorations a full segment disappears forever from , so the total number of stack restorations is bounded by .
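The behavior of the reduced stack can be modelled in a few lines. The Python sketch below is illustrative only: segments are plain lists of a fixed capacity, and segment regeneration is abstracted into a caller-supplied `restore` hook keyed by the size of the conceptual stack (in the algorithm it rescans adjacency arrays for gray vertices of the matching hue). All names are ours.

```python
class TrimmedStack:
    """Model of the reduced stack S: only the top one or two segments of
    the conceptual DFS stack are kept. restore(s) must return, bottom to
    top, the full segment at conceptual positions s - cap .. s - 1."""

    def __init__(self, cap, restore):
        self.cap, self.restore = cap, restore
        self.segments = []   # at most two lists, older first
        self.size = 0        # size of the conceptual (full) stack

    def push(self, x):
        if not self.segments or len(self.segments[-1]) == self.cap:
            if len(self.segments) == 2:
                self.segments.pop(0)          # forget the older kept segment
            self.segments.append([])          # begin a new segment
        self.segments[-1].append(x)
        self.size += 1

    def pop(self):
        x = self.segments[-1].pop()
        self.size -= 1
        if not self.segments[-1]:
            self.segments.pop()
            if not self.segments and self.size > 0:
                # S became empty although the conceptual stack did not:
                # restore the segment now at the top of the conceptual stack
                self.segments.append(list(self.restore(self.size)))
        return x
```

A simple way to exercise the sketch is to keep a shadow copy of the conceptual stack and let `restore` slice the required segment out of it; the pops then return exactly the reverse of the pushes even though dropped segments were never physically retained.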
In order to enable efficient stack restoration, we maintain for each vertex (a) its color—white, gray or black; (b) its hue; (c) whether it is currently on ; (d) the number of groups of (out)edges incident on that have been explored with as the current vertex. The number of bits needed is for items (a) and (c), for item (b) and for item (d), where is the degree of . Summed over all vertices, this yields a bound of bits, as required. For each segment on , we also store on a second stack the last vertex of (the vertex closest to the top of ) and the number of (out)edges incident on explored by the DFS with as the current vertex. The space occupied by is negligible.
To restore a segment , we push the bottommost entry of on and initialize accordingly the variables kept outside of to interpret entries of correctly. This can be done in constant time by consulting either the entry on immediately below the top entry or separately remembered information concerning the root of the current DFS tree. We proceed to push on the remaining vertices in one by one, stopping when the top entries on and agree, at which point the restoration of is complete and the normal DFS can resume. Each entry on above that of a vertex is found by determining the first gray vertex in ’s adjacency array (counted cyclically from the position of ’s parent) that belongs to (as we can tell from its hue) and is not already on .
Because of item (d) of the information kept for each vertex, the search in the adjacency array of a vertex of degree is easily made to spend time on entries that were inspected before. Over all at most restorations and over all vertices of degree at most , this sums to . Since a restoration involves vertices of degree larger than , the sum over all restorations and over all such vertices is . Altogether, therefore, the algorithm spends time on restorations and time outside of restorations.
Our remaining algorithms depend on the following lemma, which can be seen as a weak dynamic version of Lemma 4.2.
For all , following an time initialization, an array of initially empty binary strings that at all times satisfy for and can be maintained in bits under constant-time reading and amortized constant-time writing of individual array entries.
Compute a positive integer with and partition the strings into groups of strings each, except that the last group may be smaller. For each group, store the strings in the group in piles, each of which holds all strings of one particular length in no particular order. For each string, we store its length and its position within the corresponding pile. Conversely, the entry for a string on a pile, besides the string itself, stores the number of the string within its group. The size of the bookkeeping information amounts to bits per string and bits altogether.
When a string changes, the string may need to move from one pile to another within its group. This usually leaves a “hole” in one pile, which is immediately filled by the entry that used to be on top of the pile. This can be done in constant time, which also covers the necessary update of bookkeeping information.
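The constant-time move with hole filling can be made concrete. In the hedged Python sketch below, `piles` maps a string length to a list of (string, number-within-group) pairs and `where` maps a string number to its (length, position-in-pile) bookkeeping record; the representation and all names are our illustrative simplification of the scheme above.

```python
def move_string(piles, where, i, new_s):
    """Replace string number i by new_s: remove its entry from the pile
    for its old length, fill the hole with that pile's top entry, and
    append the new value to the pile for its new length."""
    old_len, pos = where[i]
    pile = piles[old_len]
    last = pile.pop()                        # take the top entry of the old pile
    if pos < len(pile):                      # fill the hole, unless i was on top
        pile[pos] = last
        where[last[1]] = (old_len, pos)      # update bookkeeping for the mover
    new_len = len(new_s)
    piles.setdefault(new_len, []).append((new_s, i))
    where[i] = (new_len, len(piles[new_len]) - 1)
```

Each step is a constant number of list and dictionary operations, matching the constant-time claim, and the bookkeeping of both the moved string and the hole filler is updated in the same step.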
We are now left with the problem of representing piles. Divide memory into words of bits, each of which is large enough to hold one of the strings . Rounding upwards, assume that each pile at all times occupies an integer number of words—this wastes bits. The update of a string may cause the sizes of up to two piles to increase or decrease by one word, but no operation changes the size of a pile by more than one word. Each pile is stored in a container, of which it occupies at least a quarter. For each pile we maintain its size, the size of its container, and the location in memory of its container, a total of bits per pile and bits altogether. We also maintain in a free pointer the address of the first memory word after the last container.
When a pile outgrows its container, a new container, twice as large, is first allocated for it starting at the address in the free pointer, which is incremented correspondingly. The pile is moved to its new container, after which its old container is considered dead. Conversely, when a pile would occupy less than a quarter of its container after losing a string, the pile is first moved to a new container, also allocated at the address in the free pointer, whose size is twice that of the pile after the operation. If each operation on a pile pays 5 coins to the pile, by the time the pile needs to migrate to a new container, it will have accumulated enough coins to place a coin on every position in the old container and a coin on every element of the pile. In terms of an amortized time bound, the latter coins can pay for the migration of the pile to its new container.
When an operation would cause the size of the dead containers to exceed that of the live containers (the containers that are currently in use) plus bits, we carry out a “garbage collection” that eliminates the dead containers and reallocates the piles in tightly packed new live containers in the beginning of the available memory, where each new container is made twice as large as the pile that it contains, and the free pointer is reset accordingly. The garbage collection can be paid for by the coins left on dead containers.
A string can be read in constant time, and updating it with a new value takes constant amortized time, as argued above. Because every pile occupies at least a quarter of its (live) container and the dead containers are never allowed to occupy more space than the live containers, plus bits, the number of bits occupied by the array of strings at all times is .
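The grow and shrink thresholds of the container discipline can be isolated in a toy model. The Python sketch below tracks only a pile's contents and its container's capacity (measured in items rather than words, and without modelling addresses or dead containers); names and the minimum capacity of one item are our assumptions.

```python
class Pile:
    """Toy model of a pile in a container: growing past the container
    doubles it; shrinking below a quarter of it moves the pile to a
    container of twice the pile's new size."""

    def __init__(self):
        self.items = []
        self.capacity = 1          # container size, in items

    def push(self, x):
        if len(self.items) == self.capacity:
            self.capacity *= 2     # migrate to a container twice as large
        self.items.append(x)

    def pop(self):
        x = self.items.pop()
        if 0 < len(self.items) * 4 < self.capacity:
            self.capacity = 2 * len(self.items)   # new snug container
        return x
```

The invariant that a nonempty pile occupies at least a quarter of its container is maintained by both rules, which is what bounds the total container space by a constant factor times the pile sizes.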
For all , following an time initialization, an array of initially empty binary strings that at all times satisfy for and can be maintained in bits under constant-time reading and amortized constant-time writing of individual array entries.
Maintain groups of strings with the data structure of the previous lemma. In more detail, compute a positive integer with and partition the strings into blobs of consecutive strings each, except that the last blob may be smaller. If a blob consists of the strings , let its label be the binary string . Because the label of a blob is of bits, given the label and the number of a string within the blob, we can extract in constant time by lookup in a table of bits that can be computed in time. Similarly, given a new value for , we can update within the blob in constant time. Maintain the sequence of blobs of bits each with the data structure of Lemma 5.1. The number of bits needed is , and the operation times are as claimed.
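Constant-time access within a blob can be demonstrated with plain bit operations. The proof above uses table lookup; the Python sketch below instead uses shifts and masks, which exhibit the same constant-time behavior on a label that fits in a machine word. Each string occupies a fixed field holding its bits together with a small length code; the field widths and all names are our illustrative assumptions.

```python
L = 5            # assumed maximum string length in bits
LEN_BITS = 3     # enough bits to encode any length 0 .. L
FIELD = L + LEN_BITS

def read(label, j):
    """Extract string number j of the blob as a (bits, length) pair."""
    field = (label >> (j * FIELD)) & ((1 << FIELD) - 1)
    length = field & ((1 << LEN_BITS) - 1)
    return field >> LEN_BITS, length

def write(label, j, bits, length):
    """Return the label with string number j replaced by (bits, length)."""
    field = (bits << LEN_BITS) | length
    mask = ((1 << FIELD) - 1) << (j * FIELD)
    return (label & ~mask) | (field << (j * FIELD))
```

Both operations touch a constant number of words, so reading and writing a string within its blob take constant time, as the proof requires.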
A DFS of a directed or undirected graph with vertices and edges can be carried out in time with bits of working memory.
Assume without loss of generality that . Compute a positive integer and a sequence of powers of 2 with the following properties:

.

For , .
