When talking about the iterative approach to dfs, the article mentions one of the differences with bfs is that:
"it delays checking whether a vertex has been discovered until the vertex is popped from the stack rather than making this check before adding the vertex"
That's a moot point, as dfs could just as easily have been written with the discovered check happening before adding to the stack. — Preceding unsigned comment added by 24.56.255.3 ( talk) 11:57, 10 June 2019 (UTC)
No, it couldn't have been written like that. Consider the graph {1: (2, 3), 2: (), 3: (4, 2), 4: ()}
(Python dictionary notation, mapping vertices to their immediate successors). If you do the discovered check before pushing, i.e. you write the algorithm as
procedure DFS_iterative(G, v) is
    let S be a stack
    S.push(v)
    while S is not empty do
        v = S.pop()
        for all edges from v to w in G.adjacentEdges(v) do
            if w is not labelled as discovered then
                label w as discovered
                S.push(w)
this gives you 1, 3, 4, 2 and not 1, 3, 2, 4, since 2 is labelled as discovered after 1 is popped, and hence it doesn't get pushed after 3 is popped. The underlying issue is that 2 occurs in two places in the graph, as a child of 1 and as a child of 3, and the place where it's discovered first is later in the depth-first order.
For breadth-first search it doesn't affect the correctness of the algorithm whether you do the check before enqueueing or after dequeueing. But for depth-first search, it does matter. The paragraph below the part you quoted tries to point this out, maybe somewhat unclearly. It could be improved, perhaps. Stickinsect2 ( talk) 16:13, 1 January 2022 (UTC)
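Stickinsect2's point can be checked mechanically. Here is a minimal Python sketch (the function names are mine, not from the article) running both variants on the graph given above:

```python
def dfs_check_after_pop(graph, start):
    """Standard iterative DFS: the discovered check happens when a vertex is popped."""
    order, discovered, stack = [], set(), [start]
    while stack:
        v = stack.pop()
        if v not in discovered:
            discovered.add(v)
            order.append(v)
            for w in graph[v]:
                stack.append(w)  # duplicates allowed; filtered on pop
    return order

def dfs_check_before_push(graph, start):
    """Variant with BFS-style bookkeeping: the discovered check happens before pushing."""
    order, discovered, stack = [], {start}, [start]
    while stack:
        v = stack.pop()
        order.append(v)
        for w in graph[v]:
            if w not in discovered:
                discovered.add(w)
                stack.append(w)
    return order

graph = {1: (2, 3), 2: (), 3: (4, 2), 4: ()}
print(dfs_check_after_pop(graph, 1))    # [1, 3, 2, 4] -- a depth-first order
print(dfs_check_before_push(graph, 1))  # [1, 3, 4, 2] -- not a depth-first order
```

On this graph only the check-after-pop variant produces a depth-first order, which is exactly the point made above: vertex 2 is discovered as a child of 1 but belongs later in the depth-first order, as a child of 3.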
Another, more basic example should be given to start with, without the link from F to E. Surely it would be more intuitive to put the letters in a different order. Something like
A B C D E F G
Or maybe the order that the search would go?
A B E G C D F
Opticyclic 16:33, 21 April 2006 (UTC)
I think the pseudocode given is incorrect -- it enters an infinite loop! The node must be marked as visited BEFORE recursively calling dfs on adjacent nodes. Please confirm this! Yonestar 17:26, 20 April 2006 (UTC)
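For what it's worth, a minimal recursive sketch in Python (my own illustration, not the article's code) shows why the mark must come before the recursive calls:

```python
def dfs_recursive(graph, v, visited=None):
    """Recursive DFS. v is marked visited BEFORE recursing into its
    neighbours, so a cycle such as 1 -> 2 -> 1 terminates instead of
    recursing forever."""
    if visited is None:
        visited = set()
    visited.add(v)
    order = [v]
    for w in graph[v]:
        if w not in visited:
            order.extend(dfs_recursive(graph, w, visited))
    return order

# A graph with a cycle between 1 and 2:
print(dfs_recursive({1: (2,), 2: (1, 3), 3: ()}, 1))  # [1, 2, 3]
```

If the `visited.add(v)` line were moved after the loop, the call on this cyclic graph would bounce between 1 and 2 until the recursion limit is hit.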
What's with the "i.e. where's the recursion?"? DFS doesn't require recursion... no algorithm does. The stack makes it so you don't have to recurse (that's all recursion really does for you anyway: add to a stack). If lack of recursion is the only "problem" with the algorithm, then there is no problem.
Changed from "DFS is optimal" to "DFS is not optimal". Please confirm.
Can you show me the program in C for DFS?
Whoever "corrected" the page to put E on the end of the path, please do not un-correct it again. DFS will follow the link from F to E before getting to E from A. The apparent tree view does not reflect the actual DFS tree, and I chose it that way deliberately for didactic purposes. Jerf
Can you explain what "all non-negative path costs" is referring to?
DFSs can be performed for graphs in general as well, not just trees. Though if you're thinking specifically of the technique as used in state-space searches, I guess not. Perhaps these two concepts should be separated? Deepak 01:39, 17 Jan 2004 (UTC)
They're awfully closely related... I've tried to improve the situation in my last edit, but there still needs to be some more separation of "depth-first traversal" and "depth-first search". Jerf
Why does the page say that DFS is a method for traversing trees, and then goes on to give an example of a general graph? (as it clearly has a cycle it is not a tree). The page should either state that it is a method for traversing graphs, or change the example to a tree.
The first example should be one of a plain hierarchical tree instead of a graph. Simple examples first, more complex later.
Well, headline explains it all. It is not depth first for a general graph because of the Color(grey) thing.
I think the color(grey) thing shouldn't be there. And there should be a color check just after the pop, to decide whether to skip the node or not.
Also, for generalization of the algorithm to graphs, there should be an iteration over the nodes above the while loop. I'll change the pseudocode; if anyone has objections, we can revert it.
Why do we have TWO solutions, both of them iterative, creating stacks, etc.? It seems that in general, iterative approaches that have to heavily manipulate stacks are well suited to recursive formulations (at least, it is conceptually easier) - and that is certainly true here. I'm definitely going to replace one of these algorithms with a recursive solution. I'd prefer to replace the second; it looks less intuitive. (Remember we're not here to give the best-optimised code, but to teach how it works - therefore the conceptually easier code is always the best on Wikipedia). — EatMyShortz 11:46, 28 February 2006 (UTC)
As much as I love Python, it shouldn't be marked as pseudocode. The second one was better, but still not pseudocode.
That's true. I changed it. —Preceding unsigned comment added by Srrrgei ( talk • contribs) 20:27, 3 May 2009 (UTC)
The different classifications of edges are briefly discussed here, but should perhaps be moved to a new article. Consider:
I think this distinction should be made in the interest of better explaining the ability of detecting cycles in a graph using topological sorting. -- Stimpy 20:10, 17 December 2006 (UTC)
Either the "reverse postordering" section has an error, or I am confused. Since "reverse postordering is the reverse of a postordering, i.e. a list of the vertices in the opposite order of their last visit", why is D not first in reverse postordering?
TJA —Preceding unsigned comment added by 68.162.149.237 ( talk) 17:10, 25 March 2008 (UTC)
The preordering is ABDC or ACDB. I think the postordering is BDCA or CDBA. The reverse postordering looks to be a DFS to a bottom vertex, and then a DFS to all of the possible vertices. Although, I'm still not sure if "reverse postordering" is something. "Reverse postordering" just seems like a reverse DFS from a root vertex. Somebody please correct me. — Preceding unsigned comment added by 24.21.136.68 ( talk) 23:01, 10 February 2013 (UTC)
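One way to settle this is to compute the orderings mechanically. A Python sketch, assuming the diamond-shaped example graph from the Vertex orderings section (A -> B, A -> C, B -> D, C -> D, with B visited before C):

```python
def orders(graph, root):
    """Return the preordering, postordering, and reverse postordering
    of a DFS from root."""
    pre, post, visited = [], [], set()

    def dfs(v):
        visited.add(v)
        pre.append(v)           # recorded on first visit
        for w in graph[v]:
            if w not in visited:
                dfs(w)
        post.append(v)          # recorded when v is finished

    dfs(root)
    return pre, post, post[::-1]

g = {'A': ('B', 'C'), 'B': ('D',), 'C': ('D',), 'D': ()}
pre, post, rpost = orders(g, 'A')
print(pre)    # ['A', 'B', 'D', 'C']
print(post)   # ['D', 'B', 'C', 'A']
print(rpost)  # ['A', 'C', 'B', 'D']
```

Under these assumptions D is the first vertex to be finished, so it comes last in the reverse postordering, not first; the reverse postordering is the reverse of the finishing order, not a reversed traversal.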
Why is the cycle in the depth-first graph "A, B, D, F, E"? I don't understand its assumptions. It seems to me that unless you're going to put some sort of directionality on the edges, you'll get something like A, B, D, B, D...
You can't even get from D to F without revisiting B. Sure, B is a backtrack and you don't usually include those in the search order, but if this search was keeping track of what was previously in the path, it wouldn't get into a cycle in the first place. It would take some sort of memory to go from B to D the first time and B to F the second. This strikes me as an error. rspeer / ɹəədsɹ 06:49, 28 August 2008 (UTC)
Agh, it gets worse. The section on iterative deepening seems to make the same unknown assumptions.
Memory-less depth first search on an undirected graph is quite silly and nobody ever does it. Whoever wrote the example seems to have assumed that an edge is never crossed twice in opposite directions (but can be in the same direction). I may replace this example with a directed graph to make it make some sort of sense. rspeer / ɹəədsɹ 15:56, 28 August 2008 (UTC)
I just noticed that there is no proof of the algorithm's running time (big O) listed? I thought this was standard for graph algorithms? 129.59.172.24 ( talk) 22:55, 10 February 2009 (UTC)
Shouldn't DFS have to memorize branching_factor * depth nodes? Consider it visiting a node N and expanding two non-goal leaf nodes A and B: after backtracking from A, information about N's remaining children is necessary to avoid running into A again, isn't it? And as this holds for every node visited, this would require memorizing O(branching_factor * search_depth) nodes, doesn't it? —Preceding unsigned comment added by 78.49.51.103 ( talk) 17:44, 19 April 2009 (UTC)
From the article:
In these applications it also uses space O(|V| + |E|) in the worst case to store the stack of vertices on the current search path as well as the set of already-visited vertices.
From the box:
Worst case space complexity: O(h) where h is the length of the longest simple path in the graph.
These statements are contradictory. I think the box needs to be changed to reflect O(|V| + |E|). —Preceding unsigned comment added by 12.25.230.211 ( talk) 21:15, 13 August 2009 (UTC)
You didn't read the section just above here in the talk, right? The O(|V|+|E|) style analysis is for applications of DFS in theoretical computer science, where one is given a whole explicit graph in advance, one must search the whole graph, and one uses a data structure of already-visited vertices to avoid repetitions. In particular this data structure means that the space used by the stack is only a small part of the total space so there is no point in analyzing it separately. The O(h) style analysis is for applications in AI, where one typically has an implicit graph, possibly infinite, and one does not eliminate repeated nodes. It has nothing to do with whether the implementation is recursive or not. — David Eppstein ( talk) 19:46, 15 November 2009 (UTC)
I suggest we remove the "optimal" entry from the infobox. Its interpretation is confusing. For an algorithm which solves one problem, it is clear what optimal means: the best lower bound for the problem matches the upper bound provided by the algorithm. It doesn't mean anything to say an algorithm is optimal unless you refer to the problem. Since DFS (and BFS) solve so many graph-theoretic problems, it's not clear what optimal means. Which problem's lower bound is being matched by DFS (or BFS)? (On the other hand, sorting is the problem solved by bubble sort, and it is entirely clear what it means for bubble sort to be non-optimal.) -- Robin ( talk) 23:48, 16 November 2009 (UTC)
This article needs pseudocode, not a language war. Would anyone object to outright removal of all of these language-specific examples? I don't think they add anything. I wouldn't be opposed to moving them to Wikibooks and linking; it is useful to have a reference implementation in your language of choice, but the present state of things obscures the point of this page, which, believe it or not, is to explain depth-first search. —Ben FrantzDale ( talk) 12:51, 11 May 2010 (UTC)
How is a two sentence lede section "too long"? Justin W Smith talk/ stalk 14:36, 3 July 2010 (UTC)
Is line 24 of the iterative solution correct? Why is the pop required at this point? — Preceding unsigned comment added by 202.81.18.30 ( talk) 06:23, 1 August 2013 (UTC)
A back edge is found only if w is discovered, not explored. The iterative version gets this right. The recursive one has a bug: it will report cycles on many simple DAGs where both a node and its successor have an edge to the same third node. — Preceding unsigned comment added by 69.181.252.29 ( talk • contribs)
The new iterative pseudocode requires a stack of quadratic size in the worst case since a vertex can be pushed multiple times. This can be useful in some applications, but as a skeleton for the basic DFS process it is wasteful of both time and space. How is this justified? McKay ( talk) 05:02, 20 December 2013 (UTC)
173.11.122.193 ( talk) 17:22, 12 March 2014 (UTC)
I noticed that there is a phrase in Vertex ordering - it is natural to consider this code in the order A B C D or A C B D, but not natural to use the order A B D C or A C D B. Is this a claim? I mean, how does this sentence add any value to the reader? Lokesh Ravindranathan 13:09, 17 May 2014 (UTC)
Is it possible to add the definitions for |V| and |E| in the Properties section; because of the 'rule': explain abbreviations where they appear? Corresponding paragraph: In theoretical computer science, DFS is typically used to traverse an entire graph, and takes time Θ(|V| + |E|), [...] [1] Averrhoa ( talk) 09:30, 16 October 2015 (UTC)
References
The non-recursive pseudocode is unnecessarily complicated, and it is not from Kleinberg and Tardos, contrary to what is claimed. "and not in stack" is not an operation you can perform on a stack, and it is not even needed here. The innermost "if w is not labeled as discovered" is unnecessary too, since this condition is checked anyway in the while loop. At this stage you can simply push the node to the stack. Below is the exact pseudocode from Kleinberg and Tardos, Algorithm Design, page 93. I don't know how to format things here, so please someone repair this.
DFS(s):
    Initialize S to be a stack with one element s
    While S is not empty
        Take a node u from S
        If Explored[u] = false then
            Set Explored[u] = true
            For each edge (u, v) incident to u
                Add v to the stack S
            Endfor
        Endif
    Endwhile
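A direct Python transcription of that pseudocode (variable names mine), for anyone who wants to check its behaviour:

```python
def dfs_kt(graph, s):
    """Kleinberg-Tardos style iterative DFS: the Explored check happens
    when a node is taken from the stack, not when it is pushed."""
    explored = {u: False for u in graph}
    stack = [s]
    order = []
    while stack:
        u = stack.pop()
        if not explored[u]:
            explored[u] = True
            order.append(u)
            for v in graph[u]:
                stack.append(v)  # duplicates allowed; filtered on pop
    return order

print(dfs_kt({1: (2, 3), 2: (), 3: (4, 2), 4: ()}, 1))  # [1, 3, 2, 4]
```

Note that because the check happens on removal, the same node can sit on the stack several times; this is exactly the trade-off (simplicity vs. stack size) discussed elsewhere on this page.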
94.222.108.200 ( talk) 17:12, 10 August 2016 (UTC)
I just found out that the recent edits by 47.19.74.2 on 7 July 2016 and by 2601:246:100:58b2:48b7:c3e6:2773:8af4 on 20 July 2016 are responsible for the blunders in the pseudocode. The code should be reverted to the version before that. 94.222.108.200 ( talk) 17:35, 10 August 2016 (UTC)
There is a big problem with recent edits. They mention lexicographic DFS. It is not defined anywhere on this page, and there is no link to any Wikipedia page explaining what LexDFS is. There is a Lexicographic BFS Wikipedia article, but it seems that there is no LexDFS article. Therefore, either the complexity paragraph should be removed, or the explanation should be largely expanded. In particular, the sentence "the lexicographic DFS order of a rooted graph" is misleading, I think, because it implies that, given a rooted graph, there is a single lexicographic DFS. This statement would be false for LexBFS, and I doubt it would be true for LexDFS. Sadly, I'm not graph-savvy enough to do the correction myself. Arthur MILCHIOR ( talk) —Preceding undated comment added 10:21, 27 June 2017 (UTC)
The computational complexity of DFS was investigated by John Reif. More precisely, given a graph G, let O(G) be the ordering computed by the standard recursive DFS algorithm. He considered the complexity of computing O(G) given G. A decision version of the problem (testing whether some vertex u occurs before some vertex v in O(G)) is P-complete[9], meaning that it is "a nightmare for parallel processing".[10]:189
Note that O(G) is one particular DFS ordering; many other DFS orderings exist. They can be obtained by running the standard recursive algorithm on a graph G' obtained from G by reordering the outgoing edges of each vertex. An arbitrary DFS can be computed by a randomized parallel algorithm in the complexity class RNC.[11] As of 1997, it remained unknown whether a depth-first traversal could be constructed by a deterministic parallel algorithm, in the complexity class NC.[12] — Preceding unsigned comment added by Arthur MILCHIOR ( talk • contribs) 23:38, 27 June 2017 (UTC)
@ David Eppstein: I noticed that you removed one of the examples that I added here because it was "too specialized." Would this example be suitable for inclusion elsewhere in this article? Jarble ( talk) 00:57, 11 July 2017 (UTC)
It's unclear to me why the iterative algorithm shown here has this difference with BFS, noted after the algorithm, that the test for having been visited is done after pop rather than before push. This does not seem to be intrinsically needed by the algorithm. DFS and BFS should only differ by the underlying structure (stack vs queue). -- 24.148.56.233 ( talk) 01:03, 26 October 2019 (UTC)