Talk:A* search algorithm
This is the talk page for discussing improvements to the A* search algorithm article. This is not a forum for general discussion of the article's subject.
This article is rated B-class on Wikipedia's content assessment scale.
Article title
Someone moved this from A Star Search algorithm, but it should be located at A Star search algorithm since "Star" is part of the title. It is usually written A*, but pronounced like the title of the article. It is "A Star," not "A star" like "A square" or "A circle." It is a specific algorithm—using lowercase makes it sound like it describes a generic sort of algorithm. Anyone else have any input on this? -Frecklefoot
- I think A-Star search algorithm is about right. Better would be A* search algorithm, but apparently that's not an option?
- A* search algorithm as a topic works fine, as you can see. :) --ZeroOne 21:55, 17 Nov 2004 (UTC)
Bogus link
I removed the link to B* search algorithm. I determined it was bogus--it failed the Google test. —Frecklefoot 15:32, 12 Dec 2003 (UTC)
- Well, it's referenced by the article on Maven (Scrabble), which supposedly uses that search algorithm. An article on how that algorithm works would be enlightening. RSpeer 08:06, Dec 25, 2004 (UTC)
- B* is mentioned in (the classic) Artificial_Intelligence:_A_Modern_Approach, in the Game Playing chapter, if I recall correctly as an alternative to MinMax-alpha-beta.
- The fact is, you shouldn't use the Google test on AI terms. When you're just doing AI-related stuff, you don't tend to classify exactly which algorithms or ideas you're using in standardized terminology (unless it's a popular buzzword and you want funding); the only place where that much precision is used is in textbooks, which you can't Google. RSpeer 17:53, Dec 26, 2004 (UTC)
- I created B* search algorithm. --Rajah (talk) 00:26, 2 January 2008 (UTC)
Intuition on why A* is admissible and optimal
I added this section because it reflects exactly how I initially recognized that the search algorithm proposed by Nils Nilsson and Bert Raphael was "the best we'll ever find".
See the original A* paper (which incidentally we had a hard time publishing, leading journals of the day were pleased to reject it as trivial or of no interest):
Hart, P. E., N. J. Nilsson, and B. Raphael, "A Formal Basis for the Heuristic Determination of Minimum Cost Paths in Graphs," IEEE Trans. on Systems Science and Cybernetics, Vol. SSC-4, No. 2, pp 100-107, (July 1968)
It seems to me that this intuitive view is still of interest; I hope that it's more than a purely historical curiosity.
Difficult to extend heuristic
The intuition section of the article claims that A* does not take into account non-crossing and spatially close nodes, and that such considerations would be difficult or expensive to encode into the heuristic. Why so?! In my latest version, distance to the neighbour node is one aspect of the heuristic, and spatially close nodes will therefore tend to be prioritised. It was both trivial and cheap to implement. However, the question is whether this is preferable, especially in non-symmetric spaces, where one long jump can be cheaper than several short ones. Mikademus 07:56, 16 September 2006 (UTC)
"Suboptimal" path in picture
The picture caption is "A* search that uses a heuristic that is 5.0(=ε) times a consistent heuristic, and obtains a suboptimal path." However, as far as I can see, it is not a suboptimal path. 220.255.1.114 (talk) 13:54, 5 January 2015 (UTC)
- (Reproduced the picture here for reference.)
- The step cost is not specified, but I assume Euclidean costs were meant, in which case a diagonal step costs √2. The path produced by the algorithm takes five diagonal steps that could be replaced by unit-cost horizontal steps. QVVERTYVS (hm?) 16:03, 5 January 2015 (UTC)
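- For concreteness, under that Euclidean assumption: the five diagonal steps cost 5·√2 ≈ 7.07, while the five horizontal unit steps they could be replaced by cost 5, so the displayed path is indeed suboptimal by roughly 2.07.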
- It is possible to "cut the corners" produced by the lazy algorithm using the following hack:
- Upon expanding a new node, when the check finds that a neighbor is already in the pending (open) list, and that neighbor's recorded path cost is worse than the cost along the currently followed path, instead of skipping it you should re-evaluate its full cost using the current path cost in place of its recorded path cost. This hack reliably forces the algorithm out of a local minimum regardless of the ε value; however, its laziness isn't alleviated, and it will still prefer extremely long paths over shortcuts in "plain sight", provided the long path never takes it farther from the target than the short path does. The degree to which the algorithm refuses to gain distance from the target depends on the ratio between the distance to the target multiplied by ε and the cost of the distance it has to travel along the long path.
- This is original research. I use it in video game programming. Raidho36 (talk) 13:03, 13 August 2015 (UTC)
Missing details
1. Description on path reconstruction from closed priority queue: The article claims that the path can be reconstructed from the closed priority queue, without storing the path so far at each node. The article directs the reader to "see below" for a description of the advertised technique, but that description is nowhere to be found.
2. Check candidate nodes to see if they're already in Closed AND Open: The article mentions that you must check if a candidate node is already in the closed priority queue. I have read other descriptions, such as Amit Patel's (linked at the bottom of the page), which state that you should also check if the candidate node is already in the open priority queue. Experimentally I determined that the algorithm runs at unacceptable speeds unless I perform this additional check.
Since I am in no position of authority on A* I have not made any edits to the article. Hopefully someone else can address these two issues.
how does it work
Right now, the article doesn't explain how the algorithm works in detail.
Also, it'll be great to explain why it's called "A*". --Abdull 15:47, 27 November 2005 (UTC)
- In response to this question about why it's called A*:
- The notation is borrowed from the statistical literature. Statisticians use a hat (also called a circumflex) to indicate an estimate for a quantity, and often use a star to indicate an estimate that's optimal with respect to a stated criterion (like, say, a minimum variance criterion). When I (Peter E. Hart) was developing this algorithm and especially the theory behind it with my colleagues Nils Nilsson and Bertram Raphael, we adopted this traditional notation. We used hats to indicate estimates of heuristic functions, and went on to compare any other algorithm, call it A, with our provably-optimal (by the criterion of number of nodes expanded) A*. Hart 02:16, 7 March 2006 (UTC)
I agree. Also it would be nice to know who first developed it. --Kubiak 19:30, 29 November 2005 (UTC)
- Hm, I hope the new article is a bit better in this regard :-) Regnaron 18:59, 15 December 2005 (UTC)
Rewriting of article
Hi, I just translated the article on the A* search algorithm that I wrote for the German Wikipedia (which was once founded on the old version of this article) into English, and uploaded it here. But since my mother tongue is German, I am sure there are not just dozens of typos, but also tons of wordings where native speakers just want to run away screaming :-) So I would be very pleased if some of you could correct my choice of words :-) Regnaron 18:58, 15 December 2005 (UTC)
- I think your translated article is very comprehensive, if a bit overly "textbooky" for Wikipedia. (Have you considered Wikibooks?) However, it's full of typos, grammatical atrocities, and just generally things that are wrong in English. It's very difficult to get any sense of what A* is or what it does simply by reading the article from start to finish, because nobody who starts reading will finish! So I'm reverting to the old English article — but with an encouraging word for any native English speaker who wants to try to translate the German article from scratch. (But please, less wordy!) --Quuxplusone 02:50, 16 December 2005 (UTC)
- Hi! Hm, well, in fact I tried to make the article somewhat comprehensive through the big example section and the explanation of how it works, and I still would say this can help the reader to grasp the idea of "how an A* search works". (Nevertheless, all aspects of the algorithm had been covered in the article.) But as I stated in my initial post here: I already assumed that there is (or now was :-)) a lot of wording that would simply be wrong in English. (I just did not expect it to be that much *g*) Getting to the point: I believe that your comments and criticisms are correct, so I will not start yelling and hunting for you for reverting the article. Regardless, in case anyone still wants to invest some time "translating" my old article: it will stay accessible via my Playground on my userpage Regnaron 08:32, 16 December 2005 (UTC)
Relation to uniform-cost search
[edit]"Uniform-cost search, aka Dijkstra's algorithm, is a special case of A*..." - even a quick glance at the linked articles shows that "Dijkstra's algorithm" is not the same thing as "uniform-cost search". That said, I'm not knowledgeable enough on this topic to feel comfortable rewriting the sentence. Rizome 20:07, 8 May 2006 (UTC)
- Hi! Hm, if I have a look at what is written at Uniform-cost search (UCS), it quite fits the description of Dijkstra's algorithm. If you just replace root with source you will get a description of Dijkstra's algorithm. I am not absolutely certain why there are two pages describing the same algorithm, but in the current state of the UCS article it (in a very simplified way) does describe Dijkstra's algorithm. (Always expand the globally cheapest node.) Regnaron 18:48, 22 May 2006 (UTC)
Ditto, as far as I can see Dijkstra is the same thing as uniform-cost search. I also have trouble seeing how Dijkstra is a special case of A*, considering A* came after Dijkstra published his algorithm; the fact that the later algorithm can be seen as a derivative of the earlier one doesn't mean that the earlier one is a special case of A*.
Dijkstra's is a special case in the mathematical sense, this has nothing to do with historical derivation. Qwertyus 14:10, 26 May 2006 (UTC)— This old remark of mine is wrong. See below. QVVERTYVS (hm?) 15:25, 6 November 2014 (UTC)
- Well, use A* with the heuristic h(x) = 0 for all x (meaning using no heuristic at all).
- What you will get is just Dijkstra's algorithm. So in this sense Dijkstra is a special case of A* (namely one where the heuristic is 0 for all nodes). Regnaron 21:14, 26 May 2006 (UTC)
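- To make the h(x) = 0 reduction above concrete, here is a minimal illustrative Python sketch (not taken from the article; the function names astar, neighbors and dist_between are invented for this example). The heuristic is passed in as a function, so supplying a zero heuristic gives exactly Dijkstra's expansion order:

import heapq
import itertools

def astar(start, goal, neighbors, dist_between, h):
    # Minimal A*: the open list is a heap ordered by f = g + h, with lazy deletion.
    counter = itertools.count()            # tie-breaker so nodes are never compared
    open_heap = [(h(start), next(counter), start)]
    g = {start: 0}                         # best known cost from the start
    came_from = {}
    closed = set()
    while open_heap:
        _, _, x = heapq.heappop(open_heap)
        if x == goal:
            path = [x]
            while x in came_from:          # walk parent pointers back to the start
                x = came_from[x]
                path.append(x)
            return list(reversed(path))
        if x in closed:
            continue
        closed.add(x)
        for y in neighbors(x):
            tentative = g[x] + dist_between(x, y)
            if y not in g or tentative < g[y]:
                g[y] = tentative
                came_from[y] = x
                heapq.heappush(open_heap, (tentative + h(y), next(counter), y))
    return None

# Dijkstra's algorithm is then simply the special case with a zero heuristic:
# def dijkstra(start, goal, neighbors, dist_between):
#     return astar(start, goal, neighbors, dist_between, h=lambda n: 0)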
The article claims that "However, when the heuristics are monotone, it is actually equivalent to running Dijkstra" (apparently entered by user Chire on 09:13, 26 July 2011). Is this really true? I think not... Fabian 11:34, 19 August 2011 (UTC)
- Incorrect as stated. It is properly described in the description, though. From the article: With a monotone heuristic, "A* is equivalent to running Dijkstra's algorithm with the reduced cost d'(x, y) = d(x, y) + h(y) - h(x)." Dr0b3rts (talk) 08:46, 16 November 2011 (UTC)
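- To spell out the reduced-cost claim (standard textbook reasoning, summarized here rather than quoted from the article): if h is consistent, i.e. h(x) ≤ d(x, y) + h(y) for every edge (x, y), then the reduced cost d'(x, y) = d(x, y) + h(y) - h(x) is non-negative, and the reduced length of any path from the start s to a node t equals its original length minus h(s) plus h(t). Since every path to the same node is shifted by the same constant, Dijkstra's algorithm on the reduced costs selects nodes in exactly the order in which A* with heuristic h selects them on the original costs.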
- Dijkstra's algorithm is a different algorithm. It computes a single-source shortest-path tree, i.e., a tree indicating the shortest path from a designated vertex to every other vertex in the graph. It can be cut short to compute a single-source, single-destination shortest path, but then it's dubious whether you should call the result Dijkstra's algorithm. I suppose this is why Russell and Norvig call the resulting algorithm "uniform-cost search" (if they coined this term; it's the only place outside Wikipedia where I've seen it), not Dijkstra's.
- A* could similarly be extended to compute the shortest-path tree, but I think (I haven't thought this through completely) that in that case the heuristic steering is useless: the algorithm will traverse the entire graph anyway. QVVERTYVS (hm?) 15:24, 6 November 2014 (UTC)
- Hm, I may have confused Dijkstra's algorithm with uniform-cost search. I didn't think that Dijkstra's produced a shortest-path tree, but that it solved the shortest-path problem, but looking at the pseudocode in the Dijkstra's-article it seems that I was wrong. The Dijkstra's-article states that "The A* algorithm is a generalization of Dijkstra's algorithm" (which would mean that Dijkstra's is a special case of A*), but this seems incorrect since A* search does not produce a shortest-path tree. —Kri (talk) 16:09, 6 November 2014 (UTC)
- The Dijkstra's-article seems to contradict itself—On one hand, it says that it generates a shortest-path tree, and this is also what the pseudocode does; on the other hand, it says that A* is a generalization, and the second animation in the article solves the shortest-path problem but doesn't generate a shortest-path tree as it stops when the goal node has been reached. I think we need to determine what Dijkstra's algorithm actually is and change the article accordingly. —Kri (talk) 16:25, 6 November 2014 (UTC)
- Looking in Russell and Norvig, Artificial Intelligence: A Modern Approach (Third Edition), they never refer to any algorithm called "Dijkstra's algorithm". On the other hand, they note that "The two-point shortest-path algorithm of Dijkstra (1959) is the origin of uniform-cost search," where the reference refers to the article A note on two problems in connexion with graphs.
- This paper discusses two problems. The first is the problem of constructing the tree of minimum total length between n nodes, given that some or all pairs of the nodes are connected. That is, to find the minimum spanning tree. This is not the same thing as a shortest-path tree.
- The second problem discussed is the problem of finding the path of minimum total length between two given nodes, i.e. solving a shortest-path problem in the graph.
- Never does the paper discuss the problem of generating a shortest-path tree, so I don't know how Dijkstra's algorithm has come to mean an algorithm that solves that problem. —Kri (talk) 16:57, 6 November 2014 (UTC)
- Fascinating. The all-targets variant is called Dijkstra's algo by textbooks. I stand corrected, and perhaps we should fix the other article. (Mehlhorn and Sanders, for example, cite the exact same paper but I can't find a discussion of the two-point version now.) QVVERTYVS (hm?) 21:49, 6 November 2014 (UTC)
I think that the attitude above, that there is a single precise thing that is Dijkstra's algorithm, that one should figure out whether it is the one that computes a single path or the one that computes a whole tree, and that whichever you choose the other one is definitely not Dijkstra, is a big mistake. The same basic algorithm can be used for both problems and is still Dijkstra's algorithm either way. It has a slightly different termination condition but is otherwise the same, and the early-termination single-path variation is clearly and obviously the same as what you get from A* with a monotone heuristic. Certainly that's what I teach in my algorithm and graph algorithm courses. As for the recent [citation needed] tagging by the people arguing above: did you even try to find sources or did you just slap a tag on as a way of expressing your personal disagreement with the claims in question? Because searching Google books for the phrases "dijkstra" "special case of" "a*" finds plenty of sources. —David Eppstein (talk) 17:03, 6 November 2014 (UTC)
- That was me. Sorry, not my finest moment. QVVERTYVS (hm?) 21:49, 6 November 2014 (UTC)
- I also contributed to that I guess since I reverted your edit. David, do you mean that "Dijkstra's algorithm" can mean more than just one algorithm? In that case I think that should be mentioned in the article, because right now it feels a bit self-contradicting. —Kri (talk) 22:21, 6 November 2014 (UTC)
- No. I mean that every algorithm, including Dijkstra's, has variations within it. An algorithm is not the same thing as a precise implementation and specification; there are still variations to be decided by any implementor. For Dijkstra, one of the most important parts left undetermined by the basic algorithm is how to implement the priority queue, but another is whether to stop early. Stopping early or not stopping early does not change whether it is Dijkstra's algorithm. Neither does using binary heaps versus using Fibonacci heaps. It is no more confusing than the fact that redwoods and mangroves and oaks are all trees. We do not have to warn readers that not all trees are redwoods in an article that's mostly about something else but happens to mention trees. For the same reason, it seems unnecessary and overly pedantic to warn readers that Dijkstra's algorithm can also be used for a different purpose (single source shortest paths) as well as for the single-path problem for which it gives a special case of A*. —David Eppstein (talk) 22:54, 6 November 2014 (UTC)
- Okay, thank you, that made it a bit clearer. I understand your analogy with trees and redwoods, but I wouldn't say that it is overly pedantic to be clear that Dijkstra's is not actually such a well-specified algorithm; QVVERTYVS and I got into an argument about it, for example, because we both had preconceptions about it that turned out to be a bit off. I'm sure we're not alone in that. —Kri (talk) 01:04, 7 November 2014 (UTC)
- Relatedly, here's another variation in A* that our article touches on briefly: do we use a data structure to keep track of the closed set (as the pseudocode describes) or do we ignore the closed set and do a tree search, allowing repeated rediscovery of the same nodes (as the paragraph after it describes)? The version in which we don't keep track of the closed set is the usual one described in the AI literature, I believe, but the more algorithmic literature on problems like shortest paths for road networks may be different in that respect. Anyway, it's only the variation that keeps track of the closed set that's the special case of Dijkstra. (In the context of Dijkstra, the same set is usually called visited rather than closed, but it's an important part of the algorithm. And again, there's a choice in what type of data structure you use to keep track of the closed set, that makes a difference to the performance of the algorithm but not to its basic nature.) —David Eppstein (talk) 01:47, 7 November 2014 (UTC)
Intuition about why A* is not computationally optimal
This section points out that A* may not be computationally optimal for non-planar graphs.
The proof of the optimality of A* relies on the fundamental assumption that the triangle inequality is satisfied for every 3-node subset of the graph being searched. A non-planar graph may violate this assumption.
Huw: Actually, the way I read this section, it states that A* is not computationally optimal for planar graphs because it does not take advantage of certain features of planar graphs. I claim that this is nonsense. I claim that A* depends for its quality entirely on the heuristic being used. For example, if h is the exact cost to the goal, then A* will walk straight to the solution regardless of the type of graph being examined. Typically, for planar graphs, h is defined to be the linear planar distance from the node to the goal. This makes A* take advantage of precisely the planar features that the author claims it does not. This means that A* will be computationally optimal for these graphs.
Matt: The previous comments are correct. This section needs to be deleted, or at least retitled and heavily edited! The optimality of A* is a fundamental principle of search algorithms. Every A.I. textbook teaches it, and any debate should be left to researchers in the field. Explaining why it doesn't make sense to you should not be posted in the main article; bring it up in discussion and we can clear up any misunderstandings. People get confused because: a) they think 'optimal' means 'fast'; b) they forget that A*'s optimality proof assumes the heuristic is admissible and consistent.
- I've deleted the section. It should be clear that topology of the graph is something the heuristic should take into consideration, not A*. Here's the original proof, which I'm adding as a reference in the article. falsedef 07:26, 11 December 2006 (UTC)
Martin: I think more clarity should be given to the difference between optimality, i.e. always finding the best goal state, and efficiency, i.e. finding it with the fewest computational steps. Reading optimal where efficient should be could set back a rookie quite a way.
- Not sure what the above statement means. A* expands the minimal number of nodes for a given informedness (heuristic) compared with other algorithms, while guaranteeing that an optimal least-cost path is found. Thus it is optimal both in its execution and in its result. If you have an alternate algorithm that beats A*, then you have provided the alternate algorithm more a priori information about the problem than you have provided A*. Ripe 14:55, 31 August 2007 (UTC)
DMS: I am really concerned about claims that A* produces optimal paths. My car's navigation system (which I'm sure uses A*) develops a path that goes the wrong way around a rather odd, semicircular street whenever I ask for a path to the northwest from my house. Being a greedy algorithm, A* cannot by its nature recover from one bad choice. Dms489 (talk) 14:24, 29 August 2009 (UTC)
- How do you know your car navigation system uses A*? It's actually pretty likely it uses a later derivative. (For instance, the mars rover Curiosity uses one called D*.) --Blogjack (talk) 17:07, 13 October 2014 (UTC)
- I mean, that anecdotal GPS might be using A* with a non-admissible heuristic -- one that sometimes overestimates the cost instead of always underestimating it. In that case, it can give a suboptimal route. I often suspect that's what Google Maps is doing when it sends me along slow roads with seventeen left turns that inch toward my destination, instead of one fast one that goes around them all. (I suspect sometimes that if someone could come up with a non-trivial admissible heuristic for driving time in Boston, they could get it published.) But I digress. This caveat is well described by the article, so it's not an error or omission in the article. rspεεr (talk) 05:43, 15 October 2014 (UTC)
complexity
polynomial... what polynomial? n^2 is much, much better than n^3
- I would doubt the correctness of that paragraph as a whole... First of all, I do not see any exponential running time of the algorithm. It has to visit each vertex in a graph exactly once, as well as consider every edge once. That is linear in the number of vertices and edges. You get some overhead from maintaining the priority queue, but still nothing exponential. Furthermore: if you have a graph that is a list, you can have the best heuristic, but you still will do no better than with no heuristic at all, since you have to walk the complete path up to the goal node. E.g. a chain s - v1 - v2 - ... - g, where s is the start node and g the goal node. Here you may have a perfect heuristic, but the running time does not change at all since you cannot skip any nodes. Regnaron 17:24, 14 July 2006 (UTC)
Huw: I agree. This paragraph is simply not true. I'm not even sure what the stated expression even means in this context, since its argument represents a vertex in a graph, and big-O notation should imply that the expression is the dominant term in the cost of an algorithm as its argument grows without bound.
- The expression refers to the asymptotic growth of the perfect heuristic h*. I explained this and inserted a reference. Btw., visiting each edge and each node once is only possible for finite graphs. A* can be used on infinite graphs, or at least graphs that do not fit in any computer memory using an "explicit" representation. Qwertyus 00:44, 21 July 2006 (UTC)
Is that right? Aren't we missing a factor or something in there, since every iteration requires a best-candidate lookup? 2A02:F6C:65E:149D:F611:5FD6:7D3F:D37A (talk) 10:11, 12 July 2022 (UTC)Werner
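(For what it's worth, the best-candidate lookup does add a factor: with a binary-heap priority queue each push or pop costs O(log n), so a finite-graph implementation is usually quoted as O((|V| + |E|) log |V|) rather than strictly linear. The exponential figures discussed above and in the article refer to the number of nodes A* expands as a function of solution depth when the heuristic is poor, which is a separate issue from the per-node data-structure overhead.)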
The Pseudo-code
Hi, the algorithm is correct, but you have to say that in g_score[] every node except g_score[start] (which is set to 0) must be initialized to infinity; if you initialize them to 0 or don't initialize them at all, the algorithm does not work. Please excuse my English, but see this and fix it. — Preceding unsigned comment added by 62.151.94.238 (talk) 18:27, 28 August 2013 (UTC)
I am rewriting the current pseudocode into a Java implementation for the JUNG library. I am not sure, but I think there is a typo in the pseudocode. Shouldn't the heuristic be used here?: "tentative_g_score := g_score[x] + dist_between(x,y)"... --84.245.72.18 (talk) 09:35, 3 December 2010 (UTC)
I'm not satisfied with the pseudo-code as it is at the moment; in attempting to stay close to actual programming it is in fact more confusing than helpful. I'd suggest something more talkative and close to natural language, like my take below. I'd appreciate any opinions or alternative takes, because I think we do need another pseudo-code.
AStarSearch( start_node, goal_node )
queue open_list
queue closed_list
push start_node on open_list
while open_list not empty
let node be RemoveLowestCostFrom( open_list )
if node is goal_node
then return SUCCESS
else
push node on closed_list
foreach neighbor that is adjacent_to( node )
if neighbor is not in open_list
then if neighbor is not in closed_list
then
EstimateCostFor( neighbor )
push neighbor on open_list
return FAILURE
Mikademus 08:07, 11 August 2006 (UTC)
- This version is clearer, but:
- It returns success or failure, rather than a path. In fact, it doesn't remember paths, unless they're stored in nodes, but then the check "if neighbor is not in open_list" is redundant.
- The function EstimateCostFor should be called implicitly by the push function.
- closed_list is not a queue, but a set.
- The goal node need not be explicitly given. A* is used for other things than simple pathfinding (such as natural language parsing), where the goal is not even known beforehand.
- My suggestion:
procedure AStar(start_node)
    var open_list := empty queue
    var closed_list := empty set
    var initial_path := a partial path containing only start_node
    push initial_path on open_list
    while open_list is not empty
        let partial_path be RemoveItemWithLowestCost(open_list)
        if partial_path reaches a goal node
            return partial_path
        else if the last node of partial_path is not in closed_list
            add this node to closed_list
            foreach neighbor of the last node in partial_path
                var new_path := partial_path extended with neighbor
                push new_path on open_list
    return FAILURE
I agree with the points you provided above. I based the style of the pseudocode on that in the "AI for Game Developers" book by Bourg & Seemann, in which their conceptual outline does not complicate the code with creating a path, only showing how to find the goal node. However, it may be even more confusing to omit the creation of the path itself, so here is my follow-up suggestion, with some improvements(?):
- Conceptually easier to follow path generation
- Providing a goal node for pedagogical reasons
- The bizarre pleasure of beholding an unholy pseudo-language which is consistent and looks like a mixture of Pascal, Python and Visual Basic
function AStar( Node start_node, Node end_node ) returns List of Node
    type Visited_Node := record of
        node := Node
        parent := Node
    end record

    var open_list := empty Queue of Visited_Node
    var closed_list := empty Set of Visited_Node

    var node_info := Visited_Node( start_node, empty )
    push( what:node_info, on:open_list )

    while open_list is not empty
        var node_info := remove_first( from:open_list )
        push( what:node_info, on:closed_list )

        // The goal node was reached. Trace the path using
        // the saved parent information and return the path
        if node_info.node equals end_node then
            return trace_path( from:node_info, in:closed_list )
        else
            foreach neighbor of node_info.node
                if neighbor is not in open_list and neighbor is not in closed_list then
                    let node_cost := EstimateCostFor( neighbor )
                    var new_node := Visited_Node( neighbor, node_info.node )
                    push( what:new_node, cost:node_cost, on:open_list )

    // Pathfinding failed. Return an empty
    // list to indicate this.
    return empty
Mikademus 15:41, 11 August 2006 (UTC)
- I don't like the pseudocode on the main page, because it's too short and too unclear. Of all the suggestions here I like the above code the most. However, a few points in it could use some more clarity: what does "remove_first" mean? Is it the front or the back element, or the one with the lowest cost? And shouldn't the cost be in the node type? --213.118.83.195 14:32, 1 April 2007 (UTC)
- I think this is a bit long and specific, especially with regard to the Visited_Node record type. IMHO, the pseudocode is only there for illustration and reference, and the algorithm should be explained in the text.
- Your pseudolanguage is very, ehm, interesting, indeed. It's quite consistent, but I don't regard that a major quality in pseudocode: the whole point of it is being able to mix programming and natural languages.
- Also, I don't understand why you'd want to check whether neighbors are in open_list. Qwertyus 17:44, 11 August 2006 (UTC)
- Hi! Well, here is a third suggestion :-) (copied over from the German Wikipedia with a quick and dirty translation of the comments)
/*
 * algorithm reconstructing a shortest path
 * ----------------------------------------
 * p: computed shortest path
 * n: goal node (the start node is reached when π(n) = NIL)
 */
reconstruct_shortest_path(n, p) {
    // check whether the start node has been reached
    while (π(n) != NIL) {
        // add node to path
        push(n, p);
        // do the same for the predecessor
        n := π(n);
    }
    // return the path found
    return p;
}

/*
 * initializing the graph
 * ----------------------
 * G: graph to be initialized
 * s: start node
 */
initialize(G, s) {
    // set the distance to all nodes to infinity
    forall v do d[v] := ∞
    // set the distance to the start node to zero
    d[s] := 0;
}

/*
 * relaxing an edge
 * ----------------
 * u: current node being explored
 * v: currently considered successor of u
 * w: weight function for computing the weight of an edge
 * h: heuristic; f combines distance and heuristic (f[v] = d[v] + h[v])
 */
relax(u, v, w, h) {
    // check if going through u results in a shorter path
    if f[v] > d[u] + w(u,v) + h[v] {
        // new path is shorter:
        // update the distance and the estimated total cost to the goal
        d[v] := d[u] + w(u,v);
        f[v] := d[v] + h[v];
        // update the predecessor pointer for the shortest path
        π[v] := u;
    } else
        ; // no change, since a shorter path is already known
}

/*
 * A* algorithm
 * ------------
 * G: graph to run the algorithm on
 * s: start node
 * g: goal node
 * w: weight function for computing the weight of an edge
 * h: heuristic; f combines distance and heuristic (f[v] = d[v] + h[v])
 * Q: priority queue for the nodes of the graph
 */
A-Star(s, g, w, h, G) {
    initialize(G, s);
    Q := V[G];                      // add all nodes to the priority queue
    while not isEmpty(Q) {
        // examine the node with the smallest f value
        u := pop(Q);
        if (u == g) then
            return reconstruct_shortest_path(g);
        else {
            // examine all neighbours of the current node u
            forall v := successors(u) do {
                // relax the edge between u and its successor v
                relax(u, v, w, h);
            }
        }
    }
    // no path found
    return fail;
}
So, why this algorithm? Well, first of all I like building extra functions for relax and initialize because it makes the algorithm itself cleaner. How the graph is initialized does not have anything to do with how A* works. Furthermore, this makes it *very* similar to Dijkstra, in the way that you just have to adjust the relax function to cope with the heuristic. Furthermore, it spares you the question of "should the path generation be in the algorithm or not". It is not directly in the algorithm, but one can find it hidden in the extra relax functionality. The last thing is the part about open/closed lists. If you have a monotone heuristic (which should not be too hard to find in many domains) you do not need to do that extra bit of bookkeeping. So leaving out this part and just assuming a monotone heuristic makes the algorithm smaller. Furthermore, I do not know if the algorithm should be too informal, since the code still is code, and maybe it should be explained in natural language that accompanies the code. I mean: regardless of how informally you write the code, you still have to explain it. So why not one brief (quite formal) overview and an explanation in natural-language English? Regnaron 19:41, 12 August 2006 (UTC)
- IMHO, this version:
- describes everything it does twice, once in code, once in comments, entirely defeating the purpose of pseudocode;
- doesn't translate easily to applicative programming languages, due to the use of destructive assignment;
- can only be understood properly if one knows Dijkstra, which is unnecessary for explaining A*.
- Qwertyus 20:29, 12 August 2006 (UTC)
- Hi!
- Hm, ok, if you understand the code, it does describe things twice, right. But unless there is some explanation in the text, I think I would not know instantly what, for example, the line if f[v] > d[u] + w(u,v) + h[v] does. But if there is some prose explaining the code, you are right that one can just get rid of the comments. :-) The second point is translating into applicative programming languages. Here I do not really understand what you mean? For example: Q is a priority queue, and those might have a pop operation which returns the element on top and removes it from the queue. And this element then is assigned to the variable u. So I think you can quite easily turn this code into Java or whatever, since it already is fairly formal. Or did I get you wrong? Third point: I rather think it's the other way round: Dijkstra and A* are (if you let Dijkstra stop as soon as it finds a goal) equal. If you explain one of them, you also explain the other one. So if you start from scratch (which an article about an algorithm should IMHO do), explaining Dijkstra or explaining A* yields more or less the same article, simply because that above-mentioned version of Dijkstra that just searches for one goal is nothing more than A* with the heuristic h[n] = 0 for all nodes. Regnaron 08:32, 13 August 2006 (UTC)
Hmm, I'm actually with both of you here, since there are different purposes for pseudo-code and implementation code: the pseudo-code should provide a bare-bones conceptual overview of the algorithm with all complications peeled away. The "real" code of course hands out a working implementation. Therefore, I'd suggest that we, similar to the Bellman-Ford pathfinding article, show pseudo-code early in the article and insert an "implementation" section with real code, e.g. Regnaron's version above, last in the article. Mikademus 07:41, 13 August 2006 (UTC)
- But is this idea of peeling away the complications not already done in that code? To me it reads as follows:
- Initialize graph. How? -> There is a magic function that does that. What is "initializing a graph"? -> See prose
- Add all nodes to priority Queue (ok, here the notation indeed might be a bit awkward) Once more: How? -> Just do it.
- Take first node of queue
- check if goal found. If yes, return path (again: How? There is a function that does that.)
- Look at all neighbours of current node and relax them. What is relax? -> Has been explained in text
- If no nodes to expand and goal not found => no path to goal
- So here the "hard" parts are in extra functions that in principle do not need to be understand or can be explained in a "high level" in prose.
- Or do you aim more for something really close to natural language, as above, with as little code as possible? Why not then really a list of natural-language items that describe the algorithm in English? Like:
- Initialize graph
- Set distances for all nodes to infinity
- Set all predecessor pointers to NIL
- Set distance of start node to zero
- Insert all Vertices of graph into priority queue and queue them by their distances.
- ...
Regnaron 08:30, 13 August 2006 (UTC)
The code is somewhat simplified, but as an example, take the pseudocode from the "AI for Game Developers" (Bourg, David & Seemann, Glenn; 2004; O'Reilly Media Inc., Sebastopol) book:
add the starting node to the open list
while the open list is not empty
{
    current node = node from the open list with the lowest cost
    if current node = goal node
        then path complete
    else
        move current node to the closed list
        examine each node adjacent to the current node
        for each adjacent node
            if it isn't on the open list and isn't on the closed list and it isn't an obstacle
                move it to the open list and calculate cost
}
(p. 129) It traces out the pathfinding algorithm only, avoiding any "superfluous" details (like storing the cost or constructing the return path). The bad thing is of course that it isn't directly usable, but the advantage of not immediately jumping into the gritty implementation details is that it gives the reader the conceptual overview necessary not to be overwhelmed and to gain a structural understanding in which to fit the discussion and details. Admittedly, my two suggestions above fail in this since they lean too much towards "functional" implementations. Mikademus 09:20, 13 August 2006 (UTC)
Wow, no talk on this for almost three years?
1) How did this end up in there ???
elseif tentative_g_score < g_score[y]
    tentative_is_better := true
if tentative_is_better = true
2) Isn't the idea to have 'pseudo' code? This code looks like Pascal; the only 'pseudo' parts are the descriptions of set operations, which brings me to,
3) Insinuating that the open and closed lists are actually sets is not wise. The original author called them lists, most authors since have called them lists, and frankly the actual data structure used is up to the implementer. Some are 'better' than others, but this is pseudocode, or is meant to be. There is a section 'Implementation Details' where this could be discussed... of course that section could also use some work...
4) The 'Implementation Details' section discusses path reconstruction, the 'pseudo' code for it should be removed.
Silnarm (talk) 15:12, 21 June 2009 (UTC)
Ok, I just checked the original paper: no references to lists or sets. They simply 'marked' a node as open or closed. I strongly recommend the same be done here.
NB: Why are there two sections in this talk page for the pseudo code ? Silnarm (talk) 00:41, 22 June 2009 (UTC)
Here's my proposal for the pseudo code:
insert the starting node into the openSet (typically a priority queue)
while the openSet is not empty:
take the lowest node in the openSet as node // value is determined by f(x) and a tie-breaker
mark node as reached (insert into a closedSet)
if node is a goal value:
return a path constructed from previous pointers on the nodes starting with node
for each neighbor of node:
skip if it's already been reached (in the closedSet) // it's guaranteed to have been shorter than whatever f(x) is about to be computed
compute g(x) and h(x)
if neighbor already in the openSet:
if neighbor has a smaller g(x) than the node already in the openSet:
update g(x) and prev pointers
else:
add to the openSet
Johnb003 (talk) 06:02, 23 April 2014 (UTC)
I did just implement this in C and either I don't understand or there is something wrong with this sentence:
"Remark: In this pseudocode, if a node is reached by one path, removed from openSet, and subsequently reached by a cheaper path, it will be added to openSet again."
If I interpret the pseudocode correctly, it's impossible for a node to be removed from openSet without also being added to closedSet. And if a node is in closedSet it can never be added to openSet again. This makes me think the pseudocode and that comment differ? ADent~svwiki (talk) 15:18, 17 October 2019 (UTC)
- The new pseudocode you're talking about does not have a closedSet variable, which is why it's possible to re-add a node. Jthemphill (talk) 21:25, 4 March 2020 (UTC)
Correctness
The algorithm pseudo-code proposed in the article (as of now) is incorrect for a non-monotone h(x). A counter-example is available on page 4 of this article: http://www.autonlab.org/tutorials/astar08.pdf. If x is in closed (line 7), a correct algorithm should check whether the g(x) value entered when x was inserted is larger than the g() for the current path and, if so, update its value. Similarly, it is possible that x is already in the priority queue waiting to be popped; this case should be considered by the algorithm as well. An example demonstrating why this is important is on page 5 of the same article. In addition, I think the article should include a proof that A* finds an optimal solution assuming an admissible heuristic. I can write one up if no one else volunteers (I'm a 4th year CS student). —The preceding unsigned comment was added by Morph555 (talk • contribs) 16:22, 17 December 2006 (UTC).
- I'm familiar with the version of A* that you are, which requires monotonicity. I'm not familiar with the one in this article, but I'm told that the "open" and "closed" lists are used to avoid the situation that requires monotonicity (at the expense of taking longer to compute). Meanwhile, the paragraph I wrote on monotonicity was removed as "inaccurate" by User:Qwertyus. Does anyone have a text that describes A* both with and without monotonicity which would help clarify this matter? rspeer / ɹəədsɹ 03:08, 26 April 2007 (UTC)
IMHO if h is monotone, then each node gets expanded at most once (and thus it's legitimate to rule out updating expanded nodes using the "closed" list) (and this means the algorithm is fast, too). If h is not monotone, the same nodes get expanded multiple times and you need to consider all neighbors when expanding a node. It also means that the algorithm will be slower (since it repeatedly expands the same nodes). So, to react to your comment, it's a different story: if h is monotone, then you MAY use the "closed" list to save (only a few) machine cycles (it only saves you a few tests). If it isn't, you MUST NOT use the "closed" list. — Jiri Svoboda 19:26, 5 June 2007 (UTC)
BFS
In my opinion BFS as an abbreviation is ambiguous. It could mean both best-first search and breadth-first search. Moreover, it is first used without introduction. From later passages in the text I assume that it means best-first search, but I am not experienced enough to make the changes myself. On second thought, Dijkstra must be some kind of best-first search, so I will just make the changes. If you are happy with the changes in the article, delete this message --Markulf 20:36, 6 June 2007 (UTC)
Pseudo Code
Good luck with that pseudo code. —The preceding unsigned comment was added by 71.195.200.155 (talk) 05:04, August 22, 2007 (UTC)
I think http://www.geocities.com/jheyesjones/pseudocode.html has the best pseudo code Lio 14:23, 4 November 2007 (UTC)
It's not obvious from the pseudo code how the heuristic function comes into the algorithm at all. There's no mention of "h" or "heuristic" in the pseudo code. If it is implied that the successor set is ordered according to the heuristic, that should be made clear. If not... where? [-rog.] —Preceding unsigned comment added by 78.144.185.184 (talk) 15:38, 1 April 2008 (UTC)
Removal of links
Please explain your reason for mass removal of links from this article. Please do not simply refer to WP:LINKFARM. Explain why each link you removed was bad. Biophys (talk) 14:56, 9 April 2009 (UTC)
- How about you explain, for each one, why it is justified. For instance the first one, by Tarik Attar, appears to be completely pointless: it just displays a route between two addresses without any explanation or visualization of the algorithm itself. It may be intended to show that some other algorithm used by Google is (unlike A*) non-optimal, but that's not really about A* specifically. What does each link add to the existing article and why is it appropriate to link directly rather than to a link farming site like dmoz? —David Eppstein (talk) 16:14, 9 April 2009 (UTC)
- Why were my links removed? I have written SWI-Prolog source code for A* and I've added images of the visualization written using XPCE here and here. At my university they teach Prolog, and we are taught the A* algorithm as well, and I think it's in most AI courses' curricula; Prolog is widely used for teaching AI, so I think the code would be a good resource for someone to learn from. Moreover, at the bottom of the screen in this article, you can read Category:Articles with example code, so it's very clear that source code is welcome, and Prolog is a didactic language. - Stefan Petrea —Preceding unsigned comment added by Stefan.petrea (talk • contribs) 03:02, 28 April 2009 (UTC)
My site seems to be in the links section; about 10 people visit the site each day via this page, and I have found that someone is actually using my code.
- The linkfarm seems to be back. I suspect that some people are simply trying to advertise their code or site through this article. Offliner (talk) 13:43, 27 April 2009 (UTC)
The Python implementation (see http://kks.cabal.fi/A-star ) is mine. It's open source, uses a hash table to optimize heap searches, is quite simple to use, and it works. Well. If you don't see that it is relevant, take it away then. —Preceding unsigned comment added by Hiihammuk (talk • contribs) 14:27, 27 April 2009 (UTC)
- We really don't need such a huge external links section. I'm especially concerned that many of the links are to sites of completely unknown reliability. Obviously, many links have been added simply for advertisement purposes. Offliner (talk) 18:26, 27 April 2009 (UTC)
- Let me disagree with you; the article is in the category "Articles with example code", so code is welcome. My link, for example, has not been added for advertisement purposes, and I am not aware of any other Prolog implementation of A*, so I really see no reason for removing the links from the article. Stefan Petrea —Preceding unsigned comment added by Stefan.petrea (talk • contribs) 03:17, 28 April 2009 (UTC)
- Ok, so I just added back my link and it has been deleted again. I think there are some unwritten rules between contributors which I'm not aware of. I also don't understand what David Eppstein meant by "Let's try not rebuilding the link farm back right away"; I'll have to wait for further explanations on your behalf. —Preceding unsigned comment added by Stefan.petrea (talk • contribs) 03:40, 28 April 2009 (UTC)
I am getting really sick of my Flash implementation of A* being removed from the Links section. I have working code posted with a step-by-step process of how to come up with the code on your own. I also have a working Flash program posted as an example. If this isn't relevant, I don't know what is. Many of us are having this problem with just a few people that seem to think their opinion is more important than everyone else's. As for the 4 links that are being left up when "offliner" and "David Eppstein" clear the others... why leave those? They do not seem to be good links for learning A* at all. If the link removal continues I think we should all contact Wikipedia about this and have them deal with these 2 users. —Preceding unsigned comment added by 76.175.38.201 (talk) 00:21, 30 April 2009 (UTC)
- I just re-removed the following two links that a persistent not-logged-in linkspammer has been attempting multiple times to add:
- Chris Kridner's A Generic Flash based A* Library - Code and Tutorial for A*
- A SWI-Prolog implementation of the algorithm featuring a GUI
- Kridner's code did not seem helpful to me as part of a general-purpose encyclopedic article on A*: it is too tightly focused on a gaming application and makes many application-specific assumptions (e.g. to begin with, that one is finding paths in a grid rather than in an arbitrary graph). As for the prolog implementation: a mass of uncommented code with variable names in Italian also does not seem particularly informative or helpful. I am tempted to remove also the link to Stentz' home page (if his D* variant is sufficiently important, we should mention it within the main article text and use one of his papers as a real reference rather than merely linking to his home page and hope Wikipedia readers can figure out its relevance from there) but I did not yet do so. —David Eppstein (talk) 00:33, 1 May 2009 (UTC)
- If I re-comment my code in English, do you agree to leave it in the links section? What modifications would you make to it in order for it to be fit? Thanks Stefan.petrea (talk) 17:34, 2 May 2009 (UTC)
- In short, no. You and Royalmage are arguing from a position that any relevant link should be included, but that's far from what WP:EL says. I would like to see only a very short set of links, each of which adds significant value that could not be directly included in the article. So what I'd want to see is, not just a useful link, but a convincing argument for why your link is one of the (say) five most-relevant links that we could possibly find on the web concerning this subject. —David Eppstein (talk) 18:15, 2 May 2009 (UTC)
I am not a link spammer child... I want my link up, and so do the many people that are already enjoying and using my code. I would appreciate it if you would stop thinking that you know better than everyone else and removing our code links that people use every day... You're not better than anyone here... Also, grid-based pathing is what A* is most commonly used for. How is its most common use not relevant? And if people like what I've done, why would you remove it? Removing links takes away from the community. So step off your high and mighty perch and stop removing links that are both helpful and relevant. —Preceding unsigned comment added by Royalmage (talk • contribs)
- Your condescension makes me laugh, but I have to assume that was not your intent, so: please see our policy on civility. Also on external links (especially the part about adding links to your own sites), on conflicts of interest, and self-published sources. —David Eppstein (talk) 04:06, 1 May 2009 (UTC)
Now, I know a person with a degree in math from Stanford has been taught not to assume things... really, you should know better; the link I posted is not for personal interest. I spent a long time working with A* and found that many people did not understand it, so I boiled it down to be as light as I possibly could, using only basic ifs and fors to produce the entire piece of code. From my experience, the piece was very successful with new programmers. It was only on a recommendation from someone else that I posted it on Wikipedia. I'm sorry you feel like you should be the judge and jury on A*; I do not know why you feel that your opinion is more important than the many people that enjoy the piece I did on A*. Removing it is nothing more than a hindrance to Wikipedia, and the fact that the page is now locked is a very sad example of censorship, and you ought to be ashamed as a professor. —Preceding unsigned comment added by Royalmage (talk • contribs) 05:34, 1 May 2009 (UTC)
- Although Wikipedia is not for hosting link farms, it would probably be appropriate to collect the deleted links somewhere else that does host such things (e.g. dmoz) and then make a single link to there from here. If you want to do that, here's another link that I think would also be relevant, but that (for the same reasons I just pointed you to) I haven't added to the article here. By the way, it would be helpful if you signed your comments here by typing four tildes (~~~~) after them. —David Eppstein (talk) 07:23, 1 May 2009 (UTC)
- I tried to put my link on dmoz. One of the reasons why it was not accepted might be self-promotion. Is there any place where self-promotion is not an issue? It would be possible to make a site which could automatically run, test and compare the speed, performance and basic functionality of open-source algorithms by using a simple interface, a wide range of test data and a sandbox such as, for example, http://codepad.org/ . The thing is that I have not found that kind of site, and I guess there are none. thankssorrybye... —Preceding unsigned comment added by 84.249.13.153 (talk) 10:23, 16 May 2009 (UTC)
I just added an article to any-angle path planning. Could someone explain to me how to get it as a term into the "Graph search algorithms and Tree search algorithms" menu box? Antonbharkamsan (talk) 06:21, 6 September 2009 (UTC)
Admissible heuristics
A bit of pervasive misinformation about A* search is that, as long as the heuristic never overestimates the distance to the goal, then the search is admissible. This is wrong.
As a counterexample: make a graph that you want to search, with a start and goal node, and give it an inadmissible heuristic, one that overestimates the distance somewhere so that A* chooses the wrong path. For the purpose of this counterexample, this heuristic shouldn't have any values over 1000000.
Now extend this graph by adding a new goal node, connecting it to the old goal node with an edge with a cost of 1000000. Suddenly the heuristic isn't an overestimate anywhere; if you're anywhere but the goal, it will cost at least 1000000 to get to the goal, and the heuristic is lower than that everywhere. So by the commonly-stated requirement for A* search, it should find the right path - but it doesn't. It finds the same path it found before, plus the new edge.
The requirement is that the heuristic can't overestimate the distance to anywhere; that is, if there's a path from A to B, then h(A) - h(B) is not greater than the shortest-path distance from A to B.
Course web pages, teaching assistants, and even textbooks leave this part out and make a provably false statement. Should I expand on this more in the article? RSpeer 08:06, Dec 25, 2004 (UTC)
Hmm. I've looked up a little more, and found a possible reason why the second condition is often left out; you can use a less efficient version of A* that always finds the correct path without that condition. However, the usually-stated algorithm requires both conditions. These notes describe both algorithms, and call the second condition "monotonic". RSpeer 18:04, Dec 26, 2004 (UTC)
Blah. I screwed up. The page was describing that less-efficient version all along. I moved the thing about monotonic heuristics to its own section. Sorry. RSpeer 19:23, Dec 27, 2004 (UTC)
- The argument above is incorrect. When A* arrives at the new goal state, the high cost of the solution node (which has a cost over 1000000) will force A* to expand all nodes that could possibly have a better score, which will include the shorter path to the old goal node. Additionally, the monotonicity requirement is required when using a GraphSearch type algorithm, but not when using a TreeSearch (see R&N p99). EdwinOlson (talk) 21:01, 11 September 2008 (UTC)
- *BUMP*! I believe this actually is correct; when A* arrives at the new goal state, the high cost of the edge to the solution node (which has a cost over 1000000) will force A* to expand all nodes that could possibly have a better score, but this will NOT include the shorter path to the old goal node, since the node where the paths meet again will have been added to the closed set. See my example to the right. --Kri (talk) 03:16, 7 November 2009 (UTC)
- Okay, I have now realized that this is not the case. The article mentions that if a closed set of nodes is going to be used, the heuristic estimate must be consistent too, not only admissible. So, I guess my image illustrates what could happen if a closed set is used although the heuristic is not consistent. --Kri (talk) 11:05, 21 November 2009 (UTC)
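- To make the admissible-versus-consistent distinction in this thread concrete, here is a small, self-contained Python check (the graph, edge costs and heuristic values are invented for illustration, and this is the graph-search variant that closes nodes permanently, not the article's pseudocode):

import heapq

# Invented example: h never overestimates the true cost to G (admissible),
# but h('A') = 5 > cost(A, B) + h('B') = 1 + 0, so it is not consistent.
graph = {
    'S': {'A': 1, 'B': 3},
    'A': {'B': 1},
    'B': {'G': 10},
    'G': {},
}
h = {'S': 0, 'A': 5, 'B': 0, 'G': 0}   # true costs to G are 12, 11, 10, 0

def astar_no_reopening(start, goal):
    # Graph-search A*: once a node is closed it is never examined again.
    open_heap = [(h[start], 0, start, [start])]   # (f, g, node, path)
    closed = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for nbr, w in graph[node].items():
            if nbr not in closed:                  # closed nodes are never reopened
                heapq.heappush(open_heap, (g + w + h[nbr], g + w, nbr, path + [nbr]))
    return None, float('inf')

print(astar_no_reopening('S', 'G'))
# Prints (['S', 'B', 'G'], 13), although the optimal path S -> A -> B -> G costs 12.

Allowing closed nodes to be reopened when a cheaper path to them is found, or requiring h to be consistent, removes the problem.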
Better Pseudocode for algorithm
I have encountered the same problem that a few others have with the openset/closedset issue, where the heuristic function has to be consistent for the algorithm to work. I've done a few changes to the pseudocode, and would love feedback.
function A*(start,goal)
openset := set containing the initial node // The set of tentative nodes to be evaluated.
came_from := the empty map // The map of navigated nodes.
g_score[start] := 0 // Distance from start along optimal path.
h_score[start] := heuristic_estimate_of_distance(start, goal)
    f_score[start] := h_score[start]    // Estimated total distance from start to goal.
while openset is not empty
x := the node in openset having the lowest f_score[] value
if x = goal
            return reconstruct_path(came_from, goal)
remove x from openset
foreach y in neighbor_nodes(x)
tentative_g_score := g_score[x] + dist_between(x,y)
if g_score[y] is not set or tentative_g_score < g_score[y]
add y to openset
came_from[y] := x
g_score[y] := tentative_g_score
h_score[y] := heuristic_estimate_of_distance(y, goal)
f_score[y] := g_score[y] + h_score[y]
return failure
function reconstruct_path(came_from, current_node)
if came_from[current_node] is set
        p := reconstruct_path(came_from, came_from[current_node])
return (p + current_node)
else
return current_node
Amitron94 (talk) 08:13, 19 January 2011 (UTC)
I find the code intuitive and easy to understand. However, it seems flawed when it comes to ties. There is no check for tied nodes when checking if x = goal 83.248.221.127 (talk) 22:59, 24 April 2012 (UTC)
It seems like you use a lot of maps or arrays for g_score, h_score, and f_score. It would seem better to consolidate them into a structure, so that you're not doing so many look-ups or storing so many values. I do like how you check if the neighbor you're about to add is the lowest before adding it though. Since your open set only contains the index, it seems a little disconnected to be maintaining a sorted structure (most likely a min heap) using a look-up to get the value for every index. Johnb003 (talk) 05:31, 23 April 2014 (UTC)
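(For reference, a minimal runnable Python sketch along those lines: a binary heap for the open set, one dictionary per node attribute, and stale heap entries skipped instead of decrease-keyed. The names neighbors and heuristic are placeholders for problem-specific functions, not anything from the article, and the heuristic is assumed admissible and consistent.)
import heapq, itertools

def a_star(start, goal, neighbors, heuristic):
    # neighbors(n) yields (m, cost) pairs; heuristic(n, goal) must not overestimate.
    counter = itertools.count()            # tie-breaker so the heap never compares nodes
    g = {start: 0}
    came_from = {}
    open_heap = [(heuristic(start, goal), next(counter), start)]
    while open_heap:
        f, _, x = heapq.heappop(open_heap)
        if x == goal:
            path = [goal]
            while path[-1] in came_from:
                path.append(came_from[path[-1]])
            return list(reversed(path))
        if f > g[x] + heuristic(x, goal):  # stale entry; a cheaper one was pushed later
            continue
        for y, cost in neighbors(x):
            tentative = g[x] + cost
            if y not in g or tentative < g[y]:
                g[y] = tentative
                came_from[y] = x
                heapq.heappush(open_heap, (tentative + heuristic(y, goal), next(counter), y))
    return None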
weighted A*
[edit]is there a separate article on weighted A*? I couldn't find one. Weighted A* is important - it lets one "inflate" an admissible heuristic to speed up the search, and yet have a bound on the sub-optimality. If h(n) is an admissible heuristic function, in the weighted version of the A* search we use hε(n) = ε·h(n), with ε > 1, as our heuristic function, and perform the A* search as usual (which happens much faster than using h(n)). The path hence found by the search can have a cost of at most ε times that of the least cost path. [Reference: J. Pearl, "Heuristics: Intelligent Search Strategies for Computer Problem Solving.", Addison-Wesley, 1984.] — Preceding unsigned comment added by Subh83 (talk • contribs) 22:06, 11 March 2011 (UTC)
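(A one-line sketch of the inflation, assuming some existing admissible heuristic h(n, goal); the wrapper name is just illustrative.)
def weighted(h, eps):
    # Inflate an admissible heuristic h by eps > 1; the path A* then finds
    # costs at most eps times the optimum (Pearl 1984).
    return lambda n, goal: eps * h(n, goal)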
Pseudo-code, again!
[edit]What is the line "Update(openset,y)" doing? The "Update" function is not explained anywhere. Is the purpose to sort the openset? I guess that's not required to mention explicitly, since the line where we choose x already says "x := the node in openset having the lowest f_score[] value". The maintenance of the openset would then be an implementation detail. Moreover, the comments at the beginning of the code on individual lines look annoying. Why can't they stay on the same line of the code that they are explaining? - Subh83 (talk | contribs) 02:05, 18 March 2011 (UTC)
- Any thought on this? Otherwise I will clean-up the pseudo-code in a day or so. - Subh83 (talk | contribs) 19:53, 7 April 2011 (UTC)
Animated illustration
[edit]I have recently created a few animations that demonstrate the working of Dijkstra's, A* and weighted A*. They illustrate a more intuitive and realistic example from robot path planning that will be helpful to grasp the high-level concepts of the algorithms and their differences. I am hence planning to place them somewhere in the article. Below is the planned content. Suggestions on the captions and where in the article to place them will be helpful.
The above figures illustrate a typical example from motion planning in robotics using 3 different search algorithms (all variants of A*). The graph is created by uniform square discretization of a 2-dimensional planar region, placing a node in each discretized cell, and connecting each node with its 8 neighbors using bidirectional edges. Edge costs are the same as their Euclidean lengths. The gray shape represents an obstacle. The filled circles in red & green represent expanded nodes (nodes in the closed set). The color indicates the g-score (red: lower g-score, green: higher g-score). The empty nodes with blue boundaries are the ones in the open set. The nodes of the graph are generated on the fly, and nodes falling inside the obstacle are discarded as inaccessible. The objective is to find the least cost path (restricted to the graph) from a given start to a given goal node. A consistent heuristic in this problem is the Euclidean distance of a node to the goal. That is, ha(n) = |g - n|2, where n represents the Euclidean coordinate of the node n, g is the goal node, and | . |2 is the 2-norm.
- Subh83 (talk | contribs) 05:52, 15 April 2011 (UTC)
- Since there was no suggestion, and I personally think that adding all the content above may be distracting, I will add just the animation for A* with a brief caption, alongside the present graphic in the process section. The present graphic does not seem to be very helpful, so eventually replacing it with the animated gif may be desirable. Please discuss the possible replacement below. - Subh83 (talk | contribs) 20:38, 17 April 2011 (UTC)
The animations are beautiful. They say a lot more than many words. That would be nice to have them side by side or to have a link to the animation of Dijkstra's algorithm for comparison. Michal Cizmazia (talk) 19:05, 12 June 2011 (UTC)
I too think that the animations are great, but I agree that the middle one above is enough to have on the page. Another thing... don't these animations show that the comment "However, when the heuristics are monotone, it is actually equivalent to running Dijkstra" (apparently entered by user Chire on 09:13, 26 July 2011) is not correct? Surely, the heuristic used in the animation is monotone, but this still does not mean that the two algorithms (A* and Dijkstra's) are equivalent. Does it...? Fabian 11:59, 19 August 2011 (UTC)
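(On the equivalence question: the two algorithms coincide only for the all-zero heuristic; a monotone nonzero heuristic keeps the same optimality guarantee but changes the expansion order. A trivial sketch of the degenerate case:)
def zero_heuristic(n, goal):
    # With h = 0 everywhere, A* expands nodes in order of g alone and becomes
    # Dijkstra's algorithm; a nonzero consistent heuristic usually expands far
    # fewer nodes, as the animations illustrate.
    return 0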
History
[edit]I checked the reference, eecs.berkeley.edu; it doesn't have any information about the history of A*. Also, per the previous discussion, did the name "A*" come from the statistical literature? "A Formal Basis for the Heuristic Determination of Minimum Cost Paths" doesn't contain anything about it either. --Thiti Watanasrimongkol (talk) 12:40, 13 September 2011 (UTC)
- I have a few questions about the sources for some of the statements in the current section on the History of A*. I wrote them as a separate topic but perhaps I was supposed to put them here (I am brand new to wikipedia editing). I have gathered quite a lot of information about the history of A* so will be updating this section sometime soon. I can't say where the name came from but I doubt it was from the statistical literature. -- 142.244.108.42 (talk) 17:26, 14 October 2017 (UTC)
What does A* do with equal cost values?
[edit]What does A* do with equal cost values?
I don't get which path it would take if depths 1 and 2 are equal in cost but depths 3 and 4 are not, out of a total of 4 depths. Does it go random or does it search in alphabetical order if the nodes are sorted that way? — Preceding unsigned comment added by AXque (talk • contribs) 02:26, 9 January 2012 (UTC)
- A) Sorting depends on your implementation. I suggest searching for a description of a "Binary Heap." A straight linear sort will be insanely inefficient, and only gets worse as the Open Set increases in size. Binary heaps are a clever and very efficient way to maintain the priority queue for A*. --162.207.200.246 (talk) 04:58, 9 March 2016 (UTC)
- A* does not define what to do with equal costs. Typically whatever is at the front of the queue is removed. This may be "random" if the queue was sorted with an unstable sort. If the queue was sorted with a stable sorting algorithm, then those which were added earlier (but having the same value) would be removed first. One may also choose to implement a tie-breaker algorithm: for example, using a stable sort, first sort by h(n), then by f(n)---where h(n) is the heuristic estimate and f(n) is the cumulative cost + h(n).
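(A small sketch of that tie-breaking choice, assuming a heapq-style open list; the names below are placeholders.)
import heapq, itertools

open_heap = []
order = itertools.count()

def push(node, g_value, h_value):
    # Key (f, h, tie): equal f values are broken in favour of lower h, i.e.
    # nodes that look closer to the goal; the counter keeps the heap from
    # ever comparing the node objects themselves.
    heapq.heappush(open_heap, (g_value + h_value, h_value, next(order), node))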
Extension to hypergraphs (and-or graphs)?
[edit]I don't currently see the AO* algorithm listed on wiki. It's a useful "extension" of A* to hypergraphs. Not sure if anyone wants to take on that challenge. — Preceding unsigned comment added by 150.135.222.160 (talk) 21:39, 2 October 2012 (UTC)
Seems to be missing heuristic pseudo code
[edit]The pseudo code relies on a function called "heuristic_cost_estimate".
Is this so custom that it is not possible to show an example pseudo code for it?
Maybe for a standard grid (like the grid animation)? — Preceding unsigned comment added by Vaccano (talk • contribs) 23:03, 3 December 2012 (UTC)
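(It is problem-specific, but for a grid like the one in the animations above a few standard choices can be sketched; the function names are just placeholders, and coordinates are (x, y) tuples.)
from math import hypot, sqrt

def manhattan(a, b):
    # 4-connected grid with unit step cost: admissible and consistent.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    # Straight-line distance: admissible whenever edge costs are Euclidean lengths.
    return hypot(a[0] - b[0], a[1] - b[1])

def octile(a, b):
    # 8-connected grid with diagonal cost sqrt(2): the tightest of the three there.
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return (dx + dy) + (sqrt(2) - 2) * min(dx, dy)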
The Pseudocode once again
[edit]The remark at the pseudocode says: "the above pseudocode assumes that the heuristic function is monotonic", which means that we do not need to revisit old states; we already have an optimal path to each expanded node. But the above pseudocode does not assume monotonicity, because it does in fact revisit old nodes; the algorithm should immediately check whether the node is in the closed set and discard the node if it is.
Even the article on consistent (monotonic) heuristic backs me up: "In the A* search algorithm, using a consistent heuristic means that once a node is expanded, the cost by which it was reached is the lowest possible" — Preceding unsigned comment added by 130.243.154.32 (talk) 10:22, 15 September 2013 (UTC)
Nevermind
More on Pseudo-Code
[edit]while openset is not empty
if current is goal
return reconstruct_path(came_from, goal)
The "IS"-es needs to be bold, I don't seem to know how to do it. — Preceding unsigned comment added by 75.145.167.212 (talk) 20:52, 3 March 2014 (UTC)
- Do you mean this:
while openset is not empty
    if current is goal
        return reconstruct_path(came_from, goal)
- I don't know why you want them bolded, but that's how you do it. — Frεcklεfσσt | Talk 14:15, 4 March 2014 (UTC)
Why does someone insist on using := instead of = ? := used to confuse me, and confused/confuses others whom I've asked to look at the code. They think the := is something special, instead of just equals.
Also, why are some of the if statements in plain English (as I feel they should be), "if X is not Y", while others use the mathematical equals symbol, "if X = Y"? For consistency and ease of understanding shouldn't the if statements be in plain English? Shouldn't the pseudo-code used be consistent? — Preceding unsigned comment added by 75.145.167.212 (talk) 22:26, 6 March 2014 (UTC)
- The pseudocode is consistent and I'm trying to keep it that way. The use of ":=" to denote definition or assignment is widespread in mathematics as well as programming.
- Using "is" is in fact confusing because
is
means pointer comparison in Python (and note the use of Python'snot in
elsewhere), and well as arithmetic evaluation in Prolog. - Although the pseudocode could be improved, there's no single perfect way to write it. What might be confusing to you is perfectly clear to others, while your suggested improvements actually make it harder to read for me. I didn't see a net improvement, so I reverted. QVVERTYVS (hm?) 11:58, 7 March 2014 (UTC)
- Just a note, IIRC, := is used in Pascal as the assignment operator as well. "is" compares pointers in Python? Makes me not feel so bad for never learning it. — Frεcklεfσσt | Talk 14:45, 7 March 2014 (UTC)
- Actually it compares "object identity", i.e. it circumvents whatever overloaded `==` has been defined on objects. But yes, the use in Pascal was guiding because that's what we use for syntax highlighting. (I like Pascal highlighting for pseudocode—it has a nice and explicit syntax, and so few people program in it that the pseudocode won't get turned into "real" code by editors who are novice programmers ;) QVVERTYVS (hm?) 15:38, 7 March 2014 (UTC)
- If you are confused by := then you should review your sources on common pseudo-code notations. := is an assignment operator used to avoid confusion with the equals sign ( = ). == is the notation mostly used in C-like languages, but that doesn't necessarily mean one has to use it. As far as I can remember from my computer science textbooks, := is commonly used in CS literature to denote assignment. 172.249.237.81 (talk) 02:38, 24 March 2014 (UTC)
- To be honest I think using = would be perfectly clear there, but though the := looked a little odd to me initially, I don't see how its use could be confusing, either. I guess it just depends what languages you're used to: the only language I'm especially familiar with which uses := for assignment is golang, but that's context enough to make it understandable to me. There's no single good way to write pseudocode, so if it uses := for assignment as it stands, there's no reason to change that. 86.179.8.217 (talk) 14:32, 12 April 2014 (UTC)
Implementation Details
[edit]In the implementation details, it is noted that, when a path needs to be reconstructed, it's important to make sure that if a node about to be inserted is already in the open set, the entry in the open set is updated with the lower-cost path. I think this is understated and should be included as part of the process; in fact I think the reason it's important is not for reconstructing a path, but in order to avoid a lot of redundant calculations.
Without this check, it's possible for a node to add a neighbor (call it N) with a high g(x). Then later another node adds the same neighbor (N) with a different g(x) value. Depending on which g(x) was lower, the shorter path will be processed first and add all the neighbors of N, but it's still possible for the second path to N to be popped even though N is by then already in the closed set, and to redundantly add (albeit via a more expensive path) all of the children before they are added to the closed set. This process could easily continue if the h(x) to the neighbor is far from the actual g(x) cost to the neighbor.
Checking the open set before adding neighbors would avoid continued processing of the redundant less optimal path. Johnb003 (talk) 05:10, 23 April 2014 (UTC)
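(A sketch of the check being described, assuming the open set is a dict from node to f-value so a re-insert simply overwrites the stale entry; all names here are placeholders, not the article's.)
def relax(x, y, cost, g, came_from, open_set, closed_set, h, goal):
    # Only (re)insert y if this path improves on the best g seen so far,
    # and never touch nodes that have already been expanded (assumes a
    # consistent heuristic, so closed nodes never need reopening).
    if y in closed_set:
        return
    tentative = g[x] + cost
    if y not in g or tentative < g[y]:
        g[y] = tentative
        came_from[y] = x
        open_set[y] = tentative + h(y, goal)   # overwrites any stale entry for y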
This latest tree version of pseudo code does not seem to work
[edit]I implemented the graph version of pseudo code. This is the one with the explored set. It works wonderfully. I am very interested in this new tree version that will work with non-monotonic heuristics (admissible but not consistent). I cannot get it to work. It seems that there is only one path being stored in the came_from list. For a tree type search I think you would need a separate path for each leaf of the tree to store how you got there. There could be several leaves of the same node with different paths to get there (and different g values). Damon4-com (talk) 02:51, 2 November 2014 (UTC)
External links modified
[edit]Hello fellow Wikipedians,
I have just modified one external link on A* search algorithm. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20140806200328/http://ijcai.org/Past%20Proceedings/IJCAI-83-VOL-2/PDF/048.pdf to http://ijcai.org/Past%20Proceedings/IJCAI-83-VOL-2/PDF/048.pdf
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 15:24, 23 June 2017 (UTC)
History of A* -- sources?
[edit]In my research into the history of A* I have never seen the names A1 and A2 used for preliminary versions of it. What are the sources of this information? I have the same question about Peter Hart's contribution. My understanding is that he was instrumental in the definitions of admissible and consistent heuristics and the proofs based on them but did not make any changes at all to the algorithm, contrary to what is said in the current article. Again, what are the sources of this information?
I will probably be editing this article sometime soon and of course I would like to leave this information in if it is well documented.
142.244.108.42 (talk) 17:18, 14 October 2017 (UTC)
The pseudo code needs fixing because it misses adjusting open nodes and reopening closed nodes. Two modifications regarding the pseudocode are recommended:
1) Remove the statements about "map". The purpose of the algorithm is finding (short) paths, not maps.
2) It may be helpful to mention two different goals with different pseudocodes:
-- Simple, approximate A*: Find any solution. Ignore new nodes that are already on Closed or Open.
-- Real A*: Aim at finding a shortest/shorter solution. Check whether a new node is in Open or Closed and check whether a better g-value is found. If so, adjust the parent and the g-value, and if found in Closed, reopen it (see the sketch below). Explain that in >this< version a guaranteed shortest path will be obtained provided the heuristic is admissible -- as described by Nilsson et al. Still, a better path could be obtained when the heuristic is not admissible.
I am sure that someone can write this up as an improvement. 98.248.43.46 (talk) 15:27, 23 May 2019 (UTC) DdC
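(A sketch of the "Real A*" bookkeeping described above, assuming the open set is a dict from node to f-value and the closed set is a Python set; the names are placeholders rather than anything from the article.)
def relax_with_reopening(x, y, cost, g, came_from, open_set, closed_set, h, goal):
    # If a better g-value is found for a node already seen, adjust its parent
    # and g-value; if it had already been closed, reopen it. With an admissible
    # heuristic this variant still returns a shortest path.
    tentative = g[x] + cost
    if y in g and tentative >= g[y]:
        return                                 # not an improvement
    g[y] = tentative
    came_from[y] = x
    closed_set.discard(y)                      # reopen if it had been closed
    open_set[y] = tentative + h(y, goal)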
Why the gargantuan animation?
[edit]The animation demonstrating the algorithm in a random maze is huge, about 25 MB. Surely it doesn't need to be that big. — Preceding unsigned comment added by 157.52.4.163 (talk) 17:42, 13 April 2021 (UTC)
Notion of space and distance
[edit]Currently, the article does not mention that the A* algorithm (and specifically its heuristic function) only works in spaces with clearly defined distance measures. E.g. the introduction reads as if the A* algorithm works on any graph. However, this is not the case, since the "heuristics" comes from having additional knowledge of the structure of the graph and using this knowledge to construct the heuristic function; see further explanation here https://stackoverflow.com/questions/26568552/what-heuristic-functions-can-i-use-for-a-graph-based-non-grid-a Without additional knowledge of the structure of the graph it is not possible to construct the heuristic function, and hence not possible to use the A* algorithm. At the very least I would mention the space/distance requirement when comparing with Dijkstra's algorithm. 185.162.138.206 (talk) 10:57, 8 May 2024 (UTC)
- I agree that this is not covered well by the introductory material of our article. But, there are versions of A* that apply to arbitrary weighted graphs, using preprocessing to construct a heuristic that can then be used to make subsequent applications of A* faster. See e.g.: Goldberg, Andrew V.; Harrelson, Chris (2005), "Computing the shortest path: search meets graph theory", Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2005, Vancouver, British Columbia, Canada, January 23-25, 2005, Society for Industrial and Applied Mathematics, pp. 156–165 —David Eppstein (talk) 17:33, 8 May 2024 (UTC)
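(For concreteness, a sketch of the landmark ("ALT") idea from that reference for undirected weighted graphs: precompute exact distances from a handful of landmark vertices with Dijkstra, then use the triangle inequality as an admissible heuristic. The helper names are illustrative only.)
def landmark_heuristic(dist_from_landmarks, node, goal):
    # dist_from_landmarks[L][v] is the exact shortest-path distance from
    # landmark L to vertex v, computed once during preprocessing. By the
    # triangle inequality |d(L, goal) - d(L, node)| <= d(node, goal), so the
    # maximum over landmarks is an admissible (and consistent) heuristic.
    return max(abs(d[goal] - d[node]) for d in dist_from_landmarks.values())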
Bidirectional search
[edit]Needs a reference to:
https://en.wikipedia.org/wiki/Bidirectional_search
which is the equivalent bi-directional version. It is not clear to me why the recent ref to New Bidirectional A* (NBA*)[33] was added, given the numerous references in the bi-directional version. — Preceding unsigned comment added by Ddccc (talk • contribs) 19:49, 27 July 2024 (UTC)