This level-5 vital article is rated C-class on Wikipedia's content assessment scale. It is of interest to a number of WikiProjects.
I entered the section 'how to solve an algebraic equation'. For discussion see User talk:Bo Jacoby. The discussion items should perhaps be moved to this place? Note that Wilkinson's ill-conditioned equation is readily solved by 'my' method. Bo Jacoby 07:27, 13 September 2005 (UTC)
Hello Jitse. Thank you very much for your comment on my article on root-finding algorithms. You request a reference for verifiability and some analysis, and you ask whether the method always converges and what its speed is. I agree that such theoretical material would be nice, but alas I have not got it. I do have a lot of practical experience with the method. It was taught in an engineering school in Copenhagen for more than 5 years, and the students implemented it on computers and solved thousands of examples. I have not got any nontrivial reports of nonconvergence. So much for verifiability.

Does the method always converge? The answer is no, for topological reasons. This is why. Consider an initial guess (p,q,r,s) converging towards roots (P,Q,R,S). For reasons of symmetry, the initial guess (q,p,r,s) will converge towards (Q,P,R,S). Both solutions are satisfactory, but they are not the same point in four-dimensional complex space. Consider f(t) = (1-t)(p,q,r,s) + t(q,p,r,s), 0 ≤ t ≤ 1. This line joins the two initial guesses. Note that the iteration function, g, is continuous no matter how many times we iterate, since we never iterate an infinite number of times. Let A and B be open disjoint sets such that A contains (P,Q,R,S), B contains (Q,P,R,S), g(f(0)) is in A, and g(f(1)) is in B. Since the interval is connected and g∘f is continuous, its image is a connected curve, and no connected curve can jump from A to B. So for some value of t, 0 < t < 1, g(f(t)) is outside both A and B, and the method does not converge everywhere.
I do not think that this immature argument belongs in a Wikipedia article.
However, I believe that the method converges 'almost everywhere' in the sense of Lebesgue, but I have no proof. Nevertheless, the question of convergence is not the right question to pose. As you only iterate a finite number of times, you will not necessarily get close to a solution even if the method converges eventually. So, the method is good for no good theoretical reason! The solutions are attracting fixed points of the iteration function. That's all.
End of copy
It is a good idea that the section be moved to an article of its own. It is not a good idea to cut off the method based on prejudices regarding speed and convergence. Many people have good experience with the method. Most people have no experience. Nobody has bad experience. Bo Jacoby 13:12, 9 January 2006 (UTC)
None of the other methods in the article constitutes a general method for computing all the roots of any polynomial in a hurry. Newton and Laguerre find only one root, which may not be the interesting one. Ruffini makes additional assumptions, and the Eigenvalue_algorithm#Power_iteration is slow. Try the different methods on a simple example like x³ − 2x = 0 and get some experience. Then you will agree that the method is worth your attention. What name for the method will you suggest? Bo Jacoby 07:49, 10 January 2006 (UTC)
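For concreteness, here is a minimal sketch of a simultaneous "all roots at once" iteration of the kind discussed above, assuming it is the Weierstrass/Durand–Kerner update; the function name and the starting points are illustrative, not taken from the original discussion. It updates each approximation using the newest values of the others (the question about which values to substitute, raised further down this page, corresponds to this choice; using only the previous sweep's values also works).

<syntaxhighlight lang="python">
def durand_kerner(coeffs, iterations=50):
    """Approximate all roots of a monic polynomial, coefficients given
    highest degree first, by iterating from generic complex guesses."""
    n = len(coeffs) - 1
    p = lambda x: sum(c * x**(n - i) for i, c in enumerate(coeffs))
    # Generic non-real starting points avoid the symmetric stalemates
    # described in the topological argument above.
    roots = [(0.4 + 0.9j)**k for k in range(n)]
    for _ in range(iterations):
        for i in range(n):
            denom = 1
            for j in range(n):
                if j != i:
                    denom *= roots[i] - roots[j]
            roots[i] -= p(roots[i]) / denom  # Weierstrass correction
    return roots

print(durand_kerner([1, 0, -2, 0]))  # x^3 - 2x = 0: roots 0 and +/-sqrt(2)
</syntaxhighlight>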
Thanks a lot Jitse! You will find that the method is very fast! Bo Jacoby 13:53, 10 January 2006 (UTC)
I did not expect the method to converge for all initial guesses, though that would be great; in fact, the basic result I was wondering about is whether it converges for initial guesses sufficiently close to the exact solution, which apparently it does because the solutions are attractive fixed points of the iteration map. By the way, the first case is called global convergence, and the second case local convergence; I guess I should have been more precise in my question.
I agree, the really interesting question is not about convergence (after an infinite number of iterations), but things like: how often will the method yield a decent result, when applied to polynomials which typically occur in practice, and with starting guesses that are typically used? However, the convergence question is easier to pose and to answer.
I am thinking of putting 'your' method in a separate article and leave a short summary here, just like other methods like Newton's method. What do you think about this? Of course, we then need a name for the method ... any ideas? Perhaps I'll go to the library and see whether I can find anything similar. -- Jitse Niesen ( talk) 15:02, 13 September 2005 (UTC)
Thank you for your interest. Bo Jacoby 09:53, 19 September 2005 (UTC)
Should we add the method of solving quadratic equations using the discriminant to this article? i.e., ax^2 + bx + c = 0
x1 = (-b + sqrt(b^2 - 4ac)) / (2a)
x2 = (-b - sqrt(b^2 - 4ac)) / (2a)
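Since the closed form above is a direct computation, a minimal sketch (names illustrative; cmath is used so a negative discriminant yields the complex pair):

<syntaxhighlight lang="python">
import cmath  # complex sqrt, so a negative discriminant still works

def quadratic_roots(a, b, c):
    """Both roots of a*x**2 + b*x + c = 0 via the discriminant."""
    d = cmath.sqrt(b**2 - 4*a*c)
    return (-b + d) / (2*a), (-b - d) / (2*a)

print(quadratic_roots(1, -3, 2))  # x^2 - 3x + 2 = 0: roots 2 and 1
</syntaxhighlight>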
The quadratic formula isn't an algorithm (it doesn't iterate or contain logical conditionals), so it shouldn't be included as a "root-finding algorithm". Thetwentyone ( talk) 18:43, 24 October 2011 (UTC)
There has been some editing on the condition that there exist a and b such that f(a)<0 and f(b)>0. It has been changed to say that f(a) and f(b) have opposite signs. Nobody said that a<b, so the two formulations are logically equivalent. Bo Jacoby 13:19, 31 October 2005 (UTC)
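To make the sign condition concrete, here is a minimal bisection sketch (an illustration, not the article's text); it assumes only that f is continuous on the interval and that f(a) and f(b) have opposite signs, and indeed nowhere requires a < b:

<syntaxhighlight lang="python">
def bisect(f, a, b, tol=1e-12):
    """Bisection under the opposite-signs condition discussed above."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while abs(b - a) > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m            # sign change between a and m
        else:
            a, fa = m, f(m)  # sign change between m and b
    return (a + b) / 2

print(bisect(lambda x: x**3 - 2*x, 1, 2))  # converges to sqrt(2)
</syntaxhighlight>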
Information on the assumptions made for each method to work needs to improve. The Newton–Raphson method is made to look like it works for all f, which is hardly the case. (The simplest counterexample would be a function whose curve increases as x approaches the root.) //Nobody.
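The commenter's exact counterexample is unclear, but a standard illustration of Newton–Raphson failing globally is the cube-root function, whose Newton step x − f(x)/f′(x) equals −2x and so moves away from the root for every nonzero start:

<syntaxhighlight lang="python">
def newton(f, df, x, steps=8):
    """Plain Newton-Raphson iteration."""
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

# f(x) = cbrt(x) has its only root at 0, but the Newton update is -2x,
# so the iterates double in magnitude instead of converging.
f  = lambda x: abs(x)**(1/3) * (1 if x >= 0 else -1)
df = lambda x: (1/3) * abs(x)**(-2/3)
print(newton(f, df, 0.1))  # about 25.6 after 8 steps: divergence
</syntaxhighlight>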
The explanation, while easy to understand, leaves me wondering about one part that may need to be clarified: when performing the substitutions, which values of q,r,s,... should be used? For example, after completing the first substitution and moving on to the second, what p value should be used: the value that just finished being calculated, or the value from before any substitutions began?
Darthmarth37 03:53, 24 January 2006 (UTC)
Ruffini's method is not at all a root-finding algorithm as described in the introduction. I will remove the reference from the article. Any objections? Bo Jacoby 08:58, 24 January 2006 (UTC)
I don't think it's quite correct to say that "the numerical evaluation of polynomials is subject to round-off error, and so finding roots can be ill-conditioned". Any computation is subject to round-off error, but that does not necessarily imply ill-conditioning. Perhaps what was meant is "the numerical evaluation of polynomials can be ill-conditioned, and so finding roots can be ill-conditioned as well". However, I'm not convinced that this is true. Such a statement would at least need a reference. -- Jitse Niesen ( talk) 14:33, 24 January 2006 (UTC)
"The evaluation of any polynomial close to a root involves the subtraction of almost equal terms" — not necessarily true; consider for instance f(x) = x. In any case, my problem is that the relation between the conditioning of "evaluating a polynomial" and "finding its root" is as straightforward as the word so in "the numerical evaluation of polynomials is subject to round-off error, and so finding roots can be ill-conditioned" implies. -- Jitse Niesen ( talk) 15:52, 24 January 2006 (UTC)
I'm not at all convinced that "[t]he numerical evaluation of the polynomial is the Achilles' heel of root-finding". There may well be some truth in it, but I need to see some arguments. The root of f(x) = x−1.00009 can be determined with full precision. That (a+b)+c ≠ a+(b+c) in floating-point arithmetic is well known, but I don't see where exactly it is relevant here.
Be that as it may, I removed "the numerical evaluation of polynomials is subject to round-off error, and so finding roots can be ill-conditioned" because the word so implies a causal relation, and I cannot see this relation. What precisely do you mean with "finding roots can be ill-conditioned"? -- Jitse Niesen ( talk) 14:00, 25 January 2006 (UTC)
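For what it's worth, the "subtraction of almost equal terms" effect is easy to demonstrate on one deliberately bad case (this illustrates the cancellation claim for that case only, not the disputed general causal link):

<syntaxhighlight lang="python">
# Evaluating the expanded form of (x - 1)**7 near x = 1: the large
# monomial terms nearly cancel, so double precision returns rounding
# noise, while the factored form is computed essentially exactly.
coeffs = [1, -7, 21, -35, 35, -21, 7, -1]  # (x - 1)**7 expanded

def horner(coeffs, x):
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

x = 1.0001
print(horner(coeffs, x))  # noise around 1e-15, wrong by many orders
print((x - 1)**7)         # about 1e-28, the true value
</syntaxhighlight>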
Why don't we remove that section on round-off error and ill-conditioning altogether? It makes the reader confused and unhappy, and the exact meaning is unclear. Bo Jacoby 12:44, 26 January 2006 (UTC)
I have made a few minor cosmetic amendments, but in case anybody sees my name as an editor on this article I'd like to go on record as saying it needs serious help. I am not leaving it as it should be, but only as it is. Sorry; maybe another day. -- KSmrq T 14:25, 18 May 2006 (UTC)
I converted the "Finding multiple roots" article into a redirect to this article. I merged the deleted text into this article with minor changes like changing sections into subsections. You can still reach the talk page of the old article via Talk:Finding multiple roots. To view the history of the old article, switch to this article itself and select "What links here", then click on "Finding multiple roots (redirect page)", then click the "history" tab. JRSpriggs 07:01, 20 May 2006 (UTC)
I now add direct links to the histories of "Finding multiple roots" and its talk page:
http://en.wikipedia.org/?title=Finding_multiple_roots&action=history
http://en.wikipedia.org/?title=Talk:Finding_multiple_roots&action=history
And I converted the talk page there into a redirect to this talk page. The contents follow in the next subsection. JRSpriggs 08:56, 23 May 2006 (UTC)
Q1. Why did you decide to start a new article, instead of adding an extra section to Root-finding algorithm? I prefer to have a few long articles with a couple of people working on them and correcting each other, rather than spreading out our work over many small articles (but this is partially a matter of taste).
A1. No particular reason. I just thought of this as just another method of finding roots like the other methods which have their own articles. If you want to merge it into the main article in this category, please feel free to do so.
Q2. Ideally, every article should be supported by a couple of references. Please add one or two. I'm especially interested in references, because it is not clear to me how to compute the gcd in the presence of rounding error. If the example you give in Finding multiple roots is computed with floating-point arithmetic, then the remainder might not be zero.
A2. I am sorry, but this article was based on my memories of a math class I took many decades ago. I do not know of any reference book for it. As for round-off errors, I agree that they could be a problem. But I think that it is a problem in determining convergence generally -- you have to make a judgement whether a value is close enough to zero that we should assume that it really is zero (or would have been zero, if computations were done with infinite precision). It may mean that the usefulness of this method is restricted -- not only to functions which are presented as polynomials, but to those whose coefficients are given exactly, i.e. symbolically, like (3+sqrt(7))/4.
JRSpriggs 12:25, 30 April 2006 (UTC)
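On the gcd point: with exact (symbolic) coefficients, as A2 suggests, the computation goes through without the rounding problem. A minimal sketch using SymPy (the library choice is mine, not from the original discussion):

<syntaxhighlight lang="python">
from sympy import symbols, gcd, div, expand

x = symbols('x')
p = expand((x - 1)**2 * (x + 2))  # double root at 1, simple root at -2
g = gcd(p, p.diff(x))             # exact gcd with p': x - 1, the repeated part
squarefree, _ = div(p, g, x)      # (x - 1)*(x + 2): all roots now simple
print(g, squarefree.factor())
</syntaxhighlight>

With floating-point coefficients, the remainders in the Euclidean steps come out merely small rather than zero, which is exactly the judgement call described in A2.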
"Finding multiple roots" is not a root-finding algorithm and so it does not belong here. Exact algebra on integers should be finished before the problem is forwarded to approximate root-finding using floating point numbers. Bo Jacoby 10:13, 28 July 2006 (UTC)
Hi, while reading the relevant papers of Schönhage and Pan in the last few days I tried to assemble things I understood or knew already into a short description of the method. An accessible formulation of the main result is still missing. If I find a short way to formulate it, I will add it later.-- LutzL 17:20, 27 July 2006 (UTC)
Perhaps someone might mention Halley's method - a considerable improvement on Newton's. By comparison, the overhead is the calculation of the second derivative of the function at the initial approximation point ... and higher-order methods in a similar vein are possible too! If no one else does it, I may add it. (Note: Halley, of comet fame.) Hair Commodore 20:09, 19 January 2007 (UTC)
OK. I copied over the French version and translated it (more or less). There were some errors which I had to fix. Also it needs more detail, especially an explanation of why its convergence is cubic. It is at Halley's method. JRSpriggs 13:10, 22 January 2007 (UTC)
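For reference, Halley's iteration replaces the Newton step with x − 2ff′/(2f′² − ff″), buying cubic convergence near a simple root at the cost of one second-derivative evaluation per step. A minimal sketch (illustrative names):

<syntaxhighlight lang="python">
def halley(f, df, d2f, x, steps=6):
    """Halley's method: cubic convergence near a simple root."""
    for _ in range(steps):
        fx, dfx = f(x), df(x)
        x = x - 2 * fx * dfx / (2 * dfx**2 - fx * d2f(x))
    return x

# Example: cube root of 2 via f(x) = x**3 - 2.
print(halley(lambda x: x**3 - 2,
             lambda x: 3 * x**2,
             lambda x: 6 * x,
             x=1.0))
</syntaxhighlight>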
Conversely, any equation can take the canonical form f(x) = 0, so equation solving is the same thing as computing (or finding) a root of a function.
(from the article, 3rd paragraph)
This is only true of equations which have the form f(x) = g(x). Implicit equations don't always have a canonical f(x) = 0 form. —Preceding unsigned comment added by Mgsloan ( talk • contribs) 21:51, 9 November 2007 (UTC)
Let's say we made this into an equation by setting it equal to 0.3. This would be equivalent to tracing out a particular slice of this image. There's no way to write this as f(x) = 0 without leaving out huge swaths of the range. Perhaps the article should specify that they are equations of one variable? Mgsloan 20:59, 12 November 2007 (UTC)
I don't have any references at hand, but in my experience solver (computer science) normally means root-finder. I would re-direct solver to this article, because solver (computer science) is basically listing an arbitrary selection of numerical problems. 205.228.104.143 ( talk) 04:09, 17 September 2009 (UTC)
OK, so basically you are saying that any algorithm is a solver? Could be, but either way we need references. 221.47.185.5 ( talk) 13:19, 19 September 2009 (UTC) Incidentally, I never heard the phrase "logic solver", do you mean "automatic theorem prover"? 221.47.185.5 ( talk) 13:26, 19 September 2009 (UTC)
Thanks, I have a few questions.
This discussion revolves around terminology so we really need some references, otherwise it's a bit pointless. 221.47.185.5 ( talk) 04:11, 20 September 2009 (UTC)
Thanks. I agree it's a hard thing to reference. I'm going to give it a try myself in 2 days' time.
By the way, I noticed that Iterative method seems to cover only root-finding (ironically without referencing this article until I added the link). 221.47.185.5 ( talk) 10:24, 21 September 2009 (UTC)
OK, I can't find any reference to back what I thought was understood to be a solver in numerical analysis. I removed the merge proposals, but I think this discussion was fruitful, in that it identified some links and some (open) questions - see above. Thanks. 122.26.120.146 ( talk) 01:49, 23 September 2009 (UTC)
For clarity, the Bernoulli referred to is Daniel. His method is described in the book Mathematics and Computers by Stibitz and Larrivee (1957), pp. 77–79. -- Billymac00 ( talk) 03:34, 13 April 2010 (UTC)
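As usually presented, Bernoulli's method runs the linear recurrence attached to the polynomial and reads the dominant root off the ratio of successive terms. A minimal sketch, assuming the largest-modulus root is unique (function name and starting values are illustrative):

<syntaxhighlight lang="python">
def bernoulli_dominant_root(a, steps=60):
    """Dominant root of x**n + a[0]*x**(n-1) + ... + a[n-1], assuming a
    unique root of largest modulus. Iterates the recurrence
    y_k = -(a[0]*y_{k-1} + ... + a[n-1]*y_{k-n}) and returns y_k / y_{k-1}."""
    n = len(a)
    y = [0.0] * (n - 1) + [1.0]  # arbitrary start; degenerate cases exist
    for _ in range(steps):
        y.append(-sum(ai * yi for ai, yi in zip(a, reversed(y[-n:]))))
    return y[-1] / y[-2]

# x**2 - 3*x + 2 = 0, i.e. a = [-3, 2]; the dominant root is 2.
print(bernoulli_dominant_root([-3, 2]))
</syntaxhighlight>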
Why isn't fixed point iteration mentioned in this article ? — Preceding unsigned comment added by 168.96.204.43 ( talk) 17:45, 29 June 2011 (UTC)
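For readers landing here: fixed-point iteration rewrites f(x) = 0 as x = g(x) and iterates, converging whenever |g′| < 1 near the fixed point. A minimal sketch with the classic example x = cos(x):

<syntaxhighlight lang="python">
import math

def fixed_point(g, x, steps=50):
    """Iterate x <- g(x); converges when |g'| < 1 near the fixed point."""
    for _ in range(steps):
        x = g(x)
    return x

# Root of x - cos(x) = 0, rewritten as the fixed-point problem x = cos(x).
print(fixed_point(math.cos, 1.0))  # ~0.7390851 (the Dottie number)
</syntaxhighlight>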
I'm all for giving Messrs. Budan, Fourier and Vincent, and also Mr. Akritas's work at reconstructing this result, all their due respect. However, the paragraph discussing this history is much too long and detailed for an overview that gives the other methods only one or two sentences. I'm for shortening it to mention that the recent Collins-Akritas method conforms to the original idea of Mr. Vincent in using continued fractions, while the Zimmerman method uses the idea of Möbius transforms for a bisection-exclusion scheme. By the way, what about the bisection-exclusion scheme of Yakoubsohn/Dedieu? -- LutzL ( talk) 19:14, 19 December 2013 (UTC)
Please notice this edit. In particular:
All of this is codified in WP:MOS and WP:MOSMATH. Michael Hardy ( talk) 20:27, 3 January 2014 (UTC)
Isn't there an algorithm that starts with a similar equation with a known solution and approximates the equation, finding a new solution each step, using the previous one as a guess? I read about that but couldn't find it again.— Preceding unsigned comment added by 187.35.227.178 ( talk) 00:32, 21 November 2014
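What is described sounds like numerical continuation (a homotopy method): deform an easy equation with a known root into the target one, re-solving at each step from the previous root. That identification is a guess; a minimal sketch of the idea (all names illustrative):

<syntaxhighlight lang="python">
def continuation(f, df, g, dg, x0, steps=20, newton_iters=5):
    """Track a root of h(x, t) = (1 - t)*g(x) + t*f(x) from t = 0
    (easy problem g, known root x0) to t = 1 (target problem f),
    polishing with a few Newton steps at each t."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        h  = lambda x: (1 - t) * g(x) + t * f(x)
        dh = lambda x: (1 - t) * dg(x) + t * df(x)
        for _ in range(newton_iters):
            x = x - h(x) / dh(x)  # previous root is the new guess
    return x

# Deform g(x) = x - 1 (root 1) into f(x) = x**3 - 2 (root 2**(1/3)).
print(continuation(lambda x: x**3 - 2, lambda x: 3 * x**2,
                   lambda x: x - 1,    lambda x: 1.0, 1.0))
</syntaxhighlight>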
(...) a limit (the so-called "fixed point") which is a root
This phrase is misleading as the roots of e.g. are not fixed points of the same function (i.e. they are not zero). Helder 23:06, 9 July 2015 (UTC)
I do not understand why the McDougall-Wotherspoon modification of Newton's method is not worth mentioning under this section, and moreover why any of the other "thousands" of variants on Newton's method should also not be mentioned - especially if, as is the case with the McDougall variant, they are published in a scholarly peer-reviewed journal. When looking for a root-finding algorithm to use in practice, I for one would appreciate knowing about a method which has a better convergence rate, the same calculation cost, is more likely to converge, and remains as relatively simple to implement as Newton's method.
Text cut out of the section:
Another method, by McDougall and Wotherspoon, is a simple modification of Newton's method that achieves convergence of order 1 + √2 while still using only one function and one derivative evaluation per step. [1]
References
199.34.4.20 ( talk) 20:49, 19 November 2015 (UTC)
Would it be worth adding the golden section search below the bisection search? — Preceding unsigned comment added by Jeremyminton ( talk • contribs) 12:22, 18 December 2015 (UTC)
There are a lot of algorithms. Would it make sense to make a table of them clarifying their differences, such as how many roots they find at once, whether they are guaranteed to converge, convergence speed, etc.? — Preceding unsigned comment added by 71.167.63.118 ( talk) 02:02, 21 March 2016 (UTC)
The result of the move request was: no consensus. Both sides have said their pieces but it doesn't look like there is general agreement either way. ( non-admin closure) Eventhorizon51 ( talk) 02:14, 10 July 2016 (UTC)
Root-finding algorithm → Root-finding method – (over redirect) Root-finding is usually a numerical method rather than an algorithm. Most named root-finders have "method" rather than "algorithm" in their name. For example, the usual names are Newton's method or Brent's method rather than Newton's algorithm or Brent's algorithm. The typical result of these methods is an approximation rather than an exact result; the methods are terminated when the results are close enough; some root-finders may fail on particular inputs and therefore do not compute a result. This requested move is to acknowledge a subtle distinction between "algorithm" and "method". Compare to graph theory algorithms such as Dijkstra's algorithm, which computes an exact result after a specified time. Glrx ( talk) 15:17, 22 June 2016 (UTC) Relisted. Jenks24 ( talk) 04:26, 1 July 2016 (UTC)
Does the definition of "algorithm" depend on what the intended purpose is? My answer is "no" if talking about the definition of what an algorithm is, but "yes" if talking of a specific algorithm, as the input/output specification is exactly the intended purpose of the algorithm. D.Lazard ( talk) 08:46, 2 July 2016 (UTC)
I have rewritten a large part of the section now called "Roots of polynomials" ( [4]) and particularly the section on real roots ( [5]). The previous version for real roots suffered from WP:COI and a non-neutral point of view, and this mess was extended to Vincent's theorem and Budan's theorem. So, I have rewritten Budan's theorem, created a new article, Real-root isolation, and redirected Vincent's theorem to this new article.
The section here is now a summary of the new article with template {{ main article}}. Real-root isolation is certainly full of typos and grammar errors. So, please fix those that you encounter.
Also, a reviewer has tagged Real-root isolation as too technical. I cannot see how to make this subject less technical, in particular because all sources that I know are much more technical. So, please remove the tag if you agree that this technicality is inherent to the subject. It is clear that I cannot do it myself.
I hope that you will agree that these edits improve the coverage of the subject. D.Lazard ( talk) 19:12, 30 December 2018 (UTC)