This article is rated Start-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
See Talk:Root-finding algorithm for some discussion relating to this method. -- Jitse Niesen ( talk) 12:58, 10 January 2006 (UTC)
This seems to be the same method: The method of Weierstrass (a.k.a. the Durand-Kerner method). Fredrik Johansson - talk - contribs 18:21, 25 January 2006 (UTC)
Your good work, Fredrik, and the authority of the name Weierstrass have finally saved this article from the risk of being deleted by Oleg. Thank you! Bo Jacoby 13:11, 26 January 2006 (UTC)
Thank you Jitse. The new reference talks about how well the method finds multiple roots. Well, I don't think that multiple roots should be found numerically (and approximately), because they can be isolated from the polynomial algebraically (and exactly). The point is this. If f has a multiple root, p, then p is also a root of the derivative f', which can be computed algebraically from f. Use the Euclidean algorithm to compute f2 := the greatest common divisor of f and f'. The roots of f2 are exactly the multiple roots of f. Now any triple root of f will be a double root of f2, so f3 is calculated as the greatest common divisor of f2 and f2', and so on until the degree is 1. Let fn have only simple roots. Substitute fm-k := fm-k / fm^k, for k = 1..m-1, for m = n..2. All these polynomial divisions give zero remainder. Now all the fi have only simple roots, and f = f1×f2^2×...×fn^n. The polynomials fi are now subject to numerical root-finding. Bo Jacoby 15:15, 31 January 2006 (UTC)
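A minimal sketch of the square-free splitting described above, assuming Python with sympy; the example polynomial is arbitrary. It shows the chain of GCDs with the derivative, and, for comparison, sympy's built-in square-free decomposition, which returns the simple-root factors fi directly.

import sympy as sp

x = sp.symbols('x')
f = sp.Poly((x - 1)**3 * (x + 2)**2 * (x - 5), x)   # arbitrary example with multiple roots

# Repeated GCDs with the derivative: gcd(f, f') carries exactly the multiple
# roots of f, the next GCD the triple-or-higher roots, and so on.
chain = [f]
while True:
    g = chain[-1].gcd(chain[-1].diff())
    if g.degree() == 0:
        break
    chain.append(g)
print([p.as_expr() for p in chain])

# sympy's square-free decomposition lists the simple-root factors fi with
# their multiplicities, so that f = f1 * f2**2 * ... * fn**n.
print(sp.sqf_list(f.as_expr()))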
The recent references have expensive initializations.
So stick to the simple suggestion of the article. But the references support the method. Bo Jacoby 13:17, 30 January 2006 (UTC)
I've now been informed that this is a quasi-Newton method, the product in the denominator being an approximation of the derivative. I've also been shown that vanilla Newton can be used for simultaneous root-finding in an identical manner, and received the question of in which way Durand-Kerner is better in this setting. Fredrik Johansson - talk - contribs 16:50, 1 February 2006 (UTC)
The article is not clear whether the new roots should be updated sequentially or in parallel. I've tried both and haven't found any significant advantage either way, but I'm wondering whether someone can give reasonable justification for either. -- njh 00:12, 14 June 2006 (UTC)
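For concreteness, here is a small Python sketch of the two update orders in question: the parallel sweep computes every correction from the iterate at the start of the sweep, while the sequential sweep uses each improved root as soon as it is available. The function and the example polynomial are illustrative only.

import numpy as np

def dk_sweep(c, z, sequential=True):
    """One Durand-Kerner sweep for the monic polynomial
    x^n + c[0]*x^(n-1) + ... + c[n-1], with current root estimates z."""
    z = z.copy()
    snapshot = z.copy()                  # the iterate at the start of the sweep
    n = len(z)
    for i in range(n):
        others = z if sequential else snapshot
        numerator = np.polyval(np.r_[1.0, c], z[i])             # p(z_i)
        denominator = np.prod([z[i] - others[j] for j in range(n) if j != i])
        z[i] = z[i] - numerator / denominator                   # Weierstrass correction
    return z

# Example: x^3 - 3x^2 + 3x - 5, started from powers of 0.4 + 0.9i.
c = np.array([-3.0, 3.0, -5.0])
z = (0.4 + 0.9j) ** np.arange(3)
for _ in range(30):
    z = dk_sweep(c, z, sequential=True)  # sequential=False gives the parallel variant
print(np.sort_complex(z))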
I was going to ask as well, though it is perhaps off topic: is there any theoretical knowledge about the computational cost of complex polynomial factorization? For example, we can compute the roots of a linear, quadratic, cubic or quartic polynomial in a fixed sequence of root operations and basic arithmetic, so if we can perform all of these in roughly linear time we can find the roots of such polynomials in linear time. Also note that the 'big O' factor of these increases in a non-linear way with increasing degree (the quartic is only a little more work than the cubic). So is there any known bound on the complexity of single-variable polynomial root finding (or alternatively, eigenvalue computation)? There were a few results on undecidability of simultaneous polynomials, and on Diophantine quadratics, but I don't know of any results for complex-domain, single-variable polynomials. I do know that all existing algorithms give no guarantees at all for convergence on higher-degree polynomials.
My personal experiences with global convergence have been on relatively low-order polynomials (up to around degree 10) used in graphics applications. My program generates thousands of polynomials whilst interacting, and occasionally one of these stops and thinks for a bit. So I'm looking for algorithms that have predictable worst-case performance, perhaps being able to estimate the number of steps for a required root tolerance. -- njh 08:46, 16 June 2006 (UTC)
I realised you meant complex numbers after I had posted and gone to bed :). My reading of your follow-up is that you are mainly concerned that the csqrt function has discontinuities? Any program that relies on csqrt giving predictable results is thus in for a surprise.
OK, so continuity of the solution function is impossible. A program can handle discontinuity quite happily. So we accept that and simply require finding all the roots in some order (say lexicographical order). Does that problem then go away? Accuracy is certainly a problem still, but we could give error estimates at the same time as evaluating them. I guess I'm now looking for a practical solution - I'd be happy with an algorithm that could find me the most likely solution with a pessimistic estimate of the condition number, error radius, or similar. -- njh 00:08, 1 September 2006 (UTC)
I do not understand the new subsections relating the Durand-Kerner method to other methods. Is the D-K method a special case of Newton's method, or an approximation to N's method? LutzL's work is hard to understand. Please explain. Bo Jacoby 18:42, 31 August 2006 (UTC)
I fail to understand. The problem solved by the Durand-Kerner method is finding n solutions to one polynomial equation having one variable and degree n, while the problem solved by the n-dimensional Newton's method is finding one solution to n polynomial equations having n variables and any degrees. Bo Jacoby 11:16, 4 September 2006 (UTC)
Thanks. Now I read the new text and I understand. I was not aware of this interpretation of the D-K-method, and it is interesting.
The subscript is used in two ways in the formula
Perhaps it is nicer to write
restricting subscripts to mean indexing by integers.
The alphas and c's in the following formula are neither defined nor used.
The message seems simply to be this.
Bo Jacoby 19:32, 7 September 2006 (UTC)
There is an unnecessary change of notation beginning with this subsection. The general and exact notation reads like a computer program, which is difficult to follow. The dot notation, ···, for et cetera leaves the reader to guess. A readable special case giving a sufficient clue to the general case is preferable. So the formula should be
for all x, implying
These equations in the unknowns P,Q,R,S are to be solved by Newton's method.
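As a rough illustration of this interpretation (the target quartic and the finite-difference Jacobian below are my own illustrative choices): match the coefficients of (x-P)(x-Q)(x-R)(x-S) against a given quartic and apply the multidimensional Newton method to the residual. For degree 2 one can check by hand that the resulting Newton step reproduces the Weierstrass correction p(P)/(P-Q).

import numpy as np

# Target: x^4 - 10x^3 + 35x^2 - 50x + 24, whose roots are 1, 2, 3, 4.
target = np.array([1.0, -10.0, 35.0, -50.0, 24.0])

def residual(roots):
    # Coefficients of (x-P)(x-Q)(x-R)(x-S) minus the target (leading 1 dropped).
    return (np.poly(roots) - target)[1:]

def newton_step(z, h=1e-8):
    r = residual(z)
    J = np.empty((4, 4), dtype=complex)
    for j in range(4):
        dz = np.zeros(4, dtype=complex)
        dz[j] = h
        J[:, j] = (residual(z + dz) - r) / h     # finite-difference Jacobian column
    return z - np.linalg.solve(J, r)

z = (0.4 + 0.9j) ** np.arange(1, 5)              # non-real, mutually distinct start
for _ in range(80):
    z = newton_step(z)
print(np.sort_complex(z))                        # approximately 1, 2, 3, 4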
The results regarding speed of convergence depend on assumptions (that the initial Weierstrass updates are small enough). As the assumptions are hard to verify in practice, the results are not really useful. Bo Jacoby 12:08, 11 September 2006 (UTC)
A small point. As a casual consumer of math wiki pages, I think it would be nice if you could mention that the algorithm as given doesn't work if the polynomial has duplicate roots. It doesn't seem to be mentioned anywhere on the page. Thanks for the exceptional quality of these and similar pages. Robin Davies -- 74.104.47.37 04:19, 2 December 2006 (UTC)
After helpful discussions with Bo Jacoby, I have implemented this algorithm as a generic library package in Ada. My results are here. I'd be grateful for any comments. Thanks! John Matthews JMatthews 18:54, 7 November 2007 (UTC)
Lutz: Thanks for this result. My initial tests were on PowerPC, but my results on Athlon are similar to yours. I'll update my page. John Matthews JMatthews 05:42, 12 November 2007 (UTC)
I've had positive feedback on my Ada implementation, which has moved here. I'm working on a Java implementation. I'm sad that the existing external link is still unfinished; I'll add my Ada implementation and seek feedback on my Java version here. JMatthews ( talk) 16:12, 30 October 2008 (UTC)
I misspoke: the applet in external links works fine. I have posted a Java implementation here. JMatthews ( talk) 17:37, 5 November 2008 (UTC)
Hi LutzL. Your addition reads: "One could also choose the initial values for p,q,r,s randomly in a way that they are inside some circle containing also the roots of f(X), e.g. the circle around the origin with radius , (where 1,a,b,c,d are the coefficients of f(X)) but are not too close to each other, which becomes increasingly a concern as the degree of the polynomial increases". What is the point? Computing random numbers takes more time and space than computing the powers of (0.4+0.9*i). Do random numbers have any advantage in general? Random numbers outside the circle can also be used. The high powers of (0.4+0.9*i) are close to 0, because |0.4+0.9*i|<1, but not equal to 0, nor equal to one another, so they are perfectly suitable as initial guesses. Choosing powers of some complex constant k with |k|>1 risks floating-point overflow. If you know anything about the roots a priori you can probably make better initial guesses, but the reason why you are using a root-finding algorithm is of course that you do not know the roots. So let's not confuse the readers. Bo Jacoby 00:35, 11 November 2007 (UTC).
Hi LutzL. Starting points must be chosen in order to have an algorithm. They are alien to the iteration, but not to the algorithm. The computation of starting points should be fast, and when you suggest a slower and more complicated computation, such as equally spaced numbers on concentric circles (the computation of which in itself requires the solution of the algebraic equation x^n = a^n), or even an ill-defined computation such as pseudo-random numbers, you should prove a benefit. So far there is no obvious benefit, so there is simply no reason to explain that one could also choose something else. Bo Jacoby ( talk) 11:59, 6 November 2008 (UTC).
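For comparison, a tiny Python sketch of the two kinds of starting values under discussion: powers of 0.4+0.9i, and equally spaced points on a circle whose radius is the Cauchy bound 1 + max|coefficient| (one cheap a priori bound containing all roots). The circle construction here is an illustrative guess at what such an initialization might look like, not a quotation of the article.

import numpy as np

def start_powers(n, k=0.4 + 0.9j):
    # Powers of a fixed non-real constant with |k| < 1: cheap, non-real,
    # and mutually distinct.
    return k ** np.arange(n)

def start_circle(coeffs):
    # coeffs = [1, a, b, c, ...] of a monic polynomial.  The Cauchy bound
    # 1 + max|coefficient| contains every root; a fixed angular offset of
    # 0.5 radians keeps all points off the real axis.
    n = len(coeffs) - 1
    radius = 1.0 + np.max(np.abs(coeffs[1:]))
    angles = 2.0 * np.pi * np.arange(n) / n + 0.5
    return radius * np.exp(1j * angles)

print(start_powers(3))
print(start_circle(np.array([1.0, -3.0, 3.0, -5.0])))   # the cubic used elsewhere on this page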
It is interesting for me to read all this discussion because I have used the Weierstrass algorithm since the late 1960's and only recently learned via Wikipedia about it being called Durand-Kerner. I and some coworkers at Stanford Research Institute (later named SRI International) published our investigations regarding apparent global convergence in the Problems and Solutions section of SIAM Review Vol.18, No.3, July 1976, for all but a set of starting vectors of measure zero, and with no other restrictions such as no multiple roots or only real coefficients, etc, as we found in late 1960's articles in the Journal of the CACM where we first found the algorithm. One of our coworkers pointed out to us the algorithm appearing as part two of a constructive proof of the Fundamental Theorem of Algebra published in the collected works of Weierstrass in 1903.

Over the years since the late 1960's none of us ever found an example of the algorithm failing to converge. Each of us contributing to the SIAM Review "problem" came up with a different proof at the time that the algorithm with parallel iterations converges in the quadratic case except when the trial roots lie equally spaced away from the right bisector of the line segment joining the true roots in the complex plane. None of us ever managed to prove or disprove global convergence for degree greater than two since that time, except that one of us, Mike Green, told me a few years ago he thought he had later proved convergence for degree 3 but now can't find his paperwork on that.

I personally have run vast trials on my PC over the past ten years, using a program I wrote in Win32Forth with randomly changing starting vectors and polynomial coefficients, and I have never come across a case where the algorithm does not eventually converge. It even works for the notorious example p(z) = z^4 - z^2 - 11/36 when starting trial roots are near the so-called superattracting 2-cycle +-1/sqrt(6) for Np(z) = z - p(z)/p'(z), as described in Exercise 9, Sec.1, Ch.2, Using Algebraic Geometry by David Cox, John Little and Donal O'Shea, Springer Graduate Texts in Mathematics 185, 1998. With my Forth program I plot the trial solutions, and it's fun to watch them flail around all over the complex plane sometimes, wildly jumping far out and then always getting their act together somehow as they eventually come flying in from far away towards the true roots. Andrew Korsak, Ph.D. (U.C. Berkeley '66), now retired since Jan'02. Kr6dd ( talk) 06:21, 5 June 2009 (UTC)
The article states that initial values must be " neither a real number nor a root of unity" but (0.4 + 0.9i)^0 is exactly 1 + 0i, isn't it? Should powers of 0.4 + 0.9i start from 1 rather than 0? 178.95.241.37 ( talk) 23:21, 9 December 2010 (UTC)
Hopefully, this is the right place for this question. I have doubts about the example. I programmed it in C++ and FORTRAN and get the same roots, but in a different order. On the German discussion page, LutzL told me that he could not find any error in my code and that I should ask Bo Jacoby. Below are my table and the links to my code.
Greetings -- 129.69.55.84 ( talk) 07:16, 11 June 2012 (UTC)
Iteration | p | q | r |
---|---|---|---|
0 | 1.0000000000000000+0.0000000000000000*i | 0.40000000000000002+0.90000000000000002*i | -0.65000000000000002+0.71999999999999997*i |
1 | 1.3607734793516519+2.0222302921553132*i | -1.3982133295376751-0.69356635962504332*i | 3.0374398501860242-1.3286639325302698*i |
2 | 0.98096328371966823+1.3474626910848719*i | -0.33525193260127306-0.64406860772816377*i | 2.3542886488816048-0.70339408335670806*i |
3 | 0.31718054925650652+0.93649454851955727*i | 0.49001572078718691-0.96614107903072544*i | 2.1928037299563061+2.96465305111680566E-002*i |
4 | 0.20901563897345732+1.5727420147652911*i | 4.12060386626913466E-002-1.5275192097633474*i | 2.7497783223638517-4.52228050019438110E-002*i |
5 | 0.21297050700971853+1.3948274731404162*i | 0.18467846583682393-1.3845653821841168*i | 2.6023510271534578-1.02620909562992635E-002*i |
6 | 0.20653075193800677+1.3748787427714850*i | 0.20600107336130224-1.3746529207714702*i | 2.5874681747006911-2.25822000014998941E-004*i |
FORTRAN90-Code -- 129.69.55.84 ( talk) 07:16, 11 June 2012 (UTC)
Thanks for asking! The following FORTRAN code is, hopefully correctly, typed in from my article in BYGNINGSSTATISKE MEDDELELSER, Vol. 63, No. 3-4, 1992, pp. 92-93. It is nice that you are interested in this 20-year-old program!
 PROGRAM TEST
 C SOLVE THE EQUATION ((X-3)X+3)X-5=0
 COMPLEX A(0:100),S(100)
 N=3
 A(3)=1
 A(2)=-3
 A(1)=3
 A(0)=-5
 CALL ROOT(N,A,S)
 END
 SUBROUTINE ROOT(N,A,S)
 COMPLEX A(0:*),S(*)
 S(1)=1
 DO 1 I=2,N
 1 S(I)=(.4,.9)*S(I-1)
 PRINT*,'TESTOUTPUT:'
 DO 2 I=1,100
 PRINT'(SP,T02,2F8.4,''i'')',S(1)
 PRINT'(SP,T22,2F8.4,''i'')',S(2)
 PRINT'(SP,T42,2F8.4,''i'')',S(3)
 2 IF (ROOT1(N,A,S).LE.1E-6) GOTO 3
 3 END
 REAL FUNCTION ROOT1(N,A,S)
 COMPLEX A(0:*),S(*),X,Y,Z
 REAL EPS
 EPS=0.
 DO 2 I=1,N
 X=S(I)
 Y=A(N)
 Z=A(N)
 DO 1 J=1,N
 Y=Y*X+A(N-J)
 1 IF(J.NE.I) Z=Z*(X-S(J))
 Z=Y/Z
 S(I)=X-Z
 2 EPS=MAX(EPS,ABS(REAL(Z)),ABS(AIMAG(Z)))
 ROOT1=EPS
 END
The labels in column 1 have been moved to column 2 in order to have the program display in Wikipedia. Move them back when testing the program.
The output from the program is this.
 TESTOUTPUT:
 +1.0000 +0.0000i    +0.4000 +0.9000i    -0.6500 +0.7200i
 +1.3608 +2.0222i    -0.3658 +2.4838i    -2.3858 -0.0284i
 +2.6597 +2.7137i    +0.5977 +0.8225i    -0.6320 +1.6716i
 +2.2704 +0.3880i    +0.1312 +1.3128i    +0.2821 -1.5015i
 +2.5428 -0.0153i    +0.2044 +1.3716i    +0.2056 -1.3721i
 +2.5877 -0.0000i    +0.2063 +1.3747i    +0.2063 -1.3747i
 +2.5877 -0.0000i    +0.2063 +1.3747i    +0.2063 -1.3747i
Bo Jacoby ( talk) 14:02, 11 June 2012 (UTC).
Yes, the improved approximations are used as soon as they become available. That accelerates convergence. I have not investigated parallel computation. Bo Jacoby ( talk) 07:43, 12 June 2012 (UTC).