This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
There are contradictory documents about the size and the significand (mantissa) size of the floating-point format of Zuse's Z3. According to Prof. Horst Zuse, this is 22 bits, with a 15-bit significand (implicit bit + 14 represented bits). There has been a recent anonymous change of the article, based on unpublished work by Raúl Rojas, but I wonder whether this is reliable. Raúl Rojas was already wrong in the Bulletin of the Computer Conservation Society Number 37, 2006 about single precision (he said 22 bits for the mantissa). Vincent Lefèvre ( talk) 14:44, 21 September 2013 (UTC)
The image "Float mantissa exponent.png" erroneously shows that 10e-4 is the exponent, while the exponent actually is only -4 and the base is 10. — Preceding unsigned comment added by 109.85.65.228 ( talk) 12:14, 22 January 2014 (UTC)
This article states in section http://en.wikipedia.org/wiki/Floating_point#Incidents that the Failure at Dhahran was caused by Loss of significance. However, the article "MIM-104 Patriot" makes it sound like it was simply clock drift. This should be cleared up. — Preceding unsigned comment added by 82.198.218.209 ( talk) 14:01, 3 December 2014 (UTC)
Should there be a link to John McLaughlin's album at the top in case someone was trying to go there but went here? 2602:306:C591:4D0:AD55:E334:4141:98FA ( talk) 05:49, 7 January 2015 (UTC)
Put it this way: I'm an IT guy and I can't understand this article. There needs to be a much simpler summary for non-technical people, using simple English. Right now every other word is another tech term I don't fully understand. -- thanks, Wikipedia Lover & Supporter
The C program intpow.c at www.civilized.com/files/intpow.c may be a suitable link for this topic. If the principal author agrees, please feel free to add it. (Don't assume this is just exponentiation by repeated doubling - it deals with optimal output in the presence of overflow or denormal intermediate results.) — Preceding unsigned comment added by Garyknott ( talk • contribs) 23:31, 27 August 2015 (UTC)
What does "formulaic representation" in the lead sentence mean?
In general, I think we could simplify the lead. I may give it a try over the weekend.... -- Macrakis ( talk) 18:52, 23 February 2016 (UTC)
Any integer with absolute value less than 2^24 can be exactly represented in the single precision format, and any integer with absolute value less than 2^53
These ought to say "less than or equal" instead of "less than", because the powers of two themselves can be exactly represented in single-precision and double-precision IEEE-754 numbers respectively. They are the last such consecutive integers. -- Myria ( talk) 00:12, 16 June 2016 (UTC)
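The point being made (2^24 itself round-trips exactly through single precision, while 2^24 + 1 does not, and likewise 2^53 for double precision) can be checked with a short sketch using Python's struct module to round-trip through IEEE 754 binary32:

```python
import struct

def float32(x):
    # Round-trip a value through IEEE 754 single precision (binary32).
    return struct.unpack('f', struct.pack('f', x))[0]

print(float32(2**24) == 2**24)          # True: 2^24 is exactly representable
print(float32(2**24 + 1) == 2**24 + 1)  # False: 16777217 rounds back to 16777216
print(float(2**53) == 2**53)            # True: 2^53 is exact in double precision
print(float(2**53 + 1) == 2**53 + 1)    # False: rounds to 2^53
```

So the powers of two are indeed the last integers in the unbroken consecutive run, as the comment says.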
Deep in section Minimizing the effect of accuracy problems there is a sentence
wherein 'epsilon' is linked to Machine epsilon. Unfortunately this is not the same 'epsilon'. Epsilon as a general term for a minimum acceptable error is not the same as Machine epsilon which is a limitation of some hardware floating point implementation.
As used in the sentence it would be perfectly appropriate to set that constant 'epsilon' to 0.00001. Whereas Machine epsilon is derivable based on the hardware to be something like 2.22e-16. The latter is a fixed value. The former is something chosen as a "good enough" guard limit for a particular programming problem.
I'm going to unlink that use of epsilon. I hope that won't be considered an error of sufficiently large magnitude. ;-) Shenme ( talk) 08:00, 25 June 2016 (UTC)
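The distinction drawn above (an application-chosen tolerance versus the fixed machine epsilon of the format) can be sketched as follows; the function name and the 1e-5 tolerance are my own hypothetical illustration:

```python
import sys

# Machine epsilon for double precision: a fixed property of the format,
# equal to 2^-52 ≈ 2.22e-16. Not something the programmer chooses.
machine_eps = sys.float_info.epsilon

# A problem-specific tolerance: a "good enough" guard limit chosen for
# one particular programming problem, as the comment describes.
tolerance = 1e-5

def nearly_equal(a, b, eps=tolerance):
    # Compare with the application-chosen tolerance, not machine epsilon.
    return abs(a - b) <= eps

print(machine_eps)                   # ~2.22e-16, derived from the hardware format
print(nearly_equal(0.1 + 0.2, 0.3))  # True: the rounding error is far below 1e-5
```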
The title and first section say "floating point". But elsewhere in the article "floating-point" is used. The article should be consistent in spelling. In IEEE 754 they use "floating-point" with hyphen. I think that should be the correct spelling. JHBonarius ( talk) 14:18, 18 January 2017 (UTC)
The article Hidden bit redirects to this article, but there is no definition of this term here (there are two usages, but they are unclear in context unless you already know what the term is referring to). Either there should be a definition here, or the redirection should be removed and a stub created. JulesH ( talk) 05:43, 1 June 2017 (UTC)
There is a discussion with Vincent Lefèvre seeking consensus on whether the deletion of the "Causes of Floating Point Error" section from this article should be reverted.
Softtest123 ( talk) 20:16, 19 April 2018 (UTC)
[ https://www.analog.com/media/en/technical-documentation/application-notes/EE.185.Rev.4.08.07.pdf ]
Is this a separate floating point format or another name for an existing format? -- Guy Macon ( talk) 11:32, 20 September 2020 (UTC)