This article is rated Start-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
Didn't you want to talk about big floating-point numbers? Is anyone interested?
I moved the following HTML comments from the article source over to this talk page. They may be more useful here:
— Herbee 20:57, 23 April 2006 (UTC)
Shouldn't this be arbitrary precision as well? I'm not familiar with the work being referenced, but I'm fairly certain that there is no such animal. Trying to get infinite precision off of 4/3 * pi just isn't happening on computers with finite memory any time soon. SnowFire 16:16, 22 May 2006 (UTC)
I agree with all of the above given suitable context, but not with the text in the article which reads "...infinite-precision arithmetic, which is something of a misnomer: the number of digits of precision always remains finite...". I would argue that any representation for which there is a bijective mapping onto the set being modelled is precise. It's not infinite precision that's the problem, it's trying to represent elements of an infinite set on a finite state machine. IMHO, it is not an error to describe bignum arithmetic as "infinite precision" if it is operating over a finite set, an obvious example being the integers modulo N, for not-extremely-large N. --Will
You are wrong. 4/3*pi is a computable number, thus it can be represented in finite manner in the memory of the computer. There is an example in the references section. Sounds like you don't know what you're talking about. Grue 06:17, 28 May 2006 (UTC)
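As an editorial illustration of the computability point above: 4/3·π can indeed be evaluated to any requested finite number of digits. A sketch in Python using only the standard-library decimal module (the π routine is the well-known recipe from the decimal module's documentation):

```python
from decimal import Decimal, getcontext

def pi():
    """Compute pi to the current decimal precision (recipe from the
    Python decimal module documentation)."""
    getcontext().prec += 2          # guard digits for the intermediate sum
    three = Decimal(3)
    lasts, t, s, n, na, d, da = 0, three, 3, 1, 0, 0, 24
    while s != lasts:               # stop when the sum no longer changes
        lasts = s
        n, na = n + na, na + 8
        d, da = d + da, da + 32
        t = (t * n) / d
        s += t
    getcontext().prec -= 2
    return +s                       # unary plus rounds to target precision

getcontext().prec = 50              # any finite precision we ask for
x = Decimal(4) / 3 * pi()
print(x)                            # 4.18879020478639098461685784437...
```

The representation in memory is finite (a program plus a requested precision), even though the number itself is irrational.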
The table of arbitrary precision libraries needs improvement. For one thing, "Number Type" for several of the libraries does not actually list what number types are provided. It would also be useful to include a short overview of what features each library provides in addition to just the number types. For example, GMP implementing highly optimized low level operations, MPFR providing correctly rounded transcendental functions, etc. Fredrik Johansson 13:46, 3 October 2008 (UTC)
I have an arbitrary-precision library I'd like to add to the list of arbitrary-precision libraries, but it would be a conflict of interest to add it myself, so I'm proposing that someone else add it, per the instructions here:
The list I'm referring to is the table toward the end, where the first item in the list is "apfloat".
My understanding is that it's in the general Wikipedia interest for the list to be more complete than less, and that therefore any useful arbitrary-precision library should be included.
Also, the library I'd like to add to the list, xlPrecision, is unique in that it is designed to be easy and intuitive for Office VBA programmers (especially Excel VBA programmers) who may have no knowledge of C, C++, Java, or any of the other languages represented in the list, and who therefore might have difficulty using the other libraries, at least in the short term.
Here are the lines I'd like to see added to the list:
xlPrecision | Decimal | VBA | Commercial |
(Sorry, I'm not sure how to format those lines to appear correctly. The way they appear in "edit this page" is the way they would be added to the table.)
Thanks, and let me know if you have any questions for me.
Greg ( talk) 02:20, 22 February 2009 (UTC)
W3bbo just now changed several mentions of "floats" to "decimals". This is incorrect, as most arbitrary-precision libraries use binary, not decimal, floats. Binary and decimal arithmetic have very different characteristics and purposes. Using "decimal" generically for floating-point numbers is sloppy, especially so for an article on a topic like this one. Fredrik Johansson 11:26, 4 September 2009 (UTC)
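The difference is easy to demonstrate; here, for illustration, is a two-line Python contrast between binary hardware floats and the standard-library decimal module:

```python
from decimal import Decimal

# 0.1 has no finite binary expansion, so binary floats accumulate error:
print(0.1 + 0.2 == 0.3)                                   # → False
# A decimal representation holds these particular values exactly:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # → True
```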
I found reading this article very confusing, since no distinction seems to be made between integer and floating point computations. Unbounded integers can of course be used to implement floating point arithmetic with a mantissa of varying length, but the two are not the same. Often one needs unbounded integers without any need to do floating point arithmetic; the factorial example is an illustration. In fact much of this article seems to be directed mostly at the case of integers, but it never says so clearly.
Personally I find the notion of "precision" not suited to apply to integers, although I can see that the precision article says it does apply (and implies that a machine integer representing the number 17 on a 64-bit machine does so with more precision than one on a 32-bit machine). But even if I accept that the current article should apply to the case of representing integers of arbitrary length, then I insist that it should talk about that and about representing real numbers with extensible precision separately. The integer case falls into the category of exact algebraic computation, but the real number case fundamentally cannot. The latter case involves problems that do not apply to the former, such as deciding just where to stop the development of decimals if a number like sqrt(2) is used within a computation. The article would in my eyes improve a lot if these issues were addressed. Marc van Leeuwen ( talk) 13:42, 6 September 2009 (UTC)
References
What's the n variable for in the example pseudo code? There's a loop, for n = 1 to 365, with no comment. One pass per day for a year? or 365! ? TomCerul ( talk) 21:25, 6 July 2010 (UTC)
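If the loop is indeed accumulating 365! (one multiply per pass), a minimal Python equivalent of that pseudocode would look like the sketch below; Python integers are arbitrary precision, so nothing overflows:

```python
import math

f = 1
for n in range(1, 366):   # "for n = 1 to 365"
    f *= n

assert f == math.factorial(365)
print(len(str(f)))        # 365! has several hundred decimal digits
```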
Everyone involved is advised to read Wikipedia:Edit warring before this gets out of hand. Thanks! Guy Macon ( talk) 21:55, 10 May 2011 (UTC)
Text says "dc: the POSIX desk calculator". But dc doesn't seem to be part of POSIX standard. See http://pubs.opengroup.org/onlinepubs/9699919799/idx/id.html for a list of everything in POSIX starting with 'd', like dd or df. 18:13, 12 September 2011 (UTC) — Preceding unsigned comment added by 187.23.87.80 ( talk)
bc, which is part of the POSIX standard. AJRobbins ( talk) 16:35, 21 May 2014 (UTC)
An editor has restored his link showing many systems doing one calculation without supporting it (which is his burden). The link does not add reasonable content; there is little doubt that different systems should compute the same answer; that they do it with slightly different presentations is not of interest here. Calling the examples a case study does not increase the value to WP. Furthermore, the article would not include such multiple examples; the link is not about something the article should have at some point in the future. Glrx ( talk) 22:22, 25 November 2011 (UTC)
This article focuses more on implementations than APA. The list of implementations should be spun off to something like List of arbitrary-precision arithmetic implementations.
I would delete the Arbitrary-precision arithmetic#Pre-set precision section; it's about implementation choices, and it sounds more like extended precision than arbitrary precision.
The history section is computer-centric. The first algorithms (before computers) were about how to perform calculations -- stuff we were taught in grade school. You don't need a computer to do long division, but it does make sense to have a good trial divisor.
Knuth may have published The Art of Computer Programming (Vol 2 1969?) before White and Bill Gosper implemented bignums for Maclisp (Maclisp manual is 1979, but the implementation was earlier; arbitrary precision arithmetic was needed for Macsyma in the late 1960s). William A. Martin's 1967 thesis may have used bignums.
There's nothing about rationals.
Glrx ( talk) 23:00, 25 November 2011 (UTC)
There have been continuing additions/reverts of C-Y Wang's paper.
The insertions have not argued the merit of the citation. This revert characterized the paper as a "Low relevance undergraduate project".
My talk page had the following discussion about this paper some time ago. Here is the text:
My take is the various 220.128.221.x editors have a WP:COI with the paper. Glrx ( talk) 23:12, 24 December 2011 (UTC)
OK, it is correct that software/list entries which don't have an article can be listed in such lists/comparisons, but then please a) add an independent, third-party, reliable reference to show that the software exists and is notable, and b) remove those spammy external links! Wikipedia is not a directory! mabdul 21:09, 17 January 2012 (UTC)
I quote: "A common application is public-key cryptography (such as that in every modern Web browser), whose algorithms commonly employ arithmetic with integers having hundreds or thousands of digits."
My understanding of encryption was that it actually didn't use this, and this statement sounds like an assumption. Please prove me wrong; I would very much like to find usage examples. DarkShroom ( talk) 16:20, 30 March 2012 (UTC)
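For what it's worth, RSA (historically the usual public-key scheme in browsers) does exactly what the quoted sentence says: keys are 1024- to 4096-bit integers, and encryption/decryption is modular exponentiation on them. A hedged Python sketch, with random numbers standing in for a real key:

```python
import secrets

# Toy illustration only: RSA-style modular exponentiation on 2048-bit numbers.
# Python ints are arbitrary precision, so the arithmetic "just works".
modulus = secrets.randbits(2048) | (1 << 2047) | 1  # random odd 2048-bit number
base = secrets.randbits(2048) % modulus             # stand-in for a message
exponent = 65537                                    # common RSA public exponent
ciphertext = pow(base, exponent, modulus)           # fast modular exponentiation
print(ciphertext.bit_length() <= 2048)              # → True
```

A real key needs a modulus that is a product of two secret primes; the point here is only the size of the integers involved.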
Package, library name | Number type | Language | License |
---|---|---|---|
apfloat | Decimal floats, integers, rationals, and complex | Java and C++ | LGPL and Freeware |
BeeCrypt Cryptography Library | Integers | Assembly, C, C++, Java | LGPL |
ARPREC and MPFUN | Integers, binary floats, complex binary floats | C++ with C++ and Fortran bindings | BSD |
Base One Number Class | Decimal floats | C++ | Proprietary |
bbnum library | Integers and floats | Assembler and C++ | New BSD |
phpseclib | Decimal floats | PHP | LGPL |
BCMath Arbitrary Precision Mathematics | Decimal floats | PHP | PHP License |
BigDigits | Naturals | C | Freeware [2] |
BigFloat | Binary Floats | C++ | GPL |
BigNum | Binary Integers, Floats (with math functions) | C#, .NET | Freeware |
C++ Big Integer Library | Integers | C++ | Public domain |
Class Library for Numbers (CLN) | Integers, rationals, floats and complex | C and C++ | GPL |
Computable Real Numbers | Reals | Common Lisp | |
dbl<n>, Git repo | n × 53 bits precision, compact & fast floating point numbers (n = 2, 3, 4, 5) | C++ template | Proprietary or GPL |
IMSL | | C | Proprietary |
decNumber | Decimals | C | ICU licence ( MIT licence) [3] |
FMLIB | Floats | Fortran | |
GNU Multi-Precision Library (and MPFR) | Integers, rationals and floats | C and C++ with bindings ( GMPY,...) | LGPL |
MPCLI | Integers | C#, .NET | MIT |
C# Bindings for MPIR ( MPIR is a fork of the GNU Multi-Precision Library) | Integers, rationals and floats | C#, .NET | LGPL |
GNU Multi-Precision Library for .NET | Integers | C#, .NET | LGPL |
Eiffel Arbitrary Precision Mathematics Library | Integers | Eiffel | LGPL |
HugeCalc | Integers | C++ and Assembler | Proprietary |
IMath | Integers and rationals | C | MIT |
IntX | Integers | C#, .NET | New BSD |
JScience LargeInteger | Integers | Java | |
libgcrypt | Integers | C | LGPL |
libmpdec (and cdecimal) | Decimals | C, C++ and Python | Simplified BSD |
LibTomMath, Git repo | Integers | C and C++ | Public domain |
LiDIA | Integers, floats, complex floats and rationals | C and C++ | Free for non-commercial use |
MAPM | Integers and decimal floats | C (bindings for C++ and Lua) | Freeware |
MIRACL | Integers and rationals | C and C++ | Free for non-commercial use |
MPI | Integers | C | LGPL |
MPArith | Integers, floats, and rationals | Pascal, Delphi | zlib |
mpmath | Floats, complex floats | Python | New BSD |
NTL | Integers, floats | C and C++ | GPL |
bigInteger (and bigRational) | Integers and rationals | C and Seed7 | LGPL |
TTMath library | Integers and binary floats | Assembler and C++ | New BSD |
vecLib.framework | Integers | C | Proprietary |
W3b.Sine | Decimal floats | C#, .NET | New BSD |
Eiffel Arbitrary Precision Mathematics Library (GMP port) | Integers | Eiffel | LGPL |
BigInt | Integers | JavaScript | Public domain |
javascript-bignum | Scheme-compatible decimal integers, rationals, and complex | JavaScript | MIT |
MathX | Integers, floats | C++ | Boost |
ArbitraryPrecisionFloat | floats (Decimals, Integer and Rational are built in) | Smalltalk | MIT |
vlint | Integers | C++ | BSD |
hapint | Integers | JavaScript | MIT or GPL |
An editor just added a comment about more extensive floating point packages with arbitrary precision that include trig functions.
I have no problem with the notion of arbitrary precision integers/bignums -- the integers take the space they need. The article explains how integer calculations can remain exact; division may require using rationals.
That cannot be the same notion with floating point and typical FP operations: sqrt(2) would immediately run out of space; π has the same problem. Arbitrary precision in the floating point context would be the user setting the sizes of the exponent and mantissa. For example, MPFR has a target precision. (And there's an awkward topic about how to compute sin(x) to a desired precision.)
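To make the sin(x)-to-a-given-precision point concrete, here is a hedged sketch of the naive approach, using Python's standard-library decimal module: pick a working precision with guard digits, sum the Taylor series until adding terms no longer changes the sum, then round back. (Real packages such as MPFR do considerably more work to guarantee correct rounding.)

```python
from decimal import Decimal, getcontext

def dec_sin(x, digits):
    """Taylor series for sin(x) to roughly `digits` digits.
    Naive: fine for small |x|, slow and inaccurate for large |x|."""
    getcontext().prec = digits + 5          # guard digits
    x = Decimal(x)
    term, total, n, lasts = x, x, 1, Decimal(0)
    while total != lasts:                   # stop when the sum stops changing
        lasts = total
        term *= -x * x / ((2 * n) * (2 * n + 1))
        total += term
        n += 1
    getcontext().prec = digits
    return +total                           # round to the requested precision

print(dec_sin(1, 30))   # ≈ 0.8414709848078965066525023216...
```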
The article never addresses the floating point issue. Symbolic algebra systems use symbols and rationals to avoid problems with division; they keep irrationals as symbolic quantities; the results are exact/have infinite precision. Floating point is not usually exact. The article mentions some packages have a pre-set precision. Even Fortran has REAL*8.
There should be a clear notion of arbitrary precision arithmetic. I distinguish between arbitrary precision (integers/rationals that grow as needed; unbounded size; results exact), multiple precision (integers; fixed upper bound set by user; bounded size; division has a remainder; results exact; may overflow), and extended precision (floating point where user sets limits on exponent and mantissa; bounded size; results inexact).
I would restrict the current article to integers/rationals. Floating point belongs elsewhere.
It's not clear that some packages are appropriate even now (under the definition that states "calculations are performed on numbers which digits of precision are limited only by the available memory of the host system"):
Although I can accept a notion of "arbitrary" where the user can set the precision parameters to any value he chooses, I think that better fits the notion of multiple or extended precision.
Glrx ( talk) 15:58, 12 September 2012 (UTC)
There are at least three uses of "we" that should be eliminated. Bubba73 You talkin' to me? 03:31, 16 June 2013 (UTC)
In Arbitrary-precision arithmetic#Applications, this phrase appears: "for example the √⅓ that appears in Gaussian integration." That article does not confirm this claimed fact. It does not appear to contain "√⅓" at all. David Spector ( talk) 03:23, 8 September 2013 (UTC)
Huh? Try Gauss-Legendre (i.e. Gaussian but with a constant weight function) and N = 2. NickyMcLean ( talk) 09:09, 16 November 2014 (UTC)
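For readers checking this claim: the two-point Gauss-Legendre rule on [-1, 1] does have nodes at ±√⅓ (both weights equal to 1), and it integrates any cubic exactly. A quick numerical sketch:

```python
import math

node = math.sqrt(1 / 3)            # ± this value: the 2-point Gauss-Legendre nodes
f = lambda t: t**3 + t**2 + t + 1  # an arbitrary cubic
approx = f(-node) + f(node)        # both weights are 1
exact = 2 / 3 + 2                  # ∫ from -1 to 1: t³ and t integrate to 0
print(abs(approx - exact) < 1e-12) # → True: exact up to rounding
```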
Hello fellow Wikipedians,
I have just modified one external link on Arbitrary-precision arithmetic. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).
This message was posted before February 2018.
After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).
Cheers.— InternetArchiveBot ( Report bug) 02:30, 17 October 2016 (UTC)
User:Glrx keeps reverting edits on Big-O notation while clearly not understanding how Big-O works, and furthermore tries to delete a perfectly fine example of an application of arbitrary-precision arithmetic. (Contrary to what he thinks, I indeed know the difference between arbitrary and extended precision, since I am an active researcher in this field.) Since he does not seem to understand my point, I would like to request a third opinion. -- Cerotidinon ( talk) 22:39, 28 August 2019 (UTC)
There are two types of edits in question.
One of the edits is about using arbitrary precision calculations to find optimal approximations. That is a good statement, but the specific example value of √⅓ may mislead the reader into believing the value can be represented with "infinite precision". Arbitrary precision cannot exactly represent √⅓ because it is irrational. What does "the optimal value" for √⅓ mean? Isn't the optimal value for √⅓ the closest value to √⅓ in the number representation that is being used? Instead of "optimal approximation", the issue is calculating a value to enough precision and then rounding it. The example is also a bit screwy because higher precision numbers are not needed to converge a square root. Newton's method can find an accurate root; one does not need to do the calculation with higher precision and then round the result. The specific example was also for Gaussian integration.
Gaussian integration is a method of approximating the value of a definite integral. Why does one need to compute ultraprecise values for something that computes an approximation anyway? The value that one gets from calling sqrt(1/3) should be good enough for an approximate integration – especially if all the quadrature calculations will be done in that precision. The example may confuse readers, and the given use may not require extraordinary precision.
It's not a good example. I wish I had a better one. Arbitrary precision routines are used in symbolic mathematics (where calculations must be exact), cryptology (where numbers may be 1000s of bits long), and for calculating values to extreme precision (such as thousands of digits of pi).
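On the Newton's-method point above: √⅓ to any number of digits needs only integer arithmetic; scale by 10^(2·digits) and take the integer square root. Python ≥ 3.8 ships math.isqrt; a Newton version is sketched here for illustration:

```python
def isqrt(n):
    """Floor of the integer square root of n, by Newton's method."""
    if n < 2:
        return n
    x = 1 << ((n.bit_length() + 1) // 2)   # initial overestimate of sqrt(n)
    while True:
        y = (x + n // x) // 2              # Newton step, in integer arithmetic
        if y >= x:
            return x
        x = y

digits = 40
scaled = isqrt(10 ** (2 * digits) // 3)    # ≈ √⅓ · 10^digits, truncated
print("0." + str(scaled))                  # 0.5773502691896257...
```

No extended-precision floating point is involved; the "precision" is just the chosen scaling factor.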
The second type of edit is about order notation and involves two statements; Cerotidinon wants to use Θ() rather than O() notation. By and large, if readers know a little about order notation, they will know that O() describes an upper bound and (if tight) shows the worst case time. Practically speaking, O() bounds are assumed to be tight; we don't say O(N³) when we know the algorithm is O(N²). But of those readers familiar with order notation, fewer will know what Ω() or Θ() notation means. Most practical applications focus on the upper bound O() with the presumption the bound is tight. We serve the readers by using notation that is familiar. WP is an encyclopedia; it is not an academic journal.
Arbitrary precision arithmetic worries about complexity, but the article is not focused on it. (The multiplication algorithm article is interested in fast multiplication algorithms.) Looking at sorting algorithm (which is concerned with complexity), most of the order statistics there use O() notation. The heapsort article uses O() notation in the infobox even when specifically describing worst case performance. That entry is where Cormen wants to use Θ(). O() notation is common and familiar and heavily used on WP; Θ() is less common and less familiar.
Using Θ() notation is a bit more precise because it is only appropriate if the bounds are tight, but most O() notation bounds are tight.
Using Θ() notation can be confusing to readers. I'm not sure how to express this well. I can say the running time for comparison is O(n). I can also say the worst case running time is O(n). There are no surprises, but look what happens when using Θ(). I can say the worst case running time is Θ(n), but I cannot say the running time for comparison is Θ(n).
I reverted to the comparison runtime statement:
Cerotidinon wants to use
The last sentence requires the "worst case" modifier, and the sentence is jarring. The first clause is giving an upper and lower bound for the algorithm's worst case performance, and the second clause is immediately saying the worst case lower bound does not apply in the usual case. Why raise the worst case lower bound at all? The first sentence (with O(n)) does not state a lower bound, so there is no double take. A plainer statement would be
The whole statement is
Here's the second statement (using O()):
Cerotidinon wants the Θ() version:
Here, the Θ() statement is false. Multiplication as taught in primary school requires N² digit operations in the general case, but even primary students are taught that multiplication by 100 is just appending two zeros to the multiplicand. The word "require" does not flag the statement as a worst case statistic but rather a lower bound. Compare
So the second statement is both unfamiliar notation and false.
Cerotidinon claims O(), Ω(), and Θ() are always statements about an upper bound: "Big-O-Notation consists of three notations O for upper bounds". I do not know what he means by that statement. O() is an upper bound, but Ω() is a lower bound, and Θ() is both an upper and lower bound.
So I disagree with Cerotidinon's edits.
So what is the meaning of "arbitrary precision"? That's something that falls to definition, and there may be different definitions. The term "bignum" is more specific. Early Lisp implementations (such as Maclisp) had bignums, and they were numbers that were "limited only by the available memory". One could take any number and continually square it until the memory ran out. The integer result was always exact. The numbers never overflowed/wrapped around. Modern Lisp implementations (such as Common Lisp) have both bignums and bignum rationals. If one uses such numbers, the results are always exact; one has "infinite precision". Rational numbers can be represented, but irrational numbers cannot. There's a price to be paid for exact values.
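Both behaviors described above are visible in any bignum system; in Python, for instance:

```python
from fractions import Fraction

# Bignum rationals are exact: no rounding, ever.
third = Fraction(1, 3)
print(third + third + third == 1)   # → True

# Repeated squaring never wraps around; the integer just grows.
n = 2
for _ in range(5):
    n = n * n
print(n)                            # → 4294967296 (2**32, still exact)
```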
There's also the term "multiple precision", but it can mean two different things. It may be a bignum implementation where numbers may be arbitrarily large, or it may be operations on fixed-width numbers where the fixed width may be much larger than a machine's native ints. The width is set before the operations are made. One may be able to set the width to an arbitrarily large value (say 200,000 digits), but there is the risk of overflow. You get more precision, but you keep all the ugly features such as overflow.
Cerotidinon referred to a GNU floating point package, GNU MPFR. It does not have an exact representation for values such as 1/3. The precision is not infinite. The package corresponds to increasing the floating point widths beyond a machine's single-, double-, and perhaps quadruple-precision floating point representations. It has more digits, but it still has all the ugly problems of floating point numbers: underflow, overflow, cancellation, and rounding.
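The 1/3 point is easy to demonstrate: even at 60 digits, an arbitrary-precision float (Python's radix-10 decimal module here, standing in for MPFR's radix-2 floats) cannot close the loop exactly:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60
third = Decimal(1) / Decimal(3)   # 0.333...3, rounded at 60 digits
print(third * 3)                  # 0.999...9, not exactly 1
print(third * 3 == 1)             # → False: more digits, same rounding problem
```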
Glrx ( talk) 21:28, 29 August 2019 (UTC)
  123
× 100
-----
  000    (123 × 0)
 000     (123 × 10)
123      (123 × 100)
-----
12300
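The long-multiplication layout above corresponds to the schoolbook algorithm the discussion mentions. A minimal digit-array sketch in Python (digits stored least-significant first), which does O(len(a)·len(b)) digit operations:

```python
def schoolbook_mul(a, b):
    """Multiply two numbers given as lists of decimal digits,
    least-significant digit first. O(len(a) * len(b)) digit operations."""
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            s = result[i + j] + da * db + carry
            result[i + j] = s % 10
            carry = s // 10
        result[i + len(b)] += carry
    while len(result) > 1 and result[-1] == 0:   # trim leading zeros
        result.pop()
    return result

# 123 × 100 = 12300 (digits least-significant first)
print(schoolbook_mul([3, 2, 1], [0, 0, 1]))  # → [0, 0, 3, 2, 1]
```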
References
The article definitely needs some cleanup and structuring. Maybe it should be split into two: One for large integers, and one for large reals. Right now this article is classified as describing a floating point format, whereas most of the text is about unbounded integers. The most important application of big integers, cryptography, typically uses unsigned unbounded integers. Jost Riedel ( talk) 23:00, 5 December 2022 (UTC)
Looking at just the beginning, there are some uncited sources, and later there is an N-word joke. 69.40.114.216 ( talk) 15:16, 19 November 2023 (UTC)
I think this whole section is problematic and should be removed. Bubba73 You talkin' to me? 06:16, 16 January 2024 (UTC)
An editor just removed the mention of Java's BigInteger class, because its numbers are limited to 2^2147483647 rather than the computer's available memory. That's still about 646,392,578 digits, which takes far less space than the memory of today's computers. But how many times do you need integers with more than 646 million digits? Bubba73 You talkin' to me? 19:59, 13 April 2024 (UTC)
This article is rated Start-class on Wikipedia's
content assessment scale. It is of interest to the following WikiProjects: | ||||||||||||||||||||||||
|
Didn't you want to talk about big floating-point numbers ? Are some people interessed ?
I moved the following HTML comments from the article source over to this talk page. They may be more useful here:
— Herbee 20:57, 23 April 2006 (UTC)
Shouldn't this be arbitrary precision as well? I'm not familiar with the work being referenced, but I'm fairly certain that there is no such animal. Trying to get infinite precision off of 4/3 * pi just isn't happening on computers with finite memory any time soon. SnowFire 16:16, 22 May 2006 (UTC)
I agree with all of the above given suitable context, but not with the text in the article which reads "...infinite-precision arithmetic, which is something of a misnomer: the number of digits of precision always remains finite...". I would argue that any representation for which there is a bijective mapping onto the set being modelled is precise. It's not infinite precision that's the problem, it's trying to represent elements of an infinite set on a finite state machine. IMHO, it is not an error to describe bignum arithmetic as "infinite precision" if it is operating over a finite set, an obvious example being the integers modulo N, for not-extremely-large N. --Will
You are wrong. 4/3*pi is a computable number, thus it can be represented in finite manner in the memory of the computer. There is an example in the references section. Sounds like you don't know what you're talking about. Grue 06:17, 28 May 2006 (UTC)
The table of arbitrary precision libraries needs improvement. For one thing, "Number Type" for several of the libraries does not actually list what number types are provided. It would also be useful to include a short overview of what features each library provides in addition to just the number types. For example, GMP implementing highly optimized low level operations, MPFR providing correctly rounded transcendental functions, etc. Fredrik Johansson 13:46, 3 October 2008 (UTC)
I have an arbitrary-precision library I'd like to add to the list of arbitrary-precision libraries, but it would be a conflict of interest to add it myself, so I'm proposing that someone else add it, per the instructions here:
The list I'm referring to is the table toward the end, where the first item in the list is "apfloat".
My understanding is that it's in the general Wikipedia interest for the list to be more complete than less, and that therefore any useful arbitrary-precision library should be included.
Also, the library I'd like to add to the list, xlPrecision, is unique in that it is designed to be easy and intuitive for Office VBA programmers (especially Excel VBA programmers) who may have no knowledge of C, C++, Java, or any of the other languages reperesented in the list, and who therefore might have difficulty using the other libraries, at least in the short term.
Here are the lines I'd like to see added to the list:
| xlPrecision |Decimal | VBA |Commercial
(Sorry, I'm not sure how to format those lines to appear correctly. They way they appear in "edit this page" is the way they would be added to the table.)
Thanks, and let me know if you have any questions for me.
Greg ( talk) 02:20, 22 February 2009 (UTC)
W3bbo just now changed several mentions of "floats" to "decimals". This is incorrect, as most arbitrary-precision libraries use binary, not decimal, floats. Binary and decimal arithmetic have very different characteristics and purposes. Using "decimal" generically for floating-point numbers is sloppy, especially so for an article on a topic like this one. Fredrik Johansson 11:26, 4 September 2009 (UTC)
I found reading this article very confusing, since no distinction seems to be made between integer and floating point computations. Unbounded integers can of course be used to implement floating point arithmetic with a mantissa of varying length, but the two are not the same. Often one needs unbounded integers without any need to do floating point arithmetic; the factorial example is an illustration. In fact much of this article seems to be directed mostly at the case of integers, biut it never says so clearly.
Personally I find the notion of "precision" not suited to apply to integers, although I can see that the precision article says it does apply (and implies that a machine integer representing the number 17 on a 64-bit machine does so with more precision than one on a 32-bit machine). But even if I accept that the current article should apply to the case of representing integers of arbitrary length, then I insist that is should talk about that and about representing real numbers with extensible precision separately. The integer case falls into the category of exact algebraic computation, but the real number case fundamentally cannot. The latter case involves problems that do not apply to the former, such as deciding just where to stop the development of decimals if a number sqrt(2) is used within a computation. The article would in my eyes improve a lot if these issues were addressed. Marc van Leeuwen ( talk) 13:42, 6 September 2009 (UTC)
References
What's the n variable for in the example pseudo code? There's a loop, for n = 1 to 365, with no comment. One pass per day for a year? or 365! ? TomCerul ( talk) 21:25, 6 July 2010 (UTC)
Everyone involved is advised to read Wikipedia:Edit warring before this gets out of hand. Thanks! Guy Macon ( talk) 21:55, 10 May 2011 (UTC)
Text says "dc: the POSIX desk calculator". But dc doesn't seem to be part of POSIX standard. See http://pubs.opengroup.org/onlinepubs/9699919799/idx/id.html for a list of everything in POSIX starting with 'd', like dd or df. 18:13, 12 September 2011 (UTC) — Preceding unsigned comment added by 187.23.87.80 ( talk)
bc
, which is part of the posix standard.
AJRobbins (
talk) 16:35, 21 May 2014 (UTC)An editor has restored his link showing many systems doing one calculation without supporting it (which is his burden). The link does not add reasonable content; there is little doubt that different systems should compute the same answer; that they do it with slightly different presentations is not of interest here. Calling the examples a case study does not increase the value to WP. Furthermore, the article would not include such multiple examples; the link is not about something the article should have at some point in the future. Glrx ( talk) 22:22, 25 November 2011 (UTC)
This article focuses more on implementations than APA. The list of implementations should be spun off to something like List of arbitrary-precision arithmetic implementations.
I would delete the Arbitrary-precision arithmetic#Pre-set precision section; it's about implementation choices; it sounds more in extended precision rather than arbitrary precision.
The history section is computer-centric. The first algorithms (before computers) were about how to perform calculations -- stuff we were taught in grade school. You don't need a computer to do long division, but it does make sense to have a good trial divisor.
Knuth may have published The Art of Computer Programming (Vol 2 1969?) before White and Bill Gosper implemented bignums for Maclisp (Maclisp manual is 1979, but the implementation was earlier; arbitrary precision arithmetic was needed for Macsyma in the late 1960s). William A. Martin's 1967 thesis may have used bignums.
There's nothing about rationals.
Glrx ( talk) 23:00, 25 November 2011 (UTC)
There have been continuing additions/reverts of C-Y Wang's paper.
The insertions have not argued the merit of the citation. This revert characterized the paper as a "Low relevance undergraduate project".
My talk page had the following discussion about this paper some time ago. Here is the text:
My take is the various 220.128.221.x editors have a WP:COI with the paper. Glrx ( talk) 23:12, 24 December 2011 (UTC)
OK, it is correct that software/list entries which don't have any article can be listed in such lists/comparisons, but then please add a) a independent, third party, reliable reference to show me that is software exists and is notable and b) remove that spammy external links! Wikipedia is not a directory! mabdul 21:09, 17 January 2012 (UTC)
I quote: "A common application is public-key cryptography (such as that in every modern Web browser), whose algorithms commonly employ arithmetic with integers having hundreds or thousands of digits."
My understanding of encryption was that it actually didn't use this, and this statement sounds like an assumption. Please prove me wrong I would very much like to find usage examples. DarkShroom ( talk) 16:20, 30 March 2012 (UTC)
Package, library name | Number type | Language | License |
---|---|---|---|
apfloat | Decimal floats, integers, rationals, and complex | Java and C++ | LGPL and Freeware |
BeeCrypt Cryptography Library | Integers | Assembly, C, C++, Java | LGPL |
ARPREC and MPFUN | Integers, binary floats, complex binary floats | C++ with C++ and Fortran bindings | BSD |
Base One Number Class | Decimal floats | C++ | Proprietary |
bbnum library | Integers and floats | Assembler and C++ | New BSD |
phpseclib | Decimal floats | PHP | LGPL |
BCMath Arbitrary Precision Mathematics | Decimal floats | PHP | PHP License |
BigDigits | Naturals | C | Freeware [2] |
BigFloat | Binary Floats | C++ | GPL |
BigNum | Binary Integers, Floats (with math functions) | C#, .NET | Freeware |
C++ Big Integer Library | Integers | C++ | Public domain |
Class Library for Numbers (CLN) | Integers, rationals, floats and complex | C and C++ | GPL |
Computable Real Numbers | Reals | Common Lisp | |
dbl<n>, Git repo | n × 53-bit precision compact & fast floating-point numbers (n=2,3,4,5) | C++ template | Proprietary or GPL |
IMSL | | C | Proprietary |
decNumber | Decimals | C | ICU licence ( MIT licence) [3] |
FMLIB | Floats | Fortran | |
GNU Multi-Precision Library (and MPFR) | Integers, rationals and floats | C and C++ with bindings ( GMPY,...) | LGPL |
MPCLI | Integers | C#, .NET | MIT |
C# Bindings for MPIR ( MPIR is a fork of the GNU Multi-Precision Library) | Integers, rationals and floats | C#, .NET | LGPL |
GNU Multi-Precision Library for .NET | Integers | C#, .NET | LGPL |
Eiffel Arbitrary Precision Mathematics Library | Integers | Eiffel | LGPL |
HugeCalc | Integers | C++ and Assembler | Proprietary |
IMath | Integers and rationals | C | MIT |
IntX | Integers | C#, .NET | New BSD |
JScience LargeInteger | Integers | Java | |
libgcrypt | Integers | C | LGPL |
libmpdec (and cdecimal) | Decimals | C, C++ and Python | Simplified BSD |
LibTomMath, Git repo | Integers | C and C++ | Public domain |
LiDIA | Integers, floats, complex floats and rationals | C and C++ | Free for non-commercial use |
MAPM | Integers and decimal floats | C (bindings for C++ and Lua) | Freeware |
MIRACL | Integers and rationals | C and C++ | Free for non-commercial use |
MPI | Integers | C | LGPL |
MPArith | Integers, floats, and rationals | Pascal, Delphi | zlib |
mpmath | Floats, complex floats | Python | New BSD |
NTL | Integers, floats | C and C++ | GPL |
bigInteger (and bigRational) | Integers and rationals | C and Seed7 | LGPL |
TTMath library | Integers and binary floats | Assembler and C++ | New BSD |
vecLib.framework | Integers | C | Proprietary |
W3b.Sine | Decimal floats | C#, .NET | New BSD |
Eiffel Arbitrary Precision Mathematics Library (GMP port) | Integers | Eiffel | LGPL |
BigInt | Integers | JavaScript | Public domain |
javascript-bignum | Scheme-compatible decimal integers, rationals, and complex | JavaScript | MIT |
MathX | Integers, floats | C++ | Boost |
ArbitraryPrecisionFloat | floats (Decimals, Integer and Rational are built in) | Smalltalk | MIT |
vlint | Integers | C++ | BSD |
hapint | Integers | JavaScript | MIT or GPL |
An editor just added a comment about more extensive floating point packages with arbitrary precision that include trig functions.
I have no problem with the notion of arbitrary precision integers/bignums -- the integers take the space they need. The article explains how integer calculations can remain exact; division may require using rationals.
That cannot be the same notion with floating point and typical FP operations: sqrt(2) would immediately run out of space; π has the same problem. Arbitrary precision in the floating-point context would mean the user setting the sizes of the exponent and mantissa. For example, MPFR has a target precision. (And there's an awkward topic about how to compute sin(x) to a desired precision.)
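The "target precision" idea can be sketched with Python's standard decimal module (a stand-in here; MPFR itself is a C library): the user picks a precision, and every result is rounded to it.

```python
from decimal import Decimal, getcontext

# A user-chosen target precision, in the spirit of MPFR's working precision.
getcontext().prec = 50           # 50 significant decimal digits

root2 = Decimal(2).sqrt()        # correctly rounded to 50 digits, not exact
print(root2)
```

The result is correct to the chosen precision, but sqrt(2) itself is never stored exactly; raising `prec` only moves the rounding point.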
The article never addresses the floating point issue. Symbolic algebra systems use symbols and rationals to avoid problems with division; they keep irrationals as symbolic quantities; the results are exact/have infinite precision. Floating point is not usually exact. The article mentions some packages have a pre-set precision. Even Fortran has REAL*8.
There should be a clear notion of arbitrary precision arithmetic. I distinguish between arbitrary precision (integers/rationals that grow as needed; unbounded size; results exact), multiple precision (integers; fixed upper bound set by user; bounded size; division has a remainder; results exact; may overflow), and extended precision (floating point where user sets limits on exponent and mantissa; bounded size; results inexact).
I would restrict the current article to integers/rationals. Floating point belongs elsewhere.
It's not clear that some packages are appropriate even now (under a definition that states "calculations are performed on numbers whose digits of precision are limited only by the available memory of the host system"):
Although I can accept a notion of "arbitrary" where the user can set the precision parameters to any value he chooses, I think that better fits the notion of multiple or extended precision.
Glrx ( talk) 15:58, 12 September 2012 (UTC)
There are at least three uses of "we" that should be eliminated. Bubba73 You talkin' to me? 03:31, 16 June 2013 (UTC)
In Arbitrary-precision arithmetic#Applications, this phrase appears: "for example the √⅓ that appears in Gaussian integration." That article does not confirm this claimed fact. It does not appear to contain "√⅓" at all. David Spector ( talk) 03:23, 8 September 2013 (UTC)
Huh? Try Gauss-Legendre (i.e. Gaussian quadrature with a constant weight function) and N = 2. NickyMcLean ( talk) 09:09, 16 November 2014 (UTC)
Hello fellow Wikipedians,
I have just modified one external link on Arbitrary-precision arithmetic. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).
This message was posted before February 2018.
After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).
Cheers.— InternetArchiveBot ( Report bug) 02:30, 17 October 2016 (UTC)
User User:Glrx keeps reverting edits on big-O notation while clearly not understanding how big O works, and furthermore tries to delete a perfectly fine example of an application of arbitrary-precision arithmetic (contrary to what he thinks, I indeed know the difference between arbitrary and extended precision, since I am an active researcher in this field). Since he does not seem to understand my point, I would like to request a third opinion. -- Cerotidinon ( talk) 22:39, 28 August 2019 (UTC)
There are two types of edits in question.
One of the edits is about using arbitrary precision calculations to find optimal approximations. That is a good statement, but the specific example value of √⅓ may mislead the reader into believing the value can be represented with "infinite precision". Arbitrary precision cannot exactly represent √⅓ because it is irrational. What does "the optimal value" for √⅓ mean? Isn't the optimal value for √⅓ the closest value to √⅓ in the number representation that is being used? Instead of "optimal approximation", the issue is calculating a value to enough precision and then rounding it. The example is also a bit screwy because higher precision numbers are not needed to converge a square root. Newton's method can find an accurate root; one does not need to do the calculation with higher precision and then round the result. The specific example was also for Gaussian integration.
Gaussian integration is a method of approximating the value of a definite integral. Why does one need to compute ultraprecise values for something that computes an approximation anyway? The value that one gets from calling sqrt(1/3)
should be good enough for an approximate integration – especially if all the quadrature calculations will be done in that precision. The example may confuse readers, and the given use may not require extraordinary precision.
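The point about Newton's method can be sketched with Python's stdlib decimal module (my choice of illustration, not something from the thread): the iteration converges to √⅓ at whatever working precision is in effect, with a few guard digits and a final rounding.

```python
from decimal import Decimal, getcontext
import math

def sqrt_third(digits):
    """Approximate sqrt(1/3) by Newton's method in decimal arithmetic."""
    getcontext().prec = digits + 5            # a few guard digits
    x = Decimal(1) / Decimal(3)
    guess = Decimal(repr(math.sqrt(1 / 3)))   # float seed, ~16 good digits
    for _ in range(int(math.log2(digits)) + 1):
        guess = (guess + x / guess) / 2       # each step ~doubles good digits
    getcontext().prec = digits
    return +guess                             # round to the target precision

print(sqrt_third(50))
```

Note the sketch leaves the global decimal context at the target precision; a production version would use a local context.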
It's not a good example. I wish I had a better one. Arbitrary precision routines are used in symbolic mathematics (where calculations must be exact), cryptology (where numbers may be 1000s of bits long), and for calculating values to extreme precision (such as thousands of digits of pi).
The second type of edit is about order notation and involves two statements; Cerotidinon wants to use Θ() rather than O() notation. By and large, if readers know a little about order notation, they will know that O() describes an upper bound and (if tight) shows the worst case time. Practically speaking, O() bounds are assumed to be tight; we don't say O(N³) when we know the algorithm is O(N²). But of those readers familiar with order notation, fewer will know what Ω() or Θ() notation means. Most practical applications focus on the upper bound O() with the presumption the bound is tight. We serve the readers by using notation that is familiar. WP is an encyclopedia; it is not an academic journal.
Arbitrary precision arithmetic worries about complexity, but the article is not focused on it. (The multiplication algorithm article is interested in fast multiplication algorithms.) Looking at sorting algorithm (which is concerned with complexity), most of the order statistics there use O() notation. The heapsort article uses O() notation in the infobox even when specifically describing worst case performance. That entry is where Cormen wants to use Θ(). O() notation is common and familiar and heavily used on WP; Θ() is less common and less familiar.
Using Θ() notation is a bit more precise because it is only appropriate if the bounds are tight, but most O() notation bounds are tight.
Using Θ() notation can be confusing to readers. I'm not sure how to express this well. I can say the running time for comparison is O(n). I can also say the worst case running time is O(n). There are no surprises, but look what happens when using Θ(). I can say the worst case running time is Θ(n), but I cannot say the running time for comparison is Θ(n).
I reverted to the comparison runtime statement:
Cerotidinon wants to use
The last sentence requires the "worst case" modifier, and the sentence is jarring. The first clause is giving an upper and lower bound for the algorithm's worst case performance, and the second clause is immediately saying the worst case lower bound does not apply in the usual case. Why raise the worst case lower bound at all? The first sentence (with O(n)) does not state a lower bound, so there is no double take. A plainer statement would be
The whole statement is
Here's the second statement (using O()):
Cerotidinon wants the Θ() version:
Here, the Θ() statement is false. Multiplication as taught in primary school requires N² digit products in the general case, but even primary students are taught that multiplication by 100 is just appending two zeros to the multiplicand. The word "require" does not flag the statement as a worst-case statistic but rather a lower bound. Compare the multiplication of 123 by 100 shown below.
So the second statement is both unfamiliar notation and false.
Cerotidinon claims O(), Ω(), and Θ() are always statements about an upper bound: "Big-O-Notation consists of three notations O for upper bounds". I do not know what he means by that statement. O() is an upper bound, but Ω() is a lower bound, and Θ() is both an upper and lower bound.
So I disagree with Cerotidinon's edits.
So what is the meaning of "arbitrary precision"? That's something that falls to definition, and there may be different definitions. The term "bignum" is more specific. Early Lisp implementations (such as Maclisp) had bignums, and they were numbers that were "limited only by the available memory". One could take any number and continually square it until the memory ran out. The integer result was always exact. The numbers never overflowed/wrapped around. Modern Lisp implementations (such as Common Lisp) have both bignums and bignum rationals. If one uses such numbers, the results are always exact; one has "infinite precision". Rational numbers can be represented, but irrational numbers cannot. There's a price to be paid for exact values.
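Python's built-in integers and fractions.Fraction behave like the Lisp bignums and bignum rationals described above; a small sketch:

```python
from fractions import Fraction

# Bignum-style integers: exact, never wrap, bounded only by memory.
n = 7
for _ in range(5):
    n = n * n                    # repeated squaring, always exact
print(n)                         # 7**32, a 28-digit integer

# Bignum rationals: arithmetic stays exact, with no rounding.
third = Fraction(1, 3)
print(third + third + third)     # exactly 1
```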
There's also the term "multiple precision", but it can mean two different things. It may be a bignum implementation where numbers may be arbitrarily large, or it may be operations on fixed-width numbers where the fixed width may be much larger than a machine's native ints. The width is set before the operations are made. One may be able to set the width to an arbitrarily large value (say 200,000 digits), but there is the risk of overflow. You get more precision, but you keep all the ugly features such as overflow.
Cerotidinon referred to a GNU floating point package GNU MPFR. It does not have an exact representation for values such as 1/3. The precision is not infinite. The package corresponds to increasing the floating point widths beyond a machine's single-, double-, and perhaps quadruple-precision floating point representations. It has more digits, but it still has all the ugly problems of floating point numbers: underflow, overflow, cancellation, and rounding.
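The same limitation is easy to see with ordinary binary doubles: the stored value nearest 1/3 is a dyadic fraction, and no finite binary precision changes that. A quick Python illustration (my example, not from the thread):

```python
from fractions import Fraction

# The double closest to 1/3 is an exact binary fraction with a 2**54
# denominator, not 1/3 itself; widening the format only shrinks the
# gap, it never closes it.
stored = Fraction(1 / 3)
print(stored)                    # 6004799503160661/18014398509481984
print(stored == Fraction(1, 3))  # False
```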
Glrx ( talk) 21:28, 29 August 2019 (UTC)
  123
× 100
-----
  000    (123 × 0)
 000     (123 × 10)
123      (123 × 100)
-----
12300
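For concreteness, here is a sketch of the primary-school routine the example above illustrates, written in Python: the general inner loop performs len(a) × len(b) digit products, while multiplying by a power of ten could instead be done by just appending zeros.

```python
def school_multiply(a, b):
    """Long multiplication on little-endian lists of decimal digits."""
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):            # one row of partial products per digit
        carry = 0
        for j, db in enumerate(b):
            t = result[i + j] + da * db + carry
            result[i + j] = t % 10
            carry = t // 10
        result[i + len(b)] += carry
    while len(result) > 1 and result[-1] == 0:
        result.pop()                      # trim leading zeros
    return result

# 123 × 100, least significant digit first: [0, 0, 3, 2, 1] is 12300
print(school_multiply([3, 2, 1], [0, 0, 1]))
```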
References
The article definitely needs some cleanup and structuring. Maybe it should be split into two: One for large integers, and one for large reals. Right now this article is classified as describing a floating point format, whereas most of the text is about unbounded integers. The most important application of big integers, cryptography, typically uses unsigned unbounded integers. Jost Riedel ( talk) 23:00, 5 December 2022 (UTC)
Looking at just the beginning there are some uncited sources, and later there is an N-word joke. 69.40.114.216 ( talk) 15:16, 19 November 2023 (UTC)
I think this whole section is problematic and should be removed. Bubba73 You talkin' to me? 06:16, 16 January 2024 (UTC)
An editor just removed the mention of Java's BigInteger class, because its numbers are limited to 2^2147483647 and not the computer's available memory. That's still about 646,392,578 digits, which is a lot less than the memory of computers today. But how many times do you need integers with more than 646 million digits? Bubba73 You talkin' to me? 19:59, 13 April 2024 (UTC)
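For scale, Python's unbounded int arithmetic can estimate how many decimal digits Java's cap allows (a back-of-the-envelope sketch; the exact figure depends on how the bound is counted):

```python
import math

# Java's BigInteger is limited to magnitudes below 2**(2**31 - 1).
exponent = 2**31 - 1                          # Integer.MAX_VALUE bits
digits = math.floor(exponent * math.log10(2)) + 1
print(digits)                                 # about 646 million decimal digits
```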