This article is rated C-class on Wikipedia's
content assessment scale. It is of interest to the following WikiProjects:
This page is basically redundant with Significant figures#Rounding, but I feel that this is better written, so warrants merging instead of deleting. ~ Irrel 17:33, 28 September 2005 (UTC)
"but if the five is succeeded by nothing, odd last reportable digits are decreased by one" Shouldn't this be increased? 68.227.80.79 05:44, 13 November 2005 (UTC)
I had a question about whether everyone in the world rounds the same way. In the US, there are two common methods. The first needs little explanation and is even programmed into things like MS Excel. The other is sometimes called scientific rounding or rounding to even on 5. I'm asking because I run into a lot of foreign professionals who round differently, so I didn't know if they were out of practice or taught differently. From a philosophical view, it would actually make sense to round to significant figures by simple truncation, because anything beyond the "best guess" digit is considered noise and should have no impact on the reported value. I once asked myself which of the two rounding methods used in the US is best. I approached the question with the assumption that the noise digits were random and equally likely to appear. If I have a set of values with all possible noise digits and calculate the mean and standard deviation, and then repeat the mean and standard deviation calculation after rounding all of the values, the best method would not deviate from the true mean and standard deviation. It turned out that the round to even on 5 method actually gave a more accurate mean, but the most popular rounding method gave a more accurate standard deviation. I don't know if any of this would be of interest for this lesson on rounding, but thought that I'd mention it. John Leach ( talk) 21:52, 6 July 2020 (UTC)
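The mean-bias comparison described above is easy to reproduce. The sketch below is my own (not from the comment): it applies round-half-up and round-half-even to all one-decimal values from 1.0 to 2.9, so odd and even integer parts are equally represented; the function names are assumptions.

```javascript
// Round half up: 1.5 -> 2, 2.5 -> 3
function roundHalfUp(x) {
  return Math.floor(x + 0.5);
}

// Round half to even ("scientific" rounding): 1.5 -> 2, 2.5 -> 2
function roundHalfEven(x) {
  const f = Math.floor(x);
  if (x - f === 0.5) return f % 2 === 0 ? f : f + 1; // exact tie: even neighbour
  return Math.round(x);
}

// Build 1.0, 1.1, ..., 2.9 from integer tenths to keep ties exact
const values = [];
for (let tenths = 10; tenths < 30; tenths++) values.push(tenths / 10);

const mean = (a) => a.reduce((s, v) => s + v, 0) / a.length;
const trueMean = mean(values);                               // 1.95
const upBias = mean(values.map(roundHalfUp)) - trueMean;     // ≈ +0.05
const evenBias = mean(values.map(roundHalfEven)) - trueMean; // ≈ 0
```

Over this balanced set, half-up shifts the mean up by half a step, while half-to-even cancels out, which matches the "traditional rounding biases upwards" claim discussed below.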
How are negative numbers handled? According to the article round(-1.5) = -2, which is wrong, right? round(-1.5) = -1 I believe.
The rounding of ALL numbers requires rounding the absolute value of the number and then restoring the original sign (+ or -). The above answer is "-2", and not "-1". If the digit preceding a "5" is ODD, then make it EVEN by adding "1". If the digit preceding a "5" is EVEN, then leave it alone, as there is an equal number of ODD and EVEN digits in the counting system.
Example: "1.6 - 1.6 = 0", "1.6 + (-1.6) = 0"; rounding to the 1's unit gives "2.0 + (-2.0) = 0", as |-1.6| is 1.6, which rounds to 2.0.
Example: "1.4 - 1.4 = 0", "1.4 + (-1.4) = 0"; rounding to the 1's unit gives "1.0 + (-1.0) = 0", as |-1.4| is 1.4, which rounds to 1.0.
Example: "1.5 - 1.5 = 0", "1.5 + (-1.5) = 0"; rounding to the 1's unit gives "2.0 + (-2.0) = 0", as |-1.5| is 1.5, which rounds to 2.0.
Example: "2.5 - 2.5 = 0", "2.5 + (-2.5) = 0"; rounding to the 1's unit gives "2.0 + (-2.0) = 0", as |-2.5| is 2.5, which rounds to 2.0.
The rounding of a number can never give a value of "0.0".
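The procedure described above (round the absolute value, breaking ties to the even digit, then restore the sign) can be sketched as follows; the function name is mine, not from the comment.

```javascript
// Round half to even applied to |x|, then restore the original sign,
// as in the examples above.
function roundHalfEvenAbs(x) {
  const a = Math.abs(x);
  const f = Math.floor(a);
  let r;
  if (a - f === 0.5) r = f % 2 === 0 ? f : f + 1; // tie: go to the even digit
  else r = Math.round(a);
  return x < 0 ? -r : r;
}
// Matches the examples: roundHalfEvenAbs(-1.5) gives -2,
// roundHalfEvenAbs(-2.5) gives -2, roundHalfEvenAbs(-1.4) gives -1.
```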
I don't know if it's a good idea to merge with floor function, but they are related. A sort of "family tree" of rounding functions:
From the "Round-to-even method" section in this article, as of right now:
"When dealing with large sets of scientific or statistical data, where trends are important, traditional rounding on average biases the data upwards slightly. Over a large set of data, or when many subsequent rounding operations are performed as in digital signal processing, the round-to-even rule tends to reduce the total rounding error, with (on average) an equal portion of numbers rounding up as rounding down."
Huh? Doesn't traditional rounding have an equal portion of numbers rounding up and down? In traditional rounding, numbers between 0 and <5 are rounded to 0, while numbers between 5 and <10 are rounded to 10, if 10 is an increase in the next highest digit of the number being rounded. The difference of (<10 - 5) equals the difference of (<5 - 0), doesn't it? Am I missing something? 4.242.147.47 21:13, 19 September 2006 (UTC)
In four cases (1, 2, 3 and 4) the value is rounded down. In five cases (5,6,7,8,9) the value is rounded up. In one case (0) the value is left unchanged. This may be what you were missing. 194.109.22.149 15:14, 28 September 2006 (UTC)
But "unchanged" is not entirely correct, as there may be further digits after the 0. For example, rounded to one decimal place, 2.304 would round to 2.3; it is not unchanged under the traditional rounding scheme, but rather rounded down, thus making five cases for rounding down and five for rounding up.
I am amazed that any serious scientist could argue that there are either x+1 or x-1 numbers between integers in a base-10 system. When fully discrete, there is an equal number of values that go to the lower integer as to the higher integer. 2.0, 2.1, 2.2, 2.3 and 2.4 go to 2; 2.5, 2.6, 2.7, 2.8 and 2.9 go to 3. Five each. The same would have been true from 1.0 to 1.9 and then from 3.0 to 3.9. Every single integer will have 10 numbers that can result in that rounding. The tie-breaking rule is silly, and as engineers have known for years, wrong. If you want the true answer, you never analyze rounded numbers! You must use the actual values to at least one significant digit beyond what you are trying to analyze. This is confusing kids in school today, unnecessarily. Go back to what we have known for years, and can be proven to be correct. And never, ever analyze rounded numbers to the same significant digit if you know the data set is more discrete. — Preceding unsigned comment added by 24.237.80.31 ( talk) 04:24, 3 November 2013 (UTC)
This depends on the model and the context. You can assume a uniform distribution in the interval [0,1], in which case the midpoint 0.5 appears with a null probability, so that under this assumption, the choice for half-points doesn't matter. In practice, this is not true because one has specific inputs and algorithms, but then, the best choice depends on the actual distribution of the inputs (either true inputs of a program or intermediate results). Vincent Lefèvre ( talk) 13:37, 20 April 2015 (UTC)
There seems to be at least one other method; rounding up or down with probability given by nearness.
It is given in javascript by
R = Math.round(X + Math.random() - 0.5)
It has the possible advantage that the average value of R for a large set of roundings of constant X is X itself.
82.163.24.100 11:51, 29 January 2007 (UTC)
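The unbiasedness claim above can be checked empirically. A rough sketch (my own, not part of the original comment):

```javascript
// Stochastic rounding as given above: X is rounded up or down with
// probability proportional to nearness, so the mean of many roundings
// approaches X itself.
function stochasticRound(x) {
  return Math.round(x + Math.random() - 0.5);
}

const X = 2.3;
const N = 100000;
let sum = 0;
for (let i = 0; i < N; i++) sum += stochasticRound(X);
const meanR = sum / N; // close to 2.3 for large N
```

Here X = 2.3 rounds to 3 with probability 0.3 and to 2 with probability 0.7, so the expected value is exactly 2.3.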
Can somebody please tell me, why is it - and why does it make sense - that we round, say, 1.5 to 2 and not to 1? Here's my thinking: 1.1, 1.2, 1.3 and 1.4 go to 1 - 1.6, 1.7, 1.8, 1.9 go to 2. So 1.5 is of course right in the middle. Why should we presume it's more 2ish than 1ish, if it's equally close? Do you think it's just because teachers are nice, and when they do grade averages, they want to give you a little boost, rather than cut you down? Is there some philosophical or mathematical justification? Wik idea 23:38, 6 February 2008 (UTC)
You forgot 1.0. —Preceding unsigned comment added by 207.245.46.103 ( talk) 19:08, 7 February 2008 (UTC)
Refer to the discussion above, the "Which method introduces the most error?" question. I guess I retract my 1.0 statement. —Preceding unsigned comment added by 207.245.46.103 ( talk) 18:30, 8 February 2008 (UTC)
The floating point unit on the common PC works with IEEE-754, floating binary point numbers. I have not seen addressed in this page or this talk page the problem of rounding binary fraction numbers to a specified number of decimal fraction digits. The problem results from the fact that variables with binary point fractions cannot generally exactly represent decimal fraction numbers. One can see this from the following table, which is an attempt to represent the numbers from 1.23 to 1.33 in 0.005 increments.
ExactFrac   Approx.   Closest IEEE-754 (64-bit floating binary point number)
1230/1000   1.230     1.229999999999999982236431605997495353221893310546875
1235/1000   1.235     1.2350000000000000976996261670137755572795867919921875
1240/1000   1.240     1.2399999999999999911182158029987476766109466552734375
1245/1000   1.245     1.24500000000000010658141036401502788066864013671875
1250/1000   1.250     1.25
1255/1000   1.255     1.25499999999999989341858963598497211933135986328125
1260/1000   1.260     1.2600000000000000088817841970012523233890533447265625
1265/1000   1.265     1.2649999999999999023003738329862244427204132080078125
1270/1000   1.270     1.270000000000000017763568394002504646778106689453125
1275/1000   1.275     1.274999999999999911182158029987476766109466552734375
1280/1000   1.280     1.2800000000000000266453525910037569701671600341796875
1285/1000   1.285     1.2849999999999999200639422269887290894985198974609375
1290/1000   1.290     1.29000000000000003552713678800500929355621337890625
1295/1000   1.295     1.2949999999999999289457264239899814128875732421875
1300/1000   1.300     1.3000000000000000444089209850062616169452667236328125
1305/1000   1.305     1.3049999999999999378275106209912337362766265869140625
1310/1000   1.310     1.310000000000000053290705182007513940334320068359375
1315/1000   1.315     1.314999999999999946709294817992486059665679931640625
1320/1000   1.320     1.3200000000000000621724893790087662637233734130859375
1325/1000   1.325     1.3249999999999999555910790149937383830547332763671875
1330/1000   1.330     1.3300000000000000710542735760100185871124267578125
The exact result of doing the division indicated in the first column is shown as the long number in the third column. The intended result is in the second column. The thing to note is that only one of the quotients is exact; the others are only approximate. Thus when we are trying to round our numbers up or down, we cannot use rules based on simple greater-than, less-than, or at some exact decimal fraction value, because, in general, these decimal fraction values have no exact representation in binary fraction numbers. JohnHD7 ( talk) 22:08, 28 June 2008 (UTC)
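The table's point shows up directly in JavaScript's decimal rounding. Per the table, the double nearest 1.255 lies just below 1.255 and the double nearest 1.245 lies just above 1.245, so rounding the stored values to two decimals goes down for one and up for the other. A small illustration (mine, not the original poster's):

```javascript
// The stored double for 1.255 is 1.25499999999999989..., just under
// the decimal midpoint, so rounding to two places goes down:
const a = (1.255).toFixed(2); // "1.25", not "1.26"

// The stored double for 1.245 is 1.24500000000000010..., just over
// the midpoint, so it rounds up:
const b = (1.245).toFixed(2); // "1.25"
```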
I've seen two sources that claim that it's correct to round (e.g.) 2.459 to 2.4 rather than 2.5, though they both give the same bogus logic: that 2.45 might as well be rounded to 2.4 as to 2.5, and so this also applies to 2.459, even though this is a different number and is obviously closer to 2.5. But are we sure this isn't standard practice in some crazy field or other? Evercat ( talk) 20:51, 17 September 2008 (UTC)
Please comment: rounding 2.50001 to the nearest integer. Is it 2 or 3? Redding7 ( talk) 01:42, 5 January 2010 (UTC)
Please edit ‘Rounding functions in programming languages’ for naming consistency. I'm not sure which names you prefer, so I'll let you do it, but please be consistent. Shinobu ( talk) 07:07, 8 November 2008 (UTC)
The section "Common method" does not explain negative numbers. The first example of negative just says "one rounds up as well" but shows different behavior for the case of 5, otherwise the same as positive. The second example is supposed to be the opposite but shows the same thing! It calls it "down" instead of "up", but gives the same answer. — Długosz ( talk) 19:55, 18 February 2009 (UTC)
The article claims that
I am bothered by this paragraph because (1) evidence should be provided that meteorologists actually do this; it may well be an original invention by the editor. Also (2) tallying days with negative temperature seems a rather unscientific idea, since even small errors in measurement could lead to a large error in the result. If the station's thermometer says -0.4C or -0.1C, it is not certain that street puddles are freezing. On the other hand, if the thermometer says +0.1C or +0.4C, the puddles may be freezing nonetheless. To compare the harshness of weather, one should use a more robust statistic, such as the mean temperature. Therefore, there does not seem to be a good excuse for preserving the minus sign. All the best, -- Jorge Stolfi ( talk) 06:03, 1 August 2009 (UTC)
I recall reading about a method (I believe on Wikipedia) some months and/or years ago, allegedly used in some banking systems to not accrue (or lose) pennies or cents over time. It rounded as follows, if I recall correctly:
$XX.XXY was rounded 'up' if Y was odd, and 'down' if Y was even. (5 of the 10 options going in each direction)
Eg:
$10.012 was rounded to $10.00
$10.013 was rounded to $10.01
$10.014 was rounded to $10.00
$10.015 was rounded to $10.01
$10.016 was rounded to $10.00
$10.017 was rounded to $10.01
I forget how it handled negative cases... The 'banker's rounding' (round half to even) given does perform one of the same purposes of symmetric rounding, but in a different way, potentially with a bias that's relevant to finances. I'm wondering if the article is missing this method (and if so, if it's worth someone who knows the details accurately adding it), if it's documented elsewhere on Wikipedia, if it's even been used at all, or if it was simply wrong in my source (possibly Wikipedia, iirc). The advantages of it seemed to basically be 'random' with even distribution, in the sense that things are not moving to the closest number at the required level of precision, where identifiable, but with repeatable results. FleckerMan ( talk) 01:03, 18 August 2009 (UTC)
This is not Bankers/Accountants rounding, as I was taught to do it in accounting class, and as I have applied it in a number of business systems over the years.
I've done some google searching, but I have not yet found anything close to a decent reference for it.
Briefly, the "difference" caused by rounding one value (a fraction of a cent) is carried forward and added to the next result, right before rounding. That carries through all the way to the end. The objective is that if you're adding "p%" of interest to each one of a long line of accounts, then each one will get "p%" of interest (within a penny), and the final result will show that (total before interest) * (1+p%) = (total of the final amounts computed for each of the accounts). And yes, when done right, the final totals always do match.
So we should be looking for sources that say "start with 0.5 in your 'carry'. Add the 'carry' before rounding. Set the 'carry' to the amount 'gained' or 'lost' by this rounding."
-- Jeff Grigg 68.15.44.5 ( talk) 00:44, 20 October 2017 (UTC)
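The carry-forward scheme described above can be sketched as follows. This is my own illustration (function name and the cents representation are assumptions); a real business system would use exact decimal arithmetic rather than binary doubles.

```javascript
// Carry-forward rounding as described above: round each amount to
// whole cents while carrying the fractional remainder into the next
// amount, so the rounded items total to the rounded total.
function cascadeRound(amountsInCents) {
  let carry = 0.5; // per the description: start with 0.5 in the carry
  return amountsInCents.map((x) => {
    const r = Math.floor(x + carry); // add the carry before rounding
    carry = x + carry - r;           // amount gained/lost by this rounding
    return r;
  });
}

const amounts = [10.4, 10.4, 10.4, 10.4, 10.4]; // cents; exact total 52.0
const rounded = cascadeRound(amounts);          // totals to 52
```

With the initial carry of 0.5, the first item is effectively rounded half-up; the residual then propagates so that the column total matches the rounded total, which is exactly the invoice property described above.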
I deleted the long list of rounding functions in language X, Y, Z, ... It was not useful to readers (programmers will look it up in the language's article, not here), it was waaaaay too long (and still far from complete), and extremely repetitive (most languages now simply provide the four basic IEEE rounding functions -- trunc,round,ceil,floor). The lists were not discarded but merely moved to the respective language articles (phew!). All the best, -- Jorge Stolfi ( talk) 22:31, 6 December 2009 (UTC)
I was considering adding this:
This is explained in The Art of Computer Programming chapter 4, I think, which I'd cite if my copy were not hidden in a box somewhere. — Tamfang ( talk) 03:04, 19 July 2010 (UTC)
I was taught that "round half to even" is also called the "7-up" rule. Does anybody else know about this? 216.58.55.247 ( talk) 00:36, 26 December 2010 (UTC)
I've cut down the sections on dithering and scaled rounding. The dithering is better dealt with elsewhere. The scaled rounding had no citations and was long and rambling and dragged in floating point for no good reason. Dmcq ( talk) 23:30, 10 December 2011 (UTC)
With particular reference to Rounding § Double rounding, it seems significant that when an odd radix is used, AFAICT multiple rounding is stable provided that we start with finite precision (actually a broader class of values that excludes only a limited set of recurring values). Surely work has been published on this, and it would be relevant for mentioning in this article? — Quondum 06:17, 20 November 2012 (UTC)
I wonder whether a paragraph should be added on Von Neumann rounding and sticky rounding, or this is regarded as original research (not really used in practice except internally, in particular no public API, no standard, no standard terminology...). In short: Von Neumann rounding (introduced by A. W. Burks, H. H. Goldstine, and J. von Neumann, Preliminary discussion of the logical design of an electronic computing instrument, 1963, taken from report to U.S. Army Ordnance Department, 1946) consists in replacing the least significant bit of the truncation by a 1 (in the binary representation); the goal was to get a statistically unbiased rounding (as claimed in this paper) without carry propagation. In On the precision attainable with various floating-point number systems, 1972, Brent suggested not to do that for the exactly representable results. This corresponds to the sticky rounding mode (term used by J. S. Moore, T. Lynch, and M. Kaufmann, A Mechanically Checked Proof of the Correctness of the Kernel of the AMD5K86™ Floating-Point Division Algorithm, 1996), a.k.a. rounding to odd (term used by S. Boldo and G. Melquiond, Emulation of a FMA and correctly-rounded sums: proved algorithms using rounding to odd, 2006); it can be used to avoid the double-rounding problem. Vincent Lefèvre ( talk) 13:58, 20 April 2015 (UTC)
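For the binary case, the sticky / round-to-odd rule can be sketched as follows. This is my own illustration, working on scaled values rather than raw bit patterns; the function name is an assumption.

```javascript
// Rounding to odd at a given number of fractional bits: exact values
// are kept unchanged (Brent's refinement); otherwise truncate and
// force the last kept bit to 1. The two neighbours of an inexact
// value differ by one ulp, so exactly one of them is odd.
function roundToOddBits(x, fracBits) {
  const scale = 2 ** fracBits;
  let t = Math.floor(x * scale);              // truncate toward minus infinity
  if (t / scale !== x && t % 2 === 0) t += 1; // inexact and even: bump to odd
  return t / scale;
}
// e.g. at 1 fractional bit: 1.1 and 1.25 both go to 1.5,
// while the exact value 1.5 stays 1.5.
```

Rounding first to odd at a wider precision and then rounding to nearest-even at the final precision avoids the double-rounding problem, which is the use case Boldo and Melquiond describe.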
> Round to prepare for shorter precision: For a BFP or HFP permissible set, the candidate selected is the one whose voting digit has an odd value. For a DFP permissible set, the candidate that is smaller in magnitude is selected, unless its voting digit has a value of either 0 or 5; in that case, the candidate that is greater in magnitude is selected.
Here, BFP is IEEE754 binary, DFP is IEEE754 decimal, and HFP is old-style S/360 binary/hexadecimal floating. I think it is worth listing. Netch 06:42, 07 June 2021 (UTC)
I think this article needs to explain that 'Half rounds up' is NOT asymmetric for things like stop watches, whereas 'Half rounds down' is just plain wrong in such cases. This may (or may not) be part of the reason for 'Half rounds up' being the more common rule.
The point is that when a stopwatch says, for instance, 0.4, this really means 'at least 0.4 but less than 0.5', which averages to 0.45. So rounding becomes symmetric - 0.0 is really 0.05, which loses 0.05, matched by the gain of 0.05 when 0.9 (which is really 0.95) rounds up, and similarly 0.1 (which is really 0.15) matches 0.8 (which is really 0.85), 0.2 (which is really 0.25) matches 0.7 (which is really 0.75), 0.3 (which is really 0.35) matches 0.6 (which is really 0.65), and finally 0.4 (which is really 0.45) matches 0.5 (which is really 0.55).
Presumably quite a lot of other measurement processes have relevant similarities to stopwatches.
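The symmetry argument above can be checked numerically. A small sketch (mine, not part of the comment): a truncated reading t stands for a true value averaging t + 0.05, and round-half-up of the readings 0.0 to 0.9 recovers that average.

```javascript
// Truncated stopwatch readings 0.0, 0.1, ..., 0.9
const readings = Array.from({ length: 10 }, (_, k) => k / 10);
const meanOf = (a) => a.reduce((s, v) => s + v, 0) / a.length;

// Each reading t represents true values in [t, t + 0.1), averaging t + 0.05
const trueMean = meanOf(readings) + 0.05;                        // 0.5
// Round half up: readings 0.0-0.4 go to 0, readings 0.5-0.9 go to 1
const roundedMean = meanOf(readings.map((t) => Math.floor(t + 0.5))); // 0.5
```

The two means agree, illustrating why 'half rounds up' is unbiased for truncated measurements.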
I assume there are Reliable Sources out there which say this better than I can, but I'm not the right person to go looking for them, partly thru lack of interest and partly thru lack of knowledge of where to look. So I prefer to just raise the topic here and let others more interested and more competent than me take the matter further.
Incidentally a good case can be made for putting much of the above in the article straight away without looking for reliable sources to back it up (as most of the article has no such sources - the self-evident truth doesn't need backing sources), but, if so, at least for now I prefer to leave it up to somebody else to try to do that, as such a person is less at risk of being accused of 'bias in favor of his own inadmissible original research' (the self-evident truth is not original research, but can always be labelled as such in an environment like Wikipedia). Tlhslobus ( talk) 10:13, 12 April 2016 (UTC)
On second thoughts, I decided to just put in a bit of that self-evident truth, and see how it fares. Tlhslobus ( talk) 10:43, 12 April 2016 (UTC)
A deletion discussion on a related topic is occurring at Wikipedia:Articles for deletion/Mathcad rounding syntax. One potential outcome that I intend to suggest is a merge to the "Rounding functions in programming languages" section here. Please participate if you have an opinion. — David Eppstein ( talk) 19:21, 28 June 2016 (UTC)
There's something I'm not getting right here. The text says -23.5 should round to -24 but when I try to apply the formula I get -23.
Can somebody tell me what's wrong? Maybe I'm not fully understanding the part. -- 181.16.134.10 ( talk) 19:42, 26 December 2016 (UTC)
Hi David Eppstein,
I am leaving this as I am curious as to why you undid my revisions to the rounding Wikipedia page. I am not sure what you mean by "correctness needs analysis of overflow conditions" in this context. Help would be appreciated.
Thanks!
Elyisgreat ( talk) 02:10, 15 January 2017 (UTC)
The following (original research) formula implements truncate(y) without the singularity at y=0. Requires abs and floor functions:
70.190.166.108 ( talk) 17:39, 15 January 2017 (UTC)
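The formula itself appears to have been lost from the page. One standard singularity-free identity along these lines, using a sign function rather than dividing by |y| (this is my substitution, not necessarily what the original poster meant), is trunc(y) = sign(y)·⌊|y|⌋:

```javascript
// Truncation toward zero built from sign, abs and floor; there is no
// division by y, so no singularity at y = 0 (Math.sign(0) is 0).
const truncate = (y) => Math.sign(y) * Math.floor(Math.abs(y));
```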
I've done some article-wide cleanup of style and markup, especially semantically distinguishing different uses of what visually renders as italics, with {{var}} (template wrapper for <var>...</var>) and {{em}} (<em>...</em>) where appropriate, and by occasionally removing some pointless, brow-beating emphasis. More could be done. For example, it seems unnecessary and reader-annoying to keep italicizing every single mention of a rounding algorithm/approach after the first instance (which is already boldfaced), except where we're talking about them as words-as-words (as in "The term banker's rounding ..."). Also did some other MOS:NUM-related cleanup, such as using non-breaking spaces in "y × q = ...", not "y×q=..." with line-breakable regular spaces. Might have missed a couple of instances, but I did this in an external text editor and was pretty thorough.
However, I did not touch anything in <math>...</math> or {{math}} markup; I'm not sure whether those support {{var}} (or raw HTML <var>...</var>). I will note that the presentation of variables inside math-markup code blocks is wildly inconsistent, and should be normalized to var markup (if possible) or at least to non-semantic italics, for consistency and to avoid confusing the reader. — SMcCandlish ☺ ☏ ¢ ≽ʌⱷ҅ᴥⱷʌ≼ 02:21, 11 September 2017 (UTC)
The Round half to odd section currently contains:
This variant is almost never used in computations, except in situations where one wants to avoid rounding 0.5 or −0.5 to zero; or to avoid increasing the scale of floating point numbers, which have a limited exponent range. With round half to even, a non-infinite number would round to infinity, and a small denormal value would round to a normal non-zero value. Effectively, this mode prefers preserving the existing scale of tie numbers, avoiding out-of-range results when possible for even-based number systems (such as binary and decimal).
But this tie-breaking rule concerns only halfway numbers, so that I don't see how "increasing the scale of floating point numbers" (or returning infinity) can be avoided. And the next sentence, citing an article, seems dubious too:
This system is rarely used because it never rounds to zero, yet "rounding to zero is often a desirable attribute for rounding algorithms".
Again, this tie-breaking rule concerns only halfway numbers, so that "it never rounds to zero" is incorrect. Moreover, this sentence would also imply that "round half away from zero" and "round away from zero" would also be rarely used, which is not true. Vincent Lefèvre ( talk) 14:47, 11 September 2017 (UTC)
@Vincent Lefèvre: I guess that "never to zero" means that the applicable rounding case (half-number) will not round to zero. However, whether or not this is desirable is context dependent. For example, in fixed point, rounding towards uneven will only produce a single bit-change, while rounding to even may trigger a full add. Paamand ( talk) 09:04, 21 September 2017 (UTC)
The VAT Guide cited states:
Note: The concession in this paragraph to round down amounts of VAT is designed for invoice traders and applies only where the VAT charged to customers and the VAT paid to Customs and Excise is the same. As a general rule, the concession to round down is not appropriate to retailers, who should see paragraph 17.6.
and paragraph 17.6 says the rounding down only is not allowed. So I don't think it is "quite clearly" stated. -- 81.178.31.210 17:27, 2 September 2006 (UTC)
Independent from the preceding question, VAT on items in an invoice is an example for the need to round each item such that the sum of the rounded items equals the rounded sum of the items. A particular solution for positive items only is the Largest_remainder_method, a more general one is Curve_fitting. Perhaps someone who reads German may include (and expand?) in the article material from https://de.wikipedia.org/wiki/Rundung#Summenerhaltendes_Runden. -- Wegner8 07:04, 18 September 2017 (UTC)
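The sum-preserving idea can be sketched for the positive-item case with the largest remainder method. This is my own illustration (the function name and the integer-cents convention are assumptions): each item is floored, then the leftover units go to the items with the largest fractional parts, so the rounded items add up exactly to the rounded total.

```javascript
// Sum-preserving ("largest remainder") rounding for positive amounts.
function roundPreservingSum(values) {
  const floors = values.map(Math.floor);
  const total = Math.round(values.reduce((a, b) => a + b, 0));
  let leftover = total - floors.reduce((a, b) => a + b, 0);

  // Indices sorted by descending fractional part
  const order = values
    .map((v, i) => [v - Math.floor(v), i])
    .sort((a, b) => b[0] - a[0]);

  const result = floors.slice();
  for (const [, i] of order) {
    if (leftover <= 0) break;
    result[i] += 1; // give one leftover unit to the largest remainder
    leftover -= 1;
  }
  return result;
}

// e.g. three VAT amounts in cents: [33.33, 33.33, 33.34]
// rounds to items that total exactly 100.
```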
I'm not sure if it really merits coverage in this article, but this blog page [1] describes "Argentina" and "Swiss" rounding.
"Argentina Rounding" is (roughly but not exactly) rounding to halves, rather than whole digits. "Roughly" because (in my view) they only look at one digit.
"Swiss Rounding" is (if I understand it correctly) rounding to quarters. Like rounding to 0.0, 0.25, 0.5, 0.75, and 1.0 as rounded results.
-- Jeff Grigg 68.15.44.5 ( talk) 00:32, 20 October 2017 (UTC)
The article has just stochastic rounding for ties; however, the term stochastic rounding is applied in, for instance,
Gupta, Suyog; Agrawal, Ankur; Gopalakrishnan, Kailash; Narayanan, Pritish (9 February 2016). "Deep Learning with Limited Numerical Precision". ArXiv. p. 3.
where the probability of rounding x to ⌊x⌋ is proportional to the proximity of x to ⌊x⌋:
Rounding in Monte Carlo rounding is random; the above can be considered one form of Monte Carlo rounding, but others can be used, and it can be used with multiple runs to test the stability of a result. The stochastic rounding above has the property that addition is unbiased. There's a lot about Monte Carlo arithmetic at Monte Carlo Arithmetic. Dmcq ( talk) 16:17, 21 October 2017 (UTC)
In my opinion it would be nice to add a starting section which says something about the axioms of rounding operations, possibly starting with "what rounding is" in a most general sense and then adding some more specializing axioms for the various types of rounding in use. However it's not yet clear to me how much of such an "axiomatic rounding theory" already has been developed in the research community. So it might also be a bit too early to discuss it here in the context of a WP article. I'll check some of the resources I know that exist on such an axiomatization and post it here for further discussion. Any other input is highly welcome, thanks! Axiom0 ( talk) 14:37, 7 March 2018 (UTC)
But note that this is already more specific (floating-point only) than what is considered in the WP article. Vincent Lefèvre ( talk) 16:00, 7 March 2018 (UTC)
The text presently said that the round-half-to-even "rule will introduce a towards-zero bias when y − 0.5 is even".
I assume this means that with a set such as 2.5, 2.5, 4.5, 10.5, the result of rounding will be 2, 2, 4, 10, which has a towards-zero bias. But −1.5, −1.5, −11.5, −7.5 are also of the form "y − 0.5 is even", and they round to −2, −2, −12, −8. So really this should read:
"rule will introduce a towards-negative-infinity bias when y − 0.5 is even".
So I've fixed this. Boud ( talk) 16:41, 24 April 2018 (UTC)
I think it's actually round half away from zero that is widely used in many disciplines. StephenJohns00 ( talk) 04:56, 13 August 2018 (UTC)
Here is my proposed text, simplified once more. Wegner8 08:06, 28 October 2018 (UTC)
=== Rounding of summands preserving the total: VAT rounding ===
Rounding preserving the total means rounding each summand in a way that the total of the rounded numbers equals their rounded total. The [[Largest remainder method]] is the special case with positive summands only.
Among other purposes, this procedure is practised (a) for the [[proportional representation]] in a legislative body with a fixed number of members and (b) if in an invoice the total [[VAT]] is to be distributed to the items keeping the addition of each column correct.
If, after rounding each summand as usual, their sum is too large or too small, one rounds the necessary number of summands away from their closest rounded values towards the second closest, such that (a) the desired total is achieved and (b) the [[absolute value]] of the total of all rounding differences becomes minimal.
Can you please, instead of simply deleting potentially useful material, help finding the right place and wording for it? Where would someone with the VAT problem look for a solution? -- Thanks for the hint to the article Party-list proportional representation; this link should replace the link to the Largest remainder method in the proposed text above. -- Wegner8 06:49, 29 October 2018 (UTC)
The article begins as follows: "Rounding a number means replacing it with a different number that is approximately equal to the original ...". Isn't this exactly what VAT rounding does? Everyone talks about fake news; denying facts is a twin of fake news. Please restore a paragraph on VAT rounding. -- Wegner8 08:19, 10 January 2019 (UTC) — Preceding unsigned comment added by Wegner8 ( talk • contribs)
There is a proposal on Talk:Nearest integer function to merge that article into this one; see Talk:Nearest_integer_function#Merge_into_Rounding. -- JBL ( talk) 22:16, 18 April 2019 (UTC)
In the history section there is a short bit on age heaping at the end, which is a kind of rounding people do that seems to be more of a psychological phenomenon and a matter of correcting statistics than anything to do with rounding as described in the rest of the article. Does that really belong in this article? Dmcq ( talk) 11:44, 24 September 2019 (UTC)
No mention of faithful rounding and why it's a good idea. I think it means logic can be smaller and generally the rounding/overflow/underflow is quicker, but I came here for more info and didn't find any. — Preceding unsigned comment added by ChippendaleMupp ( talk • contribs) 15:50, 13 February 2020 (UTC)
My apologies Dr. Lefèvre, I honestly thought I was correcting a typo with my edit.
I found this example fascinating, but when I tried to work it out by substituting different values for n, it didn't seem to add up.
Here is the original text before my edit:
For instance, if Goldbach's conjecture is true but unprovable, then the result of rounding the following value up to the next integer cannot be determined: 1 + 10^−n, where n is the first even number greater than 4 which is not the sum of two primes, or 1 if there is no such number. The rounded result is 2 if such a number n exists and 1 otherwise.
Let's first take the case where the number doesn't exist, so we substitute n = 1 in the formula: 1 + 10^−1 = 1.1.
Now let's take the other case, where an even number greater than 4 exists that is not a sum of two primes. That number would presumably be very large, but let's start with, say, n = 6: 1 + 10^−6 = 1.000001.
Clearly, the larger the n, the closer the resulting value will be to 1. For a very large n we would get 1.000...0001 with very many zeroes in between.
So the values we get in both cases of the example, 1.1 and 1.000...0001, will both end up being rounded to the same number, regardless of which rounding mode we use. So how do we get this formula to round to 1 or 2, depending on the provability of Goldbach's Conjecture?
The only way I can think of getting different rounded values is to use n = 0 for the case when no such number exists, which would cause the above formula to round to 2 in that case and to 1 in the other case (when rounding to nearest integer).
Clearly I have misunderstood something here, but I can't seem to figure out what. I would really appreciate if someone could elaborate this example.
Grnch ( talk) 21:51, 27 March 2021 (UTC)
1 + 10^−n, where n is the first even number greater than 4 which is not the sum of two primes, or 1 if there is no such number
<math display="block">
as everything in it appears as an image, which is bad for accessibility or if one wants to copy-paste. I don't know whether there is wikicode to solve that. Alternatively, adding "either" before "1+10−n" would make the sentence unambiguous, IMHO. —
Vincent Lefèvre (
talk) 11:01, 31 March 2021 (UTC)I didn't find any description for rounding to the nearest 1/2 integer value, i.e. to values of 0, 0.5, 1, 1.5, etc.-- Fkbreitl ( talk) 12:00, 8 May 2021 (UTC)
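Rounding to the nearest half-integer is the m = 0.5 case of rounding to a multiple: scale by 2, round to an integer, scale back. A minimal sketch (my addition, not from the article):

```javascript
// Round to the nearest multiple of 0.5; ties behave as Math.round's
// round-half-up does after scaling.
const roundToHalf = x => Math.round(x * 2) / 2;
console.log(roundToHalf(1.3));  // 1.5
console.log(roundToHalf(1.1));  // 1
```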
I thought it would be useful to include the formula for rounding up and down a number to the nearest multiple of another positive number, since there's already a section for rounding to a specific multiple ( /info/en/?search=Rounding#Rounding_to_a_specified_multiple), and a section for rounding up and down to nearest integer ( /info/en/?search=Rounding#Rounding_down and /info/en/?search=Rounding#Rounding_up). The formulas would be ⌊x/m⌋·m for rounding down and ⌈x/m⌉·m for rounding up, where m is the positive multiple.
So, I added them to the article, but the user Vincent Lefèvre removed them. He said that "This is a particular case of what is described in this section (which is not restricted to rounding to nearest)". While that's true, I don't see why that's a problem or a reason to remove them. I mean, rounding up or down (using the ceiling and floor functions) is a particular case of rounding, so why don't we also remove the sections "Rounding down" and "Rounding up"? -- Alej27 ( talk) 19:24, 3 March 2022 (UTC)
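For reference, formulas of the kind the thread describes amount to the following sketch (m is any positive step):

```javascript
// Round x down, up, or to nearest, to a multiple of a positive step m.
const floorToMultiple = (x, m) => Math.floor(x / m) * m;
const ceilToMultiple  = (x, m) => Math.ceil(x / m) * m;
const roundToMultiple = (x, m) => Math.round(x / m) * m;
console.log(floorToMultiple(7, 3)); // 6
console.log(ceilToMultiple(7, 3));  // 9
```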
Hi @ Boh39083, I notice you've gotten into a bit of a revert war about your recent changes. Maybe you want to discuss your rationale here a bit? (cf. "bold–revert–discuss".) Trying to make arguments in edit summaries is not the most effective in my experience. – jacobolus (t) 16:04, 19 November 2023 (UTC)
In JS, I tested a simple conversion of 1/3 into a percentage (a JS number is double-precision floating point). Dividing first and then multiplying gives a larger discrepancy than multiplying first and then dividing, because a division can land on a non-representable value, forcing a rounding:
(1/3*100).toFixed(16)
'33.3333333333333286'
(1*100/3).toFixed(16)
'33.3333333333333357'
This is because the former did 1/3 first, which cannot be exactly represented in double-precision floating-point format, so that number gets rounded and then multiplied by 100, which magnifies the error. The latter did 1*100 first, which is exactly representable, then divided by 3, so rounding occurs only at the final step. Joeleoj123 ( talk) 02:04, 18 December 2023 (UTC)
Rounding ALL numbers requires rounding the absolute value of the number and then restoring the original sign (+ or -). The above answer is "-2", not "-1". If the digit preceding a "5" is ODD, make it EVEN by adding "1". If the digit preceding a "5" is EVEN, leave it alone, as there are an equal amount of ODD digits and EVEN digits in the counting system.
Example: "1.6 - 1.6 = 0", "1.6 + (-1.6) = 0"; rounding to the 1's unit gives "2.0 + (-2.0) = 0", as |-1.6| is 1.6, which rounds to "2.0".
Example: "1.4 - 1.4 = 0", "1.4 + (-1.4) = 0"; rounding to the 1's unit gives "1.0 + (-1.0) = 0", as |-1.4| is 1.4, which rounds to "1.0".
Example: "1.5 - 1.5 = 0", "1.5 + (-1.5) = 0"; rounding to the 1's unit gives "2.0 + (-2.0) = 0", as |-1.5| is 1.5, which rounds to "2.0".
Example: "2.5 - 2.5 = 0", "2.5 + (-2.5) = 0"; rounding to the 1's unit gives "2.0 + (-2.0) = 0", as |-2.5| is 2.5, which rounds to "2.0".
The Rounding of a number can never give a value of "0.0".
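The rule described above can be sketched in JS (my reading of the comment, not a standard library function): round the absolute value to the nearest integer, breaking ties to even, then restore the sign.

```javascript
// Round |x| to the nearest integer, ties to even, then restore the sign.
function roundHalfEvenSym(x) {
  const sign = x < 0 ? -1 : 1;
  const a = Math.abs(x);
  const f = Math.floor(a);
  const frac = a - f;
  let r;
  if (frac < 0.5) r = f;
  else if (frac > 0.5) r = f + 1;
  else r = f % 2 === 0 ? f : f + 1; // exact tie: choose the even neighbor
  return sign * r;
}
console.log(roundHalfEvenSym(-1.5)); // -2 (1.5 ties to the even 2)
console.log(roundHalfEvenSym(-2.5)); // -2 (2.5 ties to the even 2)
console.log(roundHalfEvenSym(-1.6)); // -2
console.log(roundHalfEvenSym(-1.4)); // -1
```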
I don't know if it's a good idea to merge with floor function, but they are related. A sort of "family tree" of rounding functions:
From the "Round-to-even method" section in this article, as of right now:
"When dealing with large sets of scientific or statistical data, where trends are important, traditional rounding on average biases the data upwards slightly. Over a large set of data, or when many subsequent rounding operations are performed as in digital signal processing, the round-to-even rule tends to reduce the total rounding error, with (on average) an equal portion of numbers rounding up as rounding down."
Huh? Doesn't traditional rounding have an equal portion of numbers rounding up and down? In traditional rounding, numbers between 0 and <5 are rounded to 0, while numbers between 5 and <10 are rounded to 10, if 10 is an increase in the next highest digit of the number being rounded. The difference of (<10 - 5) equals the difference of (<5 - 0), doesn't it? Am I missing something? 4.242.147.47 21:13, 19 September 2006 (UTC)
In four cases (1, 2, 3 and 4) the value is rounded down. In five cases (5,6,7,8,9) the value is rounded up. In one case (0) the value is left unchanged. This may be what you were missing. 194.109.22.149 15:14, 28 September 2006 (UTC)
But "unchanged" is not entirely correct, as there may be further digits after the 0. For example, rounded to one decimal place, 2.304 would round to 2.3; it is not unchanged under the traditional rounding scheme, but rather rounded down, thus making five cases for rounding down and five for rounding up.
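The counting argument can be checked numerically. A small sketch (my illustration) averaging the signed rounding error over two decades of one-decimal values, for round-half-up versus round-half-to-even:

```javascript
// Mean signed error of a rounding rule over x = 2.0, 2.1, ..., 3.9.
function meanError(roundFn) {
  let total = 0;
  for (let d = 0; d < 20; d++) {
    const x = (20 + d) / 10;
    total += roundFn(x) - x;
  }
  return total / 20;
}
const halfUp = x => Math.floor(x + 0.5);
const halfEven = x => {
  const f = Math.floor(x);
  const r = x - f;
  if (r < 0.5) return f;
  if (r > 0.5) return f + 1;
  return f % 2 === 0 ? f : f + 1; // tie: choose the even neighbor
};
console.log(meanError(halfUp));   // ≈ +0.05: the slight upward bias
console.log(meanError(halfEven)); // ≈ 0: the 2.5 and 3.5 ties cancel
```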
I am amazed that any serious scientist could argue that there are either x+1 or x−1 numbers between integers in a base 10 system. When fully discrete, there are an equal amount of numbers that go to the lower integer as to the higher integer. 2.0, 2.1, 2.2, 2.3 and 2.4 go to 2. 2.5, 2.6, 2.7, 2.8 and 2.9 go to 3. 5 each. The same would have been true from 1.0 to 1.9 and then from 3.0 to 3.9. Every single integer will have 10 numbers that can result in that rounding. The tie-breaking rule is silly and, as engineers have known for years, wrong. If you want the true answer, you never analyze rounded numbers! You must use the actual values to at least one significant digit beyond what you are trying to analyze. This is confusing kids in school today, unnecessarily. Go back to what we have known for years, and can be proven to be correct. And never, ever analyze rounded numbers to the same significant digit if you know the data set is more discrete. — Preceding unsigned comment added by 24.237.80.31 ( talk) 04:24, 3 November 2013 (UTC)
This depends on the model and the context. You can assume a uniform distribution in the interval [0,1], in which case the midpoint 0.5 appears with a null probability, so that under this assumption, the choice for half-points doesn't matter. In practice, this is not true because one has specific inputs and algorithms, but then, the best choice depends on the actual distribution of the inputs (either true inputs of a program or intermediate results). Vincent Lefèvre ( talk) 13:37, 20 April 2015 (UTC)
There seems to be at least one other method; rounding up or down with probability given by nearness.
It is given in javascript by
R = Math.round(X + Math.random() - 0.5)
It has the possible advantage that the average value of R for a large set of roundings of constant X is X itself.
82.163.24.100 11:51, 29 January 2007 (UTC)
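A quick check (my sketch) of the claimed property: averaging many such roundings of a constant X recovers X, because the value rounds up with probability equal to its fractional part.

```javascript
// Probabilistic rounding as given above: rounds up with probability
// equal to the fractional part of X, so its expected value is X.
const stochasticRound = X => Math.round(X + Math.random() - 0.5);
let sum = 0;
const trials = 100000;
for (let i = 0; i < trials; i++) sum += stochasticRound(0.3);
console.log(sum / trials); // close to 0.3, up to sampling noise
```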
Can somebody please tell me, why is it - and why does it make sense - that we round, say, 1.5 to 2 and not to 1? Here's my thinking: 1.1, 1.2, 1.3 and 1.4 go to 1 - 1.6, 1.7, 1.8, 1.9 go to 2. So 1.5 is of course right in the middle. Why should we presume it's more 2ish than 1ish, if it's equally close? Do you think it's just because teachers are nice, and when they do grade averages, they want to give you a little boost, rather than cut you down? Is there some philosophical or mathematical justification? Wik idea 23:38, 6 February 2008 (UTC)
You forgot 1.0. —Preceding
unsigned comment added by
207.245.46.103 (
talk) 19:08, 7 February 2008 (UTC)
Refer to the discussion above, the "Which method introduces the most error?" question. I guess I retract my 1.0 statement. —Preceding unsigned comment added by 207.245.46.103 ( talk) 18:30, 8 February 2008 (UTC)
The floating point unit on the common PC works with IEEE-754 floating binary point numbers. I have not seen this page or this talk page address the problem of rounding binary fraction numbers to a specified number of decimal fraction digits. The problem results from the fact that variables with binary point fractions cannot generally exactly represent decimal fraction numbers. One can see this from the following table, which attempts to represent the numbers from 1.23 to 1.33 in 0.005 increments.
ExactFrac Approx. Closest IEEE-754 (64-bit floating binary point number)
1230/1000 1.230 1.229999999999999982236431605997495353221893310546875
1235/1000 1.235 1.2350000000000000976996261670137755572795867919921875
1240/1000 1.240 1.2399999999999999911182158029987476766109466552734375
1245/1000 1.245 1.24500000000000010658141036401502788066864013671875
1250/1000 1.250 1.25
1255/1000 1.255 1.25499999999999989341858963598497211933135986328125
1260/1000 1.260 1.2600000000000000088817841970012523233890533447265625
1265/1000 1.265 1.2649999999999999023003738329862244427204132080078125
1270/1000 1.270 1.270000000000000017763568394002504646778106689453125
1275/1000 1.275 1.274999999999999911182158029987476766109466552734375
1280/1000 1.280 1.2800000000000000266453525910037569701671600341796875
1285/1000 1.285 1.2849999999999999200639422269887290894985198974609375
1290/1000 1.290 1.29000000000000003552713678800500929355621337890625
1295/1000 1.295 1.2949999999999999289457264239899814128875732421875
1300/1000 1.300 1.3000000000000000444089209850062616169452667236328125
1305/1000 1.305 1.3049999999999999378275106209912337362766265869140625
1310/1000 1.310 1.310000000000000053290705182007513940334320068359375
1315/1000 1.315 1.314999999999999946709294817992486059665679931640625
1320/1000 1.320 1.3200000000000000621724893790087662637233734130859375
1325/1000 1.325 1.3249999999999999555910790149937383830547332763671875
1330/1000 1.330 1.3300000000000000710542735760100185871124267578125
The exact result of doing the division indicated in the first column is shown as the long number in the third column. The intended result is in the second column. The thing to note is that only one of the quotients is exact; the others are only approximate. Thus when we are trying to round our numbers up or down, we cannot use rules based on simple greater-than, less-than, or at some exact decimal fraction value, because, in general, these decimal fraction values have no exact representation in binary fraction numbers. JohnHD7 ( talk) 22:08, 28 June 2008 (UTC)
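The practical consequence in JS (my illustration, using values from the table above): toFixed rounds the exact binary value, so decimal inputs that look like halfway cases can go the "wrong" way:

```javascript
// 1.235 is stored as 1.2350000000000000977..., just above the decimal
// halfway point, so it rounds up; 1.255 is stored as
// 1.2549999999999998934..., just below, so it rounds down.
console.log((1.235).toFixed(2)); // "1.24"
console.log((1.255).toFixed(2)); // "1.25"
```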
I've seen two sources that claim that it's correct to round (e.g.) 2.459 to 2.4 rather than 2.5, though they both give the same bogus logic: that 2.45 might as well be rounded to 2.4 as to 2.5, and so this also applies to 2.459, even though this is a different number and is obviously closer to 2.5. But are we sure this isn't standard practice in some crazy field or other? Evercat ( talk) 20:51, 17 September 2008 (UTC)
Please comment: rounding 2.50001 to the nearest integer. Is it 2 or 3? Redding7 ( talk) 01:42, 5 January 2010 (UTC)
Please edit ‘Rounding functions in programming languages’ for naming consistency. I'm not sure which names you prefer, so I'll let you do it, but please be consistent. Shinobu ( talk) 07:07, 8 November 2008 (UTC)
The section "Common method" does not explain negative numbers. The first example of negative just says "one rounds up as well" but shows different behavior for the case of 5, otherwise the same as positive. The second example is supposed to be the opposite but shows the same thing! It calls it "down" instead of "up", but gives the same answer. — Długosz ( talk) 19:55, 18 February 2009 (UTC)
The article claims that
I am bothered by this paragraph because (1) evidence should be provided that meteorologists actually do this; it may well be an original invention by the editor. Also (2) tallying days with negative temperature seems a rather unscientific idea, since even small errors in measurement could lead to a large error in the result. If the station's thermometer says -0.4C or -0.1C, it is not certain that street puddles are freezing. On the other hand, if the thermometer says +0.1C or +0.4C, the puddles may be freezing nonetheless. To compare the harshness of weather, one should use a more robust statistic, such as the mean temperature. Therefore, there does not seem to be a good excuse for preserving the minus sign. All the best, -- Jorge Stolfi ( talk) 06:03, 1 August 2009 (UTC)
I recall reading about a method (I believe on Wikipedia) some months and/or years ago, allegedly used in some banking systems to not acrue (or lose) pennies or cents over time, it was rounded as follows, if I recall correctly:
$XX.XXY was rounded 'up' if Y was odd, and 'down' if Y was even. (5 of the 10 options going in each direction)
Eg:
$10.012 was rounded to $10.00 |
$10.013 was rounded to $10.01 |
$10.014 was rounded to $10.00 |
$10.015 was rounded to $10.01 |
$10.016 was rounded to $10.00 |
$10.017 was rounded to $10.01 |
I forget how it handled negative cases... The 'bankers round' (Round half to even) given does serve one of the same purposes as symmetric rounding, but in a different way, potentially with a bias that's relevant to finances. I'm wondering if the article is missing this method (and if so, whether it's worth someone who knows the details adding it accurately), whether it's documented elsewhere on Wikipedia, whether it's even been used at all, or whether my earlier source (possibly Wikipedia, iirc) was simply wrong. The advantage of it seemed to be that results are basically 'random' with an even distribution, in the sense that values are not moved to the closest number at the required level of precision where identifiable, but with repeatable results. FleckerMan ( talk) 01:03, 18 August 2009 (UTC)
This is not Bankers/Accountants rounding, as I was taught to do it in accounting class, and as I have applied it in a number of business systems over the years.
I've done some google searching, but I have not yet found anything close to a decent reference for it.
Briefly, the "difference" caused by rounding one value (a fraction of a cent) is carried forward and added to the next result, right before rounding. That carries through all the way to the end. The objective is that if you're adding "p%" of interest to each one of a long line of accounts, then each one will get "p%" of interest (within a penny), and the final result will show that (total before interest) * (1+p%) = (total of the final amounts computed for each of the accounts). And yes, when done right, the final totals always do match.
So we should be looking for sources that say "start with 0.5 in your 'carry'. Add the 'carry' before rounding. Set the 'carry' to the amount 'gained' or 'lost' by this rounding."
-- Jeff Grigg 68.15.44.5 ( talk) 00:44, 20 October 2017 (UTC)
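My reading of the scheme described above, as a sketch (an interpretation, not a sourced algorithm); amounts are in cents, so only the carry holds fractions:

```javascript
// Carry-forward rounding: start the carry at 0.5, add it before
// flooring, and pass the remainder on to the next amount. The rounded
// results then always total to the rounded grand total.
function roundWithCarry(amounts) {
  let carry = 0.5;
  return amounts.map(a => {
    const r = Math.floor(a + carry);
    carry = a + carry - r; // the fraction lost or gained by this rounding
    return r;
  });
}
// Three accounts each owed 33.4 cents (100.2 cents in total):
console.log(roundWithCarry([33.4, 33.4, 33.4])); // [ 33, 34, 33 ], total 100
```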
I deleted the long list of rounding functions in language X, Y, Z, ... It was not useful to readers (programmers will look it up in the language's article, not here), it was waaaaay too long (and still far from complete), and extremely repetitive (most languages now simply provide the four basic IEEE rounding functions -- trunc,round,ceil,floor). The lists were not discarded but merely moved to the respective language articles (phew!). All the best, -- Jorge Stolfi ( talk) 22:31, 6 December 2009 (UTC)
I was considering adding this:
This is explained in The Art of Computer Programming chapter 4, I think, which I'd cite if my copy were not hidden in a box somewhere. — Tamfang ( talk) 03:04, 19 July 2010 (UTC)
I was taught that "round half to even" is also called the "7-up" rule. Does anybody else know about this? 216.58.55.247 ( talk) 00:36, 26 December 2010 (UTC)
I've cut down the sections on dithering and scaled rounding. The dithering is better dealt with elsewhere. The scaled rounding had no citations and was long and rambling and dragged in floating point for no good reason. Dmcq ( talk) 23:30, 10 December 2011 (UTC)
With particular reference to
Rounding § Double rounding, it seems significant that when an odd radix is used, AFAICT multiple rounding is stable provided that we start with finite precision (actually a broader class of values that excludes only a limited set of recurring values). Surely work has been published on this, and it would be relevant for mentioning in this article? —
Quondum 06:17, 20 November 2012 (UTC)
I wonder whether a paragraph should be added on Von Neumann rounding and sticky rounding, or this is regarded as original research (not really used in practice except internally, in particular no public API, no standard, no standard terminology...). In short: Von Neumann rounding (introduced by A. W. Burks, H. H. Goldstine, and J. von Neumann, Preliminary discussion of the logical design of an electronic computing instrument, 1963, taken from report to U.S. Army Ordnance Department, 1946) consists in replacing the least significant bit of the truncation by a 1 (in the binary representation); the goal was to get a statistically unbiased rounding (as claimed in this paper) without carry propagation. In On the precision attainable with various floating-point number systems, 1972, Brent suggested not to do that for the exactly representable results. This corresponds to the sticky rounding mode (term used by J. S. Moore, T. Lynch, and M. Kaufmann, A Mechanically Checked Proof of the Correctness of the Kernel of the AMD5K86™ Floating-Point Division Algorithm, 1996), a.k.a. rounding to odd (term used by S. Boldo and G. Melquiond, Emulation of a FMA and correctly-rounded sums: proved algorithms using rounding to odd, 2006); it can be used to avoid the double-rounding problem. Vincent Lefèvre ( talk) 13:58, 20 April 2015 (UTC)
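An integer-precision analogue of sticky rounding / rounding to odd, as I understand the description above (a sketch; the real use is on the last bit of a binary significand, not on integers):

```javascript
// Truncate toward zero; if the result is inexact and even, move one
// step away from zero so the last digit becomes odd. Exact values are
// left untouched (Brent's refinement of Von Neumann rounding).
function roundToOdd(x) {
  const t = Math.trunc(x);
  if (x === t) return t;       // exactly representable: keep it
  if (t % 2 !== 0) return t;   // truncation already odd
  return t + (x > 0 ? 1 : -1); // force the odd neighbor
}
console.log(roundToOdd(2.3));  // 3
console.log(roundToOdd(3.7));  // 3
console.log(roundToOdd(4));    // 4 (exact, unchanged)
console.log(roundToOdd(-2.3)); // -3
```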
> Round to prepare for shorter precision: For a BFP or HFP permissible set, the candidate selected is the one whose voting digit has an odd value. For a DFP permissible set, the candidate that is smaller in magnitude is selected, unless its voting digit has a value of either 0 or 5; in that case, the candidate that is greater in magnitude is selected.
Here, BFP is IEEE754 binary, DFP is IEEE754 decimal, and HFP is old-style S/360 binary/hexadecimal floating. I think it is worth listing. Netch 06:42, 07 June 2021 (UTC)
I think this article needs to explain that 'Half rounds up' is NOT asymmetric for things like stop watches, whereas 'Half rounds down' is just plain wrong in such cases. This may (or may not) be part of the reason for 'Half rounds up' being the more common rule.
The point is that when a stopwatch says, for instance, 0.4, this really means 'at least 0.4 but less than 0.5', which averages to 0.45. So rounding becomes symmetric - 0.0 is really 0.05, which loses 0.05, matched by the gain of 0.05 when 0.9 (which is really 0.95) rounds up, and similarly 0.1 (which is really 0.15) matches 0.8 (which is really 0.85), 0.2 (which is really 0.25) matches 0.7 (which is really 0.75), 0.3 (which is really 0.35) matches 0.6 (which is really 0.65), and finally 0.4 (which is really 0.45) matches 0.5 (which is really 0.55).
Presumably quite a lot of other measurement processes have relevant similarities to stopwatches.
I assume there are Reliable Sources out there which say this better than I can, but I'm not the right person to go looking for them, partly thru lack of interest and partly thru lack of knowledge of where to look. So I prefer to just raise the topic here and let others more interested and more competent than me take the matter further.
Incidentally a good case can be made for putting much of the above in the article straight away without looking for reliable sources to back it up (as most of the article has no such sources - the self-evident truth doesn't need backing sources), but, if so, at least for now I prefer to leave it up to somebody else to try to do that, as such a person is less at risk of being accused of 'bias in favor of his own inadmissible original research' (the self-evident truth is not original research, but can always be labelled as such in an environment like Wikipedia). Tlhslobus ( talk) 10:13, 12 April 2016 (UTC)
On second thoughts, I decided to just put in a bit of that self-evident truth, and see how it fares. Tlhslobus ( talk) 10:43, 12 April 2016 (UTC)
A deletion discussion on a related topic is occurring at Wikipedia:Articles for deletion/Mathcad rounding syntax. One potential outcome that I intend to suggest is a merge to the "Rounding functions in programming languages" section here. Please participate if you have an opinion. — David Eppstein ( talk) 19:21, 28 June 2016 (UTC)
There's something I'm not getting right here. The text says -23.5 should round to -24 but when I try to apply the formula I get -23.
Can somebody tell me what's wrong? Maybe I'm not fully understanding the part. -- 181.16.134.10 ( talk) 19:42, 26 December 2016 (UTC)
Hi David Eppstein,
I am leaving this as I am curious as to why you undid my revisions to the rounding Wikipedia page. I am not sure what you mean by "correctness needs analysis of overflow conditions" in this context. Help would be appreciated.
Thanks!
Elyisgreat ( talk) 02:10, 15 January 2017 (UTC)
The following (original research) formula implements truncate(y) without the singularity at y=0. Requires abs and floor functions:
70.190.166.108 ( talk) 17:39, 15 January 2017 (UTC)
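The formula itself did not survive in this copy of the page. One identity with the stated properties (my reconstruction, not necessarily the OP's formula) is trunc(y) = floor((y + |y|)/2) − floor((|y| − y)/2), which uses only abs and floor and involves no division by y:

```javascript
// For y >= 0 this reduces to floor(y) - floor(0) = floor(y); for y < 0
// it reduces to floor(0) - floor(-y) = ceil(y). No singularity at y = 0.
function truncNoSingularity(y) {
  const a = Math.abs(y);
  return Math.floor((y + a) / 2) - Math.floor((a - y) / 2);
}
console.log(truncNoSingularity(2.7));  // 2
console.log(truncNoSingularity(-2.7)); // -2
console.log(truncNoSingularity(0));    // 0
```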
I've done some article-wide cleanup of style and markup, especially semantically distinguishing different uses of what visually renders as italics, with {{var}} (template wrapper for <var>...</var>) and {{em}} (<em>...</em>) where appropriate, and by occasionally removing some pointless, brow-beating emphasis. More could be done. For example, it seems unnecessary and reader-annoying to keep italicizing every single mention of a rounding algorithm/approach after the first instance (which is already boldfaced), except where we're talking about them as words-as-words (as in "The term banker's rounding ..."). Also did some other MOS:NUM-related cleanup, such as use non-breaking spaces in "y × q = ...", not "y×q=...", nor using line-breakable regular spaces. Might have missed a couple of instances, but I did this in an external text editor and was pretty thorough.
However, I did not touch anything in <math>...</math> or {{math}} markup; I'm not sure whether those support {{var}} (or raw HTML <var>...</var>). I will note that the presentation of variables inside math-markup code blocks is wildly inconsistent, and should be normalized to var markup (if possible) or at least to non-semantic italics, for consistency and to avoid confusing the reader. — SMcCandlish ☺ ☏ ¢ ≽ʌⱷ҅ᴥⱷʌ≼ 02:21, 11 September 2017 (UTC)
The Round half to odd section currently contains:
This variant is almost never used in computations, except in situations where one wants to avoid rounding 0.5 or −0.5 to zero; or to avoid increasing the scale of floating point numbers, which have a limited exponent range. With round half to even, a non-infinite number would round to infinity, and a small denormal value would round to a normal non-zero value. Effectively, this mode prefers preserving the existing scale of tie numbers, avoiding out-of-range results when possible for even-based number systems (such as binary and decimal).
But this tie-breaking rule concerns only halfway numbers, so that I don't see how "increasing the scale of floating point numbers" (or returning infinity) can be avoided. And the next sentence, citing an article, seems dubious too:
This system is rarely used because it never rounds to zero, yet "rounding to zero is often a desirable attribute for rounding algorithms".
Again, this tie-breaking rule concerns only halfway numbers, so that "it never rounds to zero" is incorrect. Moreover, this sentence would also imply that "round half away from zero" and "round away from zero" would also be rarely used, which is not true. Vincent Lefèvre ( talk) 14:47, 11 September 2017 (UTC)
@Vincent Lefèvre: I guess that "never to zero" means that the applicable rounding case (half-number) will not round to zero. However, whether or not this is desirable is context dependent. For example, in fixed point, rounding towards uneven will only produce a single bit-change, while rounding to even may trigger a full add. Paamand ( talk) 09:04, 21 September 2017 (UTC)
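To make the disputed claim concrete, here is a sketch of round half to odd at integer precision (my illustration): only exact halves are affected by the tie rule, a tie never lands on zero, yet non-halfway values can still round to zero.

```javascript
// Round to nearest; break exact ties toward the odd neighbor.
function roundHalfToOdd(x) {
  const f = Math.floor(x);
  const r = x - f;
  if (r < 0.5) return f;
  if (r > 0.5) return f + 1;
  return f % 2 !== 0 ? f : f + 1; // tie: choose the odd neighbor
}
console.log(roundHalfToOdd(0.5));  // 1 (a tie never lands on zero)
console.log(roundHalfToOdd(-0.5)); // -1
console.log(roundHalfToOdd(2.5));  // 3
console.log(roundHalfToOdd(0.4));  // 0 (non-ties can still reach zero)
```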
The VAT Guide cited states:
Note: The concession in this paragraph to round down amounts of VAT is designed for invoice traders and applies only where the VAT charged to customers and the VAT paid to Customs and Excise is the same. As a general rule, the concession to round down is not appropriate to retailers, who should see paragraph 17.6.
and paragraph 17.6 says the rounding down only is not allowed. So I don't think it is "quite clearly" stated. -- 81.178.31.210 17:27, 2 September 2006 (UTC)
Independent from the preceding question, VAT on items in an invoice is an example for the need to round each item such that the sum of the rounded items equals the rounded sum of the items. A particular solution for positive items only is the Largest_remainder_method, a more general one is Curve_fitting. Perhaps someone who reads German may include (and expand?) in the article material from https://de.wikipedia.org/wiki/Rundung#Summenerhaltendes_Runden. -- Wegner8 07:04, 18 September 2017 (UTC)
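A sketch of sum-preserving rounding via the largest remainder method named above, for positive summands in cents (my illustration, not a sourced implementation):

```javascript
// Floor every summand, then hand out the remaining units (the gap to
// the rounded total) to the entries with the largest fractional parts.
function roundPreservingTotal(values) {
  const floors = values.map(Math.floor);
  const total = values.reduce((s, v) => s + v, 0);
  let leftover = Math.round(total) - floors.reduce((s, v) => s + v, 0);
  const order = values
    .map((v, i) => [v - floors[i], i])
    .sort((a, b) => b[0] - a[0]); // largest fractional part first
  const result = floors.slice();
  for (let k = 0; k < leftover; k++) result[order[k][1]] += 1;
  return result;
}
// VAT amounts in cents: 33.5 + 33.5 + 33.0 = 100 cents exactly.
console.log(roundPreservingTotal([33.5, 33.5, 33.0])); // [ 34, 33, 33 ]
```

The rounded entries then add up to the rounded total, which is the invoice-column property described above.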
I'm not sure if it really merits coverage in this article, but this blog page [1] describes "Argentina" and "Swiss" rounding.
"Argentina Rounding" is (roughly but not exactly) rounding to halves, rather than whole digits. "Roughly" because (in my view) they only look at one digit.
"Swiss Rounding" is (if I understand it correctly) rounding to quarters. Like rounding to 0.0, 0.25, 0.5, 0.75, and 1.0 as rounded results.
-- Jeff Grigg 68.15.44.5 ( talk) 00:32, 20 October 2017 (UTC)
The article mentions stochastic rounding only for ties; however, the term stochastic rounding is also applied more broadly, for instance in
Gupta, Suyog; Agrawal, Ankur; Gopalakrishnan, Kailash; Narayanan, Pritish (9 February 2016). "Deep Learning with Limited Numerical Precision". arXiv. p. 3.
where the probability of rounding x down to ⌊x⌋ is proportional to the proximity of x to ⌊x⌋: P(round(x) = ⌊x⌋) = 1 − (x − ⌊x⌋)/ε, with ε the spacing of representable values.
Rounding in Monte Carlo rounding is random; the above can be considered one form of Monte Carlo rounding, but others can be used, and it can be run multiple times to test the stability of a result. The stochastic rounding above has the property that addition is unbiased. There's a lot about Monte Carlo arithmetic at Monte Carlo Arithmetic. Dmcq ( talk) 16:17, 21 October 2017 (UTC)
In my opinion it would be nice to add a starting section which says something about the axioms of rounding operations, possibly starting with "what rounding is" in a most general sense and then adding some more specializing axioms for the various types of rounding in use. However it's not yet clear to me how much of such an "axiomatic rounding theory" already has been developed in the research community. So it might also be a bit too early to discuss it here in the context of a WP article. I'll check some of the resources I know that exist on such an axiomatization and post it here for further discussion. Any other input is highly welcome, thanks! Axiom0 ( talk) 14:37, 7 March 2018 (UTC)
But note that this is already more specific (floating-point only) than what is considered in the WP article. Vincent Lefèvre ( talk) 16:00, 7 March 2018 (UTC)
The text said that the round-half-to-even "rule will introduce a towards-zero bias when y − 0.5 is even".
I assume this means that with a set such as 2.5, 2.5, 4.5, 10.5, the result of rounding will be 2, 2, 4, 10, which has a towards-zero bias. But −1.5, −1.5, −11.5, −7.5 are also of the form "y − 0.5 is even", and they round to −2, −2, −12, −8. So really this should read:
"rule will introduce a towards-negative-infinity bias when y − 0.5 is even".
So I've fixed this. Boud ( talk) 16:41, 24 April 2018 (UTC)
I think it's actually round half away from zero that is widely used in many disciplines. StephenJohns00 ( talk) 04:56, 13 August 2018 (UTC)
Here is my proposed text, simplified once more. Wegner8 08:06, 28 October 2018 (UTC)
=== Rounding of summands preserving the total: VAT rounding ===
Rounding preserving the total means rounding each summand in a way that the total of the rounded numbers equals their rounded total. The [[Largest remainder method]] is the special case with positive summands only.
Among other purposes, this procedure is practised (a) for the [[proportional representation]] in a legislative body with a fixed number of members and (b) if in an invoice the total [[VAT]] is to be distributed to the items keeping the addition of each column correct.
If, after rounding each summand as usual, their sum is too large or too small, one rounds the necessary number of summands away from their closest rounded values towards the second closest such that (a) the desired total is achieved and (b) the [[absolute value]] of the total of all rounding differences becomes minimal.
Can you please, instead of simply deleting potentially useful material, help finding the right place and wording for it? Where would someone with the VAT problem look for a solution? -- Thanks for the hint to the article Party-list proportional representation; this link should replace the link to the Largest remainder method in the proposed text above. -- Wegner8 06:49, 29 October 2018 (UTC)
The article begins as follows: "Rounding a number means replacing it with a different number that is approximately equal to the original ...". Isn't this exactly what VAT rounding does? Everyone talks about fake news; denying facts is a twin of fake news. Please restore a paragraph on VAT rounding. -- Wegner8 08:19, 10 January 2019 (UTC) — Preceding unsigned comment added by Wegner8 ( talk • contribs)
There is a proposal on Talk:Nearest integer function to merge that article into this one; see Talk:Nearest_integer_function#Merge_into_Rounding. -- JBL ( talk) 22:16, 18 April 2019 (UTC)
In the history section there is a short bit on age heaping at the end, which is a kind of rounding people do that seems to be more about a psychological quirk and a way of correcting statistics than anything to do with rounding as described in the rest of the article. Does that really belong in this article? Dmcq ( talk) 11:44, 24 September 2019 (UTC)
No mention of faithful rounding and why it's a good idea. I think it means logic can be smaller and generally the rounding/overflow/underflow is quicker, but I came here for more info and didn't find any. — Preceding unsigned comment added by ChippendaleMupp ( talk • contribs) 15:50, 13 February 2020 (UTC)
My apologies Dr. Lefèvre, I honestly thought I was correcting a typo with my edit.
I found this example fascinating, but when I tried to work it out by substituting different values for n, it didn't seem to add up.
Here is the original text before my edit:
For instance, if Goldbach's conjecture is true but unprovable, then the result of rounding the following value up to the next integer cannot be determined: 1 + 10^−n where n is the first even number greater than 4 which is not the sum of two primes, or 1 if there is no such number. The rounded result is 2 if such a number n exists and 1 otherwise.
Let's first take the case where the number doesn't exist, so we substitute n = 1 in the formula: 1 + 10^−1 = 1.1.
Now let's take the other case, where an even number greater than 4 exists that is not a sum of two primes. That number would presumably be very large, but for any such n the formula gives 1 + 10^−n, a value only slightly greater than 1.
Clearly, the larger the n, the closer the resulting value will be to 1. For a very large n we would get 1.000...0001 with very many zeroes in between.
So the values we get in both cases of the example, 1.1 and 1.000...0001, will both end up being rounded to the same number, regardless of which rounding mode we use. So how do we get this formula to round to 1 or 2, depending on the provability of Goldbach's Conjecture?
The only way I can think of to get different rounded values is to use n = 0 for the case when no such number exists, which would make the formula equal 1 + 10^0 = 2 in that case, so it rounds to 2, and to 1 in the other case (when rounding to the nearest integer).
Clearly I have misunderstood something here, but I can't seem to figure out what. I would really appreciate if someone could elaborate this example.
Grnch ( talk) 21:51, 27 March 2021 (UTC)
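For what it's worth, the arithmetic of the two cases above can be checked directly (a JavaScript sketch; "rounding up" is taken here to mean the ceiling function):

```javascript
// Reading "or 1" as n = 1 gives 1 + 10^-1 = 1.1; any genuine
// counterexample n would give a value only slightly above 1.
const misreadAsNEquals1 = 1 + Math.pow(10, -1);  // 1.1
const largeN = 1 + Math.pow(10, -9);             // just above 1
// Rounding up, both land on 2:
console.log(Math.ceil(misreadAsNEquals1));  // 2
console.log(Math.ceil(largeN));             // 2
// Only if the value itself is exactly 1 does rounding up give 1:
console.log(Math.ceil(1));                  // 1
```

This supports the reading that, when no counterexample exists, the value of the whole expression is 1 (rather than n being 1), so rounding up yields 1 in that case and 2 otherwise.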
1+10−n where n is the first even number greater than 4 which is not the sum of two primes, or 1 if there is no such number
A drawback of rendering the formula with <math display="block"> is that everything in it appears as an image, which is bad for accessibility or if one wants to copy-paste. I don't know whether there is wikicode to solve that. Alternatively, adding "either" before "1 + 10^−n" would make the sentence unambiguous, IMHO. —
Vincent Lefèvre ( talk) 11:01, 31 March 2021 (UTC)
I didn't find any description for rounding to the nearest 1/2 integer value, i.e. to values of 0, 0.5, 1, 1.5, etc. -- Fkbreitl ( talk) 12:00, 8 May 2021 (UTC)
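Rounding to the nearest half can be done by scaling: round twice the value to the nearest integer, then halve the result. A JavaScript sketch (the function name is my own):

```javascript
// Round x to the nearest multiple of 0.5 (0, 0.5, 1, 1.5, ...).
// Note: Math.round rounds halves toward +Infinity, so that
// tie-breaking rule carries over to the halves.
function roundToHalf(x) {
  return Math.round(x * 2) / 2;
}
```

For example, roundToHalf(0.74) gives 0.5 and roundToHalf(0.76) gives 1.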
I thought it would be useful to include the formulas for rounding a number up and down to the nearest multiple of another positive number, since there's already a section for rounding to a specified multiple ( Rounding#Rounding_to_a_specified_multiple), and sections for rounding down and up to the nearest integer ( Rounding#Rounding_down and Rounding#Rounding_up). The formulas would be ⌊x/m⌋·m for rounding down and ⌈x/m⌉·m for rounding up, where m > 0 is the multiple.
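In code form, under the assumption that m > 0 (a JavaScript sketch; the function names are my own):

```javascript
// Round x down / up to the nearest multiple of a positive number m,
// using floor(x / m) * m and ceil(x / m) * m respectively.
function roundDownToMultiple(x, m) {
  return Math.floor(x / m) * m;
}
function roundUpToMultiple(x, m) {
  return Math.ceil(x / m) * m;
}
```

For example, roundDownToMultiple(7, 3) is 6 and roundUpToMultiple(7, 3) is 9. With a non-integer m, floating-point error in x / m can occasionally shift the result by one multiple.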
So, I added them to the article, but the user Vincent Lefèvre removed them. He said that "This is a particular case of what is described in this section (which is not restricted to rounding to nearest)". While that's true, I don't see why that's a problem or a reason to remove them. I mean, rounding up or down (using the ceiling and floor functions) is a particular case of rounding, so why don't we also remove the sections "Rounding down" and "Rounding up"? -- Alej27 ( talk) 19:24, 3 March 2022 (UTC)
Hi @ Boh39083, I notice you've gotten into a bit of a revert war about your recent changes. Maybe you want to discuss your rationale here a bit? (cf. "bold–revert–discuss".) Trying to make arguments in edit summaries is not the most effective in my experience. – jacobolus (t) 16:04, 19 November 2023 (UTC)
In JS, I tested a simple conversion of 1/3 into a percentage (a JS number is a double-precision float). If you divide (round) first and then multiply, you get a larger discrepancy than if you multiply first and then divide, because the division can land on a non-representable value, forcing a rounding:
> (1/3*100).toFixed(16)
'33.3333333333333286'
> (1*100/3).toFixed(16)
'33.3333333333333357'
This is because the former computed 1/3, which cannot be exactly represented in double-precision floating-point format, so that intermediate result gets rounded, and the error is then carried into the multiplication by 100. The latter computed 1*100, which can be exactly represented, and then divided by 3, so rounding happens only at the final step. Joeleoj123 ( talk) 02:04, 18 December 2023 (UTC)