This article has not yet been rated on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
This page defines the machine epsilon as "the smallest floating point number such that". What I have seen more commonly is the definition to be the largest floating point number such that . In fact the equation provided gives the latter definition. Granted the two definitions lead to numbers adjacent on the floating point number line, but I would like to see this article either switch to the other definition or else discuss the presence of two definitions in use. Any thoughts? -- Jlenthe 01:01, 9 October 2006 (UTC)
The definition now appears to have changed away from both of those in the first comment above, to "an upper bound on the relative error due to rounding". That definition, assuming round-to-nearest, gives half the value that is generally used, so for example 2^(-24) for IEEE binary32 instead of 2^(-23). The larger value appears as FLT_EPSILON in float.h for C/C++. Then in the table below, the standard definition is used in the last three columns for binary32 and binary64, in contradiction with the header. (As if to make the result match FLT_EPSILON and DBL_EPSILON??) 86.24.142.189 ( talk) 17:21, 5 January 2013 (UTC)
The variant definition section said, "smallest number that, when added to one, yields a result different from one". But that number, in double, is the next floating point number greater than 2^−53, not 2^−52. To get the value that was intended, 2^−52, the C & C++ standards, Matlab, etc. refer to the distance between 1 and the next larger floating point number.
If you use the code with
float machEps = 1.0001f;
you get smaller numbers with 1+eps>1. Instead, the relative difference between two floating point numbers is computed. -- Mathemaduenn 10:20, 10 October 2006 (UTC)
"This is wrong: the difference between these numbers is 0.00000000000000000000001, or 2^−23." 0.00000000000000000000001 is 10^−23, but that is probably not the right number. —Preceding unsigned comment added by 193.78.112.2 ( talk) 06:43, 19 October 2007 (UTC)
It's a binary fraction, not a decimal fraction. I agree it's confusing, but I can't think of a better way to explain it. -- BenRG 19:52, 19 October 2007 (UTC)
The page lists 'David Goldberg (March 1991). "What Every Computer Scientist Should Know About Floating-Point Arithmetic".' as one of its references. That paper defines machine epsilon as the largest possible relative error when a real number is approximated by the floating point number closest to it ("..the largest of the bounds in (2) above.."). The formula given is as mentioned by Jlenthe above. So either that paper should be removed from the list of references, or the definition provided in it should also be mentioned. —Preceding unsigned comment added by Gautamsewani ( talk • contribs) 16:11, 18 August 2008 (UTC)
"we can simply take the difference of 1.0 and double(long long (1.0) + 1)."
It's problematic because double(long long(1.0) + 1) evaluates to 2. It should be reinterpret_cast, but I don't think that it will improve the readability. bungalo ( talk) 09:32, 5 October 2009 (UTC)
As of October 20, 2009, the information on this page is wrong. Unless anyone has some strong thoughts to the contrary, in a few days I will make some major changes to put it right. Jfgrcar ( talk) 03:54, 21 October 2009 (UTC)
I corrected this section today, October 25, 2009. I did not alter the examples, which are still wrong. —Preceding unsigned comment added by Jfgrcar ( talk • contribs) 00:15, 26 October 2009 (UTC)
The section "How to determine machine epsilon" contains strange implementations that try to approximate machine epsilon. In standard C we can nowadays simply use the DBL_EPSILON constant from float.h. More generally, you can use the nextafter family of functions from math.h; for example, nextafter(1.0, 2.0) - 1.0 should evaluate to DBL_EPSILON if I'm not mistaken. (By the way, the C standard even gives an example showing what this constant should be if you use IEEE floating point numbers: DBL_EPSILON = 2.2204460492503131E-16 = 0X1P-52.) In Java, we have methods like java.lang.Math.nextAfter and java.lang.Math.ulp. Again, no need to use approximations and iterations. — Miym ( talk) 07:22, 26 October 2009 (UTC)
epsilon(one) (I know this because this line caused an error when I was trying to convert C to Fortran with f2c). 78.240.11.120 ( talk) 13:49, 25 February 2012 (UTC)

Section "Values for standard hardware floating point arithmetics" claims that "while the standard allows several methods of rounding, programming languages and systems vendors do not provide methods to change the rounding mode from the default: round-to-nearest with the tie-breaking scheme round-to-even."
This claim seems to be incorrect. First, "programming languages" do provide such methods: the C standard provides the functions fesetround and fegetround and macros such as FE_DOWNWARD and FE_UPWARD in fenv.h. Second, "system vendors" do implement these: I just tried in a standard GNU/Linux environment and these seem to be working (mostly) as expected. — Miym ( talk) 07:37, 26 October 2009 (UTC)
Why does the table at the article's beginning list the machine epsilon as pow(2, -53), when the calculation (correctly) arrives at the conclusion that it is indeed pow(2, -52) for double precision (i.e. p=52)? —Preceding unsigned comment added by 109.90.227.146 ( talk) 21:14, 28 September 2010 (UTC)
I don't know if this is worth putting on the page, but I've just converted the Java estimation code for the double type to Matlab. If it's worth adding, here it is to save other people converting it.
function calculateMachineEpsilonDouble()
    machEps = double(1.0);
    done = true;
    while done
        machEps = machEps/2.0;
        done = (double(1.0 + (machEps/2.0)) ~= 1.0);
    end
    fprintf('Machine Epsilon = %s\n', num2str(machEps));
end
137.108.145.10 ( talk) 15:29, 3 February 2011 (UTC)
I ported the C version into a C# version. I don't know if this would be valuable or worth adding to the article, so I'm adding it here and letting someone else make the judgement call.
static void Main(string[] args)
{
    float machEps = 1.0f;
    do
    {
        Console.WriteLine(machEps.ToString("f10") + "\t" + (1.0f + machEps).ToString("f15"));
        machEps /= 2.0f;
    } while ((float)(1.0f + (machEps / 2.0f)) != 1.0f);
    Console.WriteLine("Calculated Machine Epsilon: " + machEps.ToString("f15"));
}
This Prolog code approximates the machine epsilon.
epsilon(X) :-
    Y is (1.0 + X),
    Y = 1.0,
    write(X).
epsilon(X) :-
    Y is X/2,
    epsilon(Y).
An example execution in SWI-Prolog:
1 ?- epsilon(1.0).
1.1102230246251565e-16
true.
-- 201.209.40.226 ( talk) 04:23, 30 July 2011 (UTC)
"The following are encountered in practice?" Whose practice? If you ask for the machine epsilon of a double in any programming language I can think of that has a specific function for it (the eps functions in Matlab and Octave, finfo in numpy, std::numeric_limits<double>::epsilon() in C++), then you get 2^−52, not 2^−53. Yes, I realise that there are other definitions you can use, but I find it very weird to say that "in practice" you find these numbers, whereas actual people who have to deal with epsilon deal with the smallest step you can take in the mantissa for a given exponent. JordiGH ( talk) 01:08, 1 September 2011 (UTC)
The article states for double "64-bit doubles give 2.220446e-16, which is 2^−52 as expected." but the table at the top lists 2^−53. Brianbjparker ( talk) 17:23, 8 March 2012 (UTC)
The definition of machine epsilon given uses a definition of precision p that excludes the implicit bit, so e.g. for double it uses p = 52 rather than the usual definition of p = 53. This is very confusing -- the definition and table should be changed to use the standard definition of p including the implicit bit. Brianbjparker ( talk) 17:39, 8 March 2012 (UTC)
The python example uses numpy, which is not always installed. Wouldn't the following example, using the standard sys module, be more relevant, even if restricted to floats?
In [1]: import sys
In [2]: sys.float_info.epsilon
Out[2]: 2.220446049250313e-16
Frédéric Grosshans ( talk) 19:49, 15 March 2012 (UTC)
The programs given here all use an approximation, and you have to note that when calculating with 'double' you may actually be dealing with numbers of 80 bits (not 64 bits): 64 bits is only the storage format, and the processor may calculate with 80-bit (or more) precision. The simplest way to calculate machine epsilon can be as such, for double:
#include <iostream>
#include <stdint.h>
#include <iomanip>

int main()
{
    union
    {
        double f;
        uint64_t i;
    } u1, u2, u3;
    u1.i = 0x3ff0000000000000ul; // one (exponent set to 1023 and mantissa to zero;
                                 // one bit of the mantissa is implicit)
    u2.i = 0x3ff0000000000001ul; // one and a little more
    u3.f = u2.f - u1.f;
    std::cout << "machine epsilon: " << std::setprecision(18) << u3.f << std::endl;
}
The above program gives:
/home/tomek/roboczy/test$ g++ -O2 -o test test.cpp && ./test
machine epsilon: 2.22044604925031308e-16
And a sample for 32-bit (float):
#include <iostream>
#include <stdint.h>
#include <iomanip>

int main()
{
    union
    {
        float f;
        uint32_t i;
    } u1, u2, u3;
    u1.i = 0x3F800000; // one
    u2.i = 0x3F800001; // one and a little more
    u3.f = u2.f - u1.f;
    std::cout << "machine epsilon: " << std::setprecision(18) << u3.f << std::endl;
}
and it gives:
/home/tomek/roboczy/test$ g++ -O2 -o test test.cpp && ./test
machine epsilon: 1.1920928955078125e-07
Of course the above *decimal* values are only approximations, so I don't see the sense in printing them directly. Also, C++ users can use std::numeric_limits to get those constants:
std::cout << std::numeric_limits<float>::epsilon() << std::endl;
std::cout << std::numeric_limits<double>::epsilon() << std::endl;
---
And another sample of how to calculate it in a more 'mathematical' manner. Let's talk about double:
S EEEEEEEEEEE FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
0 1        11 12                                                 63
The value V represented by the word may be determined as follows:
* If E=2047 and F is nonzero, then V=NaN ("Not a number")
* If E=2047 and F is zero and S is 1, then V=-Infinity
* If E=2047 and F is zero and S is 0, then V=Infinity
* If 0<E<2047 then V=(-1)**S * 2 ** (E-1023) * (1.F) where "1.F" is intended
to represent the binary number created by prefixing F with an implicit
leading 1 and a binary point.
* If E=0 and F is nonzero, then V=(-1)**S * 2 ** (-1022) * (0.F) These are
"unnormalized" values.
* If E=0 and F is zero and S is 1, then V=-0
* If E=0 and F is zero and S is 0, then V=0
so if you want to set a double to 1.0 you have to set the exponent to 1023 and the mantissa to zero (one bit is implicit), e.g.
0 01111111111 0000000000000000000000000000000000000000000000000000
if you want a 'little' more than one you have to change the last bit to one:
0 01111111111 0000000000000000000000000000000000000000000000000001
the last bit above has the value 2^-52 (not 2^-53), and the machine epsilon can be calculated in this way:
I have changed the values in the first table (binary32, binary64); the rest I don't have time to test.
-- Tsowa ( talk) 13:14, 3 November 2012 (UTC)
Well there's your problem. You are using the definition of machine epsilon which is compatible with C FLT_EPSILON and DBL_EPSILON: the largest relative difference between neighbouring values. The definition in this article is the maximum relative rounding error, which is half that amount. It may be better to change the definition - I don't really mind either way - but we must certainly keep the article internally consistent. I have changed those two values and related formulae back. 86.24.142.189 ( talk) 17:25, 9 January 2013 (UTC)
Hello fellow Wikipedians,
I have just modified 2 external links on Machine epsilon. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018.
After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
Cheers.— InternetArchiveBot ( Report bug) 12:50, 11 January 2018 (UTC)
This article seems to confuse the machine epsilon ε and the unit roundoff u. While the definition of the unit roundoff u seems standard (u = b^(1−p)/2 when considering round-to-nearest), the definition of ε is not, as explained in the article. The lead says
The quantity is also called macheps or unit roundoff, and it has the symbols Greek epsilon or bold Roman u, respectively.
as if there were no difference between them. — Vincent Lefèvre ( talk) 22:17, 22 December 2021 (UTC)
The wiki page on "Machine epsilon" acknowledges that there are two camps: the "formal definition" and "variant definitions". I notice that the software industry overwhelmingly uses the "variant definition", e.g. for IEEE 754 'binary64' we have the C programming language DBL_EPSILON constant from float.h, Python sys.float_info.epsilon, Fortran 90 EPSILON(1.0_real64), MATLAB eps, Pascal epsreal, etc. Therefore I feel that these applications deserve a clearer terminology than just "variants". What do you think about the following suggestion?:
In summary: "rounding machine epsilon" is simply half the "interval machine epsilon", because the rounding error is equal to half the interval in round-to-nearest mode. I hope that this can help to clarify the distinction between the two definitions of machine epsilon. What do you think? Peter.schild ( talk) 21:40, 19 November 2023 (UTC)