This level-5 vital article is rated C-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
Is there any source for this approximation mentioned in the "Applications" section? It looks more like a Gaussian PDF than a CDF for the values of A and B I tried:
where A and B are certain numeric constants. — Preceding unsigned comment added by 194.94.96.194 ( talk) 10:27, 19 November 2016 (UTC)
Please have someone competent recreate this page. Your error function table of numerical values is WRONG, which is shockingly inexcusable and could wreak havoc if people actually use it. You can easily verify it is wrong by checking any standard handbook, e.g. the CRC Handbook of Chemistry and Physics, the CRC math handbook, or Lange's Handbook of Chemistry.
Please fix it, and please permanently bar whomever posted it from contributing to Wikipedia. I realize from what I read here that quality control is anathema, but PLEASE, people really might use this to make important decisions!
Andy Cutler 184.78.143.36 ( talk) 05:40, 14 June 2010 (UTC)
I concur with Andy. Any fool with Mathematica (such as myself) can check the table in a matter of seconds. The table is correct. -B. Yencho —Preceding unsigned comment added by 72.33.79.184 ( talk) 19:13, 13 August 2010 (UTC)
I believe that in practice there are multiple definitions for erf. For example, I have seen it defined with a 1/sqrt(2) out front, instead of a 2. Which way is 'right' probably depends on what field you work in or what book/software you are using. That should probably be mentioned in the article, just so people don't naively try to plug things into the table. 128.119.91.13 ( talk) 18:52, 28 October 2010 (UTC)
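One concrete way to see the relationship between the conventions: the variant with a 1/sqrt(2) scaling is essentially the standard normal CDF, which relates to the erf convention used in the article by Phi(x) = (1 + erf(x/sqrt(2)))/2. A small Python sketch (the function name normal_cdf is ours, not from any table):

```python
import math

def normal_cdf(x):
    """Standard normal CDF expressed via the error function:
    Phi(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# The "1/sqrt(2)" variant some texts call the error function is (up to
# shifting/scaling) just this normal CDF, so the two conventions differ
# only by a change of variable -- which is why naively mixing tables fails.
print(normal_cdf(1.0))  # ~0.8413, the familiar 84th percentile
```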
Sorry, there is no Spanish page for this article, so I'm forced to ask here =) Are erf and erfc "bounded functions", in the sense that, for instance, -1 < erf < 1 and 0 < erfc < 2? Is this true? Why not mention it?
I'm reading a text here (Haykin's Communication Systems) that says erfc is upper-bounded by erfc(u) < exp(-u^2)/sqrt(pi*u) for huge positive values of u. I don't see how this relates to the graphic, in which the maximum value is just 2 for big negative arguments and 0 for big positive ones.
Thanks very much, you all rock n' roll big time! Ugo O.
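For what it's worth, the two observations above are compatible: the quoted bound describes the decay of the right tail, while the value 2 is approached only for large negative arguments, since erfc(-u) = 2 - erfc(u). A quick numerical check in Python (reading the denominator as sqrt(pi)*u, i.e. the standard first-term asymptotic bound, which is an interpretation on our part):

```python
import math

# Check the tail bound erfc(u) < exp(-u^2) / (sqrt(pi) * u) for positive u.
# The bound concerns the right tail (erfc -> 0); the value 2 is only
# approached for large *negative* arguments, because erfc(-u) = 2 - erfc(u).
for u in (0.5, 1.0, 2.0, 4.0):
    bound = math.exp(-u * u) / (math.sqrt(math.pi) * u)
    assert math.erfc(u) < bound
    print(u, math.erfc(u), bound)
```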
Hello. I see the error function is said to be "non-elementary". What does this mean, exactly? I was under the impression that the division of functions into elementary and special functions, and other categories, was pretty arbitrary. Maybe someone can clarify this point. Is there a more precise category that erf falls into? I'm grasping at straws here -- maybe some group or other algebraic structure? Happy editing, Wile E. Heresiarch 04:09, 17 Feb 2004 (UTC)
It means that erf itself cannot be expressed in terms of elementary functions; equivalently, the antiderivative of exp(-t^2) is not an elementary function, just as the antiderivative of sin(x^2) is not. That is why we use series to approximate these functions. — Preceding unsigned comment added by 75.110.96.120 ( talk) 01:06, 20 January 2014 (UTC)
A useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large x is
where . This series diverges for every finite x. However, in practice only the first few terms of this expansion are needed to obtain a good approximation of erfc(x), whereas the Taylor series given above converges very slowly.
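Since the formula itself did not survive above, here is a sketch of the standard form of that expansion, erfc(x) ~ exp(-x^2)/(x*sqrt(pi)) * sum over n of (-1)^n (2n-1)!!/(2x^2)^n, where !! denotes the double factorial; these coefficients are the usual ones and are assumed here, not taken from the article text:

```python
import math

def erfc_asymptotic(x, nterms=5):
    """First few terms of the divergent asymptotic series
    erfc(x) ~ exp(-x^2)/(x*sqrt(pi)) * sum_n (-1)^n (2n-1)!! / (2x^2)^n."""
    s, term = 1.0, 1.0
    for n in range(1, nterms):
        term *= -(2 * n - 1) / (2.0 * x * x)  # next factor of (2n-1)!!/(2x^2)
        s += term
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * s

# A handful of terms already gives good relative accuracy at moderately
# large x, even though the full series diverges for every fixed x.
x = 3.0
rel_err = abs(erfc_asymptotic(x) - math.erfc(x)) / math.erfc(x)
print(rel_err)  # small, on the order of 1e-4
```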
Here is a derivation of the asymptotic expansion of the error function ( PDF-Proposition 2.10) 136.142.141.195 ( talk) 00:09, 9 April 2008 (UTC)
Question: What is the relationship between the "complementary error function" and the "inverse error function"?
Ohanian 06:22, 2005 Apr 5 (UTC)
Answer: I'm not aware of any relationship between the two. The complementary error function is simply a reflected version of the error function, used to find the area under the tail of the Gaussian PDF above the value x rather than integrating between 0 and x. The inverse error function is what most people would expect an inverse function to be: erf⁻¹(erf(x)) = x. Bencope 18:15, 21 June 2006 (UTC)
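Both points can be illustrated in a few lines of Python; the bisection-based erfinv below is a stand-in of our own (the Python standard library has no inverse error function), not an official API:

```python
import math

x = 0.7
# erfc is a reflection, not an inverse: erfc(x) = 1 - erf(x).
assert abs(math.erfc(x) - (1.0 - math.erf(x))) < 1e-15

# The inverse error function undoes erf. A simple bisection (erf is
# strictly increasing) stands in for a real erfinv implementation.
def erfinv(y, lo=-6.0, hi=6.0):
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

assert abs(erfinv(math.erf(x)) - x) < 1e-9
print("both identities hold")
```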
I find it odd that the page talks about the complementary error function (erfc(x)) several times BEFORE even defining erfc(x)! Kdpw ( talk) 13:57, 23 July 2018 (UTC)
Erf is odd. Why use the word "evidently"?
We sometimes say something is "evidently" true when we make this assertion by observation instead of through some proof. I don't believe it matters whether it is included or not. jgoldfar ( talk) 17:53, 4 May 2011 (UTC)
What happens if the limits of the error function change from -∞ to x?
Shouldn't the lower limit of the integral be negative infinity instead of 0? The picture implies it.
Fvanris 14:06, 30 January 2006 (UTC)
Does anyone know why it is called the error function? Is there something about it that I'm missing? —The preceding unsigned comment was added by 70.113.95.143 ( talk • contribs) 06:24, 4 December 2006 (UTC).
Should approximations for the error function be mentioned? For example, one I saw on the web is
where
By the way, how do I use latex on these pages? Hiiiiiiiiiiiiiiiiiiiii 02:16, 9 July 2007 (UTC)
It seems to me such things could appropriately be included in the article. I'd write
instead of trying to fit that big fraction into a superscript. Michael Hardy 19:14, 9 July 2007 (UTC)
I dislike the fit mentioned until Hiii specifies the range of the approximation and its precision and/or indicates the source. (If you are walking down the street and see a sandwich on the pavement, do not hurry to eat it. Bring it first to the lab for biochemical analysis.) dima ( talk) 06:37, 14 July 2008 (UTC)
P.S. The approximation Hiiii wrote is poor, with only 2 correct decimal digits. If you want a smooth approximation of erf for all positive values of the argument, I suggest . Copyleft 2008 by dima ( talk) 12:40, 14 July 2008 (UTC)
I think the sign of the approximation currently given is wrong, as in it should be negative when x is negative (currently it is always non-negative). I'm not positive of this (otherwise I'd add an x/|x| to the approximation myself), so maybe someone who can verify it would like to make the change. Austin Parker ( talk) 19:56, 14 January 2010 (UTC)
I am no mathematician myself, but the error function reminds me very much of the logistic function. Is there any way to approximate it using one?
I mean something like erf(x) = 1/(1+exp(-x*const)).
178.82.219.114 ( talk) 07:58, 17 June 2010 (UTC)
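A rescaled logistic curve does track erf roughly. Since erf ranges over (-1, 1) while the plain logistic ranges over (0, 1), the form has to be 2/(1+exp(-k*x)) - 1; the constant k = 1.702*sqrt(2) below is borrowed from the classic logit-probit approximation and is an assumption of this sketch, not anything from the article:

```python
import math

def erf_logistic(x, k=1.702 * math.sqrt(2.0)):
    """Logistic sketch of erf, via erf(x) = 2*Phi(x*sqrt(2)) - 1 and the
    classic approximation Phi(t) ~ 1/(1 + exp(-1.702*t))."""
    return 2.0 / (1.0 + math.exp(-k * x)) - 1.0

# Max absolute error of the fit over a grid: roughly 0.02, so this is a
# qualitative match only, not a numerical substitute for erf.
err = max(abs(erf_logistic(x) - math.erf(x))
          for x in [i / 100.0 for i in range(-400, 401)])
print(err)
```

So the shapes match, but the fit is good to only about two decimal places.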
In the article we read: "Such a fit gives at least one correct decimal digit of function erf in vicinity of the real axis. Using a ≈ 0.140012, the largest error of the approximation is about 0.00012.[2]" But I ask... ONLY ONE CORRECT DECIMAL DIGIT and a maximum error of 0.00012? Strange! And in the quoted reference we learn that the formula provides an approximation correct to better than 4*10^-4 in relative precision. by Alexor —Preceding unsigned comment added by 151.76.71.189 ( talk) 19:39, 29 March 2011 (UTC)
There are well known approximations for erf that are better than any of these. I've added them to the article, with the reference to Abramowitz and Stegun. (They cite an even earlier source, but A&S has the advantage of being easily available online, as well as being a widely used reference.) I've left the other approximation for the moment, but I suggest removing it. The source for it no longer seems to be available (it was just a pdf on someone's home page), and it really isn't a very good approximation. As far as I can tell, the author simply didn't realize that better approximations (faster, more accurate) had been known for half a century. ( Pkeastman ( talk) 19:36, 6 September 2011 (UTC))
The last approximation under the heading "Numerical approximations" is clearly wrong. It claims that The argument of the exponential function cannot get larger than 0, so the range of this approximation is (0, 1.3693) while the actual range is (0, 2). Also, erfc(0) = 1, but the approximation yields 0.9850. I would recommend deleting this approximation. Dr.who13 ( talk) 16:59, 12 February 2018 (UTC)
I'm a student and I really had a problem with this function. In a course called "Basics of telecommunications and data transfers" we used this function in some analysis. In my notes there is a similar function, also called the error function, but differing by a factor of 1/sqrt(pi) and with integration limits from minus infinity to x. I tried to do the analysis with that function and failed. When I turned to the Internet I found the form given on Wikipedia, used it, and the analysis worked; my professor's form was nowhere to be found online. I went to him and told him, but he insists on his version. I then realized that there are diverse forms of this function: I looked in some printed literature for the course and did indeed find my professor's definition and similar ones. So if someone else finds those variants too, we could add them to the article in a separate section, to avoid confusing others who end up in a similar situation.
User:Vanished user 8ij3r8jwefi 19:25, 2 October 2007 (UTC)
Revised by -- User:Vanished user 8ij3r8jwefi 17:38, 9 June 2008 (UTC)
I've recently come across some references to a function ierfc in Crank (1975, The Mathematics of Diffusion). I couldn't find anything on ierfc in Wolfram/Mathematica, but I found a few odd references, including in Abramowitz and Stegun. Apparently ierfc is the integral of erfc.
The easy formula for ierfc is ierfc(x) = exp(-x^2)/sqrt(pi) - x*erfc(x) (sorry, I don't know LaTeX).
I don't think there should be an additional article, but I suggest that (1) searches for ierfc be directed here, and (2) there be a brief mention of ierfc and its definition.
129.186.185.139 14:16, 17 October 2007 (UTC)Toby, ewing@iastate.edu
I agree with the above- ierfc isn't defined anywhere! -AJW
There are definitions of ierfc(), i2erfc(), .., inerfc() in Carslaw and Jaeger, Conduction of Heat in Solids, 2nd ed (1959), Appendix II, pp 483-484, equations (9)-(16). I have added them to the article. Billingd ( talk) 15:27, 10 August 2017 (UTC)
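The closed form quoted earlier in this thread, ierfc(x) = exp(-x^2)/sqrt(pi) - x*erfc(x), is easy to cross-check against a brute-force integration of erfc from x to infinity; this sketch (function names are ours) does exactly that:

```python
import math

def ierfc(x):
    """Closed form of the first repeated integral of erfc:
    ierfc(x) = exp(-x^2)/sqrt(pi) - x*erfc(x)."""
    return math.exp(-x * x) / math.sqrt(math.pi) - x * math.erfc(x)

def ierfc_numeric(x, upper=10.0, n=20000):
    """Trapezoidal integral of erfc from x to an effective infinity;
    erfc(10) ~ 2e-45, so truncating at 10 costs nothing visible."""
    h = (upper - x) / n
    s = 0.5 * (math.erfc(x) + math.erfc(upper))
    for i in range(1, n):
        s += math.erfc(x + i * h)
    return s * h

print(ierfc(0.5), ierfc_numeric(0.5))  # should agree to several decimals
```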
Maybe an obvious Q, but what does 't' represent in the definition of erf(x)? —Preceding unsigned comment added by 88.110.201.64 ( talk) 03:55, 22 October 2007 (UTC)
There's an article about that concept: free variables and bound variables. Michael Hardy 04:53, 30 October 2007 (UTC)
In the definition of the error function, perhaps it should be made clear that it applies to the integral of a normal distribution, i.e. a normalised Gaussian function. It is defined more clearly here ( http://mathworld.wolfram.com/Erf.html). I'm not a mathematician, but I'm guessing the error function and all its approximations would not work if your integrand was not normalised.
Dieode 10:18, 29 October 2007 (UTC)
Shouldn't "The error function at infinity is exactly 1" be stated as: the limit of the error function, as its argument approaches infinity, is 1? —Preceding unsigned comment added by 65.184.155.154 ( talk) 19:36, 4 December 2007 (UTC)
Would it perhaps be relevant in the applications section to mention that the error function pops up in the moment generating function for the Rayleigh distribution, which is the distribution of the magnitude of a two-dimensional vector whose components are uncorrelated and each has a Gaussian distribution with identical variances? Relevant for, e.g., the statistical description of wind speed? -- Slaunger ( talk) 11:55, 27 February 2008 (UTC)
The article claims that erf/erfc exist in GNU libc, but aren't part of any standard. I've come across references that claimed erf/erfc are part of the C99 ISO standard. —Preceding unsigned comment added by 87.174.73.108 ( talk) 23:43, 11 March 2008 (UTC)
No approximation is listed for the inverse of erfc, I suggest at least including erfcinv(z)=erfinv(1-z), though it seems a bit trivial. —Preceding unsigned comment added by 201.174.192.4 ( talk) 18:02, 31 March 2008 (UTC)
How many arguments does the Gamma function have in the suggested representation of erf? Once it appears with a single argument, then with two arguments. In the definition, it appears with a single argument. How can this be corrected? dima ( talk) 04:14, 14 July 2008 (UTC)
A minor error is that, as raised in the section above, it mixed the gamma function, which takes one argument, with the incomplete gamma function, which takes two. Although we can think of the ordinary gamma function as the incomplete gamma function with scale = 1, this should be mentioned somehow.
Moreover, another obvious major problem is that this expression is not correct, in that, simply in the formula, and . It seems not true. But I have no way to find the correct formula. Please... 193.10.97.31 ( talk) 16:16, 3 December 2008 (UTC)
I have edited the main article myself concerning these issues. The formula is correct according to numerical integration. But since the product is always equal to 1, it has been removed. In addition, after the modification it is easier to see that the following result in the article follows, since and . 193.10.97.31 ( talk) 22:17, 3 December 2008 (UTC)
Regarding the comment by User:Lklundin ( here, in the edit summary), yes, the implementation is my own, but it is rather trivial and can probably be found (in spirit) in any decent book on numerical analysis.
As requested, I will flesh it out a bit, fix some minor issues (e.g. it will not converge for large z), and try to find a reference for it.
Cheers, pedrito - talk - 20.02.2009 07:34
an decreasing monotonically as of some i (as of i=0 for z<=1) and res + an == res, i.e. the "correction" an no longer contributes to the result res.
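The truncation criterion being described can be sketched as a small Maclaurin-series implementation of erf; as noted above, such a series behaves badly for large z, where the alternating terms grow huge before shrinking:

```python
import math

def erf_series(z):
    """Maclaurin series erf(z) = 2/sqrt(pi) * sum (-1)^n z^(2n+1) / (n! (2n+1)),
    stopped exactly when res + term == res, i.e. the term no longer
    contributes to the floating-point result."""
    res, c, n = 0.0, z, 0
    while True:
        term = c / (2 * n + 1)
        if res + term == res:
            break
        res += term
        n += 1
        c *= -z * z / n  # updates (-1)^n z^(2n+1) / n!
    return 2.0 / math.sqrt(math.pi) * res

print(erf_series(1.0), math.erf(1.0))
```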
Is the error function just a convolution of a step function (-1 : x<0, 0 : x==0, 1 : x>0) with a Gaussian kernel? Numerically that looks right. It also makes sense in that erf is the integral of a Gaussian, and convolution with a step gives you integration. If so, that should be mentioned, because it's a very easy way to think about erf. —Ben FrantzDale ( talk) 05:30, 1 August 2009 (UTC)
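The identity suggested here does check out: the two half-integrals of the kernel exp(-t^2)/sqrt(pi) on either side of x combine to (1+erf(x))/2 - (1-erf(x))/2 = erf(x). A brute-force numerical sketch (trapezoidal, so only accurate to a few decimal places near the jump of the step function):

```python
import math

def sign(t):
    return (t > 0) - (t < 0)

def erf_via_convolution(x, half_width=8.0, n=20000):
    """Numerically evaluate the convolution of the sign step with the
    kernel exp(-t^2)/sqrt(pi):  integral of sign(x - t) * exp(-t^2)/sqrt(pi) dt,
    which should equal erf(x)."""
    h = 2 * half_width / n
    s = 0.0
    for i in range(n + 1):
        t = -half_width + i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoidal end weights
        s += w * sign(x - t) * math.exp(-t * t) / math.sqrt(math.pi)
    return s * h

print(erf_via_convolution(0.8), math.erf(0.8))  # agree closely
```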
I just came across the need to repeatedly integrate the error function. Eventually I arrived at this simple recursion relation (note that I used Upsilon only because I'd never seen it used before):
Assuming I got it right, is it worth including? I certainly would have found it useful ;) —Preceding unsigned comment added by Bb vb ( talk • contribs) 01:56, 14 October 2009 (UTC)
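One recursion of this kind is standard for the repeated integrals of erfc (it appears, e.g., in Abramowitz & Stegun §7.2): i^n erfc(x) = -(x/n)*i^(n-1)erfc(x) + (1/(2n))*i^(n-2)erfc(x). A sketch, checked against the closed form for ierfc quoted elsewhere on this page:

```python
import math

def inerfc(n, x):
    """Repeated integrals of erfc via the standard recursion
       i^n erfc(x) = -(x/n) i^(n-1)erfc(x) + (1/(2n)) i^(n-2)erfc(x),
    with base cases i^0 erfc = erfc and i^(-1) erfc(x) = 2/sqrt(pi)*exp(-x^2)."""
    if n == -1:
        return 2.0 / math.sqrt(math.pi) * math.exp(-x * x)
    if n == 0:
        return math.erfc(x)
    return -(x / n) * inerfc(n - 1, x) + (1.0 / (2 * n)) * inerfc(n - 2, x)

# Sanity check against the closed form of the first integral,
# ierfc(x) = exp(-x^2)/sqrt(pi) - x*erfc(x):
x = 0.7
print(inerfc(1, x), math.exp(-x * x) / math.sqrt(math.pi) - x * math.erfc(x))
```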
I agree with Michael Hardy. This approximation is crap; the error for x=3.5 is about 0.345! Can someone find a better one? —Preceding unsigned comment added by 46.116.223.100 ( talk) 17:28, 5 March 2011 (UTC)
I think the integral definition (first formula) of erf contradicts the graph given next to it. How can erf assume negative values while exp(-x^2) is a positive function? Am I missing something? Does it imply integration from 0 to negative values (reversed bounds)?— Preceding unsigned comment added by 88.230.219.120 ( talk) 19:59, 23 June 2011 (UTC)
I agree with the previous comment, as of now the integrand in the formula (exp(-t^2)) is strictly positive. Thus the plot on the right side presenting negative values does not match with the formula... — Preceding unsigned comment added by 130.15.148.161 ( talk) 12:36, 19 July 2011 (UTC)
The analogy is not valid. The debate about negative numbers was a philosophical debate about whether they can be said to exist. Here it's just a matter of how notation is defined -- with, or without, any definition for integrals from a to b<a.
A key principle is that the lede of a Wikipedia article should be accessible to as many people as possible, consistent with providing a legitimate summary of the article's content. The current version accomplishes that, while your suggested version would not alter the substance of the lede but would be inaccessible to the many people who are very familiar with the concept of an integral but have never seen them defined with a lower bound greater than the upper bound. Duoduoduo ( talk) 18:09, 4 February 2012 (UTC)
Should we add a note to the section Asymptotic Expansion that "!!" is the double factorial and not the factorial of the factorial? RJFJR ( talk) 16:50, 7 February 2012 (UTC)
The definition of these two in the opening section is not very clear: erfi(z) = -i*erf(iz).
What is z here? A complex number? A purely imaginary number? How can you evaluate erf(iz)? Eregli bob ( talk) 04:43, 18 June 2012 (UTC)
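For purely imaginary arguments, erf(iz) = i*erfi(z) with z real, so the identity in question turns a real input into a real number via erfi; erfi itself has an everywhere-convergent series with all-positive terms. A sketch (the series coefficients are the standard ones, assumed here, since the article text above does not carry them):

```python
import math

def erfi(x, nterms=60):
    """erfi(x) = -i * erf(i*x); for real x this reduces to the real series
       2/sqrt(pi) * sum_n x^(2n+1) / (n! * (2n+1)),  all terms positive."""
    s, c = 0.0, x
    for n in range(nterms):
        s += c / (2 * n + 1)
        c *= x * x / (n + 1)  # updates x^(2n+1) / n!
    return 2.0 / math.sqrt(math.pi) * s

# So for real z, erf(i*z) = i*erfi(z) is purely imaginary, which is how
# evaluating erf at an imaginary argument still yields a concrete number.
print(erfi(1.0))  # ~1.6504
```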
The section labelled "The name 'error function'" does not actually elaborate on the origin of the name. It instead provides details of the function's general use. This section should be rewritten to reflect its title and purpose. Monsieurisle ( talk) 18:53, 12 February 2016 (UTC)
I've removed the "Implementations" section. There's no benefit to it, and it could potentially contain dozens more entries with no way to really delineate what belongs and what doesn't. I'm including the removed text here just in case. Deacon Vorbis ( talk) 15:01, 17 September 2017 (UTC)
* C (header math.h): double erf(double x) and double erfc(double x). The pairs of functions {erff(), erfcf()} and {erfl(), erfcl()} take and return values of type float and long double respectively. For complex double arguments, the function names cerf and cerfc are "reserved for future use"; the missing implementation is provided by the open-source project libcerf, which is based on the Faddeeva package.
* C++ (header cmath): erf() and erfc(). Both functions are overloaded to accept arguments of type float, double, and long double. For complex<double>, the Faddeeva package provides a C++ complex<double> implementation.
* erf and erfc functions; nonetheless, both inverse functions are not in the current library.[2]
* Fortran: ERF, ERFC and ERFC_SCALED functions to calculate the error function and its complement for real arguments. Fortran 77 implementations are available in SLATEC.
* Go: math.Erf() and math.Erfc() for float64 arguments.
* erf and erfc for real and complex arguments; also has erfi.
* Python: math.erf() and math.erfc() for real arguments. For previous versions or for complex arguments, SciPy includes implementations of erf, erfc, erfi, and related functions for complex arguments in scipy.special.[6] A complex-argument erf is also in the arbitrary-precision arithmetic mpmath library as mpmath.erf().
* R: pnorm (see ?pnorm), which is based on W. J. Cody's rational Chebyshev approximation algorithm.[5]
* Math.erf() and Math.erfc() for real arguments.
* Excel: the NormSInv function can be used as follows: erf_inverse(p) = -NormSInv((1 - p)/2)/SQRT(2); erfc_inverse(p) = -NormSInv(p/2)/SQRT(2). See [1].
Currently, the complementary error function Erfc is defined in 5.1, but the symbol Erfc appears first in 3.4 (then 3.xx, and 4.x). It would make sense to define it much earlier, possibly in the preamble. — Preceding unsigned comment added by Ceacy ( talk • contribs) 21:08, 19 November 2018 (UTC)
This seems more like a B-class article to me. It has just about everything you need to know about the error function. Bubba73 You talkin' to me? 20:48, 16 February 2019 (UTC)
Please check /info/en/?search=Wikipedia:Content_assessment#Quality_scale
Specifically, cleanup is necessary: there is substantial irrelevant material that would be better handled by links to other Wikipedia pages, and the clarity of presentation, citations, and caveats concerning applicability leave much to be desired.
216.251.133.187 ( talk) 00:34, 6 March 2019 (UTC)
Can someone please expand the term
in the definition? I came here to find out what the error function was because I'm trying to figure out how to integrate
and elsewhere I read that the error function was the way to do this. Now I come here and I find I need to do the integration before I can calculate the error function. How does this work? — VeryRarelyStable ( talk) 02:26, 22 May 2019 (UTC)
The libcerf link doesn't work. Gah4 ( talk) 09:47, 20 October 2021 (UTC)
In the "Approximation with elementary functions" section, the authors listed are Abramowitz and Stegun. However, chapter 7 of that book was authored by Walter Gautschi. Moreover, for equations 7.1.25 to 7.1.28 specifically, Gautschi references C. Hastings Jr.: Hastings, C. (1955). Approximations for Digital Computers. Princeton University Press.
Gautschi multiplied Hastings' values for a by 2/sqrt(pi) and adjusted the formula accordingly. Stikpet ( talk) 10:45, 21 October 2022 (UTC)
This
level-5 vital article is rated C-class on Wikipedia's
content assessment scale. It is of interest to the following WikiProjects: | |||||||||||||||||||||
|
Is there any source for this approximation mentioned in the "Applications" section article? This looks more like a Gaussian PDF than a CDF for the values of A and B I tried:
where A and B are certain numeric constants. — Preceding unsigned comment added by 194.94.96.194 ( talk) 10:27, 19 November 2016 (UTC)
Please have someone competent recreate this page. Your error function table of numerical values is WRONG, which is both shockingly inexcusable and could wreak havoc if people actually use it. You can easily verify it is wrong by checking any standard handbook, e. g. CRC handbook of chemistry and physics, CRC math handbook, Lange's handbook of chemistry.
Please fix it, and please permanently bar whomever posted it from contributing to Wikipedia. I realize from what I read here that quality control is anathema, but PLEASE, people really might use this to make important decisions!
Andy Cutler 184.78.143.36 ( talk) 05:40, 14 June 2010 (UTC)
I concur with Andy. Any fool with Mathematica (such as myself) can check the table in a matter of seconds. The table is correct. -B. Yencho —Preceding unsigned comment added by 72.33.79.184 ( talk) 19:13, 13 August 2010 (UTC)
I believe that in practice there are multiple definitions for erf. For example, I have seen it defined with a 1/sqrt(2) out front, instead of a 2. Which way is 'right' probably depends on what field you work in or what book/software you are using. That should probably be mentioned in the article, just so people don't naively try to plug things into the table. 128.119.91.13 ( talk) 18:52, 28 October 2010 (UTC)
Sorry, there is no Spanish page for this article, so i'm forced to ask here =) erf and erfc are "Bounded functions"?? In the sense that, for instance -1<erf<1 and 0<erfc<2.. Is this true? Why not mention it?
I'm reading a text here, (Haykin's Communications Systems) that says that erfc is upper bounded by erfc(u)<exp(-u^2)/sqrt(pi*u) for huge positive values of "u" I don't see how this relates to the graphic in which the maximum value is just "2" for big negative arguments and "0" for big positive ones
Thanks very much, you all rock n' roll big time! Ugo O.
Hello. I see the error function is said to be "non-elementary". What does this mean, exactly? I was under the impression that the division of functions into elementary and special functions, and other categories, was pretty arbitrary. Maybe someone can clarify this point. Is there a more precise category that erf falls into? I'm grasping at straws here -- maybe some group or other algebraic structure? Happy editing, Wile E. Heresiarch 04:09, 17 Feb 2004 (UTC)
It means its antiderivative cannot be expressed as an elementary function. Think of a function who's derivative is the error function, (hint) there isn't one just like sin(x^2) that is why we use series to approximate these functions. — Preceding unsigned comment added by 75.110.96.120 ( talk) 01:06, 20 January 2014 (UTC)
A useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large x is
where . This series diverges for every finite x. However, in practice only the first few terms of this expansion are needed to obtain a good approximation of erfc(x), whereas the Taylor series given above converges very slowly.
Here is a derivation of the asymptotic expansion of the error function ( PDF-Proposition 2.10) 136.142.141.195 ( talk) 00:09, 9 April 2008 (UTC)
Question: What is the relationship between the "complementary error function" and the "inverse error function"?
Ohanian 06:22, 2005 Apr 5 (UTC)
Answer: I'm not aware of any relationship between the two. The complementary error function is simply a scaled version of the error function to find the area under the tail of the gaussian pdf above the value x, rather than integrate between 0 and x. The inverse error function is what most people would expect an inverse function to be: erf-1( erf( x ) ) = x. Bencope 18:15, 21 June 2006 (UTC)
I find it odd that the page talks about the complementary error function (erfc(x)) several times BEFORE even defining erfc(x)! Kdpw ( talk) 13:57, 23 July 2018 (UTC)
Erf is odd. Why use the word "evidently"?
We sometimes say something is "evidently" true when we make this assertion by observation instead of through some proof. I don't believe it matters whether it is included or not. jgoldfar ( talk) 17:53, 4 May 2011 (UTC)
What happen if the the limits of error function changes from -α to x.
Shouldn’t the lower subscript of the integral be negative infinity instead of 0? The picture implies it.
Fvanris 14:06, 30 January 2006 (UTC)
Does anyone know why it is called the error function? Is there something about it that I'm missing? —The preceding unsigned comment was added by 70.113.95.143 ( talk • contribs) 06:24, 4 December 2006 (UTC).
Should approximations for the error function be mentioned? for example, one I saw on the web is
where
By the way, how do I use latex on these pages? Hiiiiiiiiiiiiiiiiiiiii 02:16, 9 July 2007 (UTC)
It seems to me such things could appropriately be included in the article. I'd write
instead of trying to fit that big fraction into a superscript. Michael Hardy 19:14, 9 July 2007 (UTC)
I dislike the fit mentioned until Hiii specifies the range of approximation and precision and/or indicates the source. (If you walk on the street and see the sanwich at the pavement, do not hurry to eat it. Bring it first to the lab of biochemical analysis.) dima ( talk) 06:37, 14 July 2008 (UTC)
P.S. The approximaiton Hiiii wrote is poor, only 2 correct decimal digits. If you want smooth aproximation of erf for all positive values of argument, I suggest . Copyleft 2008 by dima ( talk) 12:40, 14 July 2008 (UTC)
I think the sign of the approximation currently given is wrong, as in it should be negative when x is negative (currently it is always non-negative). I'm not positive of this (otherwise I'd add an x/|x| to the approximation myself), so maybe someone who can verify it would like to make the change. Austin Parker ( talk) 19:56, 14 January 2010 (UTC)
I am myself no mathematician, but the error function reminds me very much on the logistic function. Is there any way to approximate it through this?
I mean something like erf(x) = 1/(1+exp(-x*const)).
178.82.219.114 (
talk) 07:58, 17 June 2010 (UTC)
In the article we read: "Such a fit gives at least one correct decimal digit of function erf in vicinity of the real axis. Using a ≈ 0.140012, the largest error of the approximation is about 0.00012.[2]" But I ask... ONLY ONE CORRECT DECIMAL DIGIT and maximum error 0.00012? Strange! And in the quoted reference we learn that the formula provides an approximation correct do better than 4*10^-4 in relative precision. by Alexor —Preceding unsigned comment added by 151.76.71.189 ( talk) 19:39, 29 March 2011 (UTC)
There are well known approximations for erf that are better than any of these. I've added them to the article, with the reference to Abramowitz and Stegun. (They cite an even earlier source, but A&S has the advantage of being easily available online, as well as being a widely used reference.) I've left the other approximation for the moment, but I suggest removing it. The source for it no longer seems to be available (it was just a pdf on someone's home page), and it really isn't a very good approximation. As far as I can tell, the author simply didn't realize that better approximations (faster, more accurate) had been known for half a century. ( Pkeastman ( talk) 19:36, 6 September 2011 (UTC))
The last approximation under the heading "numerical approximations" is clearly wrong. It claims that The argument of the exponential function cannot get larger than 0. Therefore the range of this approximation is ( 0, 1.3693 ) while the actual range is (0,2). Also erfc(0)=1 but the approximation yields 0.9850. I would recommend to delete this approximation. Dr.who13 ( talk) 16:59, 12 February 2018 (UTC)
I'm studying and I really had a problem with this function. When I learned lesson from subject "Basis of telecommunications and data transfers" we used that function in some analyzing. In my notes stand similar function called error function but differing in product 1/sqrt(pi), and integrating borders were from minus infinity to x. I tried to make analysis with such function and failed. I turned to Internet I find form like one on Wikipedia. (I used that form and successfully made analyze) Nowhere professor's form. I gone to him and tell him, but he is refusing. I realized that there are diverse forms of this function. Then I look to some paper literature of base subject and really find professor's definition and similar ones. So if someone else find those variants also we can add them to article in separate section to avoid confusing others who can be in similar situation as I am.
User:Vanished user 8ij3r8jwefi 19:25, 2 October 2007 (UTC)
Revised by --
User:Vanished user 8ij3r8jwefi 17:38, 9 June 2008 (UTC)
I've recently come across some references to a function ierfc in Crank (1975, the mathematics of diffusion). I couldn't find anything on ierfc in Wolfram/Mathematica, but I found a few odd references, including in Abramovitz and Stegun. Apparently ierfc is the integral of the erfc.
The easy formula for ierfc is ierfc(x) = [exp(-x2)/sqrt(pi)] - x erfc(x)
(sorry, I don't know LaTex).
I don't think there should be an additional article, but I suggest that (1) searches for ierfc be directed here, and (2) there be a brief mention of ierfc and its definition.
129.186.185.139 14:16, 17 October 2007 (UTC)Toby, ewing@iastate.edu
I agree with the above- ierfc isn't defined anywhere! -AJW
There are definitions of ierfc(), i2erfc(), .., inerfc() in Carslaw and Jaeger, Conduction of Heat in Solids, 2nd ed (1959), Appendix II, pp 483-484, equations (9)-(16). I have added them to the article. Billingd ( talk) 15:27, 10 August 2017 (UTC)
Maybe be an obvious Q, but what does 't' represent in the definition of erf(x) —Preceding unsigned comment added by 88.110.201.64 ( talk) 03:55, 22 October 2007 (UTC)
There's an article about that concept: free variables and bound variables. Michael Hardy 04:53, 30 October 2007 (UTC)
In the definition of the error function, perhaps it should be made clear that it applies to the integral of a normal distribution, i.e. a normalised Gaussian function. It is defined more clearly here ( http://mathworld.wolfram.com/Erf.html). I'm not a mathematician, but I'm guessing the error function and all its approximations would not work if your integrand was not normalised.
Dieode 10:18, 29 October 2007 (UTC)
Shouldn't "The error function at infinity is exactly 1" be stated as the limit as the error function approaches infinity is 1? —Preceding unsigned comment added by 65.184.155.154 ( talk) 19:36, 4 December 2007 (UTC)
Would it perhaps be relevant in the applications section to mention that the error function pops up in the moment generating function for the Rayleigh distribution, which is the distribution of the magnitude of a twodimensional vector whose components are uncorrelated and each has a Gaussian distribution with identical variances. Relevant for, e.g., the statistical descrip of wind speed? -- Slaunger ( talk) 11:55, 27 February 2008 (UTC)
The article claims that erf/erfc exist in GNU libc, but aren't part of any standard. I've come across references that claimed erf/erfc are part of the C99 ISO standard. —Preceding unsigned comment added by 87.174.73.108 ( talk) 23:43, 11 March 2008 (UTC)
No approximation is listed for the inverse of erfc, I suggest at least including erfcinv(z)=erfinv(1-z), though it seems a bit trivial. —Preceding unsigned comment added by 201.174.192.4 ( talk) 18:02, 31 March 2008 (UTC)
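The suggested identity is easy to verify against existing library code; here is a minimal sketch (my own, assuming SciPy's implementations of both inverses):

```python
# Check of the identity suggested above: erfcinv(z) = erfinv(1 - z).
from scipy.special import erfinv, erfcinv

for z in (0.1, 0.5, 1.0, 1.5, 1.9):   # erfcinv is defined on (0, 2)
    assert abs(erfcinv(z) - erfinv(1.0 - z)) < 1e-12
```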
How many arguments does the Gamma function have in the suggested representation of erf? It appears once with a single argument, then with two arguments. In the definition, it appears with a single argument. How should this be corrected? dima ( talk) 04:14, 14 July 2008 (UTC)
A minor error, as raised in the section above, is that it mixes the gamma function, which takes one argument, with the incomplete gamma function, which takes two arguments. Although we can think of the ordinary gamma function as the incomplete gamma function with scale=1, this should be mentioned somehow.
Moreover, another obvious major problem is that this expression is not correct: simply in the formula, and . It seems not true. But I have no way to find the correct formula. Please... 193.10.97.31 ( talk) 16:16, 3 December 2008 (UTC)
I have edited the main article myself concerning these issues. The formula is correct according to numerical integration. But since the product is always equal to 1, it has been taken out. In addition, after the modification it is easier to see that the following result in the article follows, since and . 193.10.97.31 ( talk) 22:17, 3 December 2008 (UTC)
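One way to make the one-argument vs two-argument distinction concrete (a sketch of my own, not the article's formula): for x ≥ 0, erf(x) = γ(1/2, x²)/Γ(1/2), where γ is the lower incomplete gamma function and Γ(1/2) = √π. SciPy's gammainc is already regularized, i.e. gammainc(a, t) = γ(a, t)/Γ(a), so no separate √π factor appears.

```python
# erf in terms of the regularized lower incomplete gamma function:
#   erf(x) = gamma_lower(1/2, x^2) / Gamma(1/2)   for x >= 0.
import math
from scipy.special import gammainc   # regularized: gammainc(a, t) = γ(a, t)/Γ(a)

for x in (0.1, 0.5, 1.0, 2.0, 3.0):
    assert abs(math.erf(x) - gammainc(0.5, x * x)) < 1e-12
```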
Regarding the comment by User:Lklundin ( here, in the edit summary), yes, the implementation is my own, but it is rather trivial and can probably be found (in spirit) in any decent book on numerical analysis.
As requested, I will flesh it out a bit and fix some minor issues (e.g. it will not converge for large z) and try to find a reference for it.
Cheers, pedrito - talk - 20.02.2009 07:34
The terms a_n are decreasing monotonically as of some i (as of i = 0 for z <= 1), and summation stops once res + a_n == res, i.e. the "correction" a_n no longer contributes to the result res.
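A minimal sketch (my own, not the implementation under discussion) of a Maclaurin-series erf using exactly that stopping criterion, res + a_n == res:

```python
# Maclaurin-series erf:  erf(z) = 2/sqrt(pi) * sum (-1)^n z^(2n+1) / (n! (2n+1)),
# summed until the correction a_n no longer changes the floating-point result.
# Like any Maclaurin series for erf, it loses accuracy for large |z|.
import math

def erf_series(z, max_terms=200):
    t = z          # running value of (-1)^n z^(2n+1) / n!
    res = 0.0
    for n in range(max_terms):
        a_n = t / (2 * n + 1)
        if res + a_n == res:   # a_n no longer contributes to res
            break
        res += a_n
        t *= -z * z / (n + 1)
    return 2.0 / math.sqrt(math.pi) * res

for z in (0.1, 0.5, 1.0, 2.0):
    assert abs(erf_series(z) - math.erf(z)) < 1e-12
```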
Is the error function just a convolution of a step function (−1 for x < 0, 0 for x = 0, 1 for x > 0) with a Gaussian kernel? Numerically that looks right. It also makes sense in that erf is the integral of a Gaussian, and convolution with a step gives you integration. If so, that should be mentioned, because it's a very easy way to think about erf. —Ben FrantzDale ( talk) 05:30, 1 August 2009 (UTC)
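A quick numerical spot-check of this observation (my own sketch): convolving the sign function with the kernel e^(−t²)/√π, i.e. a normalized Gaussian with variance 1/2, does reproduce erf. Splitting the integral at t = x turns sign(x − t) into ±1 on the two half-lines.

```python
# Check: (sign * g)(x) == erf(x), with g(t) = exp(-t^2)/sqrt(pi).
import math
from scipy.integrate import quad

def conv_sign_gauss(x):
    g = lambda t: math.exp(-t * t) / math.sqrt(math.pi)
    left, _ = quad(g, -math.inf, x)    # region where sign(x - t) = +1
    right, _ = quad(g, x, math.inf)    # region where sign(x - t) = -1
    return left - right

for x in (-2.0, -0.5, 0.0, 1.0, 2.5):
    assert abs(conv_sign_gauss(x) - math.erf(x)) < 1e-7
```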
I just came across the need to repeatedly integrate the error function. Eventually I arrived at this simple recursion relation (note that I used Upsilon only because I'd never seen it used before):
Assuming I got it right, is it worth including? I certainly would have found it useful ;) —Preceding unsigned comment added by Bb vb ( talk • contribs) 01:56, 14 October 2009 (UTC)
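For comparison, the standard recursion for the repeated integrals of erfc does appear in the literature (Carslaw and Jaeger, cited earlier on this page): with i⁻¹erfc(z) = 2e^(−z²)/√π and i⁰erfc(z) = erfc(z), one has 2n·iⁿerfc(z) = iⁿ⁻²erfc(z) − 2z·iⁿ⁻¹erfc(z). A sketch checking it numerically:

```python
# Repeated integrals of erfc via the standard recursion
#   i^n erfc(z) = (i^(n-2) erfc(z) - 2z * i^(n-1) erfc(z)) / (2n),
# checked against direct numerical integration.
import math
from scipy.integrate import quad

def inerfc(n, z):
    if n == -1:
        return 2.0 * math.exp(-z * z) / math.sqrt(math.pi)
    if n == 0:
        return math.erfc(z)
    return (inerfc(n - 2, z) - 2.0 * z * inerfc(n - 1, z)) / (2.0 * n)

# i^1 erfc(z) should equal the integral of erfc from z to infinity.
for z in (0.0, 0.5, 1.0):
    direct, _ = quad(math.erfc, z, math.inf)
    assert abs(inerfc(1, z) - direct) < 1e-8

# i^2 erfc(z) should equal the integral of i^1 erfc from z to infinity.
for z in (0.0, 0.5, 1.0):
    direct, _ = quad(lambda t: inerfc(1, t), z, math.inf)
    assert abs(inerfc(2, z) - direct) < 1e-6
```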
I agree with Michael Hardy. This approximation is crap; the error for x = 3.5 is about 0.345! Can someone find a better one? —Preceding unsigned comment added by 46.116.223.100 ( talk) 17:28, 5 March 2011 (UTC)
I think the integral definition (first formula) of erf contradicts the graph given next to it. How can erf assume negative values while exp(-x^2) is a positive function? Am I missing something? Does it imply integration from 0 to negative values (reversed bounds)?— Preceding unsigned comment added by 88.230.219.120 ( talk) 19:59, 23 June 2011 (UTC)
I agree with the previous comment, as of now the integrand in the formula (exp(-t^2)) is strictly positive. Thus the plot on the right side presenting negative values does not match with the formula... — Preceding unsigned comment added by 130.15.148.161 ( talk) 12:36, 19 July 2011 (UTC)
The analogy is not valid. The debate about negative numbers was a philosophical debate about whether they can be said to exist. Here it's just a matter of how notation is defined -- with, or without, any definition for integrals from a to b<a.
A key principle is that the lede of a Wikipedia article should be accessible to as many people as possible, consistent with providing a legitimate summary of the article's content. The current version accomplishes that, while your suggested version would not alter the substance of the lede but would be inaccessible to the many people who are very familiar with the concept of an integral but have never seen them defined with a lower bound greater than the upper bound. Duoduoduo ( talk) 18:09, 4 February 2012 (UTC)
Should we add a note to the section Asymptotic Expansion that "!!" is the double factorial and not the factorial of the factorial? RJFJR ( talk) 16:50, 7 February 2012 (UTC)
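To illustrate the distinction (my own sketch): in the asymptotic expansion erfc(x) ~ e^(−x²)/(x√π) · Σ (−1)ⁿ (2n−1)!!/(2x²)ⁿ, the "!!" is the double factorial (2n−1)!! = 1·3·5···(2n−1), not the factorial of a factorial.

```python
# Double factorial as used in the asymptotic expansion of erfc.
import math

def double_factorial(k):
    # k!! = k * (k-2) * (k-4) * ...; by convention (-1)!! = 1
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

def erfc_asymptotic(x, terms=4):
    s = sum((-1) ** n * double_factorial(2 * n - 1) / (2 * x * x) ** n
            for n in range(terms))
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * s

# For large x, a few terms already agree well with erfc.
assert abs(erfc_asymptotic(3.0) / math.erfc(3.0) - 1.0) < 2e-3
```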
The definition of these two in the opening section is not very clear: erfi(z) = −i·erf(iz).
What is z here ? A complex number ? A purely imaginary number ? How can you evaluate erf(iz) ? Eregli bob ( talk) 04:43, 18 June 2012 (UTC)
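To answer in code (a sketch of my own, assuming SciPy, whose erf accepts complex arguments): z may be any complex number; the interesting case is real z, where erf(iz) is purely imaginary, so −i·erf(iz) comes out real, which is the point of defining erfi.

```python
# erfi(z) = -i * erf(i z); for real z the right-hand side is real.
from scipy.special import erf, erfi

for z in (0.5, 1.0, 2.0):
    lhs = erfi(z)
    rhs = -1j * erf(1j * z)
    assert abs(rhs.imag) < 1e-10        # numerically real
    assert abs(lhs - rhs.real) < 1e-9
```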
The section labelled "The name 'error function'" does not actually elaborate on the origin of the name. It instead provides details of the function's general use. This section should be rewritten to reflect its title and purpose. Monsieurisle ( talk) 18:53, 12 February 2016 (UTC)
I've removed the "Implementations" section. There's no benefit to it, and it could potentially contain dozens more entries with no way to really delineate what belongs and what doesn't. I'm including the removed text here just in case. Deacon Vorbis ( talk) 15:01, 17 September 2017 (UTC)
C: double erf(double x) and double erfc(double x) in the header math.h. The pairs of functions {erff(), erfcf()} and {erfl(), erfcl()} take and return values of type float and long double respectively. For complex double arguments, the function names cerf and cerfc are "reserved for future use"; the missing implementation is provided by the open-source project libcerf, which is based on the Faddeeva package.
C++: erf() and erfc() in the header cmath. Both functions are overloaded to accept arguments of type float, double, and long double. For complex<double>, the Faddeeva package provides a C++ complex<double> implementation.
erf and erfc functions are provided, but both inverse functions are not in the current library.[2]
Fortran: ERF, ERFC and ERFC_SCALED functions to calculate the error function and its complement for real arguments. Fortran 77 implementations are available in SLATEC.
Go: math.Erf() and math.Erfc() for float64 arguments.
erf and erfc for real and complex arguments; also has erfi for calculating the imaginary error function.
Python: math.erf() and math.erfc() for real arguments. For previous versions or for complex arguments, SciPy includes implementations of erf, erfc, erfi, and related functions for complex arguments in scipy.special.[6] A complex-argument erf is also in the arbitrary-precision arithmetic mpmath library as mpmath.erf().
R: see ?pnorm, which is based on W. J. Cody's rational Chebyshev approximation algorithm.[5]
Math.erf() and Math.erfc() for real arguments.
Microsoft Excel: the inverse error functions can be obtained from the NormSInv function as follows: erf_inverse(p) = -NormSInv((1 - p)/2)/SQRT(2); erfc_inverse(p) = -NormSInv(p/2)/SQRT(2). See [1].
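The NormSInv-based formulas quoted above can be cross-checked outside Excel; here is a sketch using SciPy's normal quantile function norm.ppf as a stand-in for NormSInv:

```python
# Cross-check of the Excel-style inverse error function formulas:
#   erf_inverse(p)  = -NormSInv((1 - p)/2) / sqrt(2)
#   erfc_inverse(p) = -NormSInv(p/2) / sqrt(2)
import math
from scipy.stats import norm
from scipy.special import erfinv, erfcinv

for p in (0.1, 0.5, 0.9):
    assert abs(erfinv(p) - (-norm.ppf((1 - p) / 2) / math.sqrt(2))) < 1e-10
    assert abs(erfcinv(p) - (-norm.ppf(p / 2) / math.sqrt(2))) < 1e-10
```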
Currently, the complementary error function Erfc is defined in 5.1, but the symbol Erfc appears first in 3.4 (then 3.xx, and 4.x). It would make sense to define it much earlier, possibly in the preamble. — Preceding unsigned comment added by Ceacy ( talk • contribs) 21:08, 19 November 2018 (UTC)
This seems more like a B-class article to me. It has just about everything you need to know about the error function. Bubba73 You talkin' to me? 20:48, 16 February 2019 (UTC)
Please check Wikipedia:Content assessment#Quality scale
Specifically: cleanup is necessary; there is substantial irrelevant material that would be better covered by links to other Wikipedia pages; and the clarity of presentation, citations, and caveats concerning applicability leave much to be desired.
216.251.133.187 ( talk) 00:34, 6 March 2019 (UTC)
Can someone please expand the term
in the definition? I came here to find out what the error function was because I'm trying to figure out how to integrate
and elsewhere I read that the error function was the way to do this. Now I come here and I find I need to do the integration before I can calculate the error function. How does this work? — VeryRarelyStable ( talk) 02:26, 22 May 2019 (UTC)
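To answer the question directly (a sketch of my own): erf(x) *is*, up to the 2/√π factor, the integral of e^(−t²) from 0 to x, so there is no separate integration step before evaluating it. Library routines evaluate it directly, and numerical quadrature of the defining integral gives the same values:

```python
# erf via numerical quadrature of its defining integral.
import math
from scipy.integrate import quad

def erf_by_quadrature(x):
    integral, _ = quad(lambda t: math.exp(-t * t), 0.0, x)
    return 2.0 / math.sqrt(math.pi) * integral

for x in (0.5, 1.0, 2.0):
    assert abs(erf_by_quadrature(x) - math.erf(x)) < 1e-9
```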
The libcerf link doesn't work. Gah4 ( talk) 09:47, 20 October 2021 (UTC)
In the "Approximation with elementary functions" section, the authors listed are Abramowitz and Stegun. However, first of all, chapter 7 of that book has Walter Gautschi as its author. Second, Gautschi references specifically, for equations 7.1.25 to 7.1.28, C. Hastings Jr.: Hastings, C. (1955). Approximations for Digital Computers. Princeton University Press.
Gautschi multiplied the values for a from Hastings by 2/sqrt(pi) and adjusted the formula accordingly. Stikpet ( talk) 10:45, 21 October 2022 (UTC)