This article is rated B-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
I think the formula for calculating the n-th moment based on the characteristic function was not correct. It was:
.
I changed it to
.
Please compare with [1].
What does this phrase mean?: “which is essentially a different change of parameter”. —Preceding unsigned comment added by 130.207.104.54 ( talk) 15:56, 20 August 2009 (UTC)
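For a concrete sanity check of the corrected relation E[Xⁿ] = (−i)ⁿ φ⁽ⁿ⁾(0), here is a small numerical sketch using a Bernoulli variable; the distribution and the parameter p = 0.3 are my own choices for illustration, not anything from the article:

```python
import cmath

def cf_bernoulli(t, p=0.3):
    # Characteristic function of Bernoulli(p): E[e^{itX}] = (1 - p) + p*e^{it}
    return (1 - p) + p * cmath.exp(1j * t)

h = 1e-4  # step for central finite differences

# phi'(0) and E[X] = (-i) * phi'(0)
d1 = (cf_bernoulli(h) - cf_bernoulli(-h)) / (2 * h)
m1 = (-1j) * d1

# phi''(0) and E[X^2] = (-i)^2 * phi''(0)
d2 = (cf_bernoulli(h) - 2 * cf_bernoulli(0) + cf_bernoulli(-h)) / h**2
m2 = (-1j) ** 2 * d2

# Both moments should come out near p = 0.3, since E[X] = E[X^2] = p
# for a Bernoulli variable.
```

With the sign convention of the corrected formula, both finite-difference moments land on p up to discretization error.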
I have a slightly different statement of the inversion theorem that seems to contradict the one in this article. From "Probability and Random Processes" by Grimmett I have:
then
-- Faisel Gulamhussein 22:17, 20 October 2006 (UTC)
Probably you're correct. I don't think there is a contradiction exactly; many authors assume their random variables have a density, and in that case the two statements coincide. Thenub314 13:07, 21 October 2006 (UTC)
Would someone like to write a brief explanation of the meaning of a characteristic function, for those of us without such strong statistics and mathematics backgrounds? I have no idea of its concept or application. An example that appeals to intuition would be great, thanks! BenWilliamson 10:28, 21 February 2007 (UTC)
There is something strange as it is formulated.
The would-be characteristic function
satisfies all the requested properties but it implies that the corresponding distribution has vanishing second moment
-- MagnusPI ( talk) 09:25, 24 April 2008 (UTC)
I believe the problem lies in condition: " is a positive definite function". I suggest that this doesn't hold, and there are results that indirectly prove it isn't ... but I don't know of a direct approach to showing this. Note that "positive definite function" is a non-straightforward math thing if you are not familiar with it. Melcombe ( talk) 09:34, 25 April 2008 (UTC)
On any compact set it is positive definite; the only points where it is not are but in this case I do not understand why the characteristic functions of some Lévy stable distributions can be accepted, since they show the same behaviour at
-- MagnusPI ( talk) 08:44, 28 April 2008 (UTC)
Interesting, but this is not what a physicist would call a positive definite function. I will therefore add a note to the main page.
-- MagnusPI ( talk) 08:55, 29 April 2008 (UTC)
Historia Matematica has a nice little thread on the history of characteristic functions which could be used to give some background on where c.f.'s came from. -- Michael Stone 19:22, 10 April 2007 (UTC)
Please check your work.
.
The article mentions that you can calculate the characteristic function by taking the conjugate of the Fourier transform of the pdf, but isn't this simply the inverse Fourier transform?... This should be made clearer. We tend to think about transforming using only the "direct" transform first and then transforming back again, but you could just as well say that you find the characteristic function by taking the inverse Fourier transform, and then come back again by taking the direct Fourier transform...
It's like the pdf is already the "transform", and then the characteristic function is the "original" function, that we could see as a signal to be transformed...
The idea is just that we save some words by using "inverse Fourier transform" instead of "conjugate of the transform", and also help readers see the inverse transform as something just as natural as the "direct" transform. No transform domain is intrinsically better! There is no reason to insist on seeing the pdf as the correlate of a signal to be transformed by the direct Fourier transform, and not the inverse one... —Preceding unsigned comment added by Nwerneck ( talk • contribs) 22:53, 17 November 2007 (UTC)
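To make this point concrete: with the convention F[f](t) = ∫ e^{−itx} f(x) dx, the characteristic function φ(t) = ∫ e^{+itx} f(x) dx is (up to normalization) the inverse transform of the pdf. A quick numerical sketch for N(0,1); the grid limits, step count, and evaluation point t = 1.2 are arbitrary choices of mine:

```python
import math
import cmath

def cf_numeric(t, pdf, lo=-10.0, hi=10.0, n=4000):
    # Midpoint-rule approximation of phi(t) = integral of e^{+itx} pdf(x) dx
    # (note the *positive* sign in the exponent -- the inverse-FT convention)
    dx = (hi - lo) / n
    total = 0j
    for k in range(n):
        x = lo + (k + 0.5) * dx
        total += cmath.exp(1j * t * x) * pdf(x)
    return total * dx

pdf = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)  # N(0,1) density
val = cf_numeric(1.2, pdf)
# exact cf of N(0,1) at t = 1.2 is exp(-1.2**2 / 2) ≈ 0.4868, purely real
```

The positive-exponent integral reproduces the known cf of N(0,1), which is the "inverse transform" reading of the definition.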
The article gives the impression that every random variable can be described by a characteristic function; I am almost sure (no, that's too strong to say in a statistics page - let me rephrase it: I have more than 95% confidence) that a pathological example can be designed that does not have a characteristic function... Something along the lines of the perforated interval in page standard probability space. Darth Albmont ( talk) 19:06, 18 November 2008 (UTC)
In the above-named section, what are the dimensions of the matrix T in relation to those of X? Are the dimensions the obvious ones, or are other sizes used occasionally? Melcombe ( talk) 15:45, 9 December 2008 (UTC)
I would like to have any mention of the moment-generating function (mgf) removed from the "definition" section. The concept of characteristic function can and should be defined without any reference to mgf; in fact such reference is only confusing, because unlike cf, mgf may not always exist, or may be unbounded, etc.
The reference itself is quite dubious as well. The expression denotes the mgf of a complex-valued random variable. I'm not entirely sure how the mgf of a complex-valued r.v. is supposed to be defined, but most likely the argument t will be complex as well. The expression is not correct: a function defined over a domain of real numbers cannot be evaluated at an imaginary point (at least not from the point of view of strict mathematics). Of course, if the function is given by some simple formula then we can always plug in instead of and hopefully get a meaningful result. But what if there is no simple formula? What if it is defined graphically, for example? It is difficult to interpret this without going back to the definition, but then such an interpretation turns into a tautology.
Anyway, I believe that the mgf is nothing more than a "related concept", and thus should be mentioned in the corresponding section at the end of the article.
Stpasha ( talk) 11:24, 17 June 2009 (UTC)
Does anybody know if standard normal N(0,1) is the only random variable whose characteristic function coincides with its pdf? Then it would be a fixed point of the Fourier transformation; wonder what this would imply. Stpasha ( talk) 18:35, 30 June 2009 (UTC)
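One caveat worth noting before chasing fixed points: for N(0,1) the cf and the pdf coincide only up to the constant √(2π), since φ(0) = 1 for every cf while the N(0,1) density at 0 is 1/√(2π). A one-line check (the evaluation point t = 0.9 is arbitrary):

```python
import math

t = 0.9  # arbitrary evaluation point
phi = math.exp(-t * t / 2)                           # cf of N(0,1)
pdf = math.exp(-t * t / 2) / math.sqrt(2 * math.pi)  # pdf of N(0,1)
ratio = phi / pdf
# ratio equals sqrt(2*pi) for every t: the cf is sqrt(2*pi) times the pdf,
# so "coincides" can only be meant up to that normalizing constant
```

So N(0,1) is a fixed point of the (suitably normalized, unitary) Fourier transform rather than of the cf map literally.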
The article says that, if phi is a characteristic function, then so is Im(phi). But one property all characteristic functions phi have is phi(0) = 1, so Im(phi)(0) = Im(phi(0)) = Im(1) = 0, which is not 1. So, surely, if phi is a characteristic function, then Im(phi) never is? I tried correcting this by removing Im(phi) from the list, but I never saved my edit. I previewed it, only to find out I'd messed up the code on all the other symbols for Re(phi) etc. Could someone who knows what they're doing with the code please remove Im(phi)? (Unless it actually does belong there, but for the life of me I cannot see how!) 90.206.183.244 ( talk) 15:38, 6 July 2009 (UTC)
Under "properties" the article states "the characteristic function of a symmetric random variable is real-valued and even". A similar remark is made in the caption for the graph of the characteristic function of a uniform distribution. But is this really generally true, or true only if the random variable is symmetric around zero? For example, a normal or Cauchy distribution with a non-zero mean is symmetric, but it appears that the characteristic functions would retain complex values. But I admit that I am weak on complex math, so maybe those i's that seem to be there drop out. Rlendog ( talk) 13:54, 21 August 2009 (UTC)
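As far as I can tell the property does require symmetry about zero: shifting X by a constant a multiplies the cf by e^{iat}, which introduces a complex phase. A small illustration with uniform distributions (the shift a = 2 and the point t = 0.7 are arbitrary choices of mine):

```python
import math
import cmath

t = 0.7

# U(-1, 1), symmetric about 0: phi(t) = sin(t)/t -- purely real and even
phi_sym = math.sin(t) / t

# U(1, 3), symmetric about a = 2: phi(t) = e^{i*2*t} * sin(t)/t
# The phase factor e^{i*a*t} makes the cf complex-valued.
phi_shifted = cmath.exp(1j * 2 * t) * math.sin(t) / t
# phi_shifted has a nonzero imaginary part, so "symmetric" in the stated
# property must mean symmetric about zero
```

So the i's do not drop out for a shifted symmetric distribution; only centering at zero makes the cf real.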
"The argument of the characteristic function will always belong to the same space where random variable X takes values" (Sect. "Definition") — really? In principle, it belongs to the dual space. In practice, the given space usually is endowed with an inner product and so, may be treated as dual to itself. But not always; the distinction may be essential for random processes (because of infinite dimension). In fact, the distinction manifests itself already in finite dimension, if the given space is just an abstract linear space (with no preferred basis). Boris Tsirelson ( talk) 17:42, 24 September 2009 (UTC)
So, if t is in the dual space, then which dual is it — algebraic or continuous? Of course the two duals coincide in finite dimensions, but suppose we want to consider the cf of an infinite-dimensional random element taking values in a Hilbert space H. Then the algebraic dual is "larger" than the original space, and they are not isomorphic. That would mean that the characteristic function has "more dimensions" than the distribution function. If we consider the continuous duals however, then by the Riesz representation theorem this space is isomorphic to the domain of the random variable, and there isn't much error in saying that t lies in the same space as X. // stpasha » 20:58, 26 September 2010 (UTC)
How about we denote the characteristic function with letter χ instead of φ? The reason being that φ is a reserved symbol for the pdf of the standard normal distribution. … stpasha » 00:04, 5 October 2009 (UTC)
How about providing a brief explanation of terms, such as the t in e^it? That would greatly improve understanding of the cf for those who have not taken advanced courses in math. "and t∈R is the argument of the characteristic function" is not fully comprehensible. I can only conclude that this is a real number, a continuous quantity. Does it always change from -infinity to +infinity? And what is the role of this argument?
Another suggestion is to provide examples of deriving the cf for, say, the Bernoulli, binomial and normal distributions. Such examples are useful for quickly learning what cfs are. User:NoName ( contribs) 17 May 2010 (UTC)
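In the spirit of this suggestion, here is a quick sketch comparing a Monte-Carlo estimate of E[e^{itX}] against the closed-form cf of a binomial distribution, ((1−p) + p·e^{it})^n; all parameters (n = 5, p = 0.4, t = 0.8, sample size) are arbitrary choices for illustration:

```python
import cmath
import random

random.seed(0)  # reproducible run

n, p, t = 5, 0.4, 0.8

# Draw Binomial(n, p) samples as sums of n Bernoulli(p) trials
samples = [sum(random.random() < p for _ in range(n)) for _ in range(20000)]

# Monte-Carlo estimate of the cf at t: the sample mean of e^{itX}
est = sum(cmath.exp(1j * t * x) for x in samples) / len(samples)

# Closed form: cf of a sum of independent variables is the product of cfs,
# so Binomial(n, p) has cf ((1 - p) + p*e^{it})^n
exact = ((1 - p) + p * cmath.exp(1j * t)) ** n

# est ≈ exact, up to Monte-Carlo error
```

This also illustrates the multiplicativity property: the binomial cf is just the Bernoulli cf raised to the n-th power.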
How about explaining what "E" is? I think we shouldn't assume any universal meaning for notation, especially since people come to this article from various countries and may not understand what "E" or "M" is. For example, the first sentence of the introduction should be:
The characteristic function ( FX(x)) provides an alternative way of describing a random variable (X). "Similarly to the cumulative distribution function", and then an explanation of what E in this formula is. This way every symbol used is explained in the description. I am new to Wikipedia and I am not comfortable editing such an important article yet. — Preceding unsigned comment added by Zenmaster82 ( talk • contribs) 21:10, 18 November 2011 (UTC)
What's T supposed to denote? How different is it from t? —Preceding unsigned comment added by 161.53.64.70 ( talk) 08:53, 7 October 2009 (UTC)
This text appears in the Definition section... I may be blind, but I don't see any parentheses in that section. OldMacDonalds ( talk) 03:45, 19 February 2014 (UTC)
"Inversion formulas", after the Lévy theorem: "This formula is valid only for strictly positive random variables." — Really?? Hard to believe. What exactly is meant? (This is a recent insertion by User:Manoguru.) Boris Tsirelson ( talk) 20:07, 29 July 2015 (UTC)
References
This [1] looks like a very useful statement, relating independence of to the characteristic functions and . Is there a reason for the absence? (If not, I can add it.)
Theorem (Kac's theorem, from [1]). Let be -valued random variables. Then the following statements are equivalent.
1. are independent
2. ,
[1] http://math.stackexchange.com/questions/287138/moment-generating-functions-characteristic-functions-of-x-y-factor-implies-x/287321#287321 — Preceding unsigned comment added by Ceacy ( talk • contribs) 11:01, 30 May 2016 (UTC)
Theorem. If a is (possibly) an atom of X (in the univariate case this means a point of discontinuity of FX ) then
But so this quantity is trivially zero. Is this supposed to reflect some kind of limit?
Manybytes ( talk) 03:48, 24 September 2019 (UTC)
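If I recall the standard statement correctly, the atom formula is indeed meant as a limit; as usually given in the Lévy-inversion context it reads:

```latex
\operatorname{P}(X = a) \;=\; \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} e^{-ita}\, \varphi_X(t)\, dt
```

The 1/(2T) averaging and the limit together pick out the mass at the atom; for a fixed finite T the quantity is not trivially zero, so the puzzle likely comes from the limit having been dropped in the rendered formula.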
The introduction contains this sentence:
"If a random variable admits a probability density function, then the characteristic function is the Fourier transform of the probability density function."
But the various definitions of the Fourier transform all involve an integral with exp(-ixt) or exp(-2πixt), i.e. a negative sign inside the exponential.
Doesn't this mean that the characteristic function is more like the inverse Fourier transform of the density function (if any)? 2601:200:C000:1A0:BC00:5039:DB55:E9EC ( talk) 18:51, 29 July 2022 (UTC)
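For what it's worth, with the convention \hat f(t) = \int e^{-itx} f(x)\,dx the two readings in this thread are reconciled by an identity valid for any real-valued density f:

```latex
\varphi_X(t) \;=\; \int_{-\infty}^{\infty} e^{itx} f(x)\,dx \;=\; \hat f(-t) \;=\; \overline{\hat f(t)}
```

So the cf is simultaneously the transform evaluated at −t (the "inverse transform, up to normalization" reading) and the complex conjugate of the transform (the article's current reading); which description one prefers is purely a matter of convention.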