This is the talk page for discussing improvements to the Fourier analysis article. This is not a forum for general discussion of the article's subject.
This level-4 vital article is rated B-class on Wikipedia's content assessment scale. It is of interest to several WikiProjects.
This talk page has been archived at Talk:Fourier transform/Archive1. I moved the page, so the edit history is preserved with the archive page. I've copied back the most recent thread. Wile E. Heresiarch 23:58, 20 Sep 2004 (UTC)
Wouldn't the correct notation be:
or
Since f(t) is in fact the inverse Fourier transform of F, which is itself a function of ?
Also, as a precedent, from the Laplace transform article:
Yes, but couldn't one argue that ω is not a specific point, but rather a locus of all of the points ω such that ω extends from minus infinity to plus infinity? Also, it is helpful to show F(ω) rather than plain old F as a convenience so that we all can keep track of the fact that F is a function of ω and not, say, γ or β or whatever. Or t for that matter. -- Metacomet 02:00, 13 March 2006 (UTC)
It would also be helpful to use a letter other than F to denote the function of interest, since we are already using F (in calligraphy style) to represent the Fourier transform operator itself. Perhaps we could use, oh I don't know, maybe X or Y to represent the transformed function...
So, for example:
seems less potentially ambiguous to me. -- Metacomet 02:10, 13 March 2006 (UTC)
I think using is slightly ambiguous, since it could be misinterpreted as the inverse Fourier transform multiplied by t. It would seem extraneous, since the inverse Fourier transform itself, as a result of the integral, generates a function of t...
-- Tristan Jones ( talk) 11:16, 13 March 2006 (MT)
In that case couldn't it be written as: ? I think there is value in showing the omega (or any other variable) even if it is not "technically" correct, as the meaning is the same and it shows more explicitly that the transform changes the function into a different 'space'... Also, since both methods could be considered correct (the omega merely shows that F is a function of some single variable, in this case called omega, but it could be anything else), to someone who has not previously seen the transform, writing is more obvious and easier to understand... -- Tristan Jones ( talk) 11:16, 13 March 2006 (MT)
When in danger of ambiguity, use a·(b+c) for multiplication and a(b+c) for evaluating the function a at the point (b+c). Jitse, you want to replace all three occurrences of ω by s. And there is no need to use curly brackets. And the outer pair of parentheses is redundant.
Bo Jacoby 18:12, 13 March 2006 (UTC)
There are other forms of the continuous Fourier transform, which have different advantages and are favoured by different people because of their mathematical or practical simplicity, or because they make the inverse transform look more like the transform. I mean, it's not just a few recalcitrant nutbars with a chip on their shoulder; I'm talking widespread usage. Most discussions of the Fourier transform mention this, so maybe we ought to as well. For example,
gets rid of the factor out the front, and puts s in terms of ordinary frequency rather than radians per second. This form is used in the course notes for Signal Processing here at Adelaide Uni. N-gon 10:08, 22 August 2006 (UTC)
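For what it's worth, the claimed property of the ordinary-frequency convention is easy to check numerically. A quick NumPy sketch (mine, not from the course notes), using the standard fact that e^(-πt²) is its own transform when the 2π sits in the exponent:

```python
import numpy as np

# Approximate F(s) = ∫ f(t) e^{-2πi s t} dt with a Riemann sum.
t, dt = np.linspace(-30, 30, 60_001, retstep=True)
f = np.exp(-np.pi * t**2)          # Gaussian; self-transforming under this convention
s = np.linspace(-2.0, 2.0, 9)
F = np.array([np.sum(f * np.exp(-2j * np.pi * sk * t)) * dt for sk in s])

# No 1/(2π) prefactor needed anywhere: F(s) = e^{-π s²} directly.
assert np.allclose(F, np.exp(-np.pi * s**2), atol=1e-8)
```

The same sum with e^(-iωt) instead would pick up the familiar 2π constants in the inverse.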
Delete this if you like but it will help lots of people. If you want to *truly* understand the Fourier Transform and where it really comes from then read the book "Who is Fourier? A mathematical adventure" ISBN 0964350408. It's excellent. Lecturers can't teach this subject for toffee. It's a shame I only found this book after my course involving Fourier but it more than makes up for it now. You can get it off Amazon.com Chris, Wales UK 16:14 25th November 2005
I don't know where else to ask. What's the difference between and ? Are they used correctly in this article? - Omegatron 01:56, Sep 19, 2004 (UTC)
Yeah. If you're picky you will note that t has no real purpose in the above formula. It really should be within the scope of a binding operator such as .; this, however, by the rules of lambda calculus, reduces to the term f. CSTAR 02:47, 19 Sep 2004 (UTC)
I'm used to engineering notation, where , so you can use f for frequency and avoid confusing Fs. Then of course there's . :-) - Omegatron 02:40, Sep 19, 2004 (UTC)
In fact, I vote that we change f(t)->F(ω) into some other letter (I know you won't use X, but maybe g?), to avoid confusing newcomers to the Fourier transform. - Omegatron 02:42, Sep 19, 2004 (UTC)
One thing the "engineering notation" does not allow for is the idea that functions have values. E.g., if f(x) = x³ for all values of x, then f(2) is the value of that function at 2, and is equal to 8. If you say f(ω) is the function to be transformed, and g(t) is the transformed function, then g(2) should be the value of the transformed function at the point t = 2. But if you use the "engineers' notation" and write , then you cannot plug 2 into the left side. But watch this: is the result of plugging 2 into the transformed function.
One thing to be said for the difficulties introduced by the engineers' notation that are avoided by the cleaner, simpler, but more abstract "mathematicians' notation", is that perhaps sometimes one ought not to be evaluating these functions pointwise anyway! But that's a slightly bigger can of worms than what I want to open at this moment ... Michael Hardy 22:38, 19 Sep 2004 (UTC)
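Hardy's function/value distinction is exactly the one programming languages enforce, which may make it concrete for some readers. A toy Python illustration (mine, not from the thread):

```python
# f is the function itself; f(2) is its value at the point 2 — different objects.
f = lambda x: x**3
assert f(2) == 8

# A transform in Hardy's sense maps functions to functions, not values to
# values.  A toy "delay" operator as illustration (hypothetical name):
delay = lambda g, u: (lambda t: g(t - u))
assert delay(f, 1)(3) == f(2) == 8   # delaying f by 1, then evaluating at 3
```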
The formula for the discrete Fourier transform
can be simplified because
into
This convention, that an n'th root of unity is written rather than , is convenient. Introducing the analytical concepts and and into an algebraic context is confusing and pointless.
The transform can be modified a little by taking the square root of n and by taking the complex conjugate of f.
Now this transformation is an involution. The procedure that transforms f into x is exactly the same as the procedure that transforms x into f. This is nice.
PS: I inserted the note "Moreover: reality in one domain implies symmetry in the conjugate domain."
Bo Jacoby 13:18, 8 September 2005 (UTC)
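Bo's involutory transform can be checked numerically. Assuming the intended definition is x_k = n^(-1/2) Σ_j conj(f_j) e^(2πijk/n) — my reading of "taking the square root of n and taking the complex conjugate of f", since the formulas above did not survive — applying it twice returns the original vector:

```python
import numpy as np

def involutory_dft(f):
    # x_k = n^{-1/2} * sum_j conj(f_j) * e^{2πi jk/n}  (assumed definition)
    n = len(f)
    j = np.arange(n)
    W = np.exp(2j * np.pi * np.outer(j, j) / n)   # matrix of n'th roots of unity
    return (W @ np.conj(f)) / np.sqrt(n)

rng = np.random.default_rng(0)
f = rng.normal(size=8) + 1j * rng.normal(size=8)

# The same procedure goes forwards and backwards: T(T(f)) == f.
assert np.allclose(involutory_dft(involutory_dft(f)), f)
```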
Stevenj removed my entry on the involutory Fourier transform. I think it should be put back. It makes life easier if you don't have to worry about the difference between forwards and backwards. Note that it is not merely "a trick to compute the inverse DFT in terms of the forward DFT" but a true identification of the inverse and the forward transform. The Fourier integral and Fourier series should be considered limiting cases of the DFT, so the involutory version is perfectly general and not specific to the discrete Fourier transform. Please explain your point of view more carefully. See involution. Bo Jacoby 12:23, 22 September 2005 (UTC)
Here's my site full of Fourier transform example problems. Someone please put it in the external links if you think it's helpful!
http://www.exampleproblems.com/wiki/index.php/PDE:Fourier_Transforms
The introductory paragraph for any section in this encyclopedia should be accessible to all readers.
What was wrong with my input? Was it too easy to understand?:
" It is a mathematical technique for expressing a waveform as a weighted sum of sines and cosines."
Some introductory info is appropriate as:
It usually happens in mathematical analysis that certain given functions are to be operated on, and other functions or numbers are thus obtained. Often, the operations used involve integration, and here is found a very general class of operators, the so-called integral transforms. The simplest integral transform is the operation of integrating.
---
Voyajer
04:34, 30 December 2005 (UTC)
---
I concur, the introduction hardly explains what FT is to the layman.
Can anyone tell me why the first term of the Fourier series when expanding a square wave ?
I have done the exercise about it.
Thanks a lot -- HydrogenSu 14:53, 19 January 2006 (UTC)
If you consider this article: [1], it appears that the definition of DFT you have here on this page is actually the inverse DFT. Whose page is wrong?
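Probably neither page is wrong: the two conventions differ only in the sign of the exponent and the placement of the 1/N factor, and both appear in the literature. A quick NumPy check (mine) of how one text's "forward" relates to another's "inverse":

```python
import numpy as np

x = np.arange(8, dtype=complex)
n = len(x)

# NumPy's forward FFT uses e^{-2πi jk/n} with no scaling; a text whose
# "forward" DFT uses e^{+2πi jk/n} and a 1/n factor is computing what
# NumPy calls the inverse — same information, different bookkeeping:
assert np.allclose(np.fft.ifft(x), np.conj(np.fft.fft(np.conj(x))) / n)
```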
James Nilsson and Susan Riedel's book Electric Circuits, seventh edition, defines the Fourier series like this:
and inverse:
I couldn't find a matching type of Fourier transform on here, and they just call it "Fourier transform" as if there are no other types. It goes on to give this table of transforms:
f(t) | F(ω)
---|---
1 | 2πδ(ω)
A (constant) | 2πA·δ(ω)
sgn(t) | 2/(jω)
u(t) | πδ(ω) + 1/(jω)
etc. | etc.
Is there something I'm missing? Fresheneesz 02:16, 5 June 2006 (UTC)
Why does this article start off with writing down the inverse transform?!?
Also, there is no mention of PDEs anywhere. This is where the subject found its birth and much of its most fruitful application. I will try to add something to this effect, but input would be appreciated. Also, as defined here (as with any other definition I am familiar with), the DFT takes a finite vector and produces a vector of the same length.
But in the section "Family of Fourier transforms" we include it on this list as periodic. There should be some explanation there as to what is meant. Are we thinking of the DFT living on ?
This article establishes that the Fourier transform of a function s(t) is a new function and this relation is written
However, a frequently appearing notation (at several places in Wikipedia) is to write it simply as
There has already been a discussion above about the validity of the former notation. But it should also be recognized that the latter notation, while formally incorrect, has the advantage of allowing us to insert an arbitrary function expression into the transform operator. For example, in a compact way we can express that the Fourier transform of a sinc function is a rectangular function as
The meaning of the operator has here changed into a transform-table lookup, and this way of using appears handy and is probably the motivation for its frequent use. There are limitations to this notation as well, for instance writing
implicitly assumes that the function being transformed is a function of t, not of a.
Here are my points
The advantage of the second form of the operator is that it allows a formalization of the lookup functionality, which also makes explicit which variable is used in the transformation. -- KYN 09:34, 4 August 2006 (UTC)
Using an arrow instead of a comma makes it more readable.
The function f, defined by y = f(x), can be written f = (y←x), pronounced: "y as a function of x". This nice notation allows you to distinguish the power function, (x^y←x), from the exponential function, (x^y←y).
The transform
means that
and
This notation does not identify the function f with the function value f(x), using a special letter x to identify the independent variable. Bo Jacoby 12:07, 4 August 2006 (UTC)
The natural place to put the notation is in the article function (mathematics), where presently the sum of two functions is defined by (f+g)(x) = f(x)+g(x) rather than by f+g = (f(x)+g(x)←x), thus defining the function value (f+g)(x) rather than the function f+g. For functions of several variables and for functions whose arguments or values are functions rather than simple variables, the notation y = f(x) instead of f = (y←x) is severely insufficient. The lambda calculus writes the independent variable to the left, f = λ x.y, rather than to the right as in f = (y←x). I prefer the latter convention because the formula for composition of functions,
is written without breaking the order like this:
Anyway, when the arrow points from the independent to the dependent variable there is not much room for misunderstanding.
Note also the elegant notations for inverse functions such as the natural logarithm and for the square root:
I will make a note on Talk:Function (mathematics) to ask the advice of the mathematicians there.
Bo Jacoby 11:40, 7 August 2006 (UTC)
I agree. Is this understandable? :
Your examples look like this:
There is no need to use special names for dummy variables, nor is there any need for special function names:
Bo Jacoby 09:16, 8 August 2006 (UTC)
The problem with the notation that Bo is suggesting for the Fourier transform, and for functions in general, is that it is much less common than the usual notations which are already employed in the articles. If the notation is established but obscure, it might be worth mentioning in function (mathematics), but a wholesale adoption in other articles is not a good idea.
Given that Bo has a history of trying to promote his own invented nonstandard notations in Wikipedia, however, I would also request that he provide a mainstream citation for his notation before inserting it into any article.
—Steven G. Johnson 17:03, 8 August 2006 (UTC)
Steven seems to have an excellent point here. I agree. Thenub314 23:50, 8 August 2006 (UTC)
I am unconvinced that we have a severe notational problem. It seems to me that in this case the proposed cure is worse than the disease (for the reasons StevenJ gave). Thank you for suspending progress. Also, I don't agree on the relevance of a programming language, because it is created for a completely different set of constraints. -- Bob K 18:24, 9 August 2006 (UTC)
I agree that "informal usage of the operator should be avoided, in particular when it is not perfectly clear which variable the function to be transform depends on."
The formula tells which variable the function to be transformed depends on.
There is no need for calligraphy. The formula is: F(s(t+u)←t) = F(s)·(e^(i·ω·u)←ω).
Linearity is irrelevant in the context.
Bo Jacoby 14:31, 13 August 2006 (UTC)
Hi Bob. The symbols t and t0 enter symmetrically on the left-hand side of , but asymmetrically on the right-hand side. What is ? The answer depends on whether u or v is the independent variable, and the readers are clueless. The explicit multiplication sign improves clarity: F(s(t+u)←t) = F(s)·(e^(i·ω·u)←ω). Rather than a formula for the function, you might prefer a formula for the function value: F(s(t+u)←t)(ω) = F(s)(ω)·e^(i·ω·u). The expression F(s) = F(s(t)←t) is the transform of the function s. People are acquainted with expressions giving function values, while the Fourier transform is about functions. You need some notation for a function f apart from the function value f(x) or f(t) or f(ω). Bo Jacoby 09:04, 14 August 2006 (UTC)
Thanks, Bob. When it comes to liking and disliking there is no point arguing. I trust that time will settle the question, so that either you will learn to love the arrow (which you seriously doubt because it looks strange), or I will learn to love the ambiguity (which I seriously doubt, because unambiguous notation makes commentary unnecessary, which is lovely). The html is nasty mainly because of the italics. I write first without the italics, F(s)(ω)·e^(i·ω·u), and then I insert two apostrophes around each variable name to italicize it: F(s)(ω)·e^(i·ω·u). Bo Jacoby 12:14, 14 August 2006 (UTC)
Hi, the topic of this thread is twofold: (1) is there an established notation which can be used for saying, e.g., the FT of sinc is rect, in a formally correct way, and (2) should the notation issue be mentioned in the article. Bo has proposed a notation which addresses (1), which is OK with me, but since I haven't seen any evidence that it is established I don't want to write about it, since Wikipedia should not be the place to introduce new notations which do not exist in the literature. An alternative to Bo's arrow can be found in Daubechies' "Ten Lectures on Wavelets", where the anonymous variable is simply denoted with a dot: . To my surprise the informal notation can be found in hard-core math textbooks, for example see Debnath & Mikusinsky "Introduction to Hilbert Spaces with Applications" (2ed, page 194). About (2), I will try to figure out something based on the text I wrote above and put it into the article shortly. -- KYN 15:48, 14 August 2006 (UTC)
The dot notation has a place. But it needs to know that place and stay there. When the point is simply that the Fourier tranform of sinc is rect, it avoids the irrelevant question of whether we're talking about time or space or some other dimension. -- Bob K 11:31, 15 August 2006 (UTC)
Standard notation for F(x(·)) is F(x(t)). You cannot tell, however, whether you are referring to the value F(x(t)) or the function (F(x(t))←t) or the operator value F(x) = F(x(t)←t) or the operator itself F = (F(x)←x) = (F(x(t)←t)←(x(t)←t)). Bo Jacoby 10:08, 16 August 2006 (UTC)
My conclusion of this discussion is:
-- KYN 19:50, 15 August 2006 (UTC)
The following notation is supposed to be standard, , according to the recommendation at Talk:Function (mathematics)#Standard notation. The maps-to arrow must have the special shape , and it must point to the right, otherwise it is not standard. Bo Jacoby 12:30, 17 August 2006 (UTC)
Bob, you cannot be serious. Which reference are you talking about? Bo Jacoby 12:25, 18 August 2006 (UTC)
Don't you consider me and KYN as part of the community of mathematicians? Don't you think that wikipedia should be readable to others than the few mathematicians who can reliably guess the meaning of say ? Bo Jacoby 12:25, 18 August 2006 (UTC)
Sorry. I agree that there is no point in solving a nonexistent problem. It is no problem to write f(x) = x² rather than f = λx.x². However, the notation f(x(t)) is ambiguous, as has been pointed out. We cannot pretend that there is no problem of notation. Bo Jacoby 14:31, 20 August 2006 (UTC)
Yes, BobK, you wrote: "I am unconvinced that we have a severe notational problem". Now we agree that there is a severe notational problem. KYN suggests "to leave the arrow and the dot until more evidence of their usages can be demonstrated". Now we have evidence for the maps-to arrow and the lambda-dot. Is there any reason why we should not start clarifying the article using these notations? Bo Jacoby 16:50, 20 August 2006 (UTC)
Somehow, people in mathematics and engineering have learned to live with this "severe notational problem," and the reason is obvious. The meaning of is perfectly clear, taken in context, since it only has one reasonable interpretation if you know that stands for the Fourier transform. In the rare cases where clarification is needed, it can be given in prose. This is no reason to depart from the most established and widely understood notation for this subject. "A foolish consistency is the hobgoblin of little minds." —Steven G. Johnson 17:53, 20 August 2006 (UTC)
It would be nifty if there were a small graph of sinc function, etc. in the tables of equations. The continuous graphs plotted with nice smooth curves to emphasize that it is continuous, while discrete graphs plotted as more of a bar-chart or set of spikes, to emphasize its discreteness. Do we really need 4 sets of equations and graphs:
? (If there is some significant difference in the equations and graphs, it would be more visible if they were plotted all on one page, rather than scattered across all 4 articles).
I understand that the normal distribution is an eigenfunction of the Fourier transform. Is this the case? Should the article mention this? —Ben FrantzDale 19:32, 9 August 2006 (UTC)
Rbj is correct in that there are uncountably many choices of eigenfunctions, because there are only four eigenvalues (the 4th roots of unity), each with infinite multiplicity. However, arguably the most important eigenfunctions of the continuous Fourier transform are the Hermite functions (of which the gaussian is a special case). These functions are orthogonal and have some optimal localization properties I believe. The most appropriate place to mention them, however, would be under continuous Fourier transform. Unfortunately, they don't generalize in an easy way to the other Fourier variants. For example, it is not obvious what is the appropriate discrete analogue of the Hermite functions for the DFT, and this is still being debated in the literature. —Steven G. Johnson 17:59, 15 August 2006 (UTC)
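The Gaussian special case is easy to verify numerically: under the convention F(ω) = ∫ f(t) e^(-iωt) dt, the unit Gaussian maps to √(2π) times itself, i.e. it is an eigenfunction up to the convention's constant. A sketch (mine, not article content):

```python
import numpy as np

t, dt = np.linspace(-20, 20, 100_001, retstep=True)
g = np.exp(-t**2 / 2)                       # unit Gaussian
w = np.linspace(-3, 3, 7)
Fg = np.array([np.sum(g * np.exp(-1j * wk * t)) * dt for wk in w])

# The Gaussian reproduces itself, up to the constant sqrt(2π):
assert np.allclose(Fg, np.sqrt(2 * np.pi) * np.exp(-w**2 / 2), atol=1e-8)
```

With the symmetric (unitary) normalization the constant disappears and the eigenvalue is exactly 1.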
Hello, my question is why nobody has pointed out the "asymptotic" behavior of a Fourier transform, in the sense:
as
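The property presumably being alluded to is the Riemann–Lebesgue lemma: for an integrable f, F(ω) → 0 as |ω| → ∞. A numerical illustration (mine) with a rectangular pulse, whose transform is the decaying 2·sin(ω)/ω:

```python
import numpy as np

t, dt = np.linspace(-1, 1, 200_001, retstep=True)
f = np.ones_like(t)                          # rectangular pulse on [-1, 1]

def F(w):                                    # F(w) = ∫ f(t) e^{-iwt} dt, numerically
    return np.sum(f * np.exp(-1j * w * t)) * dt

mags = [abs(F(w)) for w in (1.0, 10.0, 100.0)]
assert mags[0] > mags[1] > mags[2]           # |F(w)| decays toward 0 as w grows
```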
I wish to raise an issue about the use of the expression "linear combination" in relation to the continuous Fourier transform. Much as I appreciate that the transformed function is similar to the coefficients of a linear combination, it's not what the transformed function IS. For one thing, we could change many of the values of the transformed function, and as long as we did it over finitely many values, or over a similarly null set, we would change really nothing about the essential properties of the transformed function, i.e. it would generate the same function when the inverse transform was applied. So we're not really talking about the individual contributions made by different frequencies, because each frequency taken in isolation contributes nothing. Also, and perhaps this is a pointless thing to argue about, I have repeatedly had it drilled into me by more than one person that linear combinations have to be finite, not infinite, and definitely not continuous.
See Talk:Continuous Fourier transform -- catslash 12:19, 4 September 2006 (UTC)
Fourier integral redirects here, and I was reading Partial differential equation which suggested it is somewhat different from a Fourier transform. Can anyone explain this? - Rainwarrior 16:53, 27 September 2006 (UTC)
This section used to be called "Variants of Fourier transforms", when the whole article was called "Fourier transform". Is it really the analysis that differs, or is it only the transforms?
On another note, I feel that much of this section is redundant, which is discussed in Talk:Fourier transform. I'll see if I can change the section so that it expresses what I meant in that discussion. — Sebastian (talk) 00:25, 8 October 2006 (UTC)
I was trying to standardize the formulas so the comparison is easy. Here's what I ended up with:
Name | transform | inverse transform |
---|---|---|
(Continuous) Fourier transform | ||
Fourier series | ||
Discrete-time Fourier transform | ||
Discrete Fourier transform |
Now I feel cheated: The formula for the DTFT is basically the same as that for the Fourier series. Both transforms go from a continuous, periodic function to a discrete, aperiodic one. Calling one variable "time" and the other "frequency" is an arbitrary convention that has nothing to do with mathematical reality. Is that really the way it is defined, or are we only looking at the inverse transformation here? I rather expected something like this:
Name | transform | inverse transform |
---|---|---|
(Continuous) Fourier transform | ||
Fourier series | ||
Discrete-time Fourier transform | ||
Discrete Fourier transform |
— Sebastian (talk) 03:28, 8 October 2006 (UTC)
Considering this is such a technical article, why do I come across the word "jaggies"? 82.25.174.136 19:22, 24 November 2006 (UTC)
16-Oct-2007: To handle concerns of "too technical" (or "too simple"), I have expanded the article with a new bottom section, Basic explanation. That section gives a simple explanation for "Fourier analysis" (plus the applications and branches of mathematics used), without cluttering the main text of the article. I think such a bottom section is about the best way to explain many novice issues about the subject, without annoying the more advanced readers about the background basics, such as using "formulas in integral calculus" and "algebra". I have removed the "{technical}" tag at top (for this year). - Wikid77 23:17, 16 October 2007 (UTC)
I appreciate what you are attempting to do. And I appreciate that you are trying to keep the intro uncluttered. If it were up to me, I would put it in a separate place completely, such as another article or a subcategory (Fourier_analysis/Basic_explanation_1 "1" because there are bound to be other perspectives). But that is not how Wikipedia resolves these things. (Pity, because I think it would save a lot of editor time.) Anyhow, I perused your reference for the numerical integration analogy, and it appears to come from chapter 8.5, where he poses and tries to answer the question "how do we determine the bandwidth of each of the discrete frequencies in the frequency domain?" His answer is "the bandwidth can be defined by drawing dividing lines between the samples." That might give the desired answer, but in mathematics, the end does not justify the means. IMO, he should clearly state that it is not to be taken literally. And it bothers me to refer to that fabrication as a "basic explanation". It is an incorrect explanation that just happens to fit the formula, apparently.
-- Bob K 15:10, 25 October 2007 (UTC)
Also, in chapter 8.6, pertaining to "calculating the DFT", he does not mention the numerical integration analogy. It turns into "correlation", which I think is a much better description of what's going on.
Another issue is the statement in the article: "The calculation computes the total area of thin rectangles aligned under a similar skewed curve,". Does the reference say anything about a "skewed curve"? What does that mean?
-- Bob K 15:18, 25 October 2007 (UTC)
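The correlation description is easy to make concrete: bin k of the DFT is simply the inner product of the signal with the k-th complex sinusoid, no integration analogy required. A minimal NumPy check (mine):

```python
import numpy as np

N = 16
n = np.arange(N)
x = np.random.default_rng(1).normal(size=N)
k = 3

# Bin k of the DFT = correlation of x with the k-th complex sinusoid:
assert np.allclose(np.fft.fft(x)[k], np.sum(x * np.exp(-2j * np.pi * k * n / N)))
```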
Bob, your explanation is rather a detailed mechanical one, but it's actually a better explanation of the periodogram (including Lomb's method; see Least-squares spectral analysis). That's because it doesn't describe the use of orthogonal basis functions, nor the notion of decomposition, from which inversion derives. By focusing on how to "measure the amplitude and phase of a particular frequency component" it presupposes the idea of a particular frequency component and misses the idea of a transform: that a signal can be expressed in terms of such components.
Coming up with simple explanations is tricky; we've all tried, and I think all failed. Maybe we can find something in a book? Dicklyon 16:04, 28 October 2007 (UTC)
The reason I changed "inverse transform" to "inverse transform (synthesis)" was not because synthesis is mentioned in the other article (which it's not). It's because the inverse transform is not previously mentioned in this article. It just appears out of nowhere.
-- Bob K 14:50, 1 December 2007 (UTC)
I think instead of just listing the applications, it would help a lot to include exactly what function the Fourier analysis has in each application. -- Sylvestersteele ( talk) 10:12, 20 January 2008 (UTC)
Not a bad idea, let's expand the list here and see if we get something of good enough quality to put in the article.
Fourier analysis has many scientific applications:
It seems that in some sense I could not really do justice to the magnitude of applicability even in cases where I know the subject. But here is a start at least; I hope other people add their two cents. Thenub314 ( talk) 11:20, 15 September 2008 (UTC)
Why do we separate "Applications in signal processing" from "Applications"? Thenub314 ( talk) 13:59, 18 September 2008 (UTC)
The new lead by User:Thenub314 seems a bit vacuous, whereas the old one had at least a bit of a definition in the opening sentence. I don't understand why the attempt to make it less technical, but I think it is not a success. Comments? Dicklyon ( talk) 04:37, 19 September 2008 (UTC)
My first, most basic attempt at an intro section... many changes to be made. Thenub314 ( talk) 14:55, 20 September 2008 (UTC)
The subject of Fourier analysis began with the study of Fourier series. In 1807, Joseph Fourier showed how one could attempt to express general functions as sums of trigonometric functions. The attempts to understand the full validity and generality of Fourier's method led to many important discoveries in mathematics, including Dirichlet's definition of a function, Riemann's integral in its full generality, Cantor's study of cardinality, and Lebesgue's definition of the integral (cite Zygmund's intro here). At the same time, from its inception with Fourier's work on heat propagation, the study has been useful in many applications, so much so that the study of the discrete Fourier transform and the FFT algorithm predates the invention of the computer (cite Gauss's work).
Maybe going the other direction will be helpful. I've put a plain simple statement of what it is for the lead sentence. I can't see how starting with something more abstract, more general, or less definite than that is going to help anybody, but I'm sure there are still ways it can be improved. Once we can say what it is, it should be easier to write an introduction... Dicklyon ( talk) 05:06, 21 September 2008 (UTC)
Thenub, please respond here if you disagree, instead of just changing it. I very much disagree with the approach "is a subject area" in a lead sentence, instead of a plain simple statement of what it is. Dicklyon ( talk) 21:48, 21 September 2008 (UTC)
Dicklyon, I did respond here, but I put it under the thread above, which is about the lead, instead of the thread about the introduction. Thenub314 ( talk) 06:41, 22 September 2008 (UTC)
I am changing the revision to the Fourier Series section of the Variants of the Fourier transform. It seems to me that using the Poisson summation formula to introduce Fourier series is too complicated. It is also circular, and requires more of the function. Perhaps what is there could be more straightforward, but I didn't see the change as an improvement. Thenub314 ( talk) 06:06, 24 September 2008 (UTC)
If that is still germane, please clarify.
I have tried to rewrite the section to be a bit more clear. Perhaps it should be the first of the "Variants of Harmonic analysis" that we list, seeing as it is usually the variant the students encounter first. Thenub314 ( talk) 08:46, 24 September 2008 (UTC)
For starters, I would like to move the FS section closer (in appearance) to the FT notation. (See latest revision.)
-- Bob K ( talk) 14:38, 25 September 2008 (UTC)
I think we should stick to attempting to describe the subject simply; more complex ideas, like the Fourier transform of a periodic function, are better suited to the main articles. Thenub314 ( talk) 15:22, 25 September 2008 (UTC)
My mistake, the sentence "The Fourier transform of a periodic function is zero-valued except at a discrete set of harmonic frequencies, where it can diverge." was trying to make a connection to a Dirac comb. It is, by the way, false that the Fourier transform of a periodic function converges to zero at points outside this discrete set of harmonic frequencies. It diverges at every point, so it cannot be used to distinguish the harmonic frequencies on which you'd like to base your Fourier series. I dislike introducing the inverse transform before the forward transform. The comment about convergence of Fourier series shouldn't be a footnote. For the same reason we don't define the Fourier transform on R^n, I think we should stick with 2π-periodic functions; it keeps the descriptions simpler without losing much. My understanding of the history was that Fourier was studying heat distributions in a ring, which is why I put more emphasis on functions defined on a circle. Thenub314 ( talk) 18:03, 25 September 2008 (UTC)
The article currently uses both and for the "time" domain and and for ordinary frequency (cycles per time-unit). It also uses for the time-domain function, but not at the same time it uses for frequency (thank goodness).
Predictably, it is causing confusion and editorial thrashing. But of course it does accurately reflect the real world, which is also inconsistent and confusing.
-- Bob K ( talk) 13:01, 28 March 2009 (UTC)
Most of the EE texts on signals and systems use and for input and output functions, respectively. Groovamos ( talk) 06:11, 18 March 2016 (UTC)
Maybe it's just because I'm not a layman on this subject, but the How it works (basic explanation) section really seems to grate. At best, I think that this belongs in the cross-correlation article, but either way there are some specific points in the language that need addressing:
Basically, I think that as it stands this section has more potential to confuse a newcomer than to help them. Any thoughts? Oli Filth( talk| contribs) 21:35, 28 May 2009 (UTC)
Currently the Fourier transform has the factor 2π in the exponent of e, which I find reasonable since this definition simplifies most laws (no constant factors). The factor 2π is also in the exponent of e in the discrete Fourier transform. For consistency, I suggest having this factor also in the Fourier series' exponential-function argument, meaning that the Fourier series would express 1-periodic functions. HenningThielemann ( talk) 16:15, 1 November 2010 (UTC)
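A quick numerical check of this convention (numpy; the grid and function names are mine): with the factor 2π in the exponent, the k-th coefficient of a 1-periodic function is c_k = ∫₀¹ f(x) e^(−2πikx) dx, and a pure harmonic picks out exactly one coefficient.

```python
import numpy as np

# Sketch of the 1-periodic convention: with 2*pi in the exponent, the k-th
# Fourier coefficient of a 1-periodic f is c_k = integral_0^1 f(x) e^{-2 pi i k x} dx.
# For band-limited f, an equispaced Riemann sum computes the integral exactly.
N = 256
x = np.arange(N) / N                      # one period, [0, 1)
f = np.exp(2j * np.pi * 3 * x)            # a pure 3rd harmonic with period 1

def coeff(samples, k, x):
    return np.mean(samples * np.exp(-2j * np.pi * k * x))

c3 = coeff(f, 3, x)   # the matching harmonic: coefficient 1
c2 = coeff(f, 2, x)   # a non-matching harmonic: coefficient 0, by orthogonality
```

With this normalization no constant factors appear in either direction, which is the simplification referred to above.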
The Fourier transform is a representation of a signal as a collection of sinusoids. But when adding two periodic signals, the resultant would be another periodic signal. If we have a Fourier transform for aperiodic signals, it implies that the sum of periodic signals can result in an aperiodic signal. Can anyone clarify this paradox? — Preceding unsigned comment added by 103.247.48.153 ( talk) 18:31, 27 August 2015 (UTC)
I'll give it a try. The sum of two periodic signals gives another periodic signal only if the ratio of their periods is a rational number. Second, periodic signals can be represented as a Fourier series, so that (by the first point) not all sums of pairs of periodic signals will have a Fourier series, and no signal which is not square integrable (such as the sum of two differently periodic signals) can be represented as a Fourier transform*. So sums of sets of periodic signals are not subject to Fourier transformation, except for the trivial case below, and a sum of a set of periodic signals only has a Fourier series if all possible pairings of their periods are in rational proportion. This leaves you with the scenario that not all sums of sets of periodic signals are subject to Fourier analysis without subjecting the members of the set to analysis individually and summing the results. There is the trivial case of two periodic signals, one being the negative of the other, whose sum is square integrable.
*Not strictly true: with the use of limits, the Fourier series of a periodic signal can be viewed as a special case of the Fourier transform, made up of Dirac delta functions. Also, as mentioned, if the sum of periodic signals is not periodic, then each component can be analysed independently as a Fourier series and the results summed for a visual graph of Dirac deltas, representing overlaid Fourier series.
I'm not a mathematician but retired EE so I would welcome any correction. Groovamos ( talk) 19:18, 17 March 2016 (UTC)
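The rational-ratio condition above is easy to check numerically (a sketch; the periods 3 and 5 are my arbitrary choice):

```python
import numpy as np

# Periods 3 and 5 have a rational ratio, so the sum repeats with period lcm(3, 5) = 15.
def s(t):
    return np.sin(2 * np.pi * t / 3) + np.sin(2 * np.pi * t / 5)

# Periods 1 and 1/sqrt(2) have an irrational ratio: no common period exists;
# in particular 15 is not a period of this sum.
def a(t):
    return np.sin(2 * np.pi * t) + np.sin(2 * np.pi * np.sqrt(2) * t)

t = np.linspace(0.0, 30.0, 1000)
sum_is_periodic = np.allclose(s(t), s(t + 15.0))        # True
sum_is_not_periodic = not np.allclose(a(t), a(t + 15.0))  # True
```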
It is not true that all functions can be Fourier transformed. Already Euler remarked on that. The requirement is that the function is square-integrable in the sense of Lebesgue, or, expressed otherwise, that the function has a length in the relevant Hilbert space. This condition is also applicable to the Laplace transform and the Heaviside operator etc.
I do not have a reference, but it was proven in 1978 (if I remember correctly) and everybody knows it (except the author of this article).
Burningbrand ( talk) 17:09, 6 August 2019 (UTC)
This is necessarily a complex subject, so what about some worked examples? Perhaps some classics. An infinitely long pure sine wave. A square wave. A bell curve (I don't remember the name). Two sine waves that heterodyne. I note that the article on Fourier series has examples, so there is a precedent. Well, sort of. — Preceding unsigned comment added by 2001:8003:E448:D401:7D27:BA42:5CB1:E259 ( talk) 08:10, 25 August 2019 (UTC)
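One of the requested classics, sketched numerically (the discretization is mine, not from the article): the transform of a unit rectangular pulse is the sinc function, under the ordinary-frequency convention F(f) = ∫ rect(t) e^(−2πift) dt.

```python
import numpy as np

# F(f) = integral_{-1/2}^{1/2} exp(-2 pi i f t) dt = sin(pi f)/(pi f) = sinc(f).
# Midpoint rule on [-1/2, 1/2]; accurate to roughly 1e-6 at these frequencies.
M = 2000
h = 1.0 / M
t = -0.5 + (np.arange(M) + 0.5) * h
freqs = np.array([0.0, 0.5, 1.0, 1.5, 2.5])

F = np.array([np.sum(np.exp(-2j * np.pi * fk * t)) * h for fk in freqs])
expected = np.sinc(freqs)   # numpy's sinc is the normalized sin(pi x)/(pi x)
```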
In danger of ambiguity, use a·(b+c) for multiplication and a(b+c) for evaluating the function a at the point (b+c). Jitse, you want to replace all three occurrences of ω by s. And there is no need to use curly brackets. And the outer pair of parentheses is redundant.
Bo Jacoby 18:12, 13 March 2006 (UTC)
There are other forms of the continuous Fourier transform, which have different advantages and are favoured by different people because of their mathematical or practical simplicity, or because they make the inverse transform look more like the transform. I mean, it's not just a few recalcitrant nutbars with a chip on their shoulder; I'm talking widespread usage. Most discussions of the Fourier transform mention this, so maybe we ought to as well. For example,
gets rid of the constant factor out the front, and gets s in terms of frequency rather than radians per second. This form is used in the course notes for Signal Processing here at Adelaide Uni. N-gon 10:08, 22 August 2006 (UTC)
Delete this if you like but it will help lots of people. If you want to *truly* understand the Fourier transform and where it really comes from, then read the book "Who is Fourier? A Mathematical Adventure" ISBN 0964350408. It's excellent. Lecturers can't teach this subject for toffee. It's a shame I only found this book after my course involving Fourier, but it more than makes up for it now. You can get it off Amazon.com. Chris, Wales UK 16:14 25th November 2005
I don't know where else to ask. What's the difference between and . Are they used correctly in this article? - Omegatron 01:56, Sep 19, 2004 (UTC)
Yeah. If you're picky you will note that t has no real purpose in the above formula. It really should be within the scope of a binding operator such as λ; this, however, by the rules of lambda-calculus, reduces to the term f. CSTAR 02:47, 19 Sep 2004 (UTC)
I'm used to engineering notation, where x(t) transforms to X(f), so you can use f for frequency and avoid confusing Fs. Then of course there's ω = 2πf. :-) - Omegatron 02:40, Sep 19, 2004 (UTC)
In fact, I vote that we change f(t)->F(ω) into some other letter (I know you won't use X, but maybe g?), to avoid confusing newcomers to the Fourier transform. - Omegatron 02:42, Sep 19, 2004 (UTC)
One thing the "engineering notation" does not allow for is the idea that functions have values. E.g., if f(x) = x3 for all values of x, then f(2) is the value of that function at 2, and is equal to 8. If you say f(ω) is the function to be transformed, and g(t) is the transformed function, then g(2) should be the value of the transformed function at the point t = 2. But if you use the "engineers' notation" and write , then you cannot plug 2 into the left side. But watch this: is the result of plugging 2 into the transformed function.
One thing to be said for the difficulties introduced by the engineers' notation that are avoided by the cleaner, simpler, but more abstract "mathematicians' notation", is that perhaps sometimes one ought not to be evaluating these functions pointwise anyway! But that's a slightly bigger can of worms than what I want to open at this moment ... Michael Hardy 22:38, 19 Sep 2004 (UTC)
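Hardy's distinction is exactly the one programming languages make with higher-order functions: a transform consumes a function and returns a function, which can then be evaluated at a point, just as f(2) = 8 for f(x) = x³. A numerical sketch (the quadrature setup is mine):

```python
import numpy as np

# A transform maps a function to a function; the returned object can then
# be evaluated pointwise. Riemann sum on a wide grid, which is highly
# accurate for rapidly decaying integrands.
def fourier(func, lo=-10.0, hi=10.0, n=4000):
    dt = (hi - lo) / n
    t = lo + dt * np.arange(n)
    return lambda w: np.sum(func(t) * np.exp(-1j * w * t)) * dt

g = fourier(lambda t: np.exp(-t**2))   # g is itself a function
value_at_zero = g(0.0)                 # a *value*: integral of exp(-t^2) = sqrt(pi)
```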
The formula for the discrete Fourier transform,
x_k = Σ_{j=0}^{n−1} f_j · e^(−2πi·j·k/n),
can be simplified, because e^(2πi/n) can be written 1^(1/n), into
x_k = Σ_{j=0}^{n−1} f_j · 1^(−j·k/n).
This convention, that an n'th root of unity is written 1^(1/n) rather than e^(2πi/n), is convenient. Introducing the analytical concepts e, π and i into an algebraic context is confusing and pointless.
The transform can be modified a little by dividing by the square root of n and by taking the complex conjugate of f.
Now this transformation is an involution. The procedure that transforms f into x is exactly the same as the procedure that transforms x into f. This is nice.
PS: I inserted the note "Moreover: reality in one domain implies symmetry in the conjugate domain."
Bo Jacoby 13:18, 8 September 2005 (UTC)
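For what it's worth, the involutory property described above checks out numerically. Phrased in terms of numpy's standard DFT (my framing, not Bo's notation): conjugating the DFT output and dividing by √n gives a transform T with T(T(f)) = f.

```python
import numpy as np

# T(f)_k = (1/sqrt(n)) * sum_j conj(f_j) * exp(2 pi i j k / n),
# which equals conj(fft(f)) / sqrt(n) in numpy's sign convention.
def T(f):
    return np.conj(np.fft.fft(f)) / np.sqrt(len(f))

rng = np.random.default_rng(0)
x = rng.standard_normal(16) + 1j * rng.standard_normal(16)
roundtrip = T(T(x))   # applying the very same procedure twice recovers x
```

So there really is no separate forward and inverse procedure in this variant.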
Stevenj removed my entry on the involutory Fourier transform. I think it should be put back. It makes life easier if you don't have to worry about the difference between forwards and backwards. Note that it is not merely "a trick to compute the inverse DFT in terms of the forward DFT" but a true identification of the inverse and the forward transform. The Fourier integral and Fourier series should be considered limiting cases of the DFT, so the involutory version is perfectly general and not specific to the discrete Fourier transform. Please explain your point of view more carefully. See involution. Bo Jacoby 12:23, 22 September 2005 (UTC)
Here's my site full of Fourier transform example problems. Someone please put it in the external links if you think it's helpful!
http://www.exampleproblems.com/wiki/index.php/PDE:Fourier_Transforms
The introductory paragraph for any section in this encyclopedia should be accessible to all readers.
What was wrong with my input? Was it too easy to understand?:
" It is a mathematical technique for expressing a waveform as a weighted sum of sines and cosines."
Some introductory info is appropriate as:
It usually happens in mathematical analysis that certain given functions are to be operated on, and other functions or numbers are thus obtained. Often, the operations used involve integration, and here is found a very general class of operators, the so-called integral transforms. The simplest integral transform is the operation of integrating.
---
Voyajer
04:34, 30 December 2005 (UTC)
---
I concur, the introduction hardly explains what FT is to the layman.
Can anyone tell me about the first term of the Fourier series when expanding a square wave?
I have done the exercise about it.
Thanks a lot -- HydrogenSu 14:53, 19 January 2006 (UTC)
If you consider this article: [1], it appears that the definition of the DFT you have here on this page is actually the inverse DFT. Whose page is wrong?
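Most likely neither: the "forward" DFT is only a sign convention in the exponent (plus where the 1/N goes), and sources that swap the sign are presenting the same pair of operations with the labels exchanged. For comparison, numpy's convention (my example):

```python
import numpy as np

# numpy's forward DFT: X[k] = sum_n x[n] exp(-2 pi i k n / N);
# the inverse flips the sign of the exponent and divides by N.
N = 8
n = np.arange(N)
x = np.cos(2 * np.pi * n / N) + 0.5 * np.sin(2 * np.pi * 3 * n / N)

forward = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])
back = np.array([np.sum(forward * np.exp(2j * np.pi * k * n / N)) for k in range(N)]) / N
```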
James Nilsson and Susan Riedel's book Electric Circuits, seventh edition, defines the Fourier transform like this:
F(ω) = ∫_{−∞}^{∞} f(t) e^(−jωt) dt
and the inverse:
f(t) = (1/2π) ∫_{−∞}^{∞} F(ω) e^(jωt) dω
I couldn't find a matching type of Fourier transform on here, and they just call it the "Fourier transform" as if there are no other types. It goes on to give this table of transforms:
f(t) | F(ω) |
---|---|
1 | 2πδ(ω) |
A (constant) | 2πA·δ(ω) |
sgn(t) | 2/(jω) |
u(t) | πδ(ω) + 1/(jω) |
etc. | etc. |
Is there something I'm missing? Fresheneesz 02:16, 5 June 2006 (UTC)
Why does this article start off with writing down the inverse transform?!?
Also there is no mention of PDEs anywhere. This is where the subject found its birth and much of its most fruitful application. I will try to add something to this effect, but input would be appreciated. Also, as defined here (as with any other definition I am familiar with), the DFT takes a finite vector and produces a vector of the same length.
But in the section "Family of Fourier transforms" we include it on this list as periodic. There should be some explanation there as to what is meant. Are we thinking of the DFT living on Z/nZ?
This article establishes that the Fourier transform of a function s(t) is a new function S(f), and this relation is written S = F{s}.
However, a frequently appearing notation (at several places in Wikipedia) is to write it simply as S(f) = F{s(t)}.
There has already been a discussion above about the validity of the former notation. But it should also be recognized that the latter notation, while formally incorrect, has the advantage of allowing us to insert an arbitrary function expression into the transform operator. For example, in a compact way we can express that the Fourier transform of a sinc function is a rectangular function, as F{sinc(t)} = rect(f).
The meaning of the operator F has here changed into a transform-table lookup, and this way of using F appears handy and is probably the motivation for its frequent use. There are limitations to this notation as well; for instance, writing F{s(t − a)}
implicitly assumes that the function being transformed is a function of t, not of a.
Here are my points
The advantage of the second form of the operator is that it allows a formalization of the lookup functionality, which also makes explicit which variable is used in the transformation. -- KYN 09:34, 4 August 2006 (UTC)
Using an arrow instead of a comma makes it more readable.
The function f, defined by y = f(x), can be written f = (y←x), pronounced: "y as a function of x". This nice notation allows you to distinguish the power function, (x^y←x), from the exponential function, (x^y←y).
The transform
means that
and
This notation does not identify the function f with the function value f(x), using a special letter x to identify the independent variable. Bo Jacoby 12:07, 4 August 2006 (UTC)
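Incidentally, the power/exponential example maps directly onto anonymous functions in any language with lambdas; a Python rendering (mine) of (x^y←x) versus (x^y←y):

```python
# (x**y <- x): y is held fixed, x is the independent variable -> a power function
power = lambda y: (lambda x: x ** y)

# (x**y <- y): x is held fixed, y is the independent variable -> an exponential function
exponential = lambda x: (lambda y: x ** y)

cube = power(3)            # the function x -> x**3
three_to = exponential(3)  # the function y -> 3**y
```

So power(3)(2) = 8 while exponential(3)(2) = 9, which is exactly the distinction the arrow is meant to carry.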
The natural place to put the notation is in the article function (mathematics), where presently the sum of two functions is defined by (f+g)(x) = f(x)+g(x) rather than by f+g = (f(x)+g(x)←x), thus defining the function value (f+g)(x) rather than the function f+g. For functions of several variables and for functions whose arguments or values are functions rather than simple variables, the notation y = f(x) instead of f = (y←x) is severely insufficient. The lambda calculus writes the independent variable to the left, f = λ x.y, rather than to the right as in f = (y←x). I prefer the latter convention because the formula for composition of functions,
is written without breaking the order like this:
Anyway, when the arrow points from the independent to the dependent variable there is not much room for misunderstanding.
Note also the elegant notations for inverse functions such as the natural logarithm and for the square root:
I will make a note on Talk:Function (mathematics) to ask the advice of the mathematicians there.
Bo Jacoby 11:40, 7 August 2006 (UTC)
I agree. Is this understandable? :
Your examples look like this:
There is no need to use special names for dummy variables, nor is there any need for special function names:
Bo Jacoby 09:16, 8 August 2006 (UTC)
The problem with the notation that Bo is suggesting for the Fourier transform, and for functions in general, is that it is much less common than the usual notations which are already employed in the articles. If the notation is established but obscure, it might be worth mentioning in function (mathematics), but a wholesale adoption in other articles is not a good idea.
Given that Bo has a history of trying to promote his own invented nonstandard notations in Wikipedia, however, I would also request that he provide a mainstream citation for his notation before inserting it into any article.
—Steven G. Johnson 17:03, 8 August 2006 (UTC)
Steven seems to have an excellent point here. I agree. Thenub314 23:50, 8 August 2006 (UTC)
I am unconvinced that we have a severe notational problem. It seems to me that in this case the proposed cure is worse than the disease (for the reasons StevenJ gave). Thank you for suspending progress. Also, I don't agree to the relevance of a programming language, because it is created for a completely different set of constraints. -- Bob K 18:24, 9 August 2006 (UTC)
I agree that "informal usage of the operator should be avoided, in particular when it is not perfectly clear which variable the function to be transform depends on."
The formula tells which variable the function to be transformed depends on.
There is no need for calligraphy. The formula is: F(s(t+u)←t) = F(s)·(e^(i·ω·u)←ω).
Linearity is irrelevant in the context.
Bo Jacoby 14:31, 13 August 2006 (UTC)
Hi Bob. The symbols t and t0 enter symmetrically on the left-hand side of F(s(t+t0)←t) = F(s)·(e^(i·ω·t0)←ω), but asymmetrically on the right-hand side. What is F(s(u+v))? The answer depends on whether u or v is the independent variable, and the readers are clueless. The explicit multiplication sign improves clarity: F(s(t+u)←t) = F(s)·(e^(i·ω·u)←ω). Rather than a formula for the function, you might prefer a formula for the function value: F(s(t+u)←t)(ω) = F(s)(ω)·e^(i·ω·u). The expression F(s) = F(s(t)←t) is the transform of the function s. People are acquainted with expressions giving function values, while the Fourier transform is about functions. You need some notation for a function f apart from the function value f(x) or f(t) or f(ω). Bo Jacoby 09:04, 14 August 2006 (UTC)
Thanks, Bob. When it comes to liking and disliking there is no point arguing. I trust that time will settle the question, so that either you will learn to love the arrow (which you seriously doubt because it looks strange), or I will learn to love the ambiguity (which I seriously doubt because unambiguous notation makes commentary unnecessary, which is lovely). The html is nasty mainly because of the italics. I write first without the italics, F(s)(ω)·e^(i·ω·u), and then I insert two apostrophes around each variable name: F(s)(ω)·e^(i·ω·u). Bo Jacoby 12:14, 14 August 2006 (UTC)
Hi, the topic of this thread is twofold: (1) is there an established notation which can be used for saying, e.g., that the FT of sinc is rect in a formally correct way, and (2) should the notation issue be mentioned in the article. Bo has proposed a notation which addresses (1), which is OK with me, but since I haven't seen any evidence that it is established I don't want to write about it, since Wikipedia should not be the place to introduce new notations which do not exist in the literature. An alternative to Bo's arrow can be found in Daubechies' "Ten Lectures on Wavelets", where the anonymous variable is simply denoted with a dot. To my surprise the informal notation can be found in hard-core math textbooks; for example, see Debnath & Mikusinsky's "Introduction to Hilbert Spaces with Applications" (2ed, page 194). About (2), I will try to figure out something based on the text I wrote above and put it into the article shortly. -- KYN 15:48, 14 August 2006 (UTC)
The dot notation has a place. But it needs to know that place and stay there. When the point is simply that the Fourier tranform of sinc is rect, it avoids the irrelevant question of whether we're talking about time or space or some other dimension. -- Bob K 11:31, 15 August 2006 (UTC)
Standard notation for F(x(·)) is F(x(t)). You cannot tell, however, whether you are referring to the value F(x(t)) or the function (F(x(t))←t) or the operator value F(x) = F(x(t)←t) or the operator itself F = (F(x)←x) = (F(x(t)←t)←(x(t)←t)). Bo Jacoby 10:08, 16 August 2006 (UTC)
My conclusion of this discussion is:
-- KYN 19:50, 15 August 2006 (UTC)
The following notation is supposed to be standard, x ↦ f(x), according to the recommendation at Talk:Function (mathematics)#Standard notation. The maps-to arrow must have the special shape ↦, and it must point to the right, otherwise it is not standard. Bo Jacoby 12:30, 17 August 2006 (UTC)
Bob, you cannot be serious. Which reference are you talking about? Bo Jacoby 12:25, 18 August 2006 (UTC)
Don't you consider me and KYN as part of the community of mathematicians? Don't you think that wikipedia should be readable to others than the few mathematicians who can reliably guess the meaning of say ? Bo Jacoby 12:25, 18 August 2006 (UTC)
Sorry. I agree that there is no point in solving a nonexisting problem. It is no problem to write f(x) = x^2 rather than f = λx.x^2. However, the notation f(x(t)) is ambiguous, as has been pointed out. We cannot pretend that there is no problem of notation. Bo Jacoby 14:31, 20 August 2006 (UTC)
Yes, BobK, you wrote: "I am unconvinced that we have a severe notational problem". Now we agree that there is a severe notational problem. KYN suggests "to leave the arrow and the dot until more evidence of their usages can be demonstrated". Now we have evidence for the maps-to arrow and the lambda-dot. Is there any reason why we should not start clarifying the article using these notations? Bo Jacoby 16:50, 20 August 2006 (UTC)
Somehow, people in mathematics and engineering have learned to live with this "severe notational problem," and the reason is obvious. The meaning is perfectly clear, taken in context, since the expression only has one reasonable interpretation if you know that the calligraphic F stands for the Fourier transform. In the rare cases where clarification is needed, it can be given in prose. This is no reason to depart from the most established and widely understood notation for this subject. "A foolish consistency is the hobgoblin of little minds." —Steven G. Johnson 17:53, 20 August 2006 (UTC)
It would be nifty if there were a small graph of the sinc function, etc. in the tables of equations. The continuous graphs plotted with nice smooth curves to emphasize that they are continuous, while discrete graphs plotted as more of a bar chart or set of spikes, to emphasize their discreteness. Do we really need 4 sets of equations and graphs? (If there is some significant difference in the equations and graphs, it would be more visible if they were plotted all on one page, rather than scattered across all 4 articles).
I understand that the normal distribution is an eigenfunction of the Fourier transform. Is this the case? Should the article mention this? —Ben FrantzDale 19:32, 9 August 2006 (UTC)
Rbj is correct in that there are uncountably many choices of eigenfunctions, because there are only four eigenvalues (the 4th roots of unity), each with infinite multiplicity. However, arguably the most important eigenfunctions of the continuous Fourier transform are the Hermite functions (of which the Gaussian is a special case). These functions are orthogonal and have some optimal localization properties, I believe. The most appropriate place to mention them, however, would be under continuous Fourier transform. Unfortunately, they don't generalize in an easy way to the other Fourier variants. For example, it is not obvious what is the appropriate discrete analogue of the Hermite functions for the DFT, and this is still being debated in the literature. —Steven G. Johnson 17:59, 15 August 2006 (UTC)
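The Gaussian special case is easy to verify numerically. With the unitary, ordinary-frequency convention, exp(−πt²) transforms to exp(−πf²), i.e. it is an eigenfunction with eigenvalue 1 (the discretization below is my own):

```python
import numpy as np

# Approximate G(f) = integral g(t) exp(-2 pi i f t) dt for g(t) = exp(-pi t^2)
# by a Riemann sum on [-8, 8); truncation and aliasing errors are negligible here.
N, half = 512, 8.0
dt = 2 * half / N
t = -half + dt * np.arange(N)
g = np.exp(-np.pi * t**2)

freqs = np.linspace(-2.0, 2.0, 9)
G = np.exp(-2j * np.pi * np.outer(freqs, t)) @ g * dt
expected = np.exp(-np.pi * freqs**2)   # the same Gaussian: eigenvalue 1
```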
Hello, my question is why nobody has pointed out the "asymptotic" behavior of a Fourier transform, in the sense:
F(ω) → 0 as |ω| → ∞
I wish to raise an issue about the use of the expression "linear combination" in relation to the continuous Fourier transform. Much as I appreciate that the transformed function is similar to the coefficients of a linear combination, that's not what the transformed function IS. For one thing, we could change many of the values of the transformed function, and as long as we did it over finitely many values, or over a similarly null set, we would change really nothing about the essential properties of the transformed function, i.e. it would generate the same function when the inverse transform was applied. So we're not really talking about the individual contributions made by different frequencies, because each frequency taken in isolation contributes nothing. Also, and perhaps this is a pointless thing to argue about, I have repeatedly had it drilled into me by more than one person that linear combinations have to be finite, not infinite and definitely not continuous.
See Talk:Continuous Fourier transform -- catslash 12:19, 4 September 2006 (UTC)
Fourier integral redirects here, and I was reading Partial differential equation which suggested it is somewhat different from a Fourier transform. Can anyone explain this? - Rainwarrior 16:53, 27 September 2006 (UTC)
This section used to be called "Variants of Fourier transforms", when the whole article was called "Fourier transform". Is it really the analysis that differs, or is it only the transforms?
On another note, I feel that much of this section is redundant, which is discussed in Talk:Fourier transform. I'll see if I can change the section so that it expresses what I meant in that discussion. — Sebastian (talk) 00:25, 8 October 2006 (UTC)
I was trying to standardize the formulas so the comparison is easy. Here's what I ended up with:
Name | transform | inverse transform |
---|---|---|
(Continuous) Fourier transform | ||
Fourier series | ||
Discrete-time Fourier transform | ||
Discrete Fourier transform |
Now I feel gypped: The formula for the DTFT is basically the same as that for the Fourier series. Both transforms go from a continuous, periodic function to a discrete, aperiodic one. Calling one variable "time" and the other "frequency" is an arbitrary convention that has nothing to do with mathematical reality. Is that really the way it is defined, or are we only looking at the inverse transformation here? I rather expected something like this:
Name | transform | inverse transform |
---|---|---|
(Continuous) Fourier transform | ||
Fourier series | ||
Discrete-time Fourier transform | ||
Discrete Fourier transform |
— Sebastian (talk) 03:28, 8 October 2006 (UTC)
Considering this is such a technical article, why do I come across the word "jaggies"? 82.25.174.136 19:22, 24 November 2006 (UTC)
16-Oct-2007: To handle concerns of "too technical" (or "too simple"), I have expanded the article with a new bottom section, Basic explanation. That section gives a simple explanation for "Fourier analysis" (plus the applications and branches of mathematics used), without cluttering the main text of the article. I think such a bottom section is about the best way to explain many novice issues about the subject, without annoying the more advanced readers about the background basics, such as using "formulas in integral calculus" and "algebra". I have removed the "{technical}" tag at top (for this year). - Wikid77 23:17, 16 October 2007 (UTC)
I appreciate what you are attempting to do. And I appreciate that you are trying to keep the intro uncluttered. If it were up to me, I would put it in a separate place completely, such as another article or a subcategory (Fourier_analysis/Basic_explanation_1 "1" because there are bound to be other perspectives). But that is not how Wikipedia resolves these things. (Pity, because I think it would save a lot of editor time.) Anyhow, I perused your reference for the numerical integration analogy, and it appears to come from chapter 8.5, where he poses and tries to answer the question "how do we determine the bandwidth of each of the discrete frequencies in the frequency domain?" His answer is "the bandwidth can be defined by drawing dividing lines between the samples." That might give the desired answer, but in mathematics, the end does not justify the means. IMO, he should clearly state that it is not to be taken literally. And it bothers me to refer to that fabrication as a "basic explanation". It is an incorrect explanation that just happens to fit the formula, apparently.
-- Bob K 15:10, 25 October 2007 (UTC)
Also, in chapter 8.6, pertaining to "calculating the DFT", he does not mention the numerical integration analogy. It turns into "correlation", which I think is a much better description of what's going on.
Another issue is the statement in the article: "The calculation computes the total area of thin rectangles aligned under a similar skewed curve,". Does the reference say anything about a "skewed curve"? What does that mean?
-- Bob K 15:18, 25 October 2007 (UTC)
Bob, your explanation is rather a detailed mechanical one, but it's actually a better explanation of the periodogram (including Lomb's method; see Least-squares spectral analysis). That's because it doesn't describe the use of orthogonal basis functions, nor the notion of decomposition, from which inversion derives. By focusing on how to "measure the amplitude and phase of a particular frequency component" it presupposes the idea of a particular frequency component and misses the idea of a transform: that a signal can be expressed in terms of such components.
Coming up with simple explanations is tricky; we've all tried, and I think all failed. Maybe we can find something in a book? Dicklyon 16:04, 28 October 2007 (UTC)
The reason I changed "inverse transform" to "inverse transform (synthesis)" was not because synthesis is mentioned in the other article (which it's not). It's because the inverse transform is not previously mentioned in this article. It just appears out of nowhere.
-- Bob K 14:50, 1 December 2007 (UTC)
I think instead of just listing the applications, it would help a lot to include exactly what function the Fourier analysis has in each application. -- Sylvestersteele ( talk) 10:12, 20 January 2008 (UTC)
Not a bad idea, let's expand the list here and see if we get something of good enough quality to put in the article.
Fourier analysis has many scientific applications:
It seems that in some sense I could not really do justice to the magnitude of applicability, even in cases where I know the subject. But here is a start at least; I hope other people add their two cents. Thenub314 ( talk) 11:20, 15 September 2008 (UTC)
Why do we separate "Applications in signal processing" form "Applications"? Thenub314 ( talk) 13:59, 18 September 2008 (UTC)
The new lead by User:Thenub314 seems a bit vacuous, whereas the old one had at least a bit of a definition in the opening sentence. I don't understand why the attempt to make it less technical, but I think it is not a success. Comments? Dicklyon ( talk) 04:37, 19 September 2008 (UTC)
My first, most basic attempt at an intro section... many changes to be made. Thenub314 ( talk) 14:55, 20 September 2008 (UTC)
The subject of Fourier analysis began with the study of Fourier series. In 1807, Joseph Fourier showed how one could attempt to express general functions as sums of trigonometric functions. The attempts to understand the full validity and generality of Fourier's method led to many important discoveries in mathematics, including Dirichlet's definition of a function, Riemann's integral in its full generality, Cantor's study of cardinality, and Lebesgue's definition of the integral (cite Zygmund's intro here). At the same time, from its inception with Fourier's work on heat propagation, the study has been useful in many applications. So much so that the study of the discrete Fourier transform and the FFT algorithm predates the invention of the computer (cite Gauss's work).
Maybe going the other direction will be helpful. I've put a plain simple statement of what it is for the lead sentence. I can't see how starting with something more abstract, more general, or less definite than that is going to help anybody, but I'm sure there are still ways it can be improved. Once we can say what it is, it should be easier to write an introduction... Dicklyon ( talk) 05:06, 21 September 2008 (UTC)
Thenub, please respond here if you disagree, instead of just changing it. I very much disagree with the approach "is a subject area" in a lead sentence, instead of a plain simple statement of what it is. Dicklyon ( talk) 21:48, 21 September 2008 (UTC)
Dicklyon, I did respond here, but I put it under thread above, which is about the lead instead of the thread about the introduction. Thenub314 ( talk) 06:41, 22 September 2008 (UTC)
I am changing the revision to the Fourier Series section of the Variants of the Fourier transform. It seems to me that using the Poisson summation formula to introduce Fourier series is too complicated. It is also circular and requires more of the function. Perhaps what is there could be more straightforward, but I didn't see the change as an improvement. Thenub314 ( talk) 06:06, 24 September 2008 (UTC)
If that is still germane, please clarify.
I have tried to rewrite the section to be a bit more clear. Perhaps it should be the first of the "Variants of Harmonic analysis" that we list, seeing as it is usually the variant the students encounter first. Thenub314 ( talk) 08:46, 24 September 2008 (UTC)
For starters, I would like to move the FS section closer (in appearance) to the FT notation. (See latest revision.)
-- Bob K ( talk) 14:38, 25 September 2008 (UTC)
I think we should stick to attempting to describe the subject simply; more complex ideas, like the Fourier transform of a periodic function, are better suited to the main articles. Thenub314 ( talk) 15:22, 25 September 2008 (UTC)
My mistake, the sentence "The Fourier transform of a periodic function is zero-valued except at a discrete set of harmonic frequencies, where it can diverge." was trying to make a connection to a Dirac comb. It is, by the way, false that the Fourier transform of a periodic function converges to zero at points outside this discrete set of harmonic frequencies. It diverges at every point, so it cannot be used to distinguish the harmonic frequencies on which you'd like to base your Fourier series. I dislike introducing the inverse transform before the forward transform. The comment about convergence of Fourier series shouldn't be a footnote. For the same reason we don't define the Fourier transform on Rn, I think we should stick with 2π-periodic functions; it keeps the descriptions simpler without losing much. My understanding of the history was that Fourier was studying heat distributions in a ring, which is why I put more emphasis on functions defined on a circle. Thenub314 ( talk) 18:03, 25 September 2008 (UTC)
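To make the Dirac comb connection above concrete, here is a sketch in the 2π-periodic normalization discussed in the thread, taking the convention <math>\hat{f}(\xi) = \int f(t)\, e^{-i \xi t}\, dt</math> (understood distributionally; this is an illustration, not a claim about the article's chosen convention):

```latex
% Expanding a 2*pi-periodic f in its Fourier series and transforming
% term by term gives a train of Dirac deltas at the integer
% (harmonic) frequencies -- a weighted Dirac comb.
f(t) = \sum_{n=-\infty}^{\infty} c_n\, e^{i n t}
\qquad\Longrightarrow\qquad
\hat{f}(\xi) = 2\pi \sum_{n=-\infty}^{\infty} c_n\, \delta(\xi - n)
```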
The article currently uses both and for the "time" domain and and for ordinary frequency (cycles per time-unit). It also uses for the time-domain function, but not at the same time it uses for frequency (thank goodness).
Predictably, it is causing confusion and editorial thrashing. But of course it does accurately reflect the real world, which is also inconsistent and confusing.
-- Bob K ( talk) 13:01, 28 March 2009 (UTC)
Most of the EE texts on signals and systems use x(t) and y(t) for input and output functions, respectively. Groovamos ( talk) 06:11, 18 March 2016 (UTC)
Maybe it's just because I'm not a layman on this subject, but the How it works (basic explanation) section really seems to grate. At best, I think that this belongs in the cross-correlation article, but either way there are some specific points in the language that need addressing:
Basically, I think that as it stands this section has more potential to confuse a newcomer than to help them. Any thoughts? Oli Filth( talk| contribs) 21:35, 28 May 2009 (UTC)
Currently the Fourier transform has the factor 2π in the exponent of e, which I find reasonable since this definition simplifies most laws (no constant factors). The factor 2π is also in the exponent of e in the discrete Fourier transform. For consistency, I suggest to have this factor also in the Fourier series' exponential-function argument, meaning that the Fourier series express 1-periodic functions. HenningThielemann ( talk) 16:15, 1 November 2010 (UTC)
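For reference, a sketch of how the three definitions would line up under that suggestion (the first two are the conventions the comment describes; the third is the proposed 1-periodic series):

```latex
% 2*pi carried in the exponent in all three variants:
\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx
  \quad \text{(Fourier transform)}

X_k = \sum_{n=0}^{N-1} x_n\, e^{-2\pi i k n / N}
  \quad \text{(discrete Fourier transform)}

f(x) = \sum_{n=-\infty}^{\infty} c_n\, e^{2\pi i n x},
\qquad
c_n = \int_{0}^{1} f(x)\, e^{-2\pi i n x}\, dx
  \quad \text{(series for 1-periodic } f)
```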
The Fourier transform is a representation of a signal as a collection of sinusoids. But when adding two periodic signals, the resultant would be another periodic signal. If we have a Fourier transform for aperiodic signals, it implies that the sum of periodic signals can result in an aperiodic signal. Can anyone clarify this paradox? — Preceding unsigned comment added by 103.247.48.153 ( talk) 18:31, 27 August 2015 (UTC)
I'll give it a try. The sum of two periodic signals gives another periodic signal only if the ratio of their periods is a rational number. Second, periodic signals can be represented as a Fourier series, so that, by the first point, not all sums of pairs of periodic signals will have a Fourier series; and no signal which is not square-integrable (such as the sum of two differently periodic signals) can be represented as a Fourier transform*. So all sums of any set of periodic signals are not subject to Fourier transformation except for the trivial case below, and any sum of a set of periodic signals will only have a Fourier series if all possible pairings of their periods are of rational proportionality. This leaves you with the scenario that not all sums of sets of periodic signals are subject to Fourier analysis without subjecting the members of the set to analysis and summing the results. There is the trivial case of two periodic signals, one being the negative of the other, the sum of which is square-integrable.
*Not strictly true for periodic signals: with the use of limits, the Fourier series can be viewed as a special case of the Fourier transform, made up of Dirac delta functions. Also, as mentioned, if the sum of periodic signals is not periodic, then the Fourier series of each component can be analysed independently and the results summed for a visual graph of Dirac deltas, representing overlaid Fourier series.
I'm not a mathematician but retired EE so I would welcome any correction. Groovamos ( talk) 19:18, 17 March 2016 (UTC)
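The rational-ratio point above can be checked numerically. A minimal sketch (NumPy, using my own toy signals, not anything from the article): the components below have periods 1 and 1/√2, whose ratio √2 is irrational, so each component repeats but their sum never does.

```python
import numpy as np

# Two periodic components whose period ratio, sqrt(2), is irrational.
comp1 = lambda t: np.sin(2 * np.pi * t)                # period 1
comp2 = lambda t: np.sin(2 * np.pi * np.sqrt(2) * t)   # period 1/sqrt(2)
s = lambda t: comp1(t) + comp2(t)                      # their sum

t = np.linspace(0.0, 1.0, 1001)

# Each component repeats with its own period ...
assert np.allclose(comp1(t), comp1(t + 1.0))
assert np.allclose(comp2(t), comp2(t + 1.0 / np.sqrt(2)))

# ... but the sum does not repeat with period 1 (nor any other period):
mismatch = np.max(np.abs(s(t) - s(t + 1.0)))
print(f"max |s(t) - s(t+1)| = {mismatch:.2f}")  # clearly nonzero
```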
It is not true that all functions can be Fourier transformed. Already Euler remarked on that. The requirement is that the function is square-integrable in the sense of Lebesgue, or, otherwise expressed, that the function has a length in the relevant Hilbert space. This condition is also applicable to the Laplace transform and the Heaviside operator, etc.
I do not have a reference, but it was proven in 1978 (if I remember correctly) and everybody knows it (except the author of this article).
Burningbrand ( talk) 17:09, 6 August 2019 (UTC)
This is necessarily a complex subject, so what about some worked examples? Perhaps some classics: an infinitely long pure sine wave, a square wave, a bell curve (don't remember the name), two sine waves that heterodyne. I note that the article on Fourier series has examples, so there is a precedent. Well, sort of. — Preceding unsigned comment added by 2001:8003:E448:D401:7D27:BA42:5CB1:E259 ( talk) 08:10, 25 August 2019 (UTC)
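In the spirit of that request, here is one of those classics worked numerically rather than symbolically (a NumPy sketch of my own, not article content): the square wave, whose spectrum contains only odd harmonics with amplitudes falling off like 1/k.

```python
import numpy as np

N = 1024                                   # samples over one period
t = np.arange(N) / N
square = np.sign(np.sin(2 * np.pi * t))    # unit square wave, period 1

# One-sided spectrum; amps[k] approximates the amplitude of harmonic k.
spectrum = np.fft.rfft(square) / N
amps = 2 * np.abs(spectrum)

# Classic result: odd harmonic k has amplitude 4/(pi*k); even ones vanish
# (up to small discretization error from the zero-crossing samples).
for k in (1, 2, 3, 4, 5):
    theory = 4 / (np.pi * k) if k % 2 else 0.0
    print(f"harmonic {k}: {amps[k]:.3f}  (theory {theory:.3f})")
```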