This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 1 |
I think the introduction to this article needs cleaning up, as 'A convolution is a kind of very general moving average' is vague and doesn't offer any helpful insights.
I originally wrote:
Akella changed it to:
I have a problem with that, because I know of no such discipline as Systems Science, and neither, at this point, does Wikipedia. I know of "system analysis," but as a rather imprecisely defined term that once may have had an engineering meaning, and then came to mean something like a high-ranking computer programmer...
I realize that the scope is much broader than electrical engineering, but I was trying to list some contexts that might be familiar to readers, in which convolution makes an appearance.
Also, in electrical engineering, the phrase "linear system" is often used to refer to precisely the kind of system described by a one-dimensional convolution integral. Whereas, in other fields, such as, say, Linear algebra, the scope of the word "linear system" is much broader, so you couldn't just say "linear system" and leave it at that.
So, I've changed it to:
Dpbsmith 23:58, 26 Feb 2004 (UTC)
Does the factor in the convolution theorem apply where Laplace transforms are concerned? I can see where it comes in for Fourier transforms, but not Laplace transforms. -- Glengarry 01:34, 15 Jul 2004 (UTC)
Anybody know where the term "convolution" comes from? It'd be nice to add this bit of historical trivia.
PlanetMath has a nice GFDL article on convolution, see http://planetmath.org/?op=getobj&from=objects&id=2790 Anybody willing to merge that stuff? Oleg Alexandrov ( talk) 02:17, 24 November 2005 (UTC)
Hi - The link to 'Convolution' on Planet Math in the external links section seems to be broken. --anon
In risk theory, the distribution of the sum of n i.i.d. random variables is found by convolution.
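As an illustration of the risk-theory remark (a sketch, not from the discussion): convolving the uniform(0, 1) density with itself gives the triangular density of the sum of two i.i.d. uniforms.

```python
import numpy as np

# Discretize the uniform(0, 1) density on a fine grid.
dx = 0.001
x = np.arange(0, 1, dx)
uniform_pdf = np.ones_like(x)  # density = 1 on (0, 1)

# Density of the sum of two i.i.d. uniforms: convolve and rescale by dx.
sum_pdf = np.convolve(uniform_pdf, uniform_pdf) * dx

# The result is the triangular density on (0, 2), peaking at 1.
peak_location = np.argmax(sum_pdf) * dx
total_mass = np.sum(sum_pdf) * dx  # should integrate to ~1
```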
For things like image processing we have 2D convolution. Could someone who knows about this stuff explain how this works, and the rules for separating orders (I think it's the same as the theorem that allows a 2D FT to be composed from two 1D FTs?) -- njh 05:25, 19 April 2006 (UTC)
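Re the separability question: a 2D convolution whose kernel is an outer product of two 1D kernels does factor into a row pass followed by a column pass, analogous to building a 2D FT from two 1D FTs. A minimal NumPy sketch (not from the discussion; the box-blur kernel is just an example, and the direct loop below uses correlation, which equals convolution here because the kernel is symmetric):

```python
import numpy as np

# A separable kernel: the 3x3 box blur is the outer product of two 1D boxes.
u = np.array([1.0, 1.0, 1.0]) / 3.0
kernel = np.outer(u, u)  # rank-1, so the 2D convolution separates

rng = np.random.default_rng(0)
image = rng.random((16, 16))

# Direct 2D pass ("same" size, zero-padded borders). The kernel is
# symmetric, so sliding correlation equals convolution.
padded = np.pad(image, 1)
direct = np.zeros_like(image)
for i in range(image.shape[0]):
    for j in range(image.shape[1]):
        direct[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)

# Separable version: one 1D pass along rows, then one along columns.
rows = np.apply_along_axis(lambda r: np.convolve(r, u, mode='same'), 1, image)
separable = np.apply_along_axis(lambda c: np.convolve(c, u, mode='same'), 0, rows)

max_error = np.abs(direct - separable).max()
```

The separable form costs two length-3 passes per pixel instead of one 9-tap pass, which is where the speedup comes from for larger kernels.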
Convolution kernel points here. This article should define it. - 72.58.19.66 05:55, 23 May 2006 (UTC)
The notation μ×ν (used in section Convolution of measures) is not explained here, nor in Borel measure or any other (more or less "directly") linked page. I am not used to working on this subject, but from the background I have, I suspect it should rather be denoted as a tensor product. (The only definitions I know for a "cross product" are the vector cross product and the Cartesian product of sets, but not of maps.) — MFH: Talk 13:49, 2 June 2006 (UTC)
Please add a page listing different image processing convolution kernel matrices since there seems to be no good general reference for them on the web. Include the matrix values as well as a description of what the convolution kernel does.
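Not a full reference page, but a few standard kernel matrices with descriptions, as a starting sketch (the values are the common textbook ones):

```python
import numpy as np

# A few common 3x3 image-processing kernels and what they do:
kernels = {
    # Box blur: averages each pixel with its eight neighbours.
    "box_blur": np.full((3, 3), 1.0 / 9.0),
    # Gaussian blur (coarse approximation): centre-weighted average.
    "gaussian_blur": np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0,
    # Sharpen: boosts the centre, subtracts the neighbours.
    "sharpen": np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], float),
    # Edge detection (Laplacian): responds where intensity changes.
    "laplacian": np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float),
}

# Sanity check on the values: blur/sharpen kernels sum to 1 (they preserve
# overall brightness); edge detectors sum to 0 (they vanish on flat regions).
sums = {name: k.sum() for name, k in kernels.items()}
```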
It might be nice to see a discussion of convergence issues. E.g., the convolution operator F(f) := f*g is a linear operator on L^1 if g is in L^1. For what other spaces does that hold?
You wrote: "The integration range depends on the domain on which the functions are defined." Could you please specify the endpoints of the integration interval more precisely?
For example, if f has a normal distribution and g has a uniform distribution on the interval (a,b), what exactly are the endpoints of the integration interval?
Or if f has a normal distribution and g(x) has the density a*(b+1)*(1-a*x)**b on the interval (0,1/a), what exactly are the endpoints?
Or if f has a normal distribution and g(x) has the exponential density λ*e**(-λ*x) on the interval (0,infinity), what exactly are the endpoints?
Can you give a general rule?
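A general rule (a standard fact, not stated in the thread): the integrand f(x - τ)g(τ) is nonzero only where both factors are, so the effective limits are the intersection of the support of g with x minus the support of f. For a normal f (support all of R) and g supported on (a, b), the integral in (f*g)(x) = ∫ f(x - τ) g(τ) dτ runs from τ = a to τ = b. A stdlib-only numerical check for the first example, using the closed form (Φ(x-a) - Φ(x-b))/(b-a):

```python
import math

def normal_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

a, b = 1.0, 3.0  # support of the uniform density g

def conv_numeric(x, n=10000):
    # Midpoint rule over (a, b) -- the only interval where g is nonzero.
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        tau = a + (i + 0.5) * h
        total += normal_pdf(x - tau) * (1.0 / (b - a)) * h
    return total

def conv_exact(x):
    # (f*g)(x) = (Phi(x - a) - Phi(x - b)) / (b - a)
    return (normal_cdf(x - a) - normal_cdf(x - b)) / (b - a)

err = max(abs(conv_numeric(x) - conv_exact(x)) for x in [-1.0, 0.0, 2.0, 4.0])
```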
I added some bare-bones information about fast convolution. If someone who knows more about it would add some juice, that'd be nice. -Mojodaddy
There's a very thin article on circular convolution; I think there should be a section here comparing linear to circular convolution. -Mojodaddy
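On the fast/circular connection: multiplying same-length DFTs gives circular convolution; zero-padding both signals to at least N+M-1 samples recovers ordinary linear convolution, which is the basis of fast convolution. A small NumPy sketch (the signals are made-up examples):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 1.0, 0.0, 0.0])

# Pointwise multiplication of same-length FFTs gives CIRCULAR convolution:
circular = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

# Zero-padding to length >= len(x) + len(h) - 1 removes the wrap-around,
# turning the result into ordinary (linear) convolution:
n = len(x) + len(h) - 1
linear_fft = np.real(np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)))
linear_direct = np.convolve(x, h)

fft_matches_direct = np.allclose(linear_fft, linear_direct)
# The circular result differs exactly where the tail wraps around:
# here circular[0] = linear[0] + linear[4].
wraparound = np.isclose(circular[0], linear_direct[0] + linear_direct[4])
```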
The definition is not satisfactory. What exactly are f and g? Something like ? -- Bfrey 14:58, 10 May 2006 (UTC)
As far as I can see, the limits in the integral after the change of variable are wrong. Shouldn't the second integral be:
Or am I going mad? Oli Filth 22:38, 1 May 2007 (UTC)
I'm moving this off the article because it is licensed under creative commons attribution noncommercial share-alike 2.5. Noncommercial material cannot be used in articles to allow reuse of Wikipedia on commercial sites like answers.com (see copyright problems). -- h2g2bob ( talk) 11:28, 20 May 2007 (UTC)
Simple practical application
(transcribed from Prof. Arthur Mattuck's video lecture, available via MIT OCW: 18.03 Differential Equations, Spring 2006, Lecture 21: http://ocw.mit.edu/OcwWeb/Mathematics/18-03Spring-2006/CourseHome/index.htm)
Problem:
A nuclear power plant dumps radioactive waste at a rate of f(t) (kg/year).
The approximate amount of radioactive material dumped in the interval [t_i, t_{i+1}] is f(t_i) Δt, where Δt = t_{i+1} - t_i.
Starting at t = 0, what is the amount of radioactive material present in the pile at time t?
(As more radioactive material is dumped, the existing material decays. The problem is concerned only with the material that is still radioactive at time t.)
Solution:
We model the radioactive decay with a simple exponential: if the initial amount is A_0, then at time t there is A_0 e^(-kt) material left.
(k depends on the type of radioactive material; for simplicity it is assumed there is only one type of material being dumped, so k is constant.)
Replacing the initial amount A_0 above with the amount dumped, we have:
The amount dumped at time t_i is f(t_i) Δt. How much of this material is still left at time t? The amount left at time t from that contribution is f(t_i) e^(-k(t - t_i)) Δt.
Total amount left at time t (starting at t = 0): Σ_i f(t_i) e^(-k(t - t_i)) Δt
Letting Δt → 0, the sum becomes the integral: ∫_0^t f(u) e^(-k(t - u)) du
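The integral can be checked numerically against its Riemann sum; a stdlib-only sketch (the rate, decay constant and time horizon are made-up values, not from the lecture):

```python
import math

k = 0.1            # decay constant (1/year), assumed value
t = 20.0           # time at which we measure the pile (years), assumed value
f = lambda u: 5.0  # constant dump rate: 5 kg/year, assumed value

# Riemann sum of the convolution integral  integral_0^t f(u) e^{-k(t-u)} du
n = 100000
dt = t / n
pile = sum(f(u_i) * math.exp(-k * (t - u_i)) * dt
           for u_i in (i * dt for i in range(n)))

# For a constant rate r the integral evaluates to r (1 - e^{-k t}) / k.
exact = 5.0 * (1 - math.exp(-k * t)) / k
```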
Radioactive decay of actual nuclear waste is more complicated in several ways. A radioactive element decays into a different element that is typically itself radioactive. The daughter element may emit a different kind of radiation -- e.g. alpha particles (helium nuclei), beta particles (electrons), gamma rays (photons), or neutrons. If you concentrate on radioactivity of just one kind, you could calculate the radioactivity at some future time with a convolution integral, but with a more complicated decay factor than the simple exponential. In addition, the original nuclear waste probably included more than one kind of radioactive element.
Should this be in the article?
-- h2g2bob ( talk) 21:28, 3 June 2007 (UTC)
I've removed the convolution power stuff again, because it's still not defined adequately, and even if it was, I don't believe it belongs in this article. See my comments below:
The meaning of hasn't been defined. Or is this supposed to be the definition?
The meaning of hasn't been defined. Or is this supposed to be the definition?
The meaning of hasn't been defined. Or is this supposed to be the definition?
Besides which, the notation is now inconsistent. In , the superscript term indicates how many times to convolve by itself. uses the same notation (i.e. ), but means something completely different. Do you have a reference that uses this notation?
More importantly, I think that "convolution power" as portrayed here is actually defined via the convolution theorem, i.e.
In which case, all of these aren't basic properties of convolution at all, merely a set of definitions. Therefore, it should belong in its own article, or perhaps in the convolution theorem article. Oli Filth 17:47, 4 July 2007 (UTC)
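For what it's worth, the two readings agree on finite sequences: repeated direct convolution and the transform-based definition via the convolution theorem give the same convolution power once enough zero-padding is used. A NumPy sketch (the sequence is an arbitrary example):

```python
import numpy as np

f = np.array([0.5, 0.3, 0.2])  # an arbitrary finite sequence

# Convolution power f^{*3} computed two ways.
# (1) Repeated direct convolution: f * f * f.
direct = np.convolve(np.convolve(f, f), f)

# (2) Via the convolution theorem: inverse transform of F(f)^3, with
# enough zero-padding that circular wrap-around never occurs.
n = 3 * (len(f) - 1) + 1
via_theorem = np.real(np.fft.ifft(np.fft.fft(f, n) ** 3))

agree = np.allclose(direct, via_theorem)
```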
I think the Taylor series property is an important property of the convolution. You may write a new article about the Convolution power if you would like to, but for now I have put it back. I certainly think the Taylor series property should be here.
Bombshell 18:05, 4 July 2007 (UTC)
I'm sorry, but I made some mistakes in the equations; of course, one needs to replace 1 with the Dirac delta.
Now the equations are exact: for your example:
we have on the one hand:
and on the other:
The last equation becomes:
I've used these properties in the study of renewal theory, where they are used to make the solution of the renewal equation easier:
E.g.: renewal equation: G = g + G * F
Thus: and therefore
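As a hedged numerical sketch of the renewal equation G = g + G * F mentioned above (the functions below are made-up examples, and F is assumed to have a density f, so that (G * F)(t) = ∫ G(t - s) f(s) ds):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 10.0, dt)
g = np.exp(-t)              # example forcing term (assumed)
f = 0.5 * np.exp(-0.5 * t)  # example density of F (assumed)

# Discretize: G[i] = g[i] + dt * sum_{j=0..i} G[i-j] f[j]. The j = 0 term
# contains G[i] itself, so solve for G[i] at each step (forward substitution).
G = np.zeros_like(t)
for i in range(len(t)):
    past = dt * sum(G[i - j] * f[j] for j in range(1, i + 1))
    G[i] = (g[i] + past) / (1.0 - dt * f[0])

# Check the discrete equation is satisfied: G - g - dt*(G conv f) ~ 0.
residual = np.max(np.abs(G - g - dt * np.convolve(G, f)[:len(t)]))
```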
However, I admit I made a lot of mistakes the first time, but I think the Taylor series property is very useful.
Bombshell 22:14, 4 July 2007 (UTC)
You can prove it as follows:
which proves it.
I'll start a new article Convolution power.
You're welcome to enhance it.
Bombshell 16:23, 5 July 2007 (UTC)
Is there maybe a way that a proof of commutativity could be added, using a simple substitution (z = x - y)? I'm sorry, but I have no experience with LaTeX. Ameeya 19:07, 21 July 2007 (UTC)
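The suggested substitution does work; a sketch of the proof in LaTeX (note that the minus sign in dz = -dy flips the limits back to the usual order):

```latex
\begin{align*}
(f * g)(x) &= \int_{-\infty}^{\infty} f(y)\, g(x - y)\, dy
  && \text{substitute } z = x - y,\ y = x - z,\ dy = -dz \\
&= \int_{\infty}^{-\infty} f(x - z)\, g(z)\, (-dz)
  = \int_{-\infty}^{\infty} g(z)\, f(x - z)\, dz = (g * f)(x).
\end{align*}
```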