From Wikipedia, the free encyclopedia

Concerning the new addition, does the author behind it have a reference for the new statement "with K~1.6 for approximating a Laplacian of Gaussian and K~5 in the retina"? In my humble opinion, the Laplacian can be well approximated for other values of K as well. Tpl 13:51, 19 November 2006 (UTC)

Since no reply has been received to the previous question, and this statement is neither correct nor supported by any references, I have removed it. Tpl 08:34, 3 December 2006 (UTC)

I've not been fast enough! 1.6 is the best fit using MSE; 5 is the best fit using physiological data (though there is wide heterogeneity), see http://retina.anatomy.upenn.edu/~lance/modelmath/enroth_freeman.html. The main fact is that it's wider than a LoG. Meduz 13:22, 27 December 2006 (UTC)

Please clarify one point. I'm almost sure, but not positive, that K is the constant scaling factor between the standard deviations of the two Gaussians, as in sigma2 = K*sigma1? Also, any comments on the practical effects of different K values? For instance, larger K values would result in wider but shallower lobes on the function, so larger K values would respond less strongly if 'blobs' become too close (spatially). —Preceding unsigned comment added by Craigyk (talk • contribs) 21:34, 31 March 2008 (UTC)

zero crossing

Just a note: the zero crossing, r, of the DoG function is given by

r = K\sigma_1 \sqrt{\frac{2\ln K^2}{K^2 - 1}},

where \sigma_2 = K\sigma_1. This can easily be derived by setting the function

f(r) = \frac{1}{2\pi\sigma_1^2} e^{-r^2/(2\sigma_1^2)} - \frac{1}{2\pi\sigma_2^2} e^{-r^2/(2\sigma_2^2)}

equal to zero and substituting \sigma_2 = K\sigma_1.
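The standard zero-crossing formula (written in terms of K and sigma_1) is easy to check numerically; a minimal sketch, with function and variable names of my own choosing:

```python
import numpy as np

def dog(r, s1, s2):
    """Difference of two circularly symmetric 2-D Gaussians, evaluated at radius r."""
    return (np.exp(-r**2 / (2 * s1**2)) / (2 * np.pi * s1**2)
            - np.exp(-r**2 / (2 * s2**2)) / (2 * np.pi * s2**2))

s1, K = 1.0, 1.6        # illustrative values; K = 1.6 as discussed above
s2 = K * s1
# Zero crossing r = K * sigma_1 * sqrt(2 * ln(K^2) / (K^2 - 1))
r0 = K * s1 * np.sqrt(2 * np.log(K**2) / (K**2 - 1))
print(dog(r0, s1, s2))  # ~0 up to floating-point error
```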

I came here looking for exactly this (saved me a little work). Seems like this should be included in the article. Thanks! 24.91.117.221 (talk) 00:05, 10 August 2008 (UTC)

What's the use?

I was just wondering: why would anyone favor the difference of Gaussians approach over the "normal" calculation of the Laplacian of Gaussian? My first idea was that it could be faster, since blurred images can be computed with separable (thus quick) convolutions with 1D Gaussians. This turned out to be false, since the Laplacian of Gaussian can also be written as the sum of two separable convolutions (so it should be equal in calculation speed). The benefit of the true Laplacian of Gaussian approach is of course that it is not an approximation.

Can anyone comment on that? Drugbird (talk) 14:57, 20 April 2010 (UTC)
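For what it's worth, the separability claim is easy to verify: the 2-D LoG kernel equals g''(x)g(y) + g(x)g''(y), i.e. the sum of two separable terms, each computable as two 1-D passes. A rough numpy sketch (all names are mine, not from any particular library):

```python
import numpy as np

x = np.arange(-8, 9, dtype=float)
s = 2.0                                                   # illustrative sigma
g = np.exp(-x**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s) # 1-D Gaussian
g2 = (x**2 / s**4 - 1 / s**2) * g                         # its second derivative

# Build the 2-D LoG kernel two ways: the direct formula, and the
# sum of two separable outer products g''(x)g(y) + g(x)g''(y).
X, Y = np.meshgrid(x, x, indexing='ij')
log_direct = (((X**2 + Y**2) / s**4 - 2 / s**2)
              * np.exp(-(X**2 + Y**2) / (2 * s**2)) / (2 * np.pi * s**2))
log_separable = np.outer(g2, g) + np.outer(g, g2)
print(np.max(np.abs(log_direct - log_separable)))  # ~0: the two agree exactly
```

So in exact arithmetic the LoG costs four 1-D convolutions, the same as a DoG built from two separable blurs.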

Found a relevant link on Stack Exchange.
https://dsp.stackexchange.com/questions/37673/what-is-the-difference-between-difference-of-gaussian-laplace-of-gaussian-and
All of the optimizations for computing convolutions with Gaussians that I am aware of can also be used for their derivatives. I agree with the main answer here in that there's little reason to prefer LoG over DoG, at least in theory. My best guess is that (1) CV libraries are more likely to have optimized functions for convolving with Gaussians, (2) GPUs are more likely to have hardware-level implementations for Gaussians, and (3) DoG is a bit more general in that it can serve as a tunable band-pass filter. If someone can offer alternative hypotheses or search for corroborating sources, that would be great. HoaiDucLongPhung (talk) 01:29, 28 July 2024 (UTC)
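On point (3), the band-pass tunability is easiest to see in the frequency domain: the Fourier transform of a Gaussian is again a Gaussian, so the DoG transfer function is the difference of two low-pass curves, with a pass-band peak set by sigma_1 and K. A sketch with illustrative values (names and the sampling grid are my own):

```python
import numpy as np

# DoG transfer function H(f) = exp(-2*(pi*s1*f)^2) - exp(-2*(pi*s2*f)^2),
# using the Fourier transform of a unit-area Gaussian with std s.
s1, K = 1.0, 1.6
s2 = K * s1
f = np.linspace(0, 1, 1001)
H = np.exp(-2 * (np.pi * s1 * f)**2) - np.exp(-2 * (np.pi * s2 * f)**2)

f_peak = f[np.argmax(H)]
print(f_peak)  # the pass-band peak; it shifts as K or sigma_1 is tuned
```

H vanishes at f = 0 (both Gaussians integrate to 1) and at high f, which is exactly the band-pass shape; changing K widens or narrows the band.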
