This article is rated Start-class on Wikipedia's content assessment scale. It is of interest to several WikiProjects.
The images shown do not contain "noise" as I would expect. It seems to be the same image but shifted, which means that all the "noise" is also shifted by (30, 33). You can see the same "brighter" spot to the right of the lion's head if you look carefully. It would be nice if this could be done with real noise: 1) take a picture; 2) pick a region; 3) apply white noise; 4) pick another region; 5) apply white noise; 6) do the math. —Preceding unsigned comment added by Saviourmachine ( talk • contribs) 08:41, 11 May 2011 (UTC)
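The experiment described above can be sketched with NumPy. This is a hedged mock-up, not the article's actual figure: the random field stands in for a real photograph, and the (30, 33) shift and noise level are invented for the demo. Each copy gets its own independent white noise, so any peak in the correlation surface cannot come from shared noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a real photograph (assumed for the demo): a random field.
base = rng.normal(size=(128, 128))

# Two copies shifted by (30, 33), each with its own independent white noise.
shift = (30, 33)
img1 = base + 0.1 * rng.normal(size=base.shape)
img2 = np.roll(base, shift, axis=(0, 1)) + 0.1 * rng.normal(size=base.shape)

# Phase correlation: normalised cross-power spectrum, then inverse FFT.
F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
R = F2 * np.conj(F1)
R /= np.abs(R) + 1e-12
corr = np.real(np.fft.ifft2(R))
peak = np.unravel_index(np.argmax(corr), corr.shape)  # recovered shift
```

With independent noise on each copy, the peak still lands at (30, 33), which is the behaviour the commenter wanted the figure to demonstrate.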
I have trouble understanding this page. There's a description of a method, and a proof, but it isn't very clear what the method is supposed to achieve. So there's no "theorem" or "claim" to prove. Certainly the claim needs to be stated more clearly, preferably both in an informal (for understanding) and a formal way. Without this the "proof" should be downgraded to a "motivation" or "why it works" kind of section. Thanks & regards. akay 09:34, 19 July 2006 (UTC)
I'm not sure what the example is supposed to show. Is the vector from
<0 0> to <30 33> (where the prominent white dot appears) supposed to represent the displacement of image 2 relative
to image 1? 23 August 2006
The proof presented is flawed. The second line only applies if the image is circularly shifted, i.e. only if:
Either this needs to be stated, or the proof needs to be rewritten in terms of the continuous Fourier transform or the discrete-time Fourier transform.
Oli Filth 22:51, 1 May 2007 (UTC)
What's more, the expression that's referred to as a "normalised cross correlation" isn't a correlation at all, it's simply a multiplication. It's the spatial-domain result which is the cross-correlation. Oli Filth 08:54, 2 May 2007 (UTC)
Answer: in practice, images will be noisy and will never be exact circular translations of one another, so the proof will never apply to a practical case; the only interesting thing is the idea behind the algorithm. That's why it's useless to clutter the proof with a circular shift of the pictures. Suppose that both images are on a black background and translated by a small shift, so that both lie within the frame; then the theorem applies, and it gives some justification as to why the algorithm is supposed to work. That's all we are asking for (since, again, in practice the images will never be actual translations of each other). —The preceding unsigned comment was added by 129.199.224.240 ( talk • contribs) 11:12, 24 June 2007.
In reality, you zero-pad the signals so that the circularity doesn't matter. At the edges you're just measuring the correlation of the image with nothing, instead of a wrapped version of itself. —Preceding unsigned comment added by 96.224.64.135 ( talk) 17:45, 1 December 2009 (UTC)
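The zero-padding remark above can be sketched as follows (the window sizes and the (5, 3) offset are invented for the demo): two overlapping crops of a larger scene are padded to double size before correlating, so the edges correlate with zeros rather than with a wrapped copy of the image.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.normal(size=(96, 96))   # assumed larger scene (demo stand-in)

# Two overlapping 64x64 windows; img2 is img1 shifted by (5, 3), no wrap-around.
img1 = scene[0:64, 0:64]
img2 = scene[5:69, 3:67]

# Zero-pad both to 128x128 so the edges see zeros, not a wrapped image.
P = np.zeros((128, 128))
Q = np.zeros((128, 128))
P[:64, :64] = img1
Q[:64, :64] = img2

R = np.fft.fft2(P) * np.conj(np.fft.fft2(Q))
R /= np.abs(R) + 1e-12
corr = np.real(np.fft.ifft2(R))
peak = np.unravel_index(np.argmax(corr), corr.shape)
```

Even though img2 is not a circular shift of img1, the padded phase correlation still peaks at the true (5, 3) offset, because the non-overlapping borders only contribute a low noise floor.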
This article is too narrow. Phase correlation is also used in other fields besides image processing. Please generalize it and make the image processing application a subsection. — Keenan Pepper 03:34, 19 July 2007 (UTC)
I found this article very informative. However, you mention sub-pixel methods, but don't specify how this might be done. Also, I think it would be helpful to comment on how this method relates to motion estimation by the correlation method - increased accuracy and reduced robustness? 196.2.111.133 15:55, 20 July 2007 (UTC)
First, I would like to mention that phase correlation is a special case of normalized correlation, where we assume that the two images have approximately the same Fourier magnitude. Second, one major benefit of this technique is that you don't actually need to multiply the Fourier transforms of the two images and then divide by the magnitudes; instead you can discard the two magnitudes and just subtract the phases, which is much faster. —Preceding unsigned comment added by 212.25.107.145 ( talk) 09:50, 7 January 2008 (UTC)
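The phase-subtraction shortcut mentioned above can be sketched like this (the image and the (7, 11) shift are invented for the demo; this mirrors the comment, not necessarily the article's notation). The key identity is that exp(1j*(angle(F2) - angle(F1))) equals the normalised cross-power spectrum F2*conj(F1)/|F2*conj(F1)|, so no multiplication or division of magnitudes is needed:

```python
import numpy as np

rng = np.random.default_rng(2)
img1 = rng.normal(size=(64, 64))
img2 = np.roll(img1, (7, 11), axis=(0, 1))   # exact circular shift by (7, 11)

F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)

# Subtract phases instead of dividing magnitudes: same unit-magnitude spectrum.
R = np.exp(1j * (np.angle(F2) - np.angle(F1)))
corr = np.real(np.fft.ifft2(R))
peak = np.unravel_index(np.argmax(corr), corr.shape)  # → (7, 11)
```

For an exact circular shift the phase difference is a pure linear ramp, so the inverse FFT is a single sharp dot at the shift.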
Would someone explain why this alternate expression does not work?
It would seem to give the same result. Cgage22 ( talk) 22:54, 6 November 2008 (UTC)
I've been reverting the changes which state that phase correlation is a method of identifying "similarity" between images. Whilst this is just a correlation, and therefore suitable post-processing of its output might give a crude estimate of "similarity", the article doesn't discuss this. The article only discusses identifying the peak in the cross-correlation, and assuming that this corresponds to a translation.
If you'd like to keep the mention of "similarity", please could you provide a reference which describes this. Then, per WP:LEAD, a description should be added to the article body before it is mentioned in the lead, as the lead shouldn't introduce material that isn't discussed elsewhere in the article. Regards, Oli Filth( talk| contribs) 19:58, 15 August 2009 (UTC)
How is this any different from calculating the cross-correlation between the images? The fact that this can be done using FFTs is noted on the cross-correlation page. Is this really a different thing? TimmmmCam ( talk) 12:29, 1 October 2009 (UTC)
The basic idea is simple, but it's not obvious from an explanation full of math.
Basically, determining the position of a lion is complicated, but determining the position of a single white dot is easy. Phase correlation transforms the problem from a lion to a dot.
Shifted copies of an image differ only in the phases of their frequencies, not in the amplitudes. A given pixel shift produces a small phase rotation at low frequencies and a large rotation at high frequencies. If we discard the amplitude information (the shape of the lion) and keep the phase information (the pixel shift), we get what would happen if we were shifting a single white dot instead of a lion. The result is a white dot shifted by the same amount as the lion.
This explanation immediately shows a drawback of the method: if only a small portion of the frequency space is occupied by signal frequencies (a narrow-band signal or a blurred image) and the rest by noise, most of the phases will give random results and will contradict one another about how large the shift was. The phases derived from noise have the same say on the result as those derived from signal. I wonder what the result will be and how this drawback can be fixed. —Preceding unsigned comment added by 80.218.244.105 ( talk) 11:01, 21 January 2010 (UTC)
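One hedged sketch of a possible fix for the drawback described above (the band limit, the (4, 6) shift, and the noise level are all invented for the demo): if you roughly know which frequencies carry signal, mask out the rest before inverting, so that noise-only phases get no vote.

```python
import numpy as np

N = 64
rng = np.random.default_rng(4)
k = np.hypot(*np.meshgrid(np.fft.fftfreq(N) * N, np.fft.fftfreq(N) * N))

# Band-limited (blurred) test image: keep only frequencies with |k| <= 10.
base = np.real(np.fft.ifft2(np.fft.fft2(rng.normal(size=(N, N))) * (k <= 10)))
base /= base.std()

img1 = base + 0.5 * rng.normal(size=(N, N))
img2 = np.roll(base, (4, 6), axis=(0, 1)) + 0.5 * rng.normal(size=(N, N))

R = np.fft.fft2(img2) * np.conj(np.fft.fft2(img1))
R /= np.abs(R) + 1e-12

# Mask out the noise-only high frequencies before the inverse FFT.
corr = np.real(np.fft.ifft2(R * (k <= 10)))
peak = np.unravel_index(np.argmax(corr), corr.shape)  # expected (4, 6)
```

The price is a broader peak (fewer frequency bins means a wider dot), but the contradictory noise phases no longer pull the maximum away from the true shift. Weighting bins by estimated per-frequency SNR is a softer variant of the same idea.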
Both reference links provided on this page seem to be outdated and dead (404). Doing a quick Google search, I could at least find a still-working link to the second reference: http://www.liralab.it/teaching/SINA/slides-current/fourier-mellin-paper.pdf . Could someone fix that? 62.159.242.114 ( talk) 13:17, 14 March 2011 (UTC)
Hello fellow Wikipedians,
I have just added archive links to one external link on Phase correlation. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
When you have finished reviewing my changes, please set the checked parameter below to true to let others know.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
Cheers.— cyberbot II Talk to my owner:Online 00:22, 12 February 2016 (UTC)
Could someone who understands the matter please rewrite this expression for the subpixel shift?
It is not clear what the plus-minus symbol means. If r00 = 1 (the peak value) and r01 = 0.5 (its neighbour), is the subpixel offset 1/(1 ± 0.5) = 2.0 or 0.67? Han-Kwang ( t) 17:25, 10 January 2018 (UTC)
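For what it's worth, here is a hedged 1-D sketch of one common subpixel estimator, r01 / (r01 + r00), added to the integer peak position. Whether this matches the article's ± convention is an assumption on my part; in Foroosh-style estimators the sign is typically chosen so the offset magnitude stays below one pixel, which here is the '+' branch. The 0.3 shift and the test signal are invented for the demo.

```python
import numpy as np

N = 64
rng = np.random.default_rng(3)
s1 = rng.normal(size=N)

# Shift s1 by a subpixel amount via a Fourier-domain phase ramp (demo setup).
delta = 0.3
k = np.fft.fftfreq(N) * N
F1 = np.fft.fft(s1)
F2 = F1 * np.exp(-2j * np.pi * k * delta / N)

# Phase correlation: unit-magnitude cross-power spectrum.
R = F2 * np.conj(F1)
R /= np.abs(R)
corr = np.real(np.fft.ifft(R))   # Dirichlet kernel centred at delta

p = int(np.argmax(corr))         # integer peak, here 0
r00 = corr[p]                    # peak value
r01 = corr[(p + 1) % N]          # right-hand neighbour
est = p + r01 / (r01 + r00)      # '+' branch keeps the offset below 1 pixel
```

On this synthetic case the '+' branch recovers delta almost exactly (the '−' branch would give −1, outside the valid range), which is consistent with reading the ± as "pick the sign that yields an offset of magnitude less than one".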