The contents of the Super-resolution page were merged into Superresolution on July 24, 2012. For the contribution history and old versions of the merged article please see its history.
Hello,
Does that Image really display superresolution? As I understood it, super-resolution was when the image was able to identify features smaller than the diameter (radius?) of the Airy disc used for imaging. I thought super-resolution required physical analyses such as time-reversal for active signals or negative refractive index metamaterials. Either that or deconvolution techniques such as the Lucy-Richardson algorithm. It seems to me that the image is merely displaying supersampling or something akin to this, and thus doesn't belong here. Please comment, as I am unsure about this to some extent. User A1 07:39, 31 August 2007 (UTC)
It increases the resolution beyond what the camera is capable of, so I would imagine it's superresolution in at least a loose sense. All I know about the algorithm is that it depends on knowledge of the camera model. (I got a free copy of the software by taking test images with my camera for them to process and put in their libraries.)
By all means upload something else if you know of a better example. — Omegatron 22:55, 31 August 2007 (UTC)
Why isn't there a list of programs capable of doing superresolution (dunno what to call this, "doing super resolution" doesn't sound right to me) linked from this article?-- TiagoTiago ( talk) 15:04, 15 July 2008 (UTC)
One of the industry standard terms, which I've verified has no Wiki search entry cross-referencing, is "microscan". Many military, engineering, and patent documents on the web describe "superresolution" using the term microscan, along with the supporting algorithms of matrix-operation sub-pixel motion estimation, for use in uncontrolled microscan applications where absolute sensor position information is not present, to produce a microscan using known pixel displacement, and Fourier inverse filtering of known-displacement microscan frames when integrating them into higher resolution images derived from the pixel-frequency point spread function definition. This is especially relevant to low resolution military FLIR camera systems using resolution enhancement applications like microscanning, to allow small-payload IR cameras to resolve better with minimal application weight and power increases. LoneRubberDragon ( talk) 23:56, 17 August 2008 (UTC)
Perhaps a paragraph, and links can be added, to also point to this standard term, along with super resolution. LoneRubberDragon ( talk) 23:56, 17 August 2008 (UTC)
And you are correct that an optic system that is Nyquist limited to the sensor cannot be used for microscanning. Performing real microscanning requires an optical system whose optical transfer function has a Nyquist limit with the right resolution fold over the sensor, to produce real microscanning resolution enhancement. Without such an optics-to-sensor match, one can only perform morphological edge enhancements to an image; features finer than the baseband image resolution set by the optical transfer function will not be brought out by any microscan / superresolution algorithm. Like the old saying, you can't enhance a blurry image, because you cannot create what isn't there (without implicit truth data subject modeling information to artificially enhance the image being guessed at).
http://en.wikipedia.org/wiki/Nyquist_limit
http://en.wikipedia.org/wiki/Optical_transfer_function
http://en.wikipedia.org/wiki/Point_spread_function
http://en.wikipedia.org/wiki/Airy_disk
LoneRubberDragon ( talk) 23:56, 17 August 2008 (UTC)
MICROSCAN
http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA390373 LoneRubberDragon ( talk) 23:56, 19 August 2008 (UTC)
ENHANCEMENT LIMITATIONS
http://en.wikipedia.org/wiki/Dawes_limit
http://en.wikipedia.org/wiki/Rayleigh_criterion
http://en.wikipedia.org/wiki/Airy_disk
http://en.wikipedia.org/wiki/Airy_disk#Mathematical_details
http://en.wikipedia.org/wiki/Diffraction_limited
http://www.quantumfocus.com/publications/2005_Optical_and_Infrared_FA_Microscopy.pdf
http://en.wikipedia.org/wiki/Point_Spread_Function
http://support.svi.nl/wiki/NyquistCalculator
RECONSTRUCTION METHOD BASE
http://en.wikipedia.org/wiki/Wiener_deconvolution
http://en.wikipedia.org/wiki/Deconvolution
http://en.wikipedia.org/wiki/Poisson_distribution
http://en.wikipedia.org/wiki/Signal-to-noise_ratio
LoneRubberDragon ( talk) 23:56, 19 August 2008 (UTC)
Lucy's original paper. [1]; it's quite clever. User A1 ( talk) 12:50, 20 August 2008 (UTC)
Yeah, that looks like a very good document, yet I will have to digest it more, especially given its age. At first glance it appears to have statistical limitations on low-SNR data sets, just as microscanning does for sensor noise. But is it not for smoothed-morphology data-modeling constructions, with some possible applications for a sensor's PSF estimation step in microscanning algorithms? Am I grokking the paper right? LoneRubberDragon ( talk) 10:47, 25 August 2008 (UTC)
If you scan through Irvine Sensors Corporation, Costa Mesa, CA, SBIR projects, there is a project on Microscan circa 1995-7, plus or minus a couple of years, which uses gradient-matrix motion-based image displacement estimation for the proper subpixel multiple-frame motion displacement estimates that are required for making the displaced microbinning image stack. And then there's an iterative process to estimate (if I recall right) the point spread function from the stacks of the low resolution image sensor that is being microscanned, to the optics' and sensor noise figure limits, to perform the Wiener-type deconvolution in the presence of Poisson sensor illuminance noise. If I can find the original article and code work here, I'll post some equations from the core, but my paperwork here is quite behind, arresting proper information flow. LoneRubberDragon ( talk) 10:47, 25 August 2008 (UTC)
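For what it's worth, here is a minimal NumPy sketch of a Wiener-type deconvolution of the kind mentioned above. The PSF estimate and the noise-to-signal ratio are assumed inputs, and the names are illustrative only, not taken from any of the cited projects.

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-2):
    """Frequency-domain Wiener filter: undo an estimated PSF blur while
    damping frequencies where the assumed noise-to-signal ratio dominates."""
    # Embed the PSF in an array of the image's size, centred on pixel (0, 0).
    kernel = np.zeros_like(image, dtype=float)
    ky, kx = psf.shape
    kernel[:ky, :kx] = psf
    kernel = np.roll(kernel, (-(ky // 2), -(kx // 2)), axis=(0, 1))

    H = np.fft.fft2(kernel)                    # optical/pixel transfer function
    G = np.fft.fft2(image)                     # observed image spectrum
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)    # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```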
It wasn't a mistake, it was intentional. The current "reference" section is longer than the article, and is not being used to support statements in the article as is policy at WP:CITE. If I wanted a list of super resolution articles I would simply run a search at a journal search engine. Readers are in no way benefitting by having a dozen papers thrown in their face. User A1 ( talk) 23:13, 20 November 2009 (UTC)
"they are of real value to the reader in their selection and organisation": This whole article is essentially a list, and continually adding to a list of papers makes this more the case. The perfect article is supposedly engaging, self-contained, and not a list of information. Prose is key, and it needs to be good prose that explains the concept to the reader. Lists are not useful for explaining a topic, which is the key point of this article.
Saying "they could just search for them" is not an argument (such an argument would exclude essentially all content at Wikipedia One cannot simply search for encyclopaedic content, in fact I would say very few good articles could be searched for readily -- one would have to simultaneously span the "complex enough to contain information and not so complex that one needs to be an academic" gap -- a gap in knowledge that is very apparent, particularly if you start reading any of those papers. Secondly: there is no "organisation" to this list, not even alphabetical organisation.
"I do think they should be divided between "Related works" and "References" actually used to support article statements, which I've done": I would like to see a scientific paper that can be taken seriously which uses references in this manner. This is most appalling for a Wikipedia article and in direct opposition to the citation guidelines.
"You seem to be treating the papers as some kind of advertising, which I'm sure wasn't the intention. I also don't understand your purpose in removing links to papers.": This simply relates to my first point -- I see no value (and indeed a reduction in value) in simply citing literature ad nauseam. This topic needs explaining by someone who actually understands what the topic is about (which I do not, as evidenced by the above).
Simply put, I find it most disturbing that [1] there is not a single inline citation, and [2] the so-called "references" are longer than the text, which leads to the situation that I have no idea how super-resolution works, short of having to read academic papers, which only an academic or student would do. Most people simply do not understand academic papers, as they almost always assume a thorough understanding of the topic already at hand, and a significant understanding of other principles involved (note how neither linear algebra, nor matrix or vector calculus, nor cost function, etc., is linked in the text?).
Apologies if I am a bit blunt, and I realise that I have written more on this talk page than on the article, but I feel unable to improve this article personally, and am watching it deteriorate. User A1 ( talk) 08:58, 21 November 2009 (UTC)
For UserA1:
If the Irvine Sensors Corporation / Wright Patterson Air Force Base systems math is a bit complex, try this basic method. First take a multiple-frame, slightly-moving-camera image sequence from a non-Nyquist-limited CCD sensor - which is a prerequisite. Use a least squares displacement search to stack the frames on top of each other with these "saccadic" movements, to place them within one pixel displacement of the first frame. Then take X and Y differential gradient functions dot producted with eigen-estimates of the X and Y displacement based on the image context function, for each frame against the first image, in order to estimate each frame's sub-pixel displacement relative to the first frame. The sub-pixel motions of the stacked frames can then be used to interleave the low resolution baseband images onto a high resolution image map of 2, 3, or 4 times the resolution of the original CCDs. Then you can fill in the blanks with interpolations for any remaining empty cells of the oversampled high resolution image map. Then you can use an inverse point spread function estimate for the CCD pixels to adjust the point spread functions of the pixels, using FFT image processing. Noise will be enhanced by performing this inverse PSF correlation operation, but it will approximate a microscan algorithm resolution enhancement. If you want to improve performance against noise in the inverse Pixel-PSF filter, try the Wiener Method link that you called "Useless". LoneRubberDragon ( talk) 20:32, 15 January 2010 (UTC)
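To make the interleaving step concrete, here is a minimal NumPy sketch. It assumes the sub-pixel shifts have already been estimated (for example by the gradient method described further down) and are quantised to the high resolution grid; the function and variable names are illustrative only.

```python
import numpy as np

def interleave(frames, shifts, scale=2):
    """Place low-resolution frames onto a `scale`x denser grid using their
    estimated sub-pixel shifts, averaging where samples coincide and
    filling any still-empty cells with a nearest-sample repeat."""
    h, w = frames[0].shape
    acc   = np.zeros((h * scale, w * scale))
    count = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Quantise the sub-pixel shift onto the high-resolution grid.
        oy = int(round(dy * scale)) % scale
        ox = int(round(dx * scale)) % scale
        acc[oy::scale, ox::scale]   += frame
        count[oy::scale, ox::scale] += 1
    hi = np.divide(acc, count, out=np.zeros_like(acc), where=count > 0)
    # Crude fill for cells no frame landed on: repeat the first frame's values.
    nearest = np.kron(frames[0], np.ones((scale, scale)))
    hi[count == 0] = nearest[count == 0]
    return hi
```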
If your camera is not optically Nyquist limited more sharply than the CCD, by the resolution enhancement desired, the algorithm will simply produce a blurry image 2, 3, or 4 times the size of the original data, limited by the optical blur. Also, the more frames you use, the lower the noise effect, and the better the convergence to a stable high resolution answer. Most cameras' focus algorithms are fixed, and may only derive near-CCD-resolution focus, thus preventing all microscanning superresolution capability. Some cameras, I've noticed, are not even focused to the resolution of the native CCD, producing blurry image cross-pixel-talk before one even begins algorithms. Also, given the undersampling effects of Bayer-patterned cameras that "cheat", one may also not be able to perform superresolution without using monochromatic-only scenes for the algorithm application. LoneRubberDragon ( talk) 20:32, 15 January 2010 (UTC)
http://en.wikipedia.org/wiki/Crosstalk
http://en.wikipedia.org/wiki/Bayer_pattern
Additionally, once a high resolution image is created, with the accompanying differential noise increase, a Median Filter on the high resolution map is often useful to preserve morphology, and smooth the noise enhancement. Also, once a high resolution image has been derived, robust-rank morphological modeling can be performed on the high resolution image to rotoscope the image forms at the higher resolution. LoneRubberDragon ( talk) 13:57, 17 January 2010 (UTC)
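A short sketch of that median-filter cleanup step, assuming SciPy is available; the array here is just a stand-in for whatever high resolution map the reconstruction produced.

```python
import numpy as np
from scipy.ndimage import median_filter

hi = np.random.rand(128, 128)              # stand-in for the reconstructed high-res map
hi_denoised = median_filter(hi, size=3)    # 3x3 window: smooths amplified noise, keeps edges
```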
Another solution you may try is a Maximum Entropy type of processing. Repeating much of the above: take the high resolution map, and converge each high resolution pixel in a ROI (Region Of Interest) so that the error of the multiple frames represented by the ROI of the high resolution model image is minimized. It is like seismic analysis, where one models the earth with modifiable voxels that minimize the error against the multiple seismic responses of known seismometers. LoneRubberDragon ( talk) 20:32, 15 January 2010 (UTC)
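The idea of converging the high resolution model until it reproduces the observed frames is close to iterative back-projection, so here is a minimal sketch in that spirit. It makes two simplifying assumptions of mine, not from the post above: the shifts are whole pixels on the high resolution grid, and the camera blur is a plain box average over each block.

```python
import numpy as np

def downsample(hi, scale):
    """Box-average each scale x scale block (crude camera/pixel model)."""
    h, w = hi.shape
    return hi.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

def back_project(frames, shifts_hr, scale=2, iters=30, step=0.5):
    """Refine a high-resolution estimate so that, when shifted and
    downsampled, it reproduces each observed low-resolution frame.
    `shifts_hr` are assumed to be integer shifts on the high-res grid."""
    hi = np.kron(frames[0], np.ones((scale, scale)))      # initial guess
    for _ in range(iters):
        for frame, (dy, dx) in zip(frames, shifts_hr):
            shifted = np.roll(hi, (dy, dx), axis=(0, 1))
            err = frame - downsample(shifted, scale)       # low-res residual
            # Spread the residual back over the hi-res pixels it came from.
            corr = np.kron(err, np.ones((scale, scale))) / (scale * scale)
            hi += step * np.roll(corr, (-dy, -dx), axis=(0, 1))
    return hi
```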
I should also add that increasing the resolution beyond 4 times oversampling of the CCD native resolution creates increasingly long convergence times for the high resolution image model, due to the Poisson pixel noise margins in the differential analysis and per-CCD-pixel irregularities, unless one uses longer integration times and a more adaptive model, such as the Entropy Maximization modeling or the example above. LoneRubberDragon ( talk) 13:36, 17 January 2010 (UTC)
And, if all you seek is creating animated models based on low resolution CCD images, simply use contextual edge detection algorithms and morphological-rotoscope modeling to create objects that can be scaled arbitrarily, like polygons and such, with sharp edges at any resolution. But that is not resolution enhancement per se, merely digital animation morphological rotoscoping. The intrinsic resolution of the models will still be limited by the CCD resolution, with fake edges that are off by the Nyquist model of the native CCD resolution. Also, inter-frame smoothing algorithms can be made to smooth the Nyquist-limited low resolution modeling "moire effects" of the CCD and inherently low resolution methods. LoneRubberDragon ( talk) 13:40, 17 January 2010 (UTC)
http://en.wikipedia.org/wiki/Rotoscope
http://en.wikipedia.org/wiki/Mathematical_morphology
http://en.wikipedia.org/wiki/Max_Fleischer
http://en.wikipedia.org/wiki/Gradients
http://en.wikipedia.org/wiki/Moire
Okay, okay, my unction tells me to help you here in detail with the hardest step, related to linear algebra and calculus. LoneRubberDragon ( talk) 15:03, 17 January 2010 (UTC)
Take a stack of constant-exposure, constant-image-context frames with an unknown sub-pixel displacement, I[X,Y,Frame]. For each frame, calculate the image X and Y point gradients thus (for the X gradient, as an example): IGradX[X,Y,Frame] = -1*I[X-1,Y,Frame] + 0*I[X,Y,Frame] + 1*I[X+1,Y,Frame]. Or even take blurred gradients with an extended kernel, like IGradXBlur[X,Y,Frame] = -1*I[X-1,Y-1,Frame] + 0*I[X,Y-1,Frame] + 1*I[X+1,Y-1,Frame] + -1*I[X-1,Y,Frame] + 0*I[X,Y,Frame] + 1*I[X+1,Y,Frame] + -1*I[X-1,Y+1,Frame] + 0*I[X,Y+1,Frame] + 1*I[X+1,Y+1,Frame]. LoneRubberDragon ( talk) 15:03, 17 January 2010 (UTC)
Then take each frame gradient and dot product it with the first frame as the reference, like thus (for the X gradient, as an example): GradientDotX[Frame] = SUM(over X,Y || I[X,Y,1] * IGradX[X,Y,Frame]). LoneRubberDragon ( talk) 15:03, 17 January 2010 (UTC)
Then you have both an X and a Y gradient dot product for each frame, relative to the first frame as the reference, appearing thus: (GradientDotX[Frame], GradientDotY[Frame]). Now here you have to play a little with the math in order to normalize the gradient dot products, because they carry a scale factor relative to the integral sum of the image brightness function, for a point gradient. To normalize the point gradient dot products, divide each dot product, (GradientDotX[Frame], GradientDotY[Frame]), by the integral of the image found by GradNormXY = SUM(over X,Y || I[X,Y,1]), so that a coordinate in X and Y of the sub-pixel displacements can be estimated, yielding: (DisplacementX, DisplacementY) = (GradientDotX[Frame] / GradNormXY, GradientDotY[Frame] / GradNormXY). I forget the exact corrections when using blurred gradients ... likely divide by the absolute value integral divided by two, like 1.0 for the point gradient and 1/3 for a blurred 3 by 3 gradient. That is up to you to include or exclude, or even investigate. LoneRubberDragon ( talk) 15:03, 17 January 2010 (UTC)
At this point, once the gradients are normalized into sub-pixel displacements, the low resolution images of the native CCD can be stacked into a high resolution frame buffer according to the estimated unknown displacements, to integrate the multiple frames of saccadic motion on the high resolution plane, in raw pixel intensity format. LoneRubberDragon ( talk) 15:03, 17 January 2010 (UTC)
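For anyone who prefers code to prose, here is a minimal NumPy sketch of the sub-pixel displacement estimate. It follows the same gradient/dot-product idea, but normalizes with the standard least-squares solution (the gradient autocorrelation matrix) instead of the image-integral normalization described above, so no kernel-dependent scale factor is needed; that substitution is mine, and it is only valid for shifts well under a pixel.

```python
import numpy as np

def subpixel_shift(ref, frame):
    """Estimate the (dx, dy) shift of `frame` relative to `ref` using the
    first-order model: frame ~= ref + dx*Gx + dy*Gy (shifts << 1 pixel)."""
    # Central-difference gradients of the reference, interior pixels only.
    Gx = 0.5 * (ref[1:-1, 2:] - ref[1:-1, :-2])
    Gy = 0.5 * (ref[2:, 1:-1] - ref[:-2, 1:-1])
    d  = frame[1:-1, 1:-1] - ref[1:-1, 1:-1]   # brightness difference
    # Solve the 2x2 least-squares system for (dx, dy).
    A = np.array([[np.sum(Gx * Gx), np.sum(Gx * Gy)],
                  [np.sum(Gx * Gy), np.sum(Gy * Gy)]])
    b = np.array([np.sum(Gx * d), np.sum(Gy * d)])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy
```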
Is this down to earth enough to understand, now? You are correct, the article is quite small compared to the so-called Policy Error in posting useful help toward the topic of discussion. Perhaps, Heisenberg Uncertainty, by having such a small article, has allowed a portion of judgement to cross into the Policy Error Window of analysis. The resolution is too low, in your interpretation of the posts Good has led me to place here. And where is the fun for the student, if I simply give you the raw code, in this context of earth? Good led me to Euler and almost Runge-Kutta modeling of the solar system gravity equations in Middle School into ninth grade. If you stop complaining and ask more pointed questions, I may attempt a language translation into your frame of reference more accurately, given the bandwidth of yourself.
LoneRubberDragon ( talk) 20:32, 15 January 2010 (UTC)
The example image of super-resolution says:
But would not 2X super-resolution require only 4 images? Or to put it another way, is not the information content of 9 images equal to 3X (squared) of 1 image? —Preceding unsigned comment added by 202.6.86.1 ( talk) 23:03, 3 October 2010 (UTC)
Because some of the images will have repeated data. This repeated data is required so that the software can figure out how the images fit together. Since neither you nor the software knows the exact angle of the multiple images, each little bit helps.
I hope that makes sense. I wish the article could show you each of the nine shots, as then it would make more sense. Remember, each photo is laid on top of the other, not put together from pieces. — trlkly 12:47, 28 September 2011 (UTC)
The unhyphenated form superresolution, used even at the beginning of the subject [1], seems now to have superseded the less-often-used super-resolution -- see Optics Info Base. I propose that the redirect be not from superresolution to super-resolution but the other way around. If the abbreviation SR is used in the rest of the entry, there should be no problem. Gwestheimer 16:36, 15 May 2012 (UTC)
Trying for an entry that gives an overall layout of the subject and conveys the essential concepts, I have opted for a separate page entitled "Superresolution" rather than shoehorning the material into the current more technically-oriented page "Super-resolution." This involves undoing the re-direct from "Superresolution" to "Super-resolution" and inserting into each of these pages a reference to the other under "See also."
Gwestheimer 17:34, 19 May 2012 (UTC) — Preceding unsigned comment added by Gwestheimer ( talk • contribs)