Because some of the ideas surrounding super-resolution raise fundamental issues, it is worth examining the relevant physical and information-theoretical principles at the outset.

Diffraction limit

The detail of a physical object that an optical instrument can reproduce in an image is limited by the laws of physics, whether formulated by the diffraction equations of the wave theory of light or by the Uncertainty Principle for photons in quantum mechanics.
New procedures probing electromagnetic disturbances at the molecular level, in the so-called near field, remain fully consistent with Maxwell's equations. A succinct expression of the diffraction limit is given in the spatial-frequency domain: in Fourier optics, light distributions are expressed as superpositions of grating light patterns over a range of fringe widths, technically spatial frequencies.
It is generally taught that diffraction theory stipulates an upper limit, the cut-off spatial frequency, beyond which pattern elements fail to be transferred into the optical image, i.e., are not resolved. In fact, what diffraction theory sets is the width of the passband, not a fixed upper limit. No laws of physics are broken when a spatial-frequency band beyond the cut-off is swapped for one inside it: this has long been implemented in dark-field microscopy.
Nor are information-theoretical rules broken when several bands are superimposed; disentangling them in the received image requires assumptions of object invariance during multiple exposures, i.e., the substitution of one kind of uncertainty for another.

Information

When the term super-resolution is used for techniques of inferring object detail from statistical treatment of the image within standard resolution limits, for example by averaging multiple exposures, it involves an exchange of one kind of information (extracting signal from noise) for another (the assumption that the target has remained invariant).
Resolution and localization

True resolution involves the distinction of whether a target, e.g., a star or a spectral line, is single or double, ordinarily requiring separable peaks in the image. When a target is known to be single, its location can be determined with higher precision than the image width by finding the centroid (center of gravity) of its image light distribution.
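A small numerical sketch of this localization principle (the Gaussian spot model and all values are illustrative, not tied to any particular instrument): the centroid of a sampled diffraction spot pins down a single source far more precisely than the spot's width.

```python
import numpy as np

# Sketch: localizing a single point source by its image centroid.
# A Gaussian spot of width sigma = 3 px stands in for the diffraction
# pattern; the true center sits at an assumed sub-pixel position.
x = np.arange(64)
true_center = 30.4          # sub-pixel ground truth (illustrative)
sigma = 3.0                 # spot width in pixels
spot = np.exp(-0.5 * ((x - true_center) / sigma) ** 2)

# Centroid (center of gravity) of the light distribution.
centroid = np.sum(x * spot) / np.sum(spot)

# The localization error is far smaller than the 3 px spot width.
print(abs(centroid - true_center))
```

Even though the spot spans several pixels, the symmetry of the distribution lets the centroid recover the position to a small fraction of a pixel, which is why the procedure is often marketed as "super-resolution" localization.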
The word ultra-resolution had been proposed for this process, but it did not catch on, and the high-precision localization procedure is typically referred to as super-resolution. In summary: the technical achievements of enhancing the performance of image-forming and -sensing devices now classified as super-resolution make the fullest use of, but always stay within, the bounds imposed by the laws of physics and information theory.

Substituting spatial-frequency bands. Although the bandwidth allowed by diffraction is fixed, it can be positioned anywhere in the spatial-frequency spectrum.
Dark-field illumination in microscopy is an example; see also aperture synthesis.

Geometrical SR reconstruction algorithms are possible if and only if the input low-resolution images are under-sampled and therefore contain aliasing. Because of this aliasing, the high-frequency content of the desired reconstruction is embedded in the low-frequency content of each observed image.
Given a sufficient number of observed images, and provided the observations vary in their phase (i.e., the images of the scene are shifted by sub-pixel amounts), the phase information can be used to separate the aliased high-frequency content from the true low-frequency content, and the full-resolution image can be reconstructed. In practice, this frequency-based approach is not used for reconstruction, but even in spatial-domain approaches (e.g., shift-add fusion), the presence of aliasing remains a necessary condition for SR reconstruction.
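This folding of high frequencies, and its dependence on sub-pixel phase, can be checked with a toy 1-D signal (the sizes and the frequency chosen here are illustrative):

```python
import numpy as np

# Toy check of the aliasing argument: an HR cosine above the LR Nyquist
# rate folds into a low LR frequency bin, and its aliased phase depends
# on the sub-pixel shift of the observation.
N = 32                                  # HR samples
n = np.arange(N)
f_hr = 12                               # HR frequency (cycles per N samples)
x = np.cos(2 * np.pi * f_hr * n / N)

lr_0 = x[0::2]                          # LR frame, phase 0 (16 samples)
lr_1 = x[1::2]                          # LR frame, shifted by one HR sample

# 12 cycles over 16 LR samples aliases to |12 - 16| = 4 cycles.
spec_0 = np.abs(np.fft.rfft(lr_0))
assert np.argmax(spec_0) == 4           # energy sits in the aliased bin

# The two shifted frames carry the same magnitude but different phase;
# that phase difference is the extra information geometric SR exploits.
bin0 = np.fft.rfft(lr_0)[4]
bin1 = np.fft.rfft(lr_1)[4]
assert np.isclose(abs(bin0), abs(bin1))
assert not np.isclose(np.angle(bin0), np.angle(bin1))
```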
There are both single-frame and multiple-frame variants of SR. Multiple-frame SR exploits the sub-pixel shifts between multiple low-resolution images of the same scene, creating an improved-resolution image by fusing information from all the low-resolution images; the resulting higher-resolution images are better descriptions of the scene. Single-frame SR methods attempt to magnify the image without introducing blur.
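Under idealized assumptions (two frames, a relative shift of exactly one HR sample, no blur or noise), multi-frame fusion reduces to interleaving. The following is a toy sketch of that limiting case, not a practical algorithm:

```python
import numpy as np

# Toy illustration of multi-frame SR: two aliased low-resolution
# observations of the same 1-D scene, differing by a known shift of one
# HR sample, together determine the HR signal exactly.
rng = np.random.default_rng(0)
hr = rng.random(16)            # the "scene" at high resolution

# Each LR frame is the HR signal decimated by 2 with a different phase.
lr_0 = hr[0::2]                # phase 0
lr_1 = hr[1::2]                # shifted by one HR sample

# Individually the frames are aliased and neither determines hr alone,
# but interleaving them (shift-and-add with known shifts) recovers it.
recon = np.empty_like(hr)
recon[0::2] = lr_0
recon[1::2] = lr_1

assert np.allclose(recon, hr)
```

Real multi-frame SR must additionally estimate the shifts and undo blur and noise, which is exactly where the registration errors discussed below come from.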
The flow chart of the conventional dynamic SR algorithm is illustrated in Figure 2. Explicit motion estimation is a major factor affecting the performance of motion-based SR algorithms, as noted in [13, 14]. Considerable research has been devoted to precise sub-pixel motion estimation; however, the methods developed cannot guarantee perfect motion compensation and, even where perfect motion estimation is potentially possible, it usually requires a large amount of computation.
Some novel approaches that avoid accurate motion estimation were recently suggested in [10-14], but they are not suitable for practical real-time surveillance applications because of their computational requirements. In this article, we added a validation method to the sequential estimation process to correct erroneous reconstructed HR images caused by inexact motion estimation. When the motion estimation result is inaccurate (i.e., pixels are misaligned), the measured pixel intensities differ markedly from those predicted by the model. With the dynamic linear modeling described in Section 2, this difference in pixel intensity can be represented by the distance defined in Equation 10.
Since we assume that all covariance matrices, including S_t, are diagonal, computing the distance of one measured frame at time t, d_t (referred to as the 'Mahalanobis distance' or 'statistical distance'), is the same as computing the sum of the distances of each pixel in that frame, d_k^t, as in Equation 11. Y_t in Equation 12 is the measurement at time t, and Y^(t-1) is the sequence of measurements from the initial time to time t - 1. The theoretical background can be found in the sections on the Kalman filter in [16, 17].
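Under the diagonal-covariance assumption, the frame distance decomposes into a sum of per-pixel distances. The following minimal numpy sketch (all arrays and names are illustrative, not the paper's code) checks that the sum of variance-scaled squared innovations equals the full Mahalanobis form:

```python
import numpy as np

# Per-pixel statistical distance under a diagonal covariance S_t.
def pixel_distances(y, y_pred, s_diag):
    """Squared innovation of each pixel, scaled by its variance."""
    return (y - y_pred) ** 2 / s_diag

y      = np.array([10.0, 12.0, 30.0])   # measured pixels at time t (illustrative)
y_pred = np.array([10.5, 11.0, 12.0])   # model-predicted pixels
s_diag = np.array([1.0, 2.0, 4.0])      # diagonal of S_t

d_k = pixel_distances(y, y_pred, s_diag)
d_t = d_k.sum()                          # frame distance = sum of pixel distances

# Equivalent to the full Mahalanobis distance with a diagonal S_t.
e = y - y_pred
d_full = e @ np.linalg.inv(np.diag(s_diag)) @ e
assert np.isclose(d_t, d_full)
```

The diagonal assumption is what makes the per-pixel gating in the next section cheap: each pixel's distance can be thresholded independently.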
In the proposed SR algorithm, we attempt to detect 'misalignment' at the pixel level rather than the frame level: we want to exclude only those pixels that are misaligned in the measured frame, not all of the pixels in that frame.
Whenever the pixel data from the input LR image arrives at each time instant, each pixel's statistical distance is compared against a threshold; in other words, only those pixels whose distance is below the threshold are considered valid. This procedure regards pixels lying outside the validation region as outliers, i.e., misaligned pixels. This is the so-called 'measurement validation' method, and it is applied immediately before the pixel-data fusion process in Equations 5 and 6 of our SR approach, illustrated in Figure 4.
In the proposed measurement validation method, only valid pixel values are used in the update equations. In our implementation, after a new measurement is obtained, we compute the statistical distance of each of its pixels and compare it with the threshold. Once the misaligned pixels among the MN pixels are determined, we prevent them from being used in the update equations by setting to zero those elements of K_t whose indices correspond to the misaligned pixels.
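The gain-zeroing step can be sketched as follows. This is a hedged illustration: the gain vector, distances, and threshold are made-up values, the real algorithm operates on full MN-pixel frames, and the scalar-per-pixel gain reflects the diagonal model:

```python
import numpy as np

# Excluding misaligned pixels from the Kalman update by zeroing the
# corresponding elements of the gain K_t (diagonal model: one scalar
# gain per pixel). All values are illustrative.
K_t = np.full(6, 0.5)                            # per-pixel Kalman gains
d_k = np.array([0.2, 0.4, 9.0, 0.1, 12.0, 0.3])  # per-pixel distances
threshold = 3.84                                 # e.g. 95% point of chi-square(1)

invalid = d_k > threshold                        # pixels flagged as misaligned
K_t[invalid] = 0.0                               # their measurements are ignored

# Update x = x_pred + K_t * innovation leaves flagged pixels untouched.
x_pred = np.zeros(6)
innovation = np.ones(6)
x_new = x_pred + K_t * innovation
assert np.all(x_new[invalid] == x_pred[invalid])
```

Zeroing the gain rather than dropping pixels keeps the update equations' shapes unchanged, which is convenient in a recursive implementation.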
The chi-square distribution table gives the probability mass corresponding to a chosen threshold, and the threshold in the proposed method is set to a fixed value on that basis. The threshold value is therefore not directly related to the image dynamic range, but to the range of the statistical distance of the image pixels. The larger the selected threshold, the wider the validation region; in other words, the probability that measured pixels are classified as misaligned decreases as the threshold grows.
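Assuming the per-pixel distance follows a chi-square distribution with one degree of freedom (a scalar distance per pixel), the threshold for a desired validation probability can be computed without a table, via the identity P(chi2_1 <= T) = 2*Phi(sqrt(T)) - 1. The helper name below is illustrative:

```python
from statistics import NormalDist

# Threshold for a 1-dof chi-square validation region:
# solve P(d <= T) = p for T, using T = (Phi^{-1}((1+p)/2))^2.
def chi2_1dof_threshold(p):
    return NormalDist().inv_cdf((1.0 + p) / 2.0) ** 2

# A larger threshold widens the validation region: fewer pixels are
# rejected as misaligned.
t95 = chi2_1dof_threshold(0.95)   # ~3.84
t99 = chi2_1dof_threshold(0.99)   # ~6.63
assert t99 > t95
```

This makes the trade-off in the text concrete: raising the coverage probability from 95% to 99% widens the region from roughly 3.84 to roughly 6.63.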
Since the dynamic SR algorithm recursively fuses pixel data from sequentially observed images, an erroneous HR estimate is highly likely when the scenes or contents of two adjacent frames are totally different. This problem arises frequently when the input LR video contains many different scenes or when the motions in it are too large to be estimated. No motion exists between frames from different scenes, so such frames can never be aligned correctly. Even though the measurement validation method can detect and filter out misaligned pixels, fusing pixels from two different scenes is not a desirable situation.
Instead of applying one of the conventional scene-change detection methods [21, 22], we suggest a simple but effective way to detect a sudden scene change in the input LR video by exploiting the statistical distance discussed in the previous section. In this article, the threshold value Th was determined experimentally, using more than ten real videos containing scene changes. If a sudden scene change is detected with this method, we reset the estimation process, i.e., re-initialize the HR estimate from the current frame.
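One plausible reading of this test, counting the fraction of pixels flagged as outliers and comparing it against Th, can be sketched as follows. The function name, the threshold values, and the exact criterion are illustrative and may differ from the paper's:

```python
import numpy as np

# Scene-change detection via the fraction of pixels whose statistical
# distance exceeds the validation threshold. If that fraction exceeds
# Th, the recursive estimate should be reset.
def scene_changed(d_k, valid_thresh, Th):
    return np.mean(d_k > valid_thresh) > Th

d_same = np.array([0.5, 1.2, 0.8, 2.0])     # mostly small distances
d_cut  = np.array([50.0, 42.0, 61.0, 3.0])  # almost every pixel an outlier

assert not scene_changed(d_same, valid_thresh=3.84, Th=0.5)
assert scene_changed(d_cut, valid_thresh=3.84, Th=0.5)
```

Reusing the already-computed per-pixel distances means the scene-change check adds essentially no cost on top of the measurement validation.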
The procedure is summarized in Figure 4. We evaluated the performance of the proposed dynamic SR algorithm on synthetic and real video data. The threshold for measurement validation was set as described in the previous section. For the deblurring step at the end of the proposed SR algorithm, we used the classical but effective Wiener filter with a constant noise-to-signal ratio (NSR) to reduce computational complexity.
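A minimal frequency-domain Wiener deblurring sketch with a constant NSR follows; the blur kernel, image, and NSR value are illustrative, not the experimental settings of the paper:

```python
import numpy as np

# Wiener deblurring with a constant noise-to-signal ratio:
# X_hat = conj(H) / (|H|^2 + NSR) * Y in the frequency domain.
def wiener_deblur(y, psf, nsr):
    H = np.fft.fft2(psf, s=y.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter transfer function
    return np.real(np.fft.ifft2(G * np.fft.fft2(y)))

rng = np.random.default_rng(1)
x = rng.random((32, 32))                       # "sharp" test image
psf = np.ones((3, 3)) / 9.0                    # simple box blur (illustrative)
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(psf, s=x.shape)))

x_hat = wiener_deblur(y, psf, nsr=1e-3)
# The restored image is closer to the original than the blurred one.
assert np.mean((x_hat - x) ** 2) < np.mean((y - x) ** 2)
```

The constant NSR replaces the per-frequency noise/signal spectra of the full Wiener filter, which is exactly the computational shortcut the text describes.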
The parameter NSR for the Wiener filter was tuned to obtain the best performance in all experiments. In this experiment, we tested the proposed algorithm with synthetic LR video data, generating LR color videos by simulating the image acquisition procedure described in Section 2. The test video in Figure 5 was downloaded from the website of the author of [8], and the test videos in Figures 6 and 7 were captured by a commercial surveillance camera (model SHCN, courtesy of Samsung Techwin Co.).
The test LR videos are super-resolved by a factor of two through the proposed algorithm and the method in [8].

(Figure captions: Figure 5, the synthetic webcam video data result, (a) original frame; Figure 6, the synthetic surveillance video data result, (a) original frame; Figure 7, the synthetic video data result, (a) bicubic interpolated frame.)

The PSNR values of the reconstructions were measured for comparison. According to [8], they used the image registration algorithm in [23], which is different from the algorithm we exploited.
As mentioned in earlier sections and in previous related studies, the major factor determining the reconstruction quality of a multi-frame SR algorithm is the accuracy of the image registration. Thus, if a different image registration algorithm is used in the reference method, we cannot attribute the improved HR image entirely to the proposed measurement validation.
For a fair comparison, we also implemented the method in [8] using the frequency-domain image registration algorithm [18] that is used in the proposed method. We therefore compared the proposed method against two reference methods: one from the author's website and one from our own implementation with the modified image registration part. In addition, we applied the Wiener filter to the method in [8] instead of the bilateral total-variation (BTV) regularization, to isolate the effect of the measurement validation.
The images in Figure 5 are the 90th frames and those in Figure 6 are the 60th frames of each input video. In the reconstructed HR frames in Figures 5 and 6, there are artifacts caused by motion estimation error, such as periodic teeth along horizontal and vertical lines or staircase phenomena along diagonal lines.
The motion estimation error may become large when the image is too small or the motion is too large. Because the only difference between the methods in Figure 5d,e is the image registration algorithm, the slightly better quality of Figure 5e can be attributed to the better performance of the algorithm in [18]. As shown in Figures 5f and 6f, the image quality of the HR result with the proposed method is enhanced further relative to the results in Figures 5e and 6e.
Compared to the results obtained with the method in [8], the jaggedness of edges and corners is substantially reduced. Even though the same image registration algorithm was used for the results in Figure 5e,f, the result obtained with the proposed method is visually superior, demonstrating the effectiveness of the proposed measurement validation method. The same analysis applies to the results in Figure 6. In the experiment corresponding to Figure 7, we enhanced the spatial resolution of the LR video by a factor of two.
There is little difference in performance between the results obtained with and without the measurement validation (Figure 7c,d, respectively) because the image registration was quite accurate. To test the performance of the measurement validation, we intentionally added alignment errors to the aligned LR frames beyond the 60th frame. The HR image at the 90th frame without the measurement validation (Figure 8a) was significantly degraded by the registration errors.
In contrast, the HR image obtained with the measurement validation was much less affected by the registration errors, as shown in Figure 8b. In Figure 8c, one can see that the number of misaligned pixels determined by the threshold in Equation 13 increases after the 60th frame.
This indicates that the measurement validation method becomes more effective as the amount of image registration error grows.
(Figure 8 caption: the synthetic video data result; (a) reconstructed 90th HR frame using the method without measurement validation. Registration errors were artificially added from the 60th to the 90th frame.)

In the next experiment, our algorithm is evaluated with real video data captured by a surveillance camera, courtesy of Adyoron Intelligent Systems Ltd. We increased the spatial resolution of the real LR video by a factor of two in the vertical and horizontal directions.
Figure 9d demonstrates the superior performance of the proposed algorithm compared to the conventional methods in Figure 9b,c. In particular, the jagged edges caused by erroneous translational motion estimation, visible in Figure 9c, are clearly reduced. This is the contribution of the measurement validation process.
(Figure 9 caption: real video data result; (a) bicubic interpolated frame. Note that the artifacts caused by misalignment around the edges are effectively removed in (d).)

For small input sizes, the effect of filtering misaligned pixels becomes even more pronounced, as shown in the experimental results of Figure 10. In general, precise motion estimation is more difficult when the input image is small, since fewer pixels, i.e., less data, are available for estimating the motion.
The visual quality of the results without the measurement validation (Figure 10c,g) is worse than that of the bicubic interpolated results (Figure 10b,f). Assuming that a sufficient number of LR frames is available and a proper image registration algorithm compensates the motion among the LR frames, multi-frame SR generally outperforms single-image interpolation. In the extreme case where the LR frames are not registered at all, the estimated HR image will be worse than the bicubic interpolation result. However, if we apply the measurement validation while still not registering the LR frames, the HR result will remain almost the same as the initial estimate, since most of the unregistered LR pixels will be regarded as invalid.
Thus, if we set the initial estimated HR image to the bicubic interpolation of the initial LR frame, the HR image obtained with the proposed method cannot be worse than the bicubic interpolation result, even when most of the LR data are excluded.