Determine the structural and gradient similarity using super resolution algorithm

Project period

06/08/2017 - 07/07/2017


The central aim of super-resolution (SR) is to generate a higher-resolution image from lower-resolution images. A high-resolution image offers a high pixel density and thereby more detail about the original scene. In this project, image super-resolution is performed based on gradient magnitude and direction. To allow quality assessment of the results, a comparison of a variety of image quality measures is also performed. Besides visual inspection, objective image quality measurements, including the correlation coefficient, peak signal-to-noise ratio, and mean structural similarity, are also presented. Image quality assessment plays an important role in various image processing applications, and a great deal of effort has been made in recent years to develop objective image quality metrics that correlate with perceived quality. A main function of the human eye is to extract structural information from the viewing field, and the human visual system is highly adapted for this purpose; a measurement of structural distortion should therefore be a good approximation of perceived image distortion.
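
To make the first of these metrics concrete, the following is a minimal Java sketch of full-reference MSE and PSNR, assuming both images are the same size and are compared on a BT.601 luminance channel. The class and method names are illustrative, not taken from the project code.

import java.awt.image.BufferedImage;

public class Psnr {

    // Mean squared error over the luma values of two equally sized images.
    static double mse(BufferedImage ref, BufferedImage test) {
        double sum = 0.0;
        for (int y = 0; y < ref.getHeight(); y++) {
            for (int x = 0; x < ref.getWidth(); x++) {
                double d = luma(ref.getRGB(x, y)) - luma(test.getRGB(x, y));
                sum += d * d;
            }
        }
        return sum / (ref.getWidth() * ref.getHeight());
    }

    // PSNR in decibels for 8-bit images (peak value 255).
    // Undefined (infinite) when the images are identical, i.e. MSE = 0.
    static double psnr(BufferedImage ref, BufferedImage test) {
        return 10.0 * Math.log10((255.0 * 255.0) / mse(ref, test));
    }

    // ITU-R BT.601 luma from a packed ARGB pixel.
    static double luma(int argb) {
        int r = (argb >> 16) & 0xFF, g = (argb >> 8) & 0xFF, b = argb & 0xFF;
        return 0.299 * r + 0.587 * g + 0.114 * b;
    }
}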

Image quality assessment (IQA) techniques are useful in many applications, such as image acquisition, watermarking, compression, transmission, restoration, enhancement, and reproduction. The image super-resolution technique aims to construct a high-resolution (HR) image from one or several given low-resolution (LR) images. It has been widely used in various applications, including medical image processing, infrared imaging, face/iris recognition, and image editing. According to the number of available LR images, super-resolution algorithms can be classified into two categories: multi-frame and single-frame approaches. Traditional interpolation-based methods try to reconstruct the HR image with a basis function, using bilinear, bicubic, or nearest-neighbor algorithms. Full-reference IQA metrics such as PSNR and SSIM are used to evaluate the visual quality of the HR images.
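
These interpolation baselines are easy to reproduce. A minimal sketch in Java, using the standard Graphics2D rendering hints for the three classical kernels; the method name and integer scale factor are assumptions for illustration:

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class Upscale {
    // Enlarge an LR image by an integer factor using one of the classical
    // interpolation kernels (nearest neighbor, bilinear, bicubic).
    static BufferedImage resize(BufferedImage lr, int factor, Object interpolation) {
        int w = lr.getWidth() * factor, h = lr.getHeight() * factor;
        BufferedImage hr = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = hr.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION, interpolation);
        g.drawImage(lr, 0, 0, w, h, null);
        g.dispose();
        return hr;
    }
}

For example, resize(lr, 4, RenderingHints.VALUE_INTERPOLATION_BICUBIC) produces the bicubic baseline against which an SR result can be compared.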

Why: Problem statement

As described above, super-resolution constructs an HR image from one or more LR images, and full-reference IQA metrics such as PSNR and SSIM are needed to judge how well it does so. The goal of IQA is to calculate the extent of quality degradation; it is thus used to evaluate and compare the performance of processing systems and to optimize the choice of processing parameters. Almost all images contain some distortion, such as noise, blur, or contrast change, and these distortions can degrade the overall quality of the image. For example, in face or iris recognition, a captured image that contains distortions may fail to match the original image stored in the database. Measuring the quality of the image in such applications is therefore essential.

How: Solution description

The possibility of reconstructing a super-resolved image from a set of images was first proposed by Tsai and Huang, although the general sampling theorems previously formulated by Yen and Papoulis had already suggested the same concept from a theoretical point of view. When Tsai and Huang originally proposed the idea of SR reconstruction, they worked in the frequency domain, demonstrating that an image of improved resolution can be reconstructed from several undersampled, noise-free low-resolution images of the same scene by exploiting the spatial aliasing effect.

They assume a purely translational model and solve the dual problem of registration and restoration: registration means estimating the relative shifts among the observations, and restoration means estimating samples on a uniform grid with a higher sampling rate. The restoration stage is therefore an interpolation problem dealing with nonuniform sampling. From the Tsai and Huang proposal until the present day, several research groups have developed different algorithms for this reconstruction task, derived from different strategies or analyses of the problem. The great advances in computer technology in recent years have led to a renewed and growing interest in the theory of image restoration. The main approaches are based on a nontraditional treatment of the classical restoration problem, oriented towards new, second-generation restoration problems, and on algorithms that are more complex and exhibit a higher computational cost. Based on the resulting image, these second-generation algorithms can be classified into problems of single-image restoration, restoration of an image sequence, and reconstruction of an SR-improved image.
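
Tsai and Huang's registration works in the frequency domain; as a simplified spatial-domain stand-in, the sketch below estimates a purely translational integer shift by brute-force search over a small radius, keeping the shift that minimizes the sum of squared differences (SSD). It assumes grayscale inputs of equal size and is illustrative only, not the original method (which also handles subpixel shifts):

import java.awt.image.BufferedImage;

public class Register {
    // Returns {dx, dy}, the estimated shift of image b relative to image a.
    static int[] estimateShift(BufferedImage a, BufferedImage b, int radius) {
        int bestDx = 0, bestDy = 0;
        double best = Double.MAX_VALUE;
        for (int dy = -radius; dy <= radius; dy++) {
            for (int dx = -radius; dx <= radius; dx++) {
                double ssd = 0.0;
                // Interior region only, so x+dx and y+dy stay in bounds.
                for (int y = radius; y < a.getHeight() - radius; y++) {
                    for (int x = radius; x < a.getWidth() - radius; x++) {
                        int ga = a.getRGB(x, y) & 0xFF;           // gray value
                        int gb = b.getRGB(x + dx, y + dy) & 0xFF;
                        ssd += (ga - gb) * (ga - gb);
                    }
                }
                if (ssd < best) { best = ssd; bestDx = dx; bestDy = dy; }
            }
        }
        return new int[] { bestDx, bestDy };
    }
}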

In the scientific literature, several algorithms have been proposed for this classical problem and for problems related to it, contributing to a unified theory that comprises many of the existing restoration methods. In image restoration theory, three main approaches are widely used to obtain reliable restoration algorithms: maximum likelihood estimation (MLE), maximum a posteriori (MAP) probability, and projection onto convex sets (POCS). An alternative classification, based on the processing approach, divides SR work into two main categories: reconstruction-based methods and learning-based methods. The theoretical foundations of reconstruction-based methods are nonuniform sampling theorems, while learning-based methods employ generative models learned from samples. The goal of the former is to reconstruct the original (supersampled) signal, while that of the latter is to create the signal from learned generative models.
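
For reference, the standard MAP formulation can be written as follows; the symbols are generic rather than specific to this project, with y_1, ..., y_K the K observed LR frames and x the unknown HR image:

\hat{\mathbf{x}}_{\mathrm{MAP}}
  = \arg\max_{\mathbf{x}} \, p(\mathbf{x} \mid \mathbf{y}_1, \dots, \mathbf{y}_K)
  = \arg\max_{\mathbf{x}} \left[ \sum_{k=1}^{K} \log p(\mathbf{y}_k \mid \mathbf{x})
      + \log p(\mathbf{x}) \right]

Dropping the prior term \log p(\mathbf{x}) yields the MLE solution; POCS instead encodes each piece of prior knowledge as a convex constraint set and finds a solution by iterative projection onto those sets.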

How is it different from the competition

Here we define the quality of a super-resolved HR image by how similar it is to the true HR image, so similarity measures are used as indicators of quality. The SR reconstruction process introduces distortion, such as noise and artifacts like ringing, and a good objective quality metric for SR should account for these artifacts. In this project, we use three different metrics to compare the results: peak signal-to-noise ratio, correlation coefficient, and structural similarity. All of them are full-reference metrics, meaning that a complete reference image is assumed to be known. This makes them suitable for comparing the various algorithms.
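
PSNR was sketched earlier; the other two metrics can be sketched as follows, in Java, assuming 8-bit grayscale values flattened to double arrays. Note that the SSIM here is a simplified global version computed from whole-image statistics, purely to illustrate the formula; standard mean SSIM averages the index over local sliding windows.

public class Similarity {
    // Pearson correlation coefficient between reference and test images.
    static double correlation(double[] ref, double[] test) {
        double mr = mean(ref), mt = mean(test), num = 0, dr = 0, dt = 0;
        for (int i = 0; i < ref.length; i++) {
            num += (ref[i] - mr) * (test[i] - mt);
            dr  += (ref[i] - mr) * (ref[i] - mr);
            dt  += (test[i] - mt) * (test[i] - mt);
        }
        return num / Math.sqrt(dr * dt);
    }

    // Global SSIM with the usual constants C1, C2 for 8-bit dynamic range.
    static double ssimGlobal(double[] ref, double[] test) {
        final double C1 = Math.pow(0.01 * 255, 2), C2 = Math.pow(0.03 * 255, 2);
        double mr = mean(ref), mt = mean(test), vr = 0, vt = 0, cov = 0;
        for (int i = 0; i < ref.length; i++) {
            vr  += (ref[i] - mr) * (ref[i] - mr);
            vt  += (test[i] - mt) * (test[i] - mt);
            cov += (ref[i] - mr) * (test[i] - mt);
        }
        vr /= ref.length - 1; vt /= ref.length - 1; cov /= ref.length - 1;
        return ((2 * mr * mt + C1) * (2 * cov + C2))
             / ((mr * mr + mt * mt + C1) * (vr + vt + C2));
    }

    static double mean(double[] v) {
        double s = 0; for (double x : v) s += x; return s / v.length;
    }
}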

Who are your customers

In this project, a particular image is selected from the frames extracted from a video, its quality is improved using the super-resolution algorithm, and the structural similarity, PSNR value, correlation coefficient, and mean squared error are computed through image quality assessment. A full-reference IQA model based on gradient similarity is used, where gradient similarity measures structural distortions. The system is therefore relevant to anyone applying IQA in practice, such as in image acquisition, watermarking, compression, transmission, restoration, enhancement, and reproduction.
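
The gradient features that such a model consumes can be computed with Sobel operators, as in the hedged Java sketch below: the 2-D array holds grayscale intensities, and the result is the per-pixel gradient magnitude and direction. How the two gradient fields are then combined into a similarity score depends on the chosen gradient-similarity model and is not shown here.

public class Gradient {
    // Returns {magnitude, direction} maps; one-pixel border is left at zero.
    static double[][][] magnitudeAndDirection(double[][] img) {
        int h = img.length, w = img[0].length;
        double[][] mag = new double[h][w], dir = new double[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                // Sobel responses in the x and y directions.
                double gx = img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                          - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1];
                double gy = img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                          - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1];
                mag[y][x] = Math.hypot(gx, gy);
                dir[y][x] = Math.atan2(gy, gx);   // radians in (-pi, pi]
            }
        }
        return new double[][][] { mag, dir };
    }
}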

Project Phases and Schedule

Phase 1: Data collection
Phase 2: Designing
Phase 3: Testing method
Phase 4: Documentation process

Resources Required

Hardware Requirements
Processor       -   Pentium III
Speed           -   1.1 GHz
RAM             -   256 MB (minimum)
Hard Disk       -   20 GB
Floppy Drive    -   1.44 MB
Keyboard        -   Standard Windows keyboard
Mouse           -   Two- or three-button mouse
Monitor         -   SVGA
 
Software Requirements
Operating System    -   Windows 95/98/2000/XP
Language            -   Java JDK 1.3
Development IDE     -   NetBeans IDE
Back End            -   SQL Server 2005
