Results 1 - 10 of 66
Super-Resolution from a Single Image
"... Methods for super-resolution can be broadly classified into two families of methods: (i) The classical multi-image super-resolution (combining images obtained at subpixel misalignments), and (ii) Example-Based super-resolution (learning correspondence between low and high resolution image patches fr ..."
Abstract
-
Cited by 139 (5 self)
- Add to MetaCart
(Show Context)
Methods for super-resolution can be broadly classified into two families of methods: (i) the classical multi-image super-resolution (combining images obtained at subpixel misalignments), and (ii) example-based super-resolution (learning correspondence between low- and high-resolution image patches from a database). In this paper we propose a unified framework for combining these two families of methods. We further show how this combined approach can be applied to obtain super-resolution from as little as a single image (with no database or prior examples). Our approach is based on the observation that patches in a natural image tend to redundantly recur many times inside the image, both within the same scale and across different scales. Recurrence of patches within the same image scale (at subpixel misalignments) gives rise to the classical super-resolution, whereas recurrence of patches across different scales of the same image gives rise to example-based super-resolution. Our approach attempts to recover at each pixel its best possible resolution increase based on its patch redundancy within and across scales.
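To make the cross-scale recurrence idea concrete, here is a minimal, illustrative Python sketch (not the authors' algorithm): each patch of the input is matched, by brute force, against a coarser copy of the same image, and the corresponding higher-resolution "parent" patch from the original image is pasted into the upscaled result. The function names, patch sizes, and stride below are our own placeholder choices.

```python
import numpy as np
from scipy.ndimage import zoom

def best_match(patch, bank):
    """Index of the most similar patch in `bank` (sum of squared differences)."""
    d = ((bank - patch.reshape(1, -1)) ** 2).sum(axis=1)
    return int(np.argmin(d))

def single_image_sr(img, scale=2.0, p=5):
    """Toy cross-scale example-based SR on a 2D grayscale float image `img`."""
    h, w = img.shape
    small = zoom(img, 1.0 / scale, order=3)        # a coarser scale of the same image
    # Bank of p x p patches from the coarse image, with their top-left locations.
    locs, bank = [], []
    for i in range(small.shape[0] - p + 1):
        for j in range(small.shape[1] - p + 1):
            bank.append(small[i:i + p, j:j + p].ravel())
            locs.append((i, j))
    bank = np.asarray(bank)

    hi = np.zeros((int(h * scale), int(w * scale)))
    weight = np.zeros_like(hi)
    P = int(p * scale)                             # size of the high-res "parent" patch
    for i in range(0, h - p + 1, 2):               # stride 2: brute-force search is slow
        for j in range(0, w - p + 1, 2):
            # Find a similar patch in the coarse image ...
            k = best_match(img[i:i + p, j:j + p].ravel(), bank)
            si, sj = locs[k]
            # ... its "parent" in the original image serves as the high-res example.
            parent = img[int(si * scale):int(si * scale) + P,
                         int(sj * scale):int(sj * scale) + P]
            ti, tj = int(i * scale), int(j * scale)
            ph, pw = parent.shape
            hi[ti:ti + ph, tj:tj + pw] += parent
            weight[ti:ti + ph, tj:tj + pw] += 1.0
    return hi / np.maximum(weight, 1.0)
```

In this toy version uncovered border pixels simply stay at zero and overlapping parents are averaged; the paper additionally exploits subpixel recurrences within the same scale (the classical-SR component) when recovering the high-resolution image.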
Image super-resolution as sparse representation of raw image patches
2008
"... This paper addresses the problem of generating a superresolution (SR) image from a single low-resolution input image. We approach this problem from the perspective of compressed sensing. The low-resolution image is viewed as downsampled version of a high-resolution image, whose patches are assumed t ..."
Abstract
-
Cited by 135 (19 self)
- Add to MetaCart
(Show Context)
This paper addresses the problem of generating a super-resolution (SR) image from a single low-resolution input image. We approach this problem from the perspective of compressed sensing. The low-resolution image is viewed as a downsampled version of a high-resolution image, whose patches are assumed to have a sparse representation with respect to an over-complete dictionary of prototype signal atoms. The principle of compressed sensing ensures that under mild conditions, the sparse representation can be correctly recovered from the downsampled signal. We demonstrate the effectiveness of sparsity as a prior for regularizing the otherwise ill-posed super-resolution problem. We further show that a small set of randomly chosen raw patches from training images of similar statistical nature to the input image generally serves as a good dictionary, in the sense that the computed representation is sparse and the recovered high-resolution image is competitive with, or even superior in quality to, images produced by other SR methods.
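Assuming a coupled pair of dictionaries is already trained, a low-resolution dictionary `D_l` and a high-resolution dictionary `D_h` with column-aligned atoms, the per-patch recovery described above might be sketched as follows. Greedy orthogonal matching pursuit stands in here for the l1-based sparse recovery; the variable names and sparsity level are placeholders.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def sr_patch(y, D_l, D_h, n_nonzero=3):
    """Recover one high-resolution patch from a low-resolution patch.

    y   : flattened low-resolution patch (mean removed), shape (m,)
    D_l : low-resolution dictionary, shape (m, K)
    D_h : high-resolution dictionary, shape (M, K), columns aligned with D_l
    """
    # Sparse code of the low-res patch with respect to the low-res dictionary
    # (OMP used as a simple stand-in for the l1 minimisation in the paper).
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D_l, y)
    alpha = omp.coef_
    # Applying the same sparse code to the high-res dictionary yields the SR patch.
    return D_h @ alpha
```

Overlapping patches are typically recovered independently and then averaged or blended in the overlap regions to form the final high-resolution image.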
Super-resolution from multiple views using learnt image models
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2001
"... ..."
(Show Context)
Space-time super-resolution
IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2005
"... We propose a method for constructing a video sequence of high space-time resolution by combining information from multiple low-resolution video sequences of the same dynamic scene. Super-resolution is performed simultaneously in time and in space. By âtemporal super-resolutionâ we mean recoverin ..."
Abstract
-
Cited by 65 (2 self)
- Add to MetaCart
(Show Context)
We propose a method for constructing a video sequence of high space-time resolution by combining information from multiple low-resolution video sequences of the same dynamic scene. Super-resolution is performed simultaneously in time and in space. By “temporal super-resolution” we mean recovering rapid dynamic events that occur faster than the regular frame-rate. Such dynamic events are not visible (or else are observed incorrectly) in any of the input sequences, even if these are played in “slow motion”. The spatial and temporal dimensions are very different in nature, yet are interrelated. This leads to interesting visual tradeoffs in time and space, and to new video applications. These include: (i) treatment of spatial artifacts (e.g., motion blur) by increasing the temporal resolution, and (ii) combination of input sequences of different space-time resolutions (e.g., NTSC, PAL, and even high-quality still images) to generate a high-quality video sequence. We further analyze and compare characteristics of temporal super-resolution to those of spatial super-resolution. These include: How many video cameras are needed to obtain increased resolution? What is the upper bound on resolution improvement via super-resolution? What is the optimal camera configuration for various scenarios? What is the temporal analogue to the spatial “ringing” effect?
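In schematic form (our notation, not the paper's), each low-resolution input sequence can be viewed as a space-time blurred, aligned, and subsampled version of one unknown high-resolution space-time volume, and reconstruction amounts to a regularised linear inverse problem:

```latex
% Each input sequence s_l^i is a blurred, aligned and subsampled view of the
% unknown high space-time resolution volume s_h; A_i collects the space-time
% blur, geometric/temporal alignment and sampling of camera i.
\mathbf{s}_l^{\,i} = A_i\,\mathbf{s}_h + \mathbf{n}_i, \qquad i = 1,\dots,N .
% Space-time super-resolution then solves a regularised least-squares problem,
\hat{\mathbf{s}}_h = \arg\min_{\mathbf{s}_h}\;
    \sum_{i=1}^{N} \bigl\| A_i\,\mathbf{s}_h - \mathbf{s}_l^{\,i} \bigr\|_2^{2}
    \;+\; \lambda\,\bigl\| R\,\mathbf{s}_h \bigr\|_2^{2},
% where R is a space-time smoothness operator and lambda its weight.
```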
Computer Vision Applied to Super-Resolution
IEEE Signal Processing Magazine, 2003
"... ..."
(Show Context)
Robust fusion of irregularly sampled data using adaptive normalized convolution
EURASIP Journal on Applied Signal Processing, 2006
"... We present a novel algorithm for image fusion from irregularly sampled data. The method is based on the framework of normalized convolution (NC), in which the local signal is approximated through a projection onto a subspace. The use of polynomial basis functions in this paper makes NC equivalent to ..."
Abstract
-
Cited by 38 (5 self)
- Add to MetaCart
(Show Context)
We present a novel algorithm for image fusion from irregularly sampled data. The method is based on the framework of normalized convolution (NC), in which the local signal is approximated through a projection onto a subspace. The use of polynomial basis functions in this paper makes NC equivalent to a local Taylor series expansion. Unlike the traditional framework, however, the window function of adaptive NC is adapted to local linear structures. This leads to more samples of the same modality being gathered for the analysis, which in turn improves signal-to-noise ratio and reduces diffusion across discontinuities. A robust signal certainty is also adapted to the sample intensities to minimize the influence of outliers. The excellent fusion capability of adaptive NC is demonstrated through an application to super-resolution image reconstruction.
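As a rough illustration of the underlying machinery, the following Python sketch implements ordinary (non-adaptive) normalized convolution with a first-order polynomial basis at a single output location, i.e. a certainty-weighted local Taylor fit; the structure-adaptive window shaping and robust certainty of the paper are omitted, and all names and parameters are illustrative.

```python
import numpy as np

def nc_estimate(x, y, f, c, x0, y0, sigma=1.5):
    """Normalized-convolution estimate of the signal value at (x0, y0).

    x, y : coordinates of the irregular samples
    f    : sample intensities
    c    : sample certainties (0 = missing or unreliable)
    Local model: f ~ d0 + d1*(x - x0) + d2*(y - y0); the estimate is d0.
    """
    dx, dy = x - x0, y - y0
    # Applicability (window) function: an isotropic Gaussian here; adaptive NC
    # would instead elongate this window along the local linear structure.
    a = np.exp(-(dx**2 + dy**2) / (2.0 * sigma**2))
    w = a * c                                          # applicability times certainty
    B = np.stack([np.ones_like(dx), dx, dy], axis=1)   # polynomial basis {1, x, y}
    # Weighted least squares:  (B^T W B) d = B^T W f
    BtW = B.T * w
    d = np.linalg.solve(BtW @ B, BtW @ f)
    return d[0]
```

Sweeping (x0, y0) over a regular output grid turns this local fit into a full fusion or reconstruction step.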
A sampled texture prior for image super-resolution
In Advances in Neural Information Processing Systems (NIPS)
"... Super-resolution aims to produce a high-resolution image from a set of one or more low-resolution images by recovering or inventing plausible high-frequency image content. Typical approaches try to reconstruct a high-resolution image using the sub-pixel displacements of several low-resolution images ..."
Abstract
-
Cited by 32 (3 self)
- Add to MetaCart
(Show Context)
Super-resolution aims to produce a high-resolution image from a set of one or more low-resolution images by recovering or inventing plausible high-frequency image content. Typical approaches try to reconstruct a high-resolution image using the sub-pixel displacements of several low-resolution images, usually regularized by a generic smoothness prior over the high-resolution image space. Other methods use training data to learn low-to-high-resolution matches, and have been highly successful even in the single-input-image case. Here we present a domain-specific image prior in the form of a p.d.f. based upon sampled images, and show that for certain types of super-resolution problems, this sample-based prior gives a significant improvement over other common multiple-image super-resolution techniques.
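In MAP terms (our notation), the two kinds of approaches mentioned above share the same data term and differ only in the image prior:

```latex
% K low-resolution observations y_k of an unknown high-resolution image x:
\hat{\mathbf{x}} = \arg\max_{\mathbf{x}}
    \Bigl[\, \sum_{k=1}^{K} \log p(\mathbf{y}_k \mid \mathbf{x})
           \;+\; \log p(\mathbf{x}) \,\Bigr].
% A generic smoothness prior takes a form such as
%     log p(x) = -lambda * ||D x||^2 + const   (D a derivative operator),
% whereas the sampled texture prior replaces p(x) with a p.d.f. estimated from
% patches sampled from example images of the relevant texture class.
```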
Unwrap mosaics: A new representation for video editing
Proc. SIGGRAPH ’08, 2008
"... We introduce a new representation for video which facilitates a number of common editing tasks. The representation has some of the power of a full reconstruction of 3D surface models from video, but is designed to be easy to recover from a priori unseen and uncalibrated footage. By modelling the ima ..."
Abstract
-
Cited by 23 (2 self)
- Add to MetaCart
We introduce a new representation for video which facilitates a number of common editing tasks. The representation has some of the power of a full reconstruction of 3D surface models from video, but is designed to be easy to recover from a priori unseen and uncalibrated footage. By modelling the image-formation process as a 2D-to-2D transformation from an object’s texture map to the image, modulated by an object-space occlusion mask, we can recover a representation which we term the “unwrap mosaic”. Many editing operations can be performed on the unwrap mosaic, and then re-composited into the original sequence, for example resizing objects, repainting textures, copying/cutting/pasting objects, and attaching effects layers to deforming objects.
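Schematically, and simplified here to a single object layer in our own notation, the image-formation model described in the abstract can be written as:

```latex
% C(u)   : the object's texture map (the "unwrap mosaic")
% w_t(u) : 2D-to-2D mapping from texture coordinates u to image coordinates in frame t
% b_t(u) : object-space visibility (occlusion) mask
% rho    : a small reconstruction (splatting) kernel
I_t(\mathbf{x}) \;\approx\;
    \sum_{\mathbf{u}} b_t(\mathbf{u})\, C(\mathbf{u})\,
    \rho\bigl(\mathbf{x} - w_t(\mathbf{u})\bigr).
% Edits made to C propagate to every frame through the recovered (w_t, b_t).
```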