Results 1-10 of 292
A fast learning algorithm for deep belief nets
 Neural Computation
, 2006
Cited by 970 (49 self)
Abstract:
We show how to use “complementary priors” to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modelled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
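As a rough illustration of the greedy layer-wise idea (a minimal sketch, not the paper's implementation: it uses one-step contrastive divergence on toy binary data, with arbitrary layer sizes and learning rate):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=5, lr=0.1):
    """One-step contrastive divergence (CD-1) for a binary RBM."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    for _ in range(epochs):
        # Positive phase: hidden probabilities given the data.
        h_prob = sigmoid(data @ W + b_h)
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        # Negative phase: one reconstruction step.
        v_recon = sigmoid(h_sample @ W.T + b_v)
        h_recon = sigmoid(v_recon @ W + b_h)
        # CD-1 update: data correlations minus reconstruction correlations.
        W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
        b_v += lr * (data - v_recon).mean(axis=0)
        b_h += lr * (h_prob - h_recon).mean(axis=0)
    return W, b_h

# Greedy layer-wise stacking: each layer is trained on the previous
# layer's hidden activations.
X = (rng.random((64, 20)) < 0.5).astype(float)
reps, layers = X, []
for n_h in (16, 8):
    W, b_h = train_rbm(reps, n_h)
    layers.append(W)
    reps = sigmoid(reps @ W + b_h)
```

Each RBM only ever sees the representation produced by the layer below it, which is what makes the procedure greedy and fast.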
Removing camera shake from a single photograph
 ACM Trans. Graph
, 2006
Cited by 325 (16 self)
Abstract:
Camera shake during exposure leads to objectionable image blur and ruins many photographs. Conventional blind deconvolution methods typically assume frequency-domain constraints on images, or overly simplified parametric forms for the motion path during camera shake. Real camera motions can follow convoluted paths, and a spatial-domain prior can better maintain visually salient image characteristics. We introduce a method to remove the effects of camera shake from seriously blurred images. The method assumes a uniform camera blur over the image and negligible in-plane camera rotation. In order to estimate the blur from the camera shake, the user must specify an image region without saturation effects. We show results for a variety of digital photographs taken from personal photo collections.
Robust Higher Order Potentials for Enforcing Label Consistency
, 2009
Cited by 259 (34 self)
Abstract:
This paper proposes a novel framework for labelling problems which is able to combine multiple segmentations in a principled manner. Our method is based on higher order conditional random fields and uses potentials defined on sets of pixels (image segments) generated using unsupervised segmentation algorithms. These potentials enforce label consistency in image regions and can be seen as a generalization of the commonly used pairwise contrast-sensitive smoothness potentials. The higher order potential functions used in our framework take the form of the Robust P^n model and are more general than the P^n Potts model recently proposed by Kohli et al. We prove that the optimal swap and expansion moves for energy functions composed of these potentials can be computed by solving an st-mincut problem. This enables the use of powerful graph-cut-based move-making algorithms for performing inference in the framework. We test our method on the problem of multi-class object segmentation by augmenting the conventional CRF used for object segmentation with higher order potentials defined on image regions. Experiments on challenging data sets show that integration of higher order potentials quantitatively and qualitatively improves results, leading to much better definition of object boundaries.
Extracting and Composing Robust Features with Denoising Autoencoders
, 2008
Cited by 251 (32 self)
Abstract:
Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.
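The training principle (corrupt the input, reconstruct the clean version) can be sketched as a minimal tied-weight autoencoder; this assumes masking noise and a squared-error objective, one common instantiation, with all sizes and rates chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def corrupt(x, p=0.3):
    # Masking noise: zero out a random fraction p of the inputs.
    return x * (rng.random(x.shape) >= p)

# Tied-weight denoising autoencoder: encode a *corrupted* input,
# then train the decoder to reconstruct the *clean* input.
n_in, n_hidden = 10, 5
W = 0.1 * rng.standard_normal((n_in, n_hidden))
b, c = np.zeros(n_hidden), np.zeros(n_in)

X = rng.random((32, n_in))
lr = 0.5
for _ in range(200):
    Xc = corrupt(X)
    H = sigmoid(Xc @ W + b)      # hidden code of the corrupted input
    R = sigmoid(H @ W.T + c)     # reconstruction in the input space
    dR = (R - X) * R * (1 - R)   # squared-error gradient at the output
    dH = dR @ W * H * (1 - H)    # backprop into the hidden layer
    W -= lr * (Xc.T @ dH + dR.T @ H) / len(X)
    b -= lr * dH.mean(axis=0)
    c -= lr * dR.mean(axis=0)

recon_err = np.mean((sigmoid(sigmoid(X @ W + b) @ W.T + c) - X) ** 2)
```

Stacking follows the same greedy recipe as other deep pre-training methods: train one such layer, then train the next on its hidden codes.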
Sparse representation for color image restoration
 the IEEE Trans. on Image Processing
, 2007
Cited by 219 (30 self)
Abstract:
Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well-adapted dictionaries for images has been a major challenge. The K-SVD has been recently proposed for this task [1], and shown to perform very well for various grayscale image processing tasks. In this paper we address the problem of learning dictionaries for color images and extend the K-SVD-based grayscale image denoising algorithm that appears in [2]. This work puts forward ways for handling non-homogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper.
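For illustration, the sparse-decomposition step over a redundant dictionary can be sketched with orthogonal matching pursuit, a standard greedy solver often paired with K-SVD (this is a generic sketch over a random dictionary, not the paper's own algorithm):

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse coding: pick k atoms of dictionary D
    (unit-norm columns) that best explain signal y."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Least-squares fit on the chosen support, then update residual.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0
        x[support] = coef
        residual = y - D @ x
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
x_true = np.zeros(32)
x_true[[3, 11]] = [1.5, -2.0]            # 2-sparse ground truth
y = D @ x_true
x_hat = omp(D, y, k=2)
```

Dictionary learning alternates a coding step like this with an update of the atoms themselves.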
High-quality Motion Deblurring from a Single Image
, 2008
Cited by 184 (6 self)
Abstract:
Figure 1: High-quality single-image motion deblurring. The left sub-figure shows one captured image using a handheld camera under dim light. It is severely blurred by an unknown kernel. The right sub-figure shows our deblurred image result, computed by estimating both the blur kernel and the unblurred latent image. We show several close-ups of blurred/unblurred image regions for comparison. We present a new algorithm for removing motion blur from a single image. Our method computes a deblurred image using a unified probabilistic model of both blur kernel estimation and unblurred image restoration. We present an analysis of the causes of common artifacts found in current deblurring methods, and then introduce several novel terms within this probabilistic model that are inspired by our analysis. These terms include a model of the spatial randomness of noise in the blurred image, as well as a new local smoothness prior that reduces ringing artifacts by constraining contrast in the unblurred image wherever the blurred image exhibits low contrast. Finally, we describe an efficient optimization scheme that alternates between blur kernel estimation and unblurred image restoration until convergence. As a result of these steps, we are able to produce high-quality deblurred results in low computation time. We are even able to produce results of comparable quality to techniques that require additional input images beyond a single blurry photograph, and to methods that require additional hardware.
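The image-restoration half of such alternating schemes is a non-blind deconvolution given the current kernel estimate. As a self-contained stand-in, here is classical Richardson-Lucy deconvolution in 1-D (a generic baseline, not the paper's unified probabilistic model):

```python
import numpy as np

def richardson_lucy(blurred, kernel, iters=50):
    """Classical Richardson-Lucy deconvolution in 1-D.

    The generic non-blind restoration step that blind-deblurring
    pipelines alternate with kernel estimation; shown only as a
    baseline illustration.
    """
    conv = lambda a, k: np.convolve(a, k, mode="same")
    estimate = np.full_like(blurred, blurred.mean())
    k_flip = kernel[::-1]
    for _ in range(iters):
        # Multiplicative update: compare observation to re-blurred estimate.
        ratio = blurred / np.maximum(conv(estimate, kernel), 1e-12)
        estimate = estimate * conv(ratio, k_flip)
    return estimate

kernel = np.array([0.25, 0.5, 0.25])     # toy symmetric blur kernel
clean = np.zeros(32)
clean[10] = clean[20] = 1.0              # two sharp impulses
blurred = np.convolve(clean, kernel, mode="same")
restored = richardson_lucy(blurred, kernel)
```

A full blind method would re-estimate `kernel` from `restored` and repeat until both converge.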
Image Deblurring with Blurred/Noisy Image Pairs
Cited by 130 (4 self)
Abstract:
Taking satisfactory photos under dim lighting conditions using a handheld camera is challenging. If the camera is set to a long exposure time, the image is blurred due to camera shake. On the other hand, the image is dark and noisy if it is taken with a short exposure time but with a high camera gain. By combining information extracted from both blurred and noisy images, however, we show in this paper how to produce a high-quality image that cannot be obtained by simply denoising the noisy image, or deblurring the blurred image alone. Our approach is image deblurring with the help of the noisy image. First, both images are used to estimate an accurate blur kernel, which otherwise is difficult to obtain from a single blurred image. Second, and again using both images, a residual deconvolution is proposed to significantly reduce ringing artifacts inherent to image deconvolution. Third, the remaining ringing artifacts in smooth image regions are further suppressed by a gain-controlled deconvolution process. We demonstrate the effectiveness of our approach using a number of indoor and outdoor images taken by off-the-shelf handheld cameras in poor lighting environments.
Optimal spatial adaptation for patch-based image denoising
 IEEE Trans. Image Process
, 2006
Cited by 113 (10 self)
Abstract:
A novel adaptive and patch-based approach is proposed for image denoising and representation. The method is based on a pointwise selection of small image patches of fixed size in the variable neighborhood of each pixel. Our contribution is to associate with each pixel the weighted sum of data points within an adaptive neighborhood, in a manner that balances the accuracy of approximation and the stochastic error at each spatial position. This method is general and can be applied under the assumption that there exist repetitive patterns in a local neighborhood of a point. By introducing spatial adaptivity, we extend the work earlier described by Buades et al., which can be considered as an extension of bilateral filtering to image patches. Finally, we propose a nearly parameter-free algorithm for image denoising. The method is applied to both artificially corrupted (white Gaussian noise) and real images, and the performance is very close to, and in some cases even surpasses, that of the already published denoising methods.
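The patch-weighted averaging at the core of this family of methods can be sketched in plain non-local-means form (1-D, fixed bandwidth, without the paper's adaptive neighborhood selection; all parameters are illustrative):

```python
import numpy as np

def nl_means_1d(signal, patch=3, h=0.5):
    """Patch-based weighted average: each sample is replaced by a
    weighted sum of samples whose surrounding patches look similar."""
    n = len(signal)
    pad = patch // 2
    padded = np.pad(signal, pad, mode="reflect")
    # All patches as rows of a matrix.
    patches = np.array([padded[i:i + patch] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = ((patches - patches[i]) ** 2).mean(axis=1)
        w = np.exp(-d2 / h ** 2)          # patch-similarity weights
        out[i] = w @ signal / w.sum()     # weighted sum of data points
    return out

rng = np.random.default_rng(2)
clean = np.repeat([0.0, 1.0, 0.0], 30)    # piecewise-constant signal
noisy = clean + 0.2 * rng.standard_normal(clean.size)
denoised = nl_means_1d(noisy)
```

The spatial adaptivity proposed in the paper would additionally grow or shrink the set of contributing samples per pixel to balance approximation accuracy against stochastic error.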
Fast image deconvolution using hyper-Laplacian priors, supplementary material
, 2009
Cited by 109 (2 self)
Abstract:
Heavy-tailed distributions of gradients in natural scenes have proven effective priors for a range of problems such as denoising, deblurring and super-resolution. These distributions are well modeled by a hyper-Laplacian, p(x) ∝ e^(−k|x|^α), typically with 0.5 ≤ α ≤ 0.8. However, the use of sparse distributions makes the problem non-convex and impractically slow to solve for multi-megapixel images. In this paper we describe a deconvolution approach that is several orders of magnitude faster than existing techniques that use hyper-Laplacian priors. We adopt an alternating minimization scheme where one of the two phases is a non-convex problem that is separable over pixels. This per-pixel sub-problem may be solved with a lookup table (LUT). Alternatively, for two specific values of α, 1/2 and 2/3, an analytic solution can be found by finding the roots of a cubic and quartic polynomial, respectively. Our approach (using either LUTs or analytic formulae) is able to deconvolve a 1-megapixel image in less than ∼3 seconds, achieving comparable quality to existing methods such as iteratively reweighted least squares (IRLS) that take ∼20 minutes. Furthermore, our method is quite general and can easily be extended to related image processing problems beyond the deconvolution application demonstrated.
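The separable per-pixel sub-problem and its lookup-table solution can be sketched numerically (grid search over a 1-D cost, standing in for the paper's analytic cubic/quartic-root solutions; beta and the grid resolution are arbitrary):

```python
import numpy as np

def solve_pixel(v, beta, alpha=0.5):
    """Per-pixel sub-problem min_x |x|^alpha + (beta/2) * (x - v)^2,
    solved here by brute-force grid search."""
    grid = np.linspace(-abs(v) - 1.0, abs(v) + 1.0, 20001)
    cost = np.abs(grid) ** alpha + 0.5 * beta * (grid - v) ** 2
    return grid[np.argmin(cost)]

# LUT idea: precompute solutions over a range of v for a fixed beta,
# then answer any pixel by interpolation into the table.
beta = 8.0
vs = np.linspace(-3.0, 3.0, 601)
lut = np.array([solve_pixel(v, beta) for v in vs])

v_query = 1.7
x_star = float(np.interp(v_query, vs, lut))
```

Because the sub-problem depends on each pixel only through `v`, one table serves every pixel in the image, which is where the speed comes from.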
MRF energy minimization and beyond via dual decomposition
 IEEE Trans. PAMI
, 2011
Cited by 105 (9 self)
Abstract:
This paper introduces a new rigorous theoretical framework to address discrete MRF-based optimization in computer vision. Such a framework exploits the powerful technique of Dual Decomposition. It is based on a projected subgradient scheme that attempts to solve an MRF optimization problem by first decomposing it into a set of appropriately chosen subproblems and then combining their solutions in a principled way. In order to determine the limits of this method, we analyze the conditions that these subproblems have to satisfy, and we demonstrate the extreme generality and flexibility of such an approach. We thus show that, by appropriately choosing what subproblems to use, one can design novel and very powerful MRF optimization algorithms. For instance, in this manner we are able to derive algorithms that: 1) generalize and extend state-of-the-art message-passing methods, 2) optimize very tight LP relaxations to MRF optimization, and 3) take full advantage of the special structure that may exist in particular MRFs, allowing the use of efficient inference techniques such as, e.g., graph-cut-based methods. Theoretical analysis of the bounds related to the different algorithms derived from our framework, and experimental results/comparisons using synthetic and real data for a variety of tasks in computer vision, demonstrate the great potential of our approach.
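A toy instance of the decomposition idea: two table-lookup sub-problems share one variable, each gets its own copy, and a projected subgradient ascent on the Lagrange multipliers pushes the copies to agree (far simpler than an MRF decomposition, but the mechanics are the same; the cost tables are made up):

```python
import numpy as np

# Minimize f(x) + g(x) over x in {0,...,4} by dual decomposition:
# each sub-problem optimizes its own copy of x, and multipliers lam
# enforce agreement between the copies.
f = np.array([4.0, 2.0, 3.0, 5.0, 1.0])
g = np.array([1.0, 3.0, 0.5, 2.0, 4.0])

lam = np.zeros(5)                     # multipliers on the agreement constraint
for t in range(100):
    xf = int(np.argmin(f + lam))      # sub-problem 1: min_x f(x) + lam[x]
    xg = int(np.argmin(g - lam))      # sub-problem 2: min_x g(x) - lam[x]
    if xf == xg:
        break                         # copies agree: consistent primal solution
    # Subgradient of the dual: indicator of xf minus indicator of xg.
    sub = np.zeros(5)
    sub[xf] += 1.0
    sub[xg] -= 1.0
    lam += (1.0 / (1 + t)) * sub      # diminishing-step subgradient ascent
```

In the MRF setting the sub-problems are trees (or other tractable structures) over the graph's edges, each solvable exactly, and the same multiplier updates coordinate their labelings.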