Results 1 - 10 of 21
Decision Tree Fields
Abstract - Cited by 43 (8 self)
This paper introduces a new formulation for discrete image labeling tasks, the Decision Tree Field (DTF), that combines and generalizes random forests and conditional random fields (CRF), which have been widely used in computer vision. In a typical CRF model the unary potentials are derived from sophisticated random forest or boosting based classifiers; however, the pairwise potentials are assumed to (1) have a simple parametric form with a pre-specified and fixed dependence on the image data, and (2) be defined on the basis of a small and fixed neighborhood. In contrast, in a DTF, local interactions between multiple variables are determined by means of decision trees evaluated on the image data, allowing the interactions to be adapted to the image content. This results in powerful graphical models which are able to represent complex label structure. Our key technical contribution is to show that the DTF model can be trained efficiently and jointly using a convex approximate likelihood function, enabling us to learn over a million free model parameters. We show experimentally that for applications which have a rich and complex label structure, our model achieves excellent results.
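The core idea (pairwise potentials whose parameters are selected by decision trees evaluated on the local image data) can be caricatured in a toy sketch. The feature, threshold, and energy tables below are hypothetical stand-ins for the paper's learned trees and parameters:

```python
import numpy as np

def pairwise_potential(feat):
    # Toy decision tree evaluated on local image features; each leaf
    # stores its own 2x2 table of pairwise label energies. The threshold
    # and values are hypothetical, not the paper's learned model.
    if feat[0] < 0.5:  # e.g. low local contrast: favor identical labels
        return np.array([[0.0, 1.0],
                         [1.0, 0.0]])
    # high contrast: a weaker, edge-tolerant coupling
    return np.array([[0.5, 0.2],
                     [0.2, 0.5]])

def pair_energy(label_i, label_j, feat):
    # Energy contributed by one pairwise factor, given its local features.
    return pairwise_potential(feat)[label_i, label_j]
```

The point of the construction is that the same factor graph adapts its interaction strengths to the image content, rather than using one fixed smoothness table everywhere.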
Bayesian deblurring with integrated noise estimation
- In IEEE Conf. Comput. Vision and Pattern Recognition
Abstract - Cited by 10 (5 self)
Conventional non-blind image deblurring algorithms involve natural image priors and maximum a-posteriori (MAP) estimation. As a consequence of MAP estimation, separate pre-processing steps such as noise estimation and training of the regularization parameter are necessary to avoid user interaction. Moreover, MAP estimates involving standard natural image priors have been found lacking in terms of restoration performance. To address these issues we introduce an integrated Bayesian framework that unifies non-blind deblurring and noise estimation, thus freeing the user of tediously pre-determining a noise level. A sampling-based technique allows us to integrate out the unknown noise level and to perform deblurring using the Bayesian minimum mean squared error estimate (MMSE), which requires no regularization parameter and yields higher performance than MAP estimates when combined with a learned high-order image prior. A quantitative evaluation demonstrates state-of-the-art results for both non-blind deblurring and noise estimation.
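The contrast between MAP and MMSE estimation can be sketched in a simplified form: instead of committing to one noise level and one restoration, average restorations over sampled noise levels. The paper uses a sampler with a learned high-order prior; this sketch substitutes plain Wiener deconvolution and an assumed set of noise-level samples, purely for illustration:

```python
import numpy as np

def wiener_deconv(y, k, sigma, prior_power=1e-2):
    # Simple Wiener deconvolution in the Fourier domain; prior_power is
    # an assumed flat signal-power constant, not a learned prior.
    K = np.fft.fft2(k, s=y.shape)
    Y = np.fft.fft2(y)
    H = np.conj(K) / (np.abs(K) ** 2 + sigma ** 2 / prior_power)
    return np.real(np.fft.ifft2(H * Y))

def mmse_deblur(y, k, sigmas):
    # Crude MMSE flavor: integrate out the unknown noise level by
    # averaging restorations over sampled sigmas, instead of picking
    # a single regularization parameter as a MAP estimate would.
    return np.mean([wiener_deconv(y, k, s) for s in sigmas], axis=0)
```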
Noise suppression in low-light images through joint denoising and demosaicing
- in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2011
Abstract - Cited by 6 (1 self)
We address the effects of noise in low-light images in this paper. Color images are usually captured by a sensor with a color filter array (CFA). This requires a demosaicing process to generate a full color image. The captured images typically have low signal-to-noise ratio, and the demosaicing step further corrupts the image, which we show to be the leading cause of visually objectionable random noise patterns (splotches). To avoid this problem, we propose a combined framework of denoising and demosaicing, where we use information about the image inferred in the denoising step to perform demosaicing. Our experiments show that such a framework results in sharper low-light images that are devoid of splotches and other noise artifacts.
[Figure 1. Effect of demosaicing on low-light noise characteristics: (a) RGB image with spatially independent Poisson noise; (b) demosaiced version of the noisy image. The simulated noisy RGB image was subsampled to form the Bayer pattern image. Notice how the demosaiced image demonstrates more splotches.]
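The figure's simulation setup (Poisson noise on an RGB image, then subsampling to a Bayer mosaic) can be sketched as follows; the RGGB layout and the `peak` light-level constant are assumptions, not values from the paper:

```python
import numpy as np

def simulate_bayer(rgb, peak=30.0, seed=0):
    # Add spatially independent Poisson noise to an RGB image (lower
    # 'peak' = darker scene = noisier), then subsample it to a single-
    # channel RGGB Bayer mosaic as a CFA sensor would record it.
    rng = np.random.default_rng(seed)
    noisy = rng.poisson(np.clip(rgb, 0, 1) * peak) / peak
    h, w, _ = noisy.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = noisy[0::2, 0::2, 0]  # R
    mosaic[0::2, 1::2] = noisy[0::2, 1::2, 1]  # G
    mosaic[1::2, 0::2] = noisy[1::2, 0::2, 1]  # G
    mosaic[1::2, 1::2] = noisy[1::2, 1::2, 2]  # B
    return mosaic
```

Demosaicing such a mosaic interpolates across noisy samples of different channels, which is how the correlated splotch artifacts described above arise.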
High-Quality Computational Imaging Through Simple Lenses
Abstract - Cited by 5 (1 self)
[Fig. 1. Our system reliably estimates point spread functions of a given optical system, enabling the capture of high-quality imagery through poorly performing lenses. From left to right: camera with our lens system containing only a single glass element (the plano-convex lens lying next to the camera in the left image), unprocessed input image, deblurred result.]
Modern imaging optics are highly complex systems consisting of up to two dozen individual optical elements. This complexity is required in order to compensate for the geometric and chromatic aberrations of a single lens, including geometric distortion, field curvature, wavelength-dependent blur, and color fringing. In this paper, we propose a set of computational photography techniques that remove these artifacts, and thus allow for post-capture correction of images captured through uncompensated, simple optics which are lighter and significantly less expensive. Specifically, we estimate per-channel, spatially-varying point spread functions, and perform non-blind deconvolution with a novel cross-channel term that is designed to specifically eliminate color fringing.
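A minimal sketch of a cross-channel term in the spirit described here: it penalizes mismatched gradient structure between two color channels (the source of color fringing) and vanishes when the channels share edge structure up to a global scale. The exact functional form below is an assumption for illustration, not the paper's term:

```python
import numpy as np

def cross_channel_penalty(a, b):
    # Penalize grad(a)*b - grad(b)*a along both axes: zero whenever the
    # two channels are scalar multiples of each other, large where one
    # channel has edges the other lacks (i.e. color fringing).
    gax, gay = np.gradient(a)
    gbx, gby = np.gradient(b)
    return np.sum(np.abs(gax * b - gbx * a)) + np.sum(np.abs(gay * b - gby * a))
```

Adding such a term to the deconvolution objective couples the channels, so a sharp channel can guide the restoration of a more strongly aberrated one.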
A Dictionary Learning Approach for Poisson Image Deblurring
2013
Abstract - Cited by 5 (0 self)
The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a Maximum A Posteriori (MAP) formulation, sparse representations of images have recently been shown to be effective approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, a pixel-based total variation regularization term, and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that in terms of visual quality, PSNR, and method noise, the proposed algorithm outperforms state-of-the-art methods.
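The Poisson data-fidelity term mentioned above is, up to constants, the negative Poisson log-likelihood of the blurred estimate Hx under the observation y. A minimal sketch of just that term (the dictionary and total variation terms of the full model are omitted):

```python
import numpy as np

def poisson_fidelity(Hx, y, eps=1e-8):
    # Negative Poisson log-likelihood up to constants:
    # sum(Hx - y * log(Hx)); per pixel this is minimized at Hx = y,
    # unlike the squared error used under Gaussian noise assumptions.
    Hx = np.maximum(Hx, eps)  # guard the log for nonpositive values
    return float(np.sum(Hx - y * np.log(Hx)))
```

In the alternating scheme described above, this term is what variable splitting separates from the two regularizers so each subproblem stays tractable.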
Texture Enhanced Image Denoising via Gradient Histogram Preservation
Abstract - Cited by 4 (2 self)
Image denoising is a classical yet fundamental problem in low level vision, as well as an ideal test bed to evaluate various statistical image modeling methods. One of the most challenging problems in image denoising is how to preserve the fine scale texture structures while removing noise. Various natural image priors, such as the gradient-based prior, nonlocal self-similarity prior, and sparsity prior, have been extensively exploited for noise removal. The denoising algorithms based on these priors, however, tend to smooth the detailed image textures, degrading the image visual quality. To address this problem, in this paper we propose a texture enhanced image denoising (TEID) method by enforcing the gradient distribution of the denoised image to be close to the estimated gradient distribution of the original image. A novel gradient histogram preservation (GHP) algorithm is developed to enhance the texture structures while removing noise. Our experimental results demonstrate that the proposed GHP based TEID can well preserve the texture features of the denoised images, making them look more natural.
Automated video looping with progressive dynamism
- ACM Transactions on Graphics, 2013
Abstract - Cited by 4 (2 self)
Given a short video, we create a representation that captures a spectrum of looping videos with varying levels of dynamism, ranging from a static image to a highly animated loop. In such a progressively dynamic video, scene liveliness can be adjusted interactively using a slider control. Applications include background images and slideshows, where the desired level of activity may depend on personal taste or mood. The representation also provides a segmentation of the scene into independently looping regions, enabling interactive local adjustment over dynamism. For a landscape scene, this control might correspond to selective animation and de-animation of grass motion, water ripples, and swaying trees. Converting arbitrary video to looping content is a challenging research problem. Unlike prior work, we explore an optimization in which each pixel automatically determines its own looping period. The resulting nested segmentation of static and dynamic scene regions forms an extremely compact representation.
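The per-pixel looping idea can be caricatured as follows: for one pixel's time series, pick the loop start and period whose seam cost between the loop's endpoints is smallest. The actual method optimizes this jointly over all pixels with spatial-consistency terms; this single-pixel version is only an illustration with an assumed absolute-difference cost:

```python
import numpy as np

def best_loop(pixel_ts, periods):
    # For one pixel's intensity time series, exhaustively pick the
    # (start, period) minimizing the loop seam cost |V[s] - V[s + p]|.
    # A crude stand-in for the paper's joint MRF optimization.
    T = len(pixel_ts)
    best = (0, periods[0], np.inf)
    for p in periods:
        for s in range(T - p):
            cost = abs(pixel_ts[s] - pixel_ts[s + p])
            if cost < best[2]:
                best = (s, p, cost)
    return best[:2]
```

A pixel whose best seam cost is still large is better frozen to a static frame, which is how the spectrum from static image to fully animated loop arises.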
SPARSE REPRESENTATION BASED BLIND IMAGE DEBLURRING
Abstract - Cited by 4 (1 self)
We propose a sparse representation based blind image deblurring method. The proposed method exploits the sparsity property of natural images, by assuming that patches from natural images can be sparsely represented by an over-complete dictionary. By incorporating this prior into the deblurring process, we can effectively regularize the ill-posed inverse problem and alleviate the undesirable ringing effect from which conventional deblurring methods usually suffer. Experimental results compared with state-of-the-art blind deblurring methods demonstrate the effectiveness of the proposed method. Index Terms: blind image deblurring, deconvolution, sparse representation
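Sparse coding a patch over an over-complete dictionary is commonly posed as an ℓ1-regularized least squares problem. A minimal iterative soft-thresholding (ISTA) sketch of that subproblem; ISTA is a standard solver chosen here for illustration, not necessarily the one used in the paper:

```python
import numpy as np

def ista(D, x, lam, n_iter=100):
    # Iterative soft-thresholding: sparse-code patch x over dictionary D,
    # approximately solving min_a 0.5*||D a - x||^2 + lam*||a||_1.
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)              # gradient of the quadratic term
        z = a - g / L                      # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
    return a
```

In the deblurring setting, the sparse codes of the current estimate's patches act as the prior that suppresses ringing in the deconvolution step.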
A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding
Abstract - Cited by 3 (1 self)
In many sparse coding based image restoration and image classification problems, using non-convex ℓp-norm minimization (0 ≤ p < 1) can often obtain better results than the convex ℓ1-norm minimization. A number of algorithms, e.g., iteratively reweighted least squares (IRLS), the iterative thresholding method (ITM-ℓp), and look-up tables (LUT), have been proposed for non-convex ℓp-norm sparse coding, while analytic solutions have been suggested for some specific values of p. In this paper, by extending the popular soft-thresholding operator, we propose a generalized iterated shrinkage algorithm (GISA) for ℓp-norm non-convex sparse coding. Unlike the analytic solutions, the proposed GISA algorithm is easy to implement, and can be adopted for solving non-convex sparse coding problems with arbitrary p values. Compared with LUT, GISA is more general and does not need to compute and store the look-up tables. Compared with IRLS and ITM-ℓp, GISA is theoretically more solid and can achieve more accurate solutions. Experiments on image restoration and sparse coding based face recognition are conducted to validate the performance of GISA.
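For the scalar problem min_x ½(x − y)² + λ|x|^p, the generalized shrinkage described above amounts to a threshold test plus a fixed-point iteration that extends soft-thresholding to p < 1. The threshold formula below follows the standard generalized soft-thresholding construction; treat the details as a sketch rather than the paper's exact implementation:

```python
import numpy as np

def gisa_shrink(y, lam, p, n_iter=20):
    # Generalized shrinkage for min_x 0.5*(x - y)^2 + lam*|x|^p.
    # Below the threshold tau the minimizer is exactly 0; above it,
    # a fixed-point iteration started at |y| finds the nonzero root.
    tau = (2 * lam * (1 - p)) ** (1 / (2 - p)) \
        + lam * p * (2 * lam * (1 - p)) ** ((p - 1) / (2 - p))
    if abs(y) <= tau:
        return 0.0
    x = abs(y)
    for _ in range(n_iter):
        x = abs(y) - lam * p * x ** (p - 1)  # stationarity: x - |y| + lam*p*x^(p-1) = 0
    return np.sign(y) * x
```

At p = 1 the threshold reduces to λ and the update to the usual soft threshold, which is the sense in which the operator generalizes it.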
Gradient Histogram Estimation and Preservation for Texture Enhanced Image Denoising
2013
Abstract - Cited by 2 (0 self)
Natural image statistics plays an important role in image denoising, and various natural image priors, including gradient-based, sparse-representation-based, and nonlocal self-similarity-based ones, have been widely studied and exploited for noise removal. In spite of the great success of many denoising algorithms, they tend to smooth the fine scale image textures when removing noise, degrading the image visual quality. To address this problem, in this paper we propose a texture enhanced image denoising method by enforcing the gradient histogram of the denoised image to be close to a reference gradient histogram of the original image. Given the reference gradient histogram, a novel gradient histogram preservation (GHP) algorithm is developed to enhance the texture structures while removing noise. Two region-based variants of GHP are proposed for the denoising of images consisting of regions with different textures. An algorithm is also developed to effectively estimate the reference gradient histogram from the noisy observation of the unknown image. Our experimental results demonstrate that the proposed GHP algorithm can well preserve the texture appearance in the denoised images, making them look more natural.
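The preservation constraint amounts to histogram specification on gradient magnitudes: push the denoised image's gradient distribution toward the reference. A minimal sketch of that matching step alone (the surrounding denoising iteration and the reference-histogram estimation are omitted, and the bin handling is simplified):

```python
import numpy as np

def match_histogram(g, ref_hist, bins):
    # Histogram specification: map values g so their empirical
    # distribution follows ref_hist (counts over the edges in 'bins').
    centers = 0.5 * (bins[:-1] + bins[1:])
    src_cdf = np.cumsum(np.histogram(g, bins=bins)[0]).astype(float)
    src_cdf /= src_cdf[-1]
    ref_cdf = np.cumsum(ref_hist).astype(float)
    ref_cdf /= ref_cdf[-1]
    # value -> source CDF level -> inverse reference CDF
    u = np.interp(g, centers, src_cdf)
    return np.interp(u, ref_cdf, centers)
```

Over-smoothed images have gradient histograms collapsed toward zero; matching them back to the reference restores the heavier tail that textured regions produce.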