Symmetrizing Smoothing Filters
, 2013
Abstract

Cited by 9 (6 self)
We study a general class of nonlinear and shift-varying smoothing filters that operate based on averaging. This important class of filters includes many well-known examples such as the bilateral filter, nonlocal means, general adaptive moving average filters, and more. (Many linear filters such as linear minimum mean-squared error smoothing filters, Savitzky–Golay filters, smoothing splines, and wavelet smoothers can be considered special cases.) They are frequently used in both signal and image processing as they are elegant, computationally simple, and high performing. The operators that implement such filters, however, are not symmetric in general. The main contribution of this paper is to provide a provably stable method for symmetrizing the smoothing operators. Specifically, we propose a novel approximation of smoothing operators by symmetric doubly stochastic matrices and show that this approximation is stable and accurate, even more so in higher dimensions. We demonstrate that there are several important advantages to this symmetrization, particularly in image processing/filtering applications such as denoising. In particular, (1) doubly stochastic filters generally lead to improved performance over the baseline smoothing procedure; (2) when the filters are applied iteratively, the symmetric ones can be guaranteed to lead to stable algorithms; and (3) symmetric smoothers allow an orthonormal eigendecomposition which enables us to peer into the complex behavior of such nonlinear and shift-varying filters in a locally adapted basis using principal components. Finally, a doubly stochastic filter has a simple and intuitive interpretation. Namely, it implies the very natural property that every pixel in the given input image has the same sum total contribution to the output image.
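The doubly stochastic approximation described above can be illustrated with a symmetric Sinkhorn-style scaling. This is a minimal numpy sketch of the general idea only; the paper's exact symmetrization procedure may differ:

```python
import numpy as np

def sinkhorn_symmetrize(W, n_iters=100):
    """Push a nonnegative smoothing matrix W toward a symmetric doubly
    stochastic matrix (all row and column sums equal to 1).

    Illustrative sketch: symmetrize once, then repeatedly rescale by the
    inverse square root of the row sums, which preserves symmetry while
    driving every row sum toward 1.
    """
    A = 0.5 * (W + W.T)              # make the operator symmetric
    for _ in range(n_iters):
        d = A.sum(axis=1)            # current row sums
        s = 1.0 / np.sqrt(d)         # symmetric scaling factors
        A = s[:, None] * A * s[None, :]
    return A
```

Because the same scaling is applied to rows and columns, each iterate stays symmetric, and for a positive matrix the row sums converge to 1, giving a doubly stochastic filter.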
How to SAIFly Boost Denoising Performance
, 2012
Abstract

Cited by 4 (2 self)
Spatial domain image filters (e.g. bilateral filter, NLM, LARK) have achieved great success in denoising. However, their overall performance has not generally surpassed the leading transform domain based filters (such as BM3D). One important reason is that spatial domain filters lack an efficient way to adaptively fine-tune their denoising strength; something that is relatively easy to do in transform domain methods with shrinkage operators. In the pixel domain, the smoothing strength is usually controlled globally by, for example, tuning a regularization parameter. In this paper, we propose SAIF (Spatially Adaptive Iterative Filtering), a new strategy to control the denoising strength locally for any spatial domain method. This approach is capable of filtering local image content iteratively using the given base filter, while the type of iteration and the iteration number are automatically optimized with respect to estimated risk (i.e. mean-squared error). In exploiting the estimated local SNR, we also present a new risk estimator which is different from the often-employed SURE method and exceeds its performance in many cases. Experiments illustrate that our strategy can significantly relax the base algorithm's sensitivity to its tuning (smoothing) parameters, and effectively boost the performance of several existing denoising filters to generate state-of-the-art results under both simulated and practical conditions.
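The iterate-and-stop idea can be sketched as follows. Here an oracle MSE against a known clean signal stands in for the paper's risk estimator, and the simple diffusion iteration shown is only one of the iteration types SAIF considers:

```python
import numpy as np

def diffusion_iterates(W, y, k_max):
    """Apply a row-stochastic base filter W repeatedly (diffusion
    iteration), keeping every intermediate result."""
    out, z = [], np.asarray(y, dtype=float)
    for _ in range(k_max):
        z = W @ z
        out.append(z.copy())
    return out

def pick_best_iteration(iterates, risk):
    """Stop at the iterate whose (estimated) risk is smallest."""
    risks = [risk(z) for z in iterates]
    k = int(np.argmin(risks))
    return k + 1, iterates[k]
```

With a plug-in risk estimator in place of the oracle, the same selection rule adapts the effective smoothing strength to the local content, which is the mechanism the abstract describes.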
Redefining Self-Similarity in Natural Images for Denoising Using Graph Signal Gradient
Abstract

Cited by 3 (3 self)
Image denoising is the most basic inverse imaging problem. As an underdetermined problem, appropriate definition of image priors to regularize the problem is crucial. Among recently proposed priors for image denoising are: i) the graph Laplacian regularizer, where a given pixel patch is assumed to be smooth in the graph-signal domain; and ii) the self-similarity prior, where image patches are assumed to recur throughout a natural image in nonlocal spatial regions. In our first contribution, we demonstrate that the graph Laplacian regularizer converges to a continuous functional counterpart, and careful selection of its features can lead to a discriminant signal prior. In our second contribution, we redefine patch self-similarity in terms of patch gradients and argue that the new definition results in a more accurate estimate of the graph Laplacian matrix, and thus better image denoising performance. Experiments show that our designed algorithm based on the graph Laplacian regularizer and gradient-based self-similarity can outperform nonlocal means (NLM) denoising by up to 1.4 dB in PSNR.
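For concreteness, the graph Laplacian regularizer mentioned above is a simple quadratic form: it is small exactly when the signal varies little across strongly weighted edges. A minimal numpy sketch:

```python
import numpy as np

def graph_laplacian(W):
    """Combinatorial graph Laplacian L = D - W from a symmetric
    nonnegative edge-weight matrix W (D = diagonal degree matrix)."""
    return np.diag(W.sum(axis=1)) - W

def laplacian_regularizer(x, W):
    """x^T L x = (1/2) * sum_ij w_ij (x_i - x_j)^2: small when the
    signal x is smooth with respect to the graph."""
    L = graph_laplacian(W)
    return float(x @ L @ x)
```

For a 3-node path graph with unit weights, the signal [0, 1, 2] gives a regularizer value of 2 (one unit per edge), while a constant signal gives 0.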
FAST NONLOCAL FILTERING BY RANDOM SAMPLING: IT WORKS, ESPECIALLY FOR LARGE IMAGES
Abstract

Cited by 2 (1 self)
Nonlocal means (NLM) is a popular denoising scheme. Conceptually simple, the algorithm is computationally intensive for large images. We propose to speed up NLM by using random sampling. Our algorithm picks, uniformly at random, a small number of columns of the weight matrix, and uses these “representatives” to compute an approximate result. It also incorporates an extra column normalization of the sampled columns, a form of symmetrization that often boosts the denoising performance on real images. Using statistical large deviation theory, we analyze the proposed algorithm and provide guarantees on its performance. We show that the probability of having a large approximation error decays exponentially as the image size increases. Thus, for large images, the random estimates generated by the algorithm are tightly concentrated around their limit values, even if the sampling ratio is small. Numerical results confirm our theoretical analysis: the proposed algorithm reduces the run time of NLM, and thanks to the symmetrization step, actually provides some improvement in peak signal-to-noise ratios. Index Terms — Nonlocal means, random sampling, Sinkhorn–Knopp balancing scheme, image denoising
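A toy numpy sketch of the column-sampling idea, assuming a precomputed weight matrix (the paper works with NLM weights and analyzes the estimator; the function name and interface here are illustrative):

```python
import numpy as np

def sampled_nlm(y, weights, sample_ratio=0.1, rng=None):
    """Approximate a weighted-average filter using only a random subset
    of the columns of the weight matrix.

    Sketch: sample columns uniformly at random, apply the extra column
    normalization described in the abstract, then row-normalize to form
    the estimate.
    """
    rng = np.random.default_rng(rng)
    n = len(y)
    m = max(1, int(sample_ratio * n))
    idx = rng.choice(n, size=m, replace=False)   # sampled column indices
    Ws = weights[:, idx]                         # sampled columns
    Ws = Ws / Ws.sum(axis=0, keepdims=True)      # column normalization
    num = Ws @ np.asarray(y, dtype=float)[idx]
    den = Ws.sum(axis=1)
    return num / den
```

Note that a constant input is reproduced exactly regardless of which columns are sampled, since the row normalization makes the effective weights sum to one.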
Downsampling of Signals on Graphs Via Maximum Spanning Trees
Abstract

Cited by 2 (0 self)
Downsampling of signals living on a general weighted graph is not as trivial as for regular signals, where we can simply keep every other sample. In this paper we propose a simple yet effective downsampling scheme in which the underlying graph is approximated by a maximum spanning tree (MST) that naturally defines a graph multiresolution. This MST-based method significantly outperforms the two previous downsampling schemes, coloring-based and SVD-based, on both random and specific graphs in terms of computations and partition efficiency quantified by the graph cuts. The benefit of using MST-based downsampling for recently developed critically sampled graph wavelet transforms in compression of graph signals is demonstrated. Index Terms—Bipartite approximation, downsampling on graphs, graph multiresolution, graph wavelet filter banks, max-cut, maximum spanning tree, signal processing on graphs.
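The two building blocks, a maximum spanning tree plus a tree-defined vertex split, can be sketched in plain Python. Splitting by depth parity is one natural bipartition of a tree; the paper's multiresolution construction is more elaborate:

```python
def maximum_spanning_tree(n, edges):
    """Kruskal's algorithm, taking heaviest edges first, so the result
    is a maximum (not minimum) spanning tree.
    edges: list of (weight, u, v) tuples on vertices 0..n-1."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a
    tree = []
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:                        # keep edge if it joins components
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

def bipartition_by_depth(n, tree, root=0):
    """2-color the tree vertices by depth parity: every tree is
    bipartite, so this gives a natural downsampling split."""
    adj = {i: [] for i in range(n)}
    for _, u, v in tree:
        adj[u].append(v)
        adj[v].append(u)
    color, stack = {root: 0}, [root]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in color:
                color[v] = 1 - color[u]
                stack.append(v)
    return ([i for i in range(n) if color.get(i) == 0],
            [i for i in range(n) if color.get(i) == 1])
```

Keeping one color class and discarding the other is the graph analogue of "keeping every other sample" on a regular signal.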
OPTIMAL GRAPH LAPLACIAN REGULARIZATION FOR NATURAL IMAGE DENOISING
Abstract

Cited by 1 (1 self)
Image denoising is an underdetermined problem, and hence it is important to define appropriate image priors for regularization. One recent popular prior is the graph Laplacian regularizer, where a given pixel patch is assumed to be smooth in the graph-signal domain. The strength and direction of the resulting graph-based filter are computed from the graph's edge weights. In this paper, we derive the optimal edge weights for local graph-based filtering using gradient estimates from nonlocal pixel patches that are self-similar. To analyze the effects of the gradient estimates on the graph Laplacian regularizer, we first show theoretically that, given that the graph-signal h_D is a set of discrete samples of a continuous function h(x, y) in a closed region Ω, the graph Laplacian regularizer (h_D)^T L h_D converges to a continuous functional S_Ω integrating the gradient norm of h in a metric space G, i.e., (∇h)^T G^{-1} (∇h), over Ω. We then derive the optimal metric space G*: one that leads to a graph Laplacian regularizer that is discriminant when the gradient estimates are accurate, and robust when the gradient estimates are noisy. Finally, having derived G*, we compute the corresponding edge weights to define the Laplacian L used for filtering. Experimental results show that our image denoising algorithm using the per-patch optimal metric space G* outperforms nonlocal means (NLM) by up to 1.5 dB in PSNR. Index Terms — graph Laplacian regularization, metric space, image denoising, inverse imaging problem
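The role of the metric space can be illustrated with a bilateral-style edge weight: distances between per-pixel feature vectors are measured in a metric G before being mapped to a weight. G_inv below is a placeholder metric, not the optimal G* the paper derives:

```python
import numpy as np

def edge_weight(f_i, f_j, G_inv):
    """Edge weight between two pixels with feature vectors f_i, f_j,
    measured in a metric space G:
        w_ij = exp(-(f_i - f_j)^T G^{-1} (f_i - f_j)).
    Choosing G reshapes which feature differences the graph filter
    treats as large (weak edge) or small (strong edge)."""
    d = np.asarray(f_i, float) - np.asarray(f_j, float)
    return float(np.exp(-d @ G_inv @ d))
```

With G_inv = I this reduces to an ordinary Gaussian weight on the Euclidean feature distance; an anisotropic G stretches or shrinks particular feature directions.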
Global Image Denoising
, 2013
Abstract
Most existing state-of-the-art image denoising algorithms are based on exploiting similarity between a relatively modest number of patches. These patch-based methods are strictly dependent on patch matching, and their performance is limited by how reliably sufficiently similar patches can be found. As the number of patches grows, a point of diminishing returns is reached where the performance improvement due to more patches is offset by the lower likelihood of finding sufficiently close matches. The net effect is that while patch-based methods such as BM3D are excellent overall, they are ultimately limited in how well they can do on (larger) images with increasing complexity. In this work, we address these shortcomings by developing a paradigm for truly global filtering where each pixel is estimated from all pixels in the image. Our objectives in this paper are twofold. First, we give a statistical analysis of our proposed global filter, based on a spectral decomposition of its corresponding operator, and we study the effect of truncation of this spectral decomposition. Second, we derive an approximation to the spectral (principal) components using the Nyström extension. Using these, we demonstrate that this global filter can be implemented efficiently by sampling a fairly small percentage of the pixels in the image. Experiments illustrate that our strategy can effectively globalize any existing denoising filter to estimate each pixel using all pixels in the image, hence improving upon the best patch-based methods.
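The Nyström step can be sketched as follows: approximate the leading eigenpairs of the full n x n filter/kernel matrix from an m x m sampled block and the n x m cross-block. The variable names are assumptions, and the sketch ignores the normalization details a real filter would need:

```python
import numpy as np

def nystrom_components(K_nm, K_mm, k):
    """Nystrom extension: approximate the leading k eigenpairs of a
    large symmetric PSD kernel from a small sampled block.

    K_mm : m x m kernel among the m sampled pixels
    K_nm : n x m kernel between all n pixels and the m sampled ones
    Returns (vals, phi) with the full kernel approximated by
    phi @ diag(vals) @ phi.T.
    """
    vals, vecs = np.linalg.eigh(K_mm)              # ascending order
    vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]  # top-k pairs
    phi = K_nm @ vecs / vals                       # extend eigenvectors to all n pixels
    return vals, phi
```

When the kernel is exactly rank k and the sampled pixels span it, the reconstruction is exact; in general it is the low-rank approximation that makes the global filter affordable.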
Geometric Sparsity in High Dimension
, 2012
Abstract
The final copy of this thesis has been examined by the signatories, and we find that both the content and the form meet acceptable presentation standards of scholarly work in the above-mentioned discipline. Kaslovsky, Daniel N. (Ph.D., Applied Mathematics)