Results 1–10 of 21
Is Denoising Dead?
, 2010
Abstract

Cited by 49 (13 self)
Image denoising has been a well studied problem in the field of image processing. Yet researchers continue to focus attention on it to better the current state-of-the-art. Recently proposed methods take different approaches to the problem and yet their denoising performances are comparable. A pertinent question then to ask is whether there is a theoretical limit to denoising performance and, more importantly, are we there yet? As camera manufacturers continue to pack increasing numbers of pixels per unit area, an increase in noise sensitivity manifests itself in the form of a noisier image. We study the performance bounds for the image denoising problem. Our work in this paper estimates a lower bound on the mean squared error of the denoised result and compares the performance of current state-of-the-art denoising methods with this bound. We show that despite the phenomenal recent progress in the quality of denoising algorithms, some room for improvement still remains for a wide class of general images, and at certain signal-to-noise levels. Therefore, image denoising is not dead—yet.
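The bound-versus-denoiser comparison the abstract describes can be sketched numerically. A minimal illustration, assuming nothing from the paper itself: measure the MSE of a noisy signal (which is ≈ σ²) and of a simple moving-average denoiser against a known clean signal; the paper's contribution is a principled lower bound against which MSEs like these would be compared.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1
n = 1000
clean = np.sin(np.linspace(0, 4 * np.pi, n))        # smooth test signal
noisy = clean + sigma * rng.standard_normal(n)

# A 5-tap moving-average filter as a stand-in denoiser (not one of the
# state-of-the-art methods the paper benchmarks).
denoised = np.convolve(noisy, np.ones(5) / 5, mode="same")

mse_noisy = np.mean((noisy - clean) ** 2)        # ≈ sigma**2 = 0.01
mse_denoised = np.mean((denoised - clean) ** 2)  # below sigma**2 on smooth signals
```

On a smooth signal even this naive filter cuts the MSE well below σ²; the paper's question is how much further any method can go.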
Coherence-Based Performance Guarantees for Estimating a Sparse Vector Under Random Noise
Abstract

Cited by 43 (15 self)
We consider the problem of estimating a deterministic sparse vector x0 from underdetermined measurements Ax0 + w, where w represents white Gaussian noise and A is a given deterministic dictionary. We analyze the performance of three sparse estimation algorithms: basis pursuit denoising (BPDN), orthogonal matching pursuit (OMP), and thresholding. These algorithms are shown to achieve near-oracle performance with high probability, assuming that x0 is sufficiently sparse. Our results are non-asymptotic and are based only on the coherence of A, so that they are applicable to arbitrary dictionaries. Differences in the precise conditions required for the performance guarantees of each algorithm are manifested in the observed performance at high and low signal-to-noise ratios. This provides insight on the advantages and drawbacks of ℓ1 relaxation techniques such as BPDN as opposed to greedy approaches such as OMP and thresholding.
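Of the three algorithms analyzed, OMP is the easiest to sketch. A minimal NumPy version under the abstract's model y = Ax0 + w (the dictionary size, sparsity level, and noise level below are illustrative, not from the paper):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least-squares fit of y on the columns selected so far.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 128))
A /= np.linalg.norm(A, axis=0)          # unit-norm dictionary columns
x0 = np.zeros(128)
x0[[5, 40, 90]] = [1.0, -2.0, 1.5]      # 3-sparse vector
y = A @ x0 + 0.01 * rng.standard_normal(64)   # measurements Ax0 + w

x_hat = omp(A, y, k=3)                  # recovers the support at this SNR
```

At this high SNR the greedy selection recovers the true support; the paper's guarantees quantify when such recovery is near-oracle in terms of the coherence of A.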
Covariance estimation in decomposable Gaussian graphical models
 IEEE TRANSACTIONS ON SIGNAL PROCESSING
, 2010
Abstract

Cited by 12 (7 self)
Graphical models are a framework for representing and exploiting prior conditional independence structures within distributions using graphs. In the Gaussian case, these models are directly related to the sparsity of the inverse covariance (concentration) matrix and allow for improved covariance estimation with lower computational complexity. We consider concentration estimation with the mean-squared error (MSE) as the objective, in a special type of model known as decomposable. This model includes, for example, the well-known banded structure and other cases encountered in practice. Our first contribution is the derivation and analysis of the minimum variance unbiased estimator (MVUE) in decomposable graphical models. We provide a simple closed-form solution to the MVUE and compare it with the classical maximum likelihood estimator (MLE) in terms of performance and complexity. Next, we extend the celebrated Stein’s unbiased risk estimate (SURE) to graphical models. Using SURE, we prove that the MSE of the MVUE is always smaller than or equal to that of the biased MLE, and that the MVUE itself is dominated by other approaches. In addition, we propose the use of SURE as a constructive mechanism for deriving new covariance estimators. Similarly to the classical MLE, all of our proposed estimators have simple closed-form solutions but result in a significant reduction in MSE.
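The banded special case mentioned in the abstract is easy to verify numerically: for a Gaussian Markov chain (the simplest decomposable model, a path graph), the concentration matrix is tridiagonal. A small sketch with illustrative values, not taken from the paper:

```python
import numpy as np

# Covariance of a stationary Gaussian AR(1) / Markov chain: Sigma_ij = rho^|i-j|.
rho, p = 0.6, 6
idx = np.arange(p)
Sigma = rho ** np.abs(np.subtract.outer(idx, idx))

# Its concentration (inverse covariance) matrix is banded with bandwidth 1:
# entries more than one off the diagonal vanish, encoding the conditional
# independence structure of the chain.
K = np.linalg.inv(Sigma)
off_band = K[np.abs(np.subtract.outer(idx, idx)) > 1]
max_off = float(np.max(np.abs(off_band)))   # ~0 up to floating-point error
```

The paper's estimators exploit exactly this kind of known zero pattern in K to beat the unstructured MLE in MSE.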
Hidden relationships: Bayesian estimation with partial knowledge
 IEEE Transactions on Signal Processing
Abstract

Cited by 8 (4 self)
Abstract—We address the problem of Bayesian estimation where the statistical relation between the signal and measurements is only partially known. We propose modeling partial Bayesian knowledge by using an auxiliary random vector called an instrument. The statistical relations between the instrument and the signal and between the instrument and the measurements are known. However, the joint probability function of the signal and measurements is unknown. Two types of statistical relations are considered, corresponding to second-order moment and complete distribution function knowledge. We propose two approaches for estimation in partial knowledge scenarios. The first is based on replacing the orthogonality principle by an oblique counterpart and is proven to coincide with the method of instrumental variables from statistics, although developed in a different context. The second is based on a worst-case design strategy and is shown to be advantageous in many aspects. We provide a thorough analysis showing in which situations each of the methods is preferable and propose a nonparametric method for approximating the estimators from a set of examples. Finally, we demonstrate our approach in the context of enhancement of facial images that have undergone unknown degradation and image zooming. Index Terms—Bayesian estimation, instrumental variables, minimax regret, nonparametric regression, partial knowledge.
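Since the first approach is shown to coincide with the method of instrumental variables from statistics, the classical scalar IV estimator gives a feel for it. A hedged sketch with made-up variables, not the paper's facial-image setup: ordinary least squares is biased when the regressor is correlated with the noise, while the instrument, correlated with the regressor but not the noise, restores consistency.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
z = rng.standard_normal(n)      # instrument: independent of the noise below
u = rng.standard_normal(n)      # confounding noise
x = z + u                       # regressor, correlated with u
y = 2.0 * x + u                 # measurements; true coefficient is 2

beta_ols = (x @ y) / (x @ x)    # biased: picks up the x-u correlation (~2.5)
beta_iv = (z @ y) / (z @ x)     # instrumental variables: consistent (~2.0)
```

The oblique-projection estimator in the paper plays the same role for vector-valued Bayesian estimation with only second-order moment knowledge.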
Unbiased Estimation of a Sparse Vector in White Gaussian Noise
, 2010
Abstract

Cited by 7 (1 self)
We consider unbiased estimation of a sparse nonrandom vector corrupted by additive white Gaussian noise. We show that while there are infinitely many unbiased estimators for this problem, none of them has uniformly minimum variance. Therefore, we focus on locally minimum variance unbiased (LMVU) estimators. We derive simple closed-form lower and upper bounds on the variance of LMVU estimators or, equivalently, on the Barankin bound (BB). Our bounds allow an estimation of the threshold region separating the low-SNR and high-SNR regimes, and they indicate the asymptotic behavior of the BB at high SNR. We also develop numerical lower and upper bounds which are tighter than the closed-form bounds and thus characterize the BB more accurately. Numerical studies compare our characterization of the BB with established biased estimation schemes, and demonstrate that while unbiased estimators perform poorly at low SNR, they may perform better than biased estimators at high SNR. An interesting conclusion of our analysis is that the high-SNR behavior of the BB depends solely on the value of the smallest nonzero component of the sparse vector, and that this type of dependence is also exhibited by ...
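A quick Monte Carlo sketch of the low-SNR behavior noted in the abstract (the estimators and parameters here are illustrative, not the paper's): the trivial unbiased estimator x̂ = y has MSE equal to the noise variance at every SNR, while a biased soft-thresholding estimator does much better at low SNR, with its edge narrowing as the amplitude of the nonzero components grows.

```python
import numpy as np

rng = np.random.default_rng(3)

def mse_pair(amplitude, trials=20000, n=50, k=3):
    """Per-component MSE of the unbiased estimator y and of soft thresholding."""
    x = np.zeros(n)
    x[:k] = amplitude                       # sparse nonrandom vector
    w = rng.standard_normal((trials, n))    # unit-variance white Gaussian noise
    y = x + w
    soft = np.sign(y) * np.maximum(np.abs(y) - 1.0, 0.0)   # biased estimator
    return np.mean((y - x) ** 2), np.mean((soft - x) ** 2)

low_unb, low_soft = mse_pair(0.5)     # low SNR: thresholding wins decisively
high_unb, high_soft = mse_pair(20.0)  # high SNR: the bias starts to cost more
```

The unbiased MSE stays pinned at the noise variance, while the biased estimator's MSE rises with the amplitude; the paper's BB characterization quantifies how far the best unbiased estimator can close this gap.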
A lower bound on the Bayesian MSE based on the optimal bias function
 IEEE TRANSACTIONS ON INFORMATION THEORY
, 2009
Abstract

Cited by 6 (0 self)
A lower bound on the minimum mean-squared error (MSE) in a Bayesian estimation problem is proposed in this paper. This bound utilizes a well-known connection to the deterministic estimation setting. Using the prior distribution, the bias function which minimizes the Cramér–Rao bound can be determined, resulting in a lower bound on the Bayesian MSE. The bound is developed for the general case of a vector parameter with an arbitrary probability distribution, and is shown to be asymptotically tight in both the high and low signal-to-noise ratio (SNR) regimes. A numerical study demonstrates several cases in which the proposed technique is both simpler to compute and tighter than alternative methods.
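In the scalar Gaussian special case the tightness of such a bound can be checked by hand, since the Bayesian Cramér–Rao bound coincides with the exact MMSE there. A small sanity-check sketch with illustrative values, not from the paper:

```python
import numpy as np

# Scalar Gaussian model: y = x + w, x ~ N(0, sx2), w ~ N(0, s2).
# The MMSE estimator is linear shrinkage, and its MSE equals the
# Bayesian Cramér–Rao bound (1/s2 + 1/sx2)^(-1), so any valid lower
# bound on the Bayesian MSE must be tight in this case.
rng = np.random.default_rng(4)
sx2, s2, n = 4.0, 1.0, 200000
x = rng.normal(0.0, np.sqrt(sx2), n)
y = x + rng.normal(0.0, np.sqrt(s2), n)

x_hat = (sx2 / (sx2 + s2)) * y          # MMSE estimator
mse_mc = np.mean((x_hat - x) ** 2)      # Monte Carlo MSE
bound = 1.0 / (1.0 / s2 + 1.0 / sx2)    # = sx2*s2/(sx2+s2) = 0.8
```

The paper's optimal-bias construction generalizes this tightness to vector parameters with arbitrary priors, where the MMSE is no longer available in closed form.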
Performance Bounds on Image Denoising or “Is Denoising Dead?”
, 2009
Abstract
Image denoising has been a well studied problem in the field of image processing. Yet researchers continue to focus attention on it in order to better the current state of the art. Recently proposed methods take widely different approaches to the problem and yet their denoising performances are comparable. A pertinent question then to ask is whether there is a theoretical limit to denoising performance and more importantly, are we there yet? As camera manufacturers continue to pack increasing numbers of pixels per unit area, we study the performance limits of denoising algorithms that they rely on to reduce the effects of the resultant increase in noise sensitivity. Our work in this paper estimates a lower bound on the mean squared error of the denoised result and compares the performance of current state of the art denoising methods with this bound. We show that despite the phenomenal recent progress in the quality of denoising algorithms, much room for improvement still remains, particularly for difficult-to-denoise images, and at low signal-to-noise levels. Therefore, denoising is not dead – yet.
A Lower Bound on the Bayesian MSE Based on the Optimal Bias Function
"... A lower bound on the minimum meansquared error (MSE) in a Bayesian estimation problem is proposed in this paper. The bound is based on the approach of Young and Westerberg, in which a connection to the deterministic estimation setting is utilized. Using the prior distribution, the bias function whi ..."
Abstract
A lower bound on the minimum mean-squared error (MSE) in a Bayesian estimation problem is proposed in this paper. The bound is based on the approach of Young and Westerberg, in which a connection to the deterministic estimation setting is utilized. Using the prior distribution, the bias function which minimizes the Cramér–Rao bound can be determined, resulting in a lower bound on the Bayesian MSE. The bound is developed for the general case of a vector parameter with an arbitrary probability distribution, and is shown to be asymptotically tight in both the high and low signal-to-noise ratio regimes. A numerical study demonstrates several cases in which the proposed technique is both simpler to compute and tighter than alternative methods. Index Terms—Bayesian bounds, Bayesian estimation, minimum mean-squared error estimation, optimal bias, performance bounds.
Minimum Variance Estimation of a Sparse Vector Within the Linear Gaussian Model: An
"... Abstract — We consider minimum variance estimation within the sparse linear Gaussian model (SLGM). A sparse vector is to be estimated from a linearly transformed version embedded in Gaussian noise. Our analysis is based on the theory of reproducing kernel Hilbert spaces (RKHS). After a characterizat ..."
Abstract
Abstract — We consider minimum variance estimation within the sparse linear Gaussian model (SLGM). A sparse vector is to be estimated from a linearly transformed version embedded in Gaussian noise. Our analysis is based on the theory of reproducing kernel Hilbert spaces (RKHS). After a characterization of the RKHS associated with the SLGM, we derive a lower bound on the minimum variance achievable by estimators with a prescribed bias function, including the important special case of unbiased estimation. This bound is obtained via an orthogonal projection of the prescribed mean function onto a subspace of the RKHS associated with the SLGM. It provides an approximation to the minimum achievable variance (Barankin bound) that is tighter than any known bound. Our bound holds for an arbitrary system matrix, including the overdetermined and underdetermined cases. We specialize
Video-Based Tracking of Single Molecules Exhibiting Directed In-Frame Motion
Abstract
Abstract: Trajectories of individual molecules moving within complex environments such as cell cytoplasm and membranes or semiflexible polymer networks provide invaluable information on the organization and dynamics of these systems. However, when such trajectories are obtained from a sequence of microscopy images, they can be distorted due to the fact that the tracked molecule exhibits appreciable directed motion during the single-frame acquisition. We propose a new model of image formation for mobile molecules that takes the linear in-frame motion into account and develop an algorithm based on the maximum likelihood approach for retrieving the position and velocity of the molecules from single-frame data. The position and velocity information obtained from individual frames are further fed into a Kalman filter for inter-frame tracking of molecules that allows prediction of the trajectory of the molecule and further improves the precision of the position and velocity estimates. We evaluate the performance of our algorithm by calculations of the Cramér–Rao Lower Bound, simulations, and model experiments with a piezo stage. We demonstrate tracking of molecules moving as fast as 7 pixels/frame (12.6 µm/s) within a mean error of 0.42 pixel (37.43 nm). Key words: single molecule microscopy, single molecule tracking, maximum likelihood estimation, Kalman filtering
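The inter-frame stage described in the abstract is a standard constant-velocity Kalman filter fed with per-frame position estimates. A minimal 1-D sketch, with illustrative noise parameters rather than the paper's calibrated values:

```python
import numpy as np

dt = 1.0                                  # one frame per step
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition for [pos, vel]
H = np.array([[1.0, 0.0]])                # only position is measured
Q = 1e-4 * np.eye(2)                      # process noise covariance (assumed)
R = np.array([[0.25]])                    # measurement noise covariance (assumed)

def kalman_track(measurements):
    """Track [position, velocity] from a sequence of noisy position estimates."""
    x = np.array([measurements[0], 0.0])  # initial state guess
    P = np.eye(2)
    track = []
    for z in measurements:
        # Predict one frame ahead.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the per-frame position estimate.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        track.append(x.copy())
    return np.array(track)

rng = np.random.default_rng(5)
true_pos = 0.7 * np.arange(40)                    # constant 0.7 px/frame
meas = true_pos + 0.5 * rng.standard_normal(40)   # noisy per-frame detections
track = kalman_track(meas)
# track[:, 1] converges toward the true velocity of 0.7 px/frame.
```

In the paper the per-frame inputs also carry an in-frame velocity estimate from the maximum likelihood fit, which further constrains the filter; this sketch uses position-only measurements for brevity.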