Results 1–10 of 43
Monte-Carlo SURE: A black-box optimization of regularization parameters for general denoising algorithms
 IEEE Transactions on Image Processing
, 2008
Abstract

Cited by 49 (5 self)
We consider the problem of optimizing the parameters of a given denoising algorithm for restoration of a signal corrupted by white Gaussian noise. To achieve this, we propose to minimize Stein’s unbiased risk estimate (SURE), which provides a means of assessing the true mean-squared error (MSE) purely from the measured data, without need for any knowledge about the noise-free signal. Specifically, we present a novel Monte-Carlo technique which enables the user to calculate SURE for an arbitrary denoising algorithm characterized by some specific parameter setting. Our method is a black-box approach which solely uses the response of the denoising operator to additional input noise and does not ask for any information about its functional form. This, therefore, permits the use of SURE for optimization of a wide variety of denoising algorithms. We justify our claims by presenting experimental results for SURE-based optimization of a series of popular image-denoising algorithms such as total-variation denoising, wavelet soft-thresholding, and Wiener filtering/smoothing splines. In the process, we also compare the performance of these methods. We demonstrate numerically that SURE computed using the new approach accurately predicts the true MSE for all the considered algorithms. We also show that SURE uncovers the optimal values of the parameters in all cases.
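To make the black-box idea concrete, here is a minimal Monte-Carlo SURE sketch in Python/NumPy. The test signal, the 5-tap moving-average denoiser, and all parameter values are illustrative choices, not taken from the paper; only the randomized divergence probe and Stein's formula follow the method described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: piecewise-constant signal plus white Gaussian noise.
N, sigma = 4096, 0.1
x = np.repeat([0.0, 1.0, 0.3, 0.8], N // 4)      # clean signal (unknown in practice)
y = x + sigma * rng.normal(size=N)               # measured data

def denoise(z):
    """Black-box denoiser: 5-tap moving average (any denoiser could be used)."""
    return np.convolve(z, np.ones(5) / 5, mode="same")

def mc_sure(y, denoise, sigma, eps=1e-4):
    """Monte-Carlo SURE: probe the denoiser with one extra noise vector b to
    estimate its divergence, then plug it into Stein's unbiased risk formula."""
    f = denoise(y)
    b = rng.normal(size=y.size)
    div = b @ (denoise(y + eps * b) - f) / eps   # randomized divergence estimate
    n = y.size
    return np.sum((f - y) ** 2) / n - sigma**2 + 2 * sigma**2 * div / n

sure = mc_sure(y, denoise, sigma)                # computable from y alone
mse = np.mean((denoise(y) - x) ** 2)             # oracle MSE, for comparison only
```

Because SURE is an unbiased estimate of the MSE, `sure` and the oracle `mse` agree closely at this sample size; in practice one would sweep the denoiser's parameter (here, the window length) and keep the SURE-minimizing value.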
Non-ideal sampling and interpolation from noisy observations in shift-invariant spaces
 IEEE Trans. Signal Processing
, 2006
Abstract

Cited by 43 (22 self)
Abstract—Digital analysis and processing of signals inherently relies on the existence of methods for reconstructing a continuous-time signal from a sequence of corrupted discrete-time samples. In this paper, a general formulation of this problem is developed that treats the interpolation problem from ideal, noisy samples, and the deconvolution problem in which the signal is filtered prior to sampling, in a unified way. The signal reconstruction is performed in a shift-invariant subspace spanned by the integer shifts of a generating function, where the expansion coefficients are obtained by processing the noisy samples with a digital correction filter. Several alternative approaches to designing the correction filter are suggested, which differ in their assumptions on the signal and noise. The classical deconvolution solutions (least-squares, Tikhonov, and Wiener) are adapted to our particular situation, and new methods that are optimal in a minimax sense are also proposed. The solutions often have a similar structure and can be computed simply and efficiently by digital filtering. Some concrete examples of reconstruction filters are presented, as well as simple guidelines for selecting the free parameters (e.g., regularization) of the various algorithms. Index Terms—Deconvolution, interpolation, minimax reconstruction, sampling.
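As a toy illustration of the digital correction filter, the following sketch (assuming a periodic setting and a cubic B-spline generating function; all names and values are illustrative, not from the paper) inverts the sampled-generator filter exactly when λ = 0 and damps the inversion, Tikhonov-style, when λ > 0.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
c_true = rng.normal(size=N)                      # spline expansion coefficients

# A cubic B-spline sampled at the integers gives the filter (1/6, 4/6, 1/6),
# so ideal sampling of f(t) = sum_k c[k] beta3(t - k) yields y = b * c
# (circular convolution in this periodic toy setting).
b = np.zeros(N)
b[0], b[1], b[-1] = 4 / 6, 1 / 6, 1 / 6
y = np.real(np.fft.ifft(np.fft.fft(b) * np.fft.fft(c_true)))

def correction_filter(y, b, lam):
    """Regularized digital correction filter C = conj(B)/(|B|^2 + lam) applied
    to the samples: lam = 0 is the exact least-squares inverse; lam > 0 trades
    accuracy for robustness when the samples are noisy."""
    B, Y = np.fft.fft(b), np.fft.fft(y)
    return np.real(np.fft.ifft(np.conj(B) * Y / (np.abs(B) ** 2 + lam)))

c_rec = correction_filter(y, b, lam=0.0)         # noiseless case: exact recovery
```

The cubic B-spline filter never vanishes on the unit circle, so the λ = 0 inverse is well defined; with noisy samples one would pick λ > 0 along the lines discussed in the paper.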
Infimal convolution regularizations with discrete ℓ1-type functionals
 Comm. Math. Sci
, 2011
Abstract

Cited by 23 (0 self)
Dedicated to Prof. Dr. Lothar Berg on the occasion of his 80th birthday
Stochastic Models for Sparse and Piecewise-Smooth Signals
Abstract

Cited by 22 (17 self)
Abstract—We introduce an extended family of continuous-domain stochastic models for sparse, piecewise-smooth signals. These are specified as solutions of stochastic differential equations, or, equivalently, in terms of a suitable innovation model; the latter is conceptually analogous to the classical interpretation of a Gaussian stationary process as filtered white noise. The two specific features of our approach are 1) signal generation is driven by a random stream of Dirac impulses (Poisson noise) instead of Gaussian white noise, and 2) the class of admissible whitening operators is considerably larger than what is allowed in the conventional theory of stationary processes. We provide a complete characterization of these finite-rate-of-innovation signals within Gelfand’s framework of generalized stochastic processes. We then focus on the class of scale-invariant whitening operators, which correspond to unstable systems. We show that these can be solved by introducing proper boundary conditions, which leads to the specification of random, spline-type signals that are piecewise-smooth. These processes are the Poisson counterpart of fractional Brownian motion; they are nonstationary and have the same type of spectral signature. We prove that the generalized Poisson processes have a sparse representation in a wavelet-like basis subject to some mild matching condition. We also present a limit example of a sparse process that yields a MAP signal estimator equivalent to the popular TV-denoising algorithm. Index Terms—Fractals, innovation models, Poisson processes, sparsity, splines, stochastic differential equations, stochastic processes.
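A minimal simulation of the innovation idea for the simplest whitening operator, the first derivative D: integrating a sparse stream of random impulses yields a piecewise-constant, sparse-derivative realization. The grid size, impulse rate, and Gaussian amplitude law below are illustrative assumptions, not the paper's general construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Discretized innovation: a sparse stream of weighted Dirac impulses
# (compound Poisson noise) on a grid of N points.
N, rate = 1000, 0.01
impulses = np.zeros(N)
locations = np.flatnonzero(rng.random(N) < rate) # Bernoulli thinning ~ Poisson
impulses[locations] = rng.normal(size=locations.size)

# Inverting the whitening operator D (i.e., integrating the innovation)
# produces a piecewise-constant signal whose derivative is sparse.
signal = np.cumsum(impulses)
jumps = np.count_nonzero(np.diff(signal))        # number of discontinuities
```

Replacing the integrator by other (e.g., higher-order or fractional) inverse operators, as in the paper, produces smoother spline-type paths driven by the same sparse innovation.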
Self-similarity: Part II – Optimal estimation of fractal-like processes
 IEEE Trans. Signal Processing
, 2007
Abstract

Cited by 18 (17 self)
In a companion paper (see Self-Similarity: Part I—Splines and Operators), we characterized the class of scale-invariant convolution operators: the generalized fractional derivatives of order γ. We used these operators to specify regularization functionals for a series of Tikhonov-like least-squares data-fitting problems and proved that the general solution is a fractional spline of twice the order. We investigated the deterministic properties of these smoothing splines and proposed a fast Fourier transform (FFT)-based implementation. Here, we present an alternative stochastic formulation to further justify these fractional spline estimators. As suggested by the title, the relevant processes are those that are statistically self-similar; that is, fractional Brownian motion (fBm) and its higher-order extensions. To overcome the technical difficulties due to the nonstationary character of fBm, we adopt a distributional formulation due to Gel’fand. This allows us to rigorously specify an innovation model for these fractal processes, which rests on the property that they can be whitened by suitable fractional differentiation. Using the characteristic form of the fBm, we then derive the conditional probability density function (PDF) p(B_H(t) | Y), where Y = {B_H(k) + n[k]} are the noisy samples of the fBm B_H(t) with Hurst exponent H. We find that the conditional mean is a fractional spline of degree 2H, which proves that this class of functions is indeed optimal for the estimation of fractal-like processes. The result also yields the optimal [minimum mean-square error (MMSE)] parameters for the smoothing spline estimator, as well as the connection with kriging and Wiener filtering.
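The paper's estimator is a fractional smoothing spline; as a loose, discrete, periodic stand-in one can apply the per-frequency Wiener (MMSE) gain S/(S + σ²) to noisy samples of a process synthesized with a power-law spectrum S(ω) ∝ |ω|^-(2H+1). The periodic setting, the value H = 0.7, and the spectral-shaping synthesis below are illustrative simplifications, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)
N, H, sigma = 1024, 0.7, 0.5

# Synthesize a zero-mean process with power-law spectrum by spectral shaping.
w = np.fft.fftfreq(N) * 2 * np.pi
amp = np.zeros(N)
amp[1:] = np.abs(w[1:]) ** (-(2 * H + 1) / 2)    # |w|^-(H + 1/2); DC set to 0
x = np.real(np.fft.ifft(amp * np.fft.fft(rng.normal(size=N))))
y = x + sigma * rng.normal(size=N)               # noisy samples

# MMSE (Wiener) estimate, one shrinkage factor per DFT bin:
# gain_k = S_k / (S_k + sigma^2), with S_k the per-bin signal variance profile.
S = amp**2
x_hat = np.real(np.fft.ifft(S / (S + sigma**2) * np.fft.fft(y)))

mse_noisy = np.mean((y - x) ** 2)
mse_wiener = np.mean((x_hat - x) ** 2)
```

The shrinkage is strongest at high frequencies, where the power-law spectrum is smallest, which is qualitatively what the fractional smoothing spline does.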
Non-Ideal Sampling and Regularization Theory
, 2008
Abstract

Cited by 15 (3 self)
Shannon’s sampling theory and its variants provide effective solutions to the problem of reconstructing a signal from its samples in some “shift-invariant” space, which may or may not be bandlimited. In this paper, we present some further justification for this type of representation, while addressing the issue of the specification of the best reconstruction space. We consider a realistic setting where a multidimensional signal is prefiltered prior to sampling, and the samples are corrupted by additive noise. We adopt a variational approach to the reconstruction problem and minimize a data-fidelity term subject to a Tikhonov-like (continuous-domain) L2-regularization to obtain the continuous-space solution. We present theoretical justification for the minimization of this cost functional and show that the globally minimal continuous-space solution belongs to a shift-invariant space generated by a function (generalized B-spline) that is generally not bandlimited. When the sampling is ideal, we recover some of the classical smoothing spline estimators. The optimal reconstruction space is characterized by a condition that links the generating function to the regularization operator and implies the existence of a B-spline-like basis. To make the scheme practical, we specify the generating functions corresponding to the most popular families of regularization operators (derivatives, iterated Laplacian), as well as a new, generalized one that leads to a new brand of Matérn splines. We conclude the paper by proposing a stochastic interpretation of the reconstruction algorithm and establishing an equivalence with the minimax and minimum mean-square error (MMSE/Wiener) solutions of the generalized sampling problem.
Splines in higher order TV regularization
 International Journal of Computer Vision
Abstract

Cited by 15 (4 self)
Splines play an important role as solutions of various interpolation and approximation problems that minimize special functionals in some smoothness spaces. In this paper, we show in a strictly discrete setting that splines of degree m − 1 also solve a minimization problem with quadratic data term and mth-order total variation (TV) regularization term. In contrast to problems with quadratic regularization terms involving mth-order derivatives, the spline knots are not known in advance but depend on the input data and the regularization parameter λ. More precisely, the spline knots are determined by the contact points of the mth discrete antiderivative of the solution with the tube of width 2λ around the mth discrete antiderivative of the input data. We point out that the dual formulation of our minimization problem can be considered as a support vector regression problem in the discrete counterpart of the Sobolev space W^m_{2,0}. From this point of view, the …
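For m = 1, the discrete problem above is classical 1-D TV denoising, and a compact solver is projected gradient on the dual; the clipping of the dual variable to [−λ, λ] corresponds to the tube constraint on the antiderivative mentioned above. All parameter values in this sketch are illustrative.

```python
import numpy as np

def tv_denoise_1d(y, lam, iters=5000, step=0.25):
    """Minimize 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]| by projected
    gradient on the dual variable z (one entry per first difference), with
    x = y - D^T z and each z[i] clipped to [-lam, lam]."""
    z = np.zeros(len(y) - 1)
    for _ in range(iters):
        dtz = np.concatenate(([0.0], z)) - np.concatenate((z, [0.0]))  # D^T z
        x = y - dtz
        z = np.clip(z + step * np.diff(x), -lam, lam)  # step <= 1/||D||^2 = 1/4
    return y - (np.concatenate(([0.0], z)) - np.concatenate((z, [0.0])))

rng = np.random.default_rng(6)
y = rng.normal(size=20)
x_flat = tv_denoise_1d(y, lam=100.0)  # huge lam: solution collapses to mean(y)
x_mid = tv_denoise_1d(y, lam=0.5)     # moderate lam: total variation shrinks
```

The solution is piecewise constant (a spline of degree 0, matching the m − 1 statement for m = 1), with knots selected by the data and λ rather than fixed in advance.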
Performance Bounds and Design Criteria for Estimating Finite Rate of Innovation Signals
Abstract

Cited by 15 (6 self)
Abstract—In this paper, we consider the problem of estimating finite rate of innovation (FRI) signals from noisy measurements, and specifically analyze the interaction between FRI techniques and the underlying sampling methods. We first obtain a fundamental limit on the estimation accuracy attainable regardless of the sampling method. Next, we provide a bound on the performance achievable using any specific sampling approach. Essential differences between the noisy and noise-free cases arise from this analysis. In particular, we identify settings in which noise-free recovery techniques deteriorate substantially under slight noise levels, thus quantifying the numerical instability inherent in such methods. This instability, which is only present in some families of FRI signals, is shown to be related to a specific type of structure, which can be characterized by viewing the signal model as a union of subspaces. Finally, we develop a methodology for choosing the optimal sampling kernels for linear reconstruction, based on a generalization of the Karhunen–Loève transform. The results are illustrated for several types of time-delay estimation problems. Index Terms—Cramér–Rao bound (CRB), finite rate of innovation (FRI), sampling, time-delay estimation, union of subspaces.
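The noise-free recovery techniques whose instability the paper quantifies are of the annihilating-filter (Prony) type. A toy noiseless instance with two exponential components looks as follows; the signal model and all values are illustrative.

```python
import numpy as np

# Noise-free FRI toy model: x[n] = a1*u1^n + a2*u2^n (two innovations).
u_true = np.array([0.4, 0.9])
a = np.array([2.0, 1.0])
n = np.arange(8)
x = a[0] * u_true[0] ** n + a[1] * u_true[1] ** n

# The annihilating filter h = (1, h1, h2) satisfies
#   x[n] + h1*x[n-1] + h2*x[n-2] = 0  for n >= 2,
# so (h1, h2) solve a small linear system built from the samples.
A = np.column_stack((x[1:-1], x[:-2]))
h = np.linalg.lstsq(A, -x[2:], rcond=None)[0]

# The u_k are recovered as the roots of z^2 + h1*z + h2.
u_est = np.sort(np.roots([1.0, h[0], h[1]]).real)
```

Recovery is exact here, but the root-finding step is exactly where slight noise can be amplified dramatically for ill-conditioned configurations, which is the instability the paper's bounds characterize.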
Beyond Bandlimited Sampling: Nonlinearities, Smoothness and Sparsity
, 2008
Abstract

Cited by 11 (10 self)
Digital applications have developed rapidly over the last few decades. Since many sources of information are of analog or continuous-time nature, discrete-time signal processing (DSP) inherently relies on sampling a continuous-time signal to obtain a discrete-time representation. Consequently, sampling theories lie at the heart of signal processing devices and communication systems. Examples include sampling-rate conversion for software radio [1] and between audio formats [2], biomedical imaging [3], lens distortion correction and the formation of image mosaics [4], and super-resolution of image sequences [5]. To accommodate high operating rates while retaining low computational cost, efficient analog-to-digital (ADC) and digital-to-analog (DAC) converters must be developed. Many of the limitations encountered in current converters are due to a traditional assumption that the sampling stage needs to acquire the data at the Shannon–Nyquist rate, corresponding to twice the signal bandwidth [6], [7], [8]. To avoid aliasing, a sharp lowpass filter (LPF) must be implemented prior to sampling. The reconstructed signal is also a bandlimited function, generated by integer shifts of the sinc interpolation kernel. A major drawback of this paradigm is that many natural signals are better represented in alternative bases other than the Fourier basis [9], [10], [11], or possess further structure in the Fourier domain. In addition, ideal pointwise sampling, as assumed by the Shannon theorem, cannot be implemented. More practical ADCs introduce …
Activelets: Wavelets for sparse representation of hemodynamic responses
 Signal Processing
, 2011
Abstract

Cited by 8 (4 self)
We propose a new framework to extract the activity-related component in the BOLD functional magnetic resonance imaging (fMRI) signal. As opposed to traditional fMRI signal analysis techniques, we do not impose any prior knowledge of the event timing. Instead, our basic assumption is that the activation pattern is a sequence of short and sparsely distributed stimuli, as is the case in slow event-related fMRI. We introduce new wavelet bases, termed “activelets”, which sparsify the activity-related BOLD signal. These wavelets mimic the behavior of the differential operator underlying the hemodynamic system. To recover the sparse representation, we deploy a sparse-solution search algorithm. The feasibility of the method is evaluated using both synthetic and experimental fMRI data. The importance of the activelet basis and the nonlinear sparse recovery algorithm is demonstrated by comparison against classical B-spline wavelets and linear regularization, respectively.
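A generic sketch of the sparse-recovery step: ISTA applied to deconvolution of a sparse event stream through a smooth kernel. The Gaussian kernel (a stand-in for a hemodynamic response), λ, and the spike train are all illustrative assumptions; this is not the paper's actual algorithm or wavelet basis.

```python
import numpy as np

N = 64

# Toy blur standing in for the hemodynamic response: unit-sum Gaussian kernel,
# applied as centered circular convolution via the FFT.
k = np.exp(-0.5 * ((np.arange(N) - N // 2) / 2.0) ** 2)
k /= k.sum()
K = np.fft.fft(np.roll(k, -(N // 2)))

def conv(v):
    return np.real(np.fft.ifft(K * np.fft.fft(v)))

# Sparse "activation" stream: two well-separated events of opposite sign.
x = np.zeros(N)
x[10], x[40] = 1.5, -1.0
y = conv(x)                                      # noiseless measurements

# ISTA for min 0.5*||y - conv(x)||^2 + lam*||x||_1: gradient step, then the
# l1 prox (soft threshold).  Step size 1 is valid because max|K| = 1 for a
# unit-sum nonnegative kernel, and the symmetric kernel makes conv self-adjoint.
lam = 0.01
xh = np.zeros(N)
for _ in range(500):
    g = xh + conv(y - conv(xh))
    xh = np.sign(g) * np.maximum(np.abs(g) - lam, 0.0)
```

The recovered `xh` concentrates around the two true event locations with the correct signs, which is the behavior the activelet framework exploits with a basis matched to the hemodynamic operator.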