Results 1–10 of 208
Robust Anisotropic Diffusion
, 1998
"... Relations between anisotropic diffusion and robust statistics are described in this paper. Specifically, we show that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image. The "edgestopping" function in the ani ..."
Abstract

Cited by 361 (17 self)
 Add to MetaCart
(Show Context)
Relations between anisotropic diffusion and robust statistics are described in this paper. Specifically, we show that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image. The "edge-stopping" function in the anisotropic diffusion equation is closely related to the error norm and influence function in the robust estimation framework. This connection leads to a new "edge-stopping" function based on Tukey's biweight robust estimator that preserves sharper boundaries than previous formulations and improves the automatic stopping of the diffusion. The robust statistical interpretation also provides a means for detecting the boundaries (edges) between the piecewise smooth regions in an image that has been smoothed with anisotropic diffusion. Additionally, we derive a relationship between anisotropic diffusion and regularization with line processes. Adding constraints on the spatial organization of the ...
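As a rough illustration of how such an edge-stopping function is used, here is a minimal NumPy sketch of Perona-Malik-style diffusion with a Tukey biweight stopping function. The parameter values, the biweight normalization, and the periodic border handling are illustrative choices, not the paper's:

```python
import numpy as np

def tukey_g(x, sigma):
    """Tukey biweight edge-stopping function: g(x) = [1 - (x/sigma)^2]^2
    for |x| <= sigma and 0 outside, so diffusion stops at strong edges."""
    g = np.zeros_like(x)
    m = np.abs(x) <= sigma
    g[m] = (1.0 - (x[m] / sigma) ** 2) ** 2
    return g

def anisotropic_diffusion(u, n_iter=20, sigma=0.5, lam=0.2):
    """Explicit diffusion with the four nearest neighbours; np.roll gives a
    periodic border, chosen here only for brevity."""
    u = u.astype(float).copy()
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += lam * (tukey_g(dn, sigma) * dn + tukey_g(ds, sigma) * ds
                    + tukey_g(de, sigma) * de + tukey_g(dw, sigma) * dw)
    return u
```

On a noisy step image, differences inside flat regions stay below sigma and get smoothed, while the unit-height edge exceeds sigma and is left untouched.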
Iterative Methods For Total Variation Denoising
 SIAM J. SCI. COMPUT
"... Total Variation (TV) methods are very effective for recovering "blocky", possibly discontinuous, images from noisy data. A fixed point algorithm for minimizing a TVpenalized least squares functional is presented and compared with existing minimization schemes. A variant of the cellcenter ..."
Abstract

Cited by 341 (7 self)
 Add to MetaCart
Total Variation (TV) methods are very effective for recovering "blocky", possibly discontinuous, images from noisy data. A fixed point algorithm for minimizing a TV-penalized least squares functional is presented and compared with existing minimization schemes. A variant of the cell-centered finite difference multigrid method of Ewing and Shen is implemented for solving the (large, sparse) linear subproblems. Numerical results are presented for one- and two-dimensional examples; in particular, the algorithm is applied to actual data obtained from confocal microscopy.
Deterministic edge-preserving regularization in computed imaging
 IEEE Trans. Image Processing
, 1997
"... Abstract—Many image processing problems are ill posed and must be regularized. Usually, a roughness penalty is imposed on the solution. The difficulty is to avoid the smoothing of edges, which are very important attributes of the image. In this paper, we first give conditions for the design of such ..."
Abstract

Cited by 311 (27 self)
 Add to MetaCart
Many image processing problems are ill posed and must be regularized. Usually, a roughness penalty is imposed on the solution. The difficulty is to avoid the smoothing of edges, which are very important attributes of the image. In this paper, we first give conditions for the design of such an edge-preserving regularization. Under these conditions, we show that it is possible to introduce an auxiliary variable whose role is twofold. First, it marks the discontinuities and ensures their preservation from smoothing. Second, it makes the criterion half-quadratic. The optimization is then easier. We propose a deterministic strategy, based on alternate minimizations on the image and the auxiliary variable. This leads to the definition of an original reconstruction algorithm, called ARTUR. Some theoretical properties of ARTUR are discussed. Experimental results illustrate the behavior of the algorithm. These results are shown in the field of tomography, but this method can be applied in a large number of applications in image processing.
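The alternating structure can be sketched in 1-D as follows. This is not ARTUR itself: a Geman-McClure potential stands in for the paper's edge-preserving regularizer, and all parameters are illustrative:

```python
import numpy as np

def half_quadratic_1d(y, lam=1.0, n_iter=30):
    """Toy half-quadratic alternation: an auxiliary variable b in (0, 1]
    marks discontinuities (b ~ 0 at edges) and turns each image update
    into a weighted least-squares solve. Uses the Geman-McClure potential
    phi(t) = t^2 / (1 + t^2) as an illustrative choice."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # forward differences, (n-1) x n
    u = y.astype(float).copy()
    for _ in range(n_iter):
        t = D @ u
        b = 1.0 / (1.0 + t ** 2) ** 2       # b-step: phi'(t) / (2t)
        A = np.eye(n) + lam * D.T @ (b[:, None] * D)
        u = np.linalg.solve(A, y)           # u-step: half-quadratic solve
    return u
```

Small differences keep b near 1 and are smoothed; a large jump drives b toward 0, so the discontinuity is preserved from smoothing.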
A new alternating minimization algorithm for total variation image reconstruction
 SIAM J. IMAGING SCI
, 2008
"... We propose, analyze and test an alternating minimization algorithm for recovering images from blurry and noisy observations with total variation (TV) regularization. This algorithm arises from a new halfquadratic model applicable to not only the anisotropic but also isotropic forms of total variati ..."
Abstract

Cited by 224 (26 self)
 Add to MetaCart
We propose, analyze and test an alternating minimization algorithm for recovering images from blurry and noisy observations with total variation (TV) regularization. This algorithm arises from a new half-quadratic model applicable to not only the anisotropic but also the isotropic forms of total variation discretizations. The per-iteration computational complexity of the algorithm is three fast Fourier transforms (FFTs). We establish strong convergence properties for the algorithm, including finite convergence for some variables and relatively fast exponential (or q-linear in optimization terminology) convergence for the others. Furthermore, we propose a continuation scheme to accelerate the practical convergence of the algorithm. Extensive numerical results show that our algorithm performs favorably in comparison to several state-of-the-art algorithms. In particular, it runs orders of magnitude faster than the Lagged Diffusivity algorithm for total-variation-based deblurring. Some extensions of our algorithm are also discussed.
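The FFT-based image subproblem at the heart of such schemes can be sketched as follows, assuming periodic boundary conditions. Variable names and the setup are illustrative, not taken from the paper:

```python
import numpy as np

def solve_quadratic_fft(y, kernel, w1, w2, beta):
    """Solve the quadratic u-subproblem of an FTVd-style alternation,
        min_u ||k * u - y||^2 + beta * (||Dx u - w1||^2 + ||Dy u - w2||^2),
    diagonally in the Fourier domain under periodic boundary conditions.
    Each call costs only a few FFTs."""
    n1, n2 = y.shape
    # Transfer functions of the blur kernel and the difference operators.
    K = np.fft.fft2(kernel, s=(n1, n2))
    dx = np.zeros((n1, n2)); dx[0, 0] = -1.0; dx[0, 1] = 1.0
    dy = np.zeros((n1, n2)); dy[0, 0] = -1.0; dy[1, 0] = 1.0
    Dx, Dy = np.fft.fft2(dx), np.fft.fft2(dy)
    rhs = (np.conj(K) * np.fft.fft2(y)
           + beta * (np.conj(Dx) * np.fft.fft2(w1)
                     + np.conj(Dy) * np.fft.fft2(w2)))
    denom = np.abs(K) ** 2 + beta * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    return np.real(np.fft.ifft2(rhs / denom))
```

In the full algorithm the auxiliary variables w1, w2 come from a cheap per-pixel shrinkage step; here they are just inputs to the quadratic solve.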
A Variational Method In Image Recovery
 SIAM J. Numer. Anal
, 1997
"... This paper is concerned with a classical denoising and deblurring problem in image recovery. Our approach is based on a variational method. By using the LegendreFenchel transform, we show how the nonquadratic criterion to be minimized can be split into a sequence of halfquadratic problems easier t ..."
Abstract

Cited by 127 (23 self)
 Add to MetaCart
(Show Context)
This paper is concerned with a classical denoising and deblurring problem in image recovery. Our approach is based on a variational method. By using the Legendre-Fenchel transform, we show how the non-quadratic criterion to be minimized can be split into a sequence of half-quadratic problems that are easier to solve numerically. First we prove an existence and uniqueness result, then we describe the algorithm for computing the solution and give a proof of convergence. Finally, we present some experimental results for synthetic and real images.
Fast image deconvolution using hyper-Laplacian priors, supplementary material
, 2009
"... The heavytailed distribution of gradients in natural scenes have proven effective priors for a range of problems such as denoising, deblurring and superresolution. These distributions are well modeled by a hyperLaplacian p(x) ∝ e−kxα), typically with 0.5 ≤ α ≤ 0.8. However, the use of sparse ..."
Abstract

Cited by 109 (2 self)
 Add to MetaCart
(Show Context)
The heavy-tailed distributions of gradients in natural scenes have proven effective priors for a range of problems such as denoising, deblurring and super-resolution. These distributions are well modeled by a hyper-Laplacian, p(x) ∝ e^(−k|x|^α), typically with 0.5 ≤ α ≤ 0.8. However, the use of sparse distributions makes the problem non-convex and impractically slow to solve for multi-megapixel images. In this paper we describe a deconvolution approach that is several orders of magnitude faster than existing techniques that use hyper-Laplacian priors. We adopt an alternating minimization scheme where one of the two phases is a non-convex problem that is separable over pixels. This per-pixel subproblem may be solved with a lookup table (LUT). Alternatively, for two specific values of α, 1/2 and 2/3, an analytic solution can be found by finding the roots of a cubic and quartic polynomial, respectively. Our approach (using either LUTs or analytic formulae) is able to deconvolve a 1 megapixel image in less than ∼3 seconds, achieving comparable quality to existing methods such as iteratively reweighted least squares (IRLS) that take ∼20 minutes. Furthermore, our method is quite general and can easily be extended to related image processing problems, beyond the deconvolution application demonstrated.
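A toy version of the LUT route for the per-pixel subproblem (not the paper's implementation; the subproblem weighting, parameters and grids here are illustrative):

```python
import numpy as np

def build_lut(alpha=0.5, beta=10.0, v_max=4.0, n_v=801, n_x=2401):
    """Tabulate the scalar subproblem min_x |x|^alpha + (beta/2)(x - v)^2
    by brute force over a grid of x, for a grid of v >= 0 (the solution
    is odd in v, so only v >= 0 is stored)."""
    v = np.linspace(0.0, v_max, n_v)
    x = np.linspace(-1.0, v_max + 1.0, n_x)   # grid includes x = 0 exactly
    cost = (np.abs(x[None, :]) ** alpha
            + 0.5 * beta * (x[None, :] - v[:, None]) ** 2)
    return v, x[np.argmin(cost, axis=1)]

def solve_pixels(v, lut_v, lut_x):
    """Apply the LUT to an array of per-pixel values v via odd extension
    and linear interpolation."""
    return np.sign(v) * np.interp(np.abs(v), lut_v, lut_x)
```

The non-convex sparse penalty produces a thresholding behaviour: values of v below a threshold map to exactly zero, and large values are only slightly shrunk.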
MINIMIZERS OF COST-FUNCTIONS INVOLVING NONSMOOTH DATA-FIDELITY TERMS. APPLICATION TO THE PROCESSING OF OUTLIERS
, 2002
"... We present a theoretical study of the recovery of an unknown vector x ∈ Rp (such as a signal or an image) from noisy data y ∈ Rq by minimizing with respect to x a regularized costfunction F(x, y) = Ψ(x, y) + αΦ(x), where Ψ is a datafidelity term, Φ is a smooth regularization term, and α> 0 i ..."
Abstract

Cited by 105 (19 self)
 Add to MetaCart
We present a theoretical study of the recovery of an unknown vector x ∈ ℝ^p (such as a signal or an image) from noisy data y ∈ ℝ^q by minimizing with respect to x a regularized cost-function F(x, y) = Ψ(x, y) + αΦ(x), where Ψ is a data-fidelity term, Φ is a smooth regularization term, and α > 0 is a parameter. Typically, Ψ(x, y) = ‖Ax − y‖², where A is a linear operator. The data-fidelity terms Ψ involved in regularized cost-functions are generally smooth functions; only a few papers make an exception to this, and they consider restricted situations. Nonsmooth data-fidelity terms are avoided in image processing. In spite of this, we consider both smooth and nonsmooth data-fidelity terms. Our goal is to capture essential features exhibited by the local minimizers of regularized cost-functions in relation to the smoothness of the data-fidelity term. In order to fix the context of our study, we consider Ψ(x, y) = Σ_i ψ(a_i^T x − y_i), where the a_i^T are the rows of A and ψ is C^m on ℝ \ {0}. We show that if ψ′(0−) < ψ′(0+), then typical data y give rise to local minimizers x̂ of F(·, y) which fit exactly a certain number of the data entries: there is a possibly large set ĥ of indexes such that a_i^T x̂ = y_i for every i ∈ ĥ. In contrast, if ψ is ...
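The exact-fit phenomenon is easy to see in a scalar toy problem with ψ(t) = |t| and a quadratic regularizer pulling the estimate toward a constant c. This example is illustrative and not taken from the paper:

```python
import numpy as np

def denoise_l1_fidelity(y, c, alpha):
    """Minimize |x_i - y_i| + alpha * (x_i - c)^2 independently per entry.
    Closed form: x* = c + clip(y - c, -r, r) with r = 1/(2*alpha), so every
    entry with |y_i - c| <= r is fitted EXACTLY (x_i = y_i), while outliers
    are clamped; the nonsmooth kink of |.| at zero is what pins x_i to y_i."""
    r = 1.0 / (2.0 * alpha)
    return c + np.clip(y - c, -r, r)
```

A smooth fidelity term such as (x − y)² would never reproduce data entries exactly; the strict subgradient gap ψ′(0−) < ψ′(0+) is essential.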
Hermeneutics: Interpretation Theory
 in Schleiermacher, Dilthey, Heidegger and Gadamer, Northwestern University Studies in Phenomenology & Existential Philosophy
, 1969
"... Report on proposed doctoral thesis: ..."
Recovering Edges in Ill-Posed Inverse Problems: Optimality of Curvelet Frames
, 2000
"... We consider a model problem of recovering a function f(x1,x2) from noisy Radon data. The function f to be recovered is assumed smooth apart from a discontinuity along a C2 curve – i.e. an edge. We use the continuum white noise model, with noise level ɛ. Traditional linear methods for solving such in ..."
Abstract

Cited by 80 (14 self)
 Add to MetaCart
(Show Context)
We consider a model problem of recovering a function f(x1, x2) from noisy Radon data. The function f to be recovered is assumed smooth apart from a discontinuity along a C² curve, i.e. an edge. We use the continuum white noise model, with noise level ɛ. Traditional linear methods for solving such inverse problems behave poorly in the presence of edges. Qualitatively, the reconstructions are blurred near the edges; quantitatively, they give in our model Mean Squared Errors (MSEs) that tend to zero with noise level ɛ only as O(ɛ^(1/2)) as ɛ → 0. A recent innovation, nonlinear shrinkage in the wavelet domain, visually improves edge sharpness and improves MSE convergence to O(ɛ^(2/3)). However, as we show here, this rate is not optimal. In fact, essentially optimal performance is obtained by deploying the recently introduced tight frames of curvelets in this setting. Curvelets are smooth, highly anisotropic elements ideally suited for detecting and synthesizing curved edges. To deploy them in the Radon setting, we construct a curvelet-based biorthogonal decomposition ...
Conjugate-Gradient Preconditioning Methods for Shift-Variant PET Image Reconstruction
 IEEE Tr. Im. Proc
, 2002
"... Gradientbased iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian mat ..."
Abstract

Cited by 79 (33 self)
 Add to MetaCart
Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e. for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on non-quadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
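A generic textbook sketch of CG with a circulant preconditioner applied via the FFT; this is the standard shift-invariant construction the paper improves on, not its PET-specific shift-variant preconditioner:

```python
import numpy as np

def pcg_circulant(H, b, c_first_col, n_iter=50, tol=1e-10):
    """Preconditioned conjugate gradients for H x = b, where the
    preconditioner M is the circulant matrix with first column c_first_col;
    M^{-1} r is applied diagonally in the Fourier domain."""
    c_hat = np.fft.fft(c_first_col)

    def apply_Minv(r):
        return np.real(np.fft.ifft(np.fft.fft(r) / c_hat))

    x = np.zeros_like(b)
    r = b - H @ x
    z = apply_Minv(r)
    p = z.copy()
    for _ in range(n_iter):
        Hp = H @ p
        a = (r @ z) / (p @ Hp)
        x += a * p
        r_new = r - a * Hp
        if np.linalg.norm(r_new) < tol:
            return x
        z_new = apply_Minv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x
```

When H itself is circulant the preconditioner is exact and CG converges in one step; for merely near-circulant (shift-variant) Hessians the clustering of eigenvalues, and hence the acceleration, degrades.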