Results 1–10 of 52
A new alternating minimization algorithm for total variation image reconstruction
SIAM J. Imaging Sci., 2008
Abstract

Cited by 224 (26 self)
We propose, analyze and test an alternating minimization algorithm for recovering images from blurry and noisy observations with total variation (TV) regularization. This algorithm arises from a new half-quadratic model applicable to not only the anisotropic but also isotropic forms of total variation discretizations. The per-iteration computational complexity of the algorithm is three Fast Fourier Transforms (FFTs). We establish strong convergence properties for the algorithm, including finite convergence for some variables and relatively fast exponential (or q-linear in optimization terminology) convergence for the others. Furthermore, we propose a continuation scheme to accelerate the practical convergence of the algorithm. Extensive numerical results show that our algorithm performs favorably in comparison to several state-of-the-art algorithms. In particular, it runs orders of magnitude faster than the Lagged Diffusivity algorithm for total-variation-based deblurring. Some extensions of our algorithm are also discussed.
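The alternating scheme described above can be sketched in one dimension. This is an illustrative reduction only: the paper treats 2-D deblurring at three FFTs per iteration, while the function below, its name, and all parameter values are assumptions for a simple periodic-boundary denoising case.

```python
import numpy as np

def tv_denoise_1d(f, lam=10.0, beta=10.0, iters=100):
    # Half-quadratic alternating minimization sketch (1-D, periodic boundary).
    # Minimizes sum|w| + (beta/2)||Du - w||^2 + (lam/2)||u - f||^2 by alternating
    # a shrinkage w-step with an FFT-diagonalized quadratic u-step.
    n = f.size
    d = np.zeros(n); d[0] = -1.0; d[1] = 1.0      # circular difference kernel D
    d_hat, f_hat = np.fft.fft(d), np.fft.fft(f)
    u = f.copy()
    for _ in range(iters):
        du = np.real(np.fft.ifft(d_hat * np.fft.fft(u)))          # D u
        w = np.sign(du) * np.maximum(np.abs(du) - 1.0 / beta, 0)  # w-step: shrinkage
        # u-step: (beta D'D + lam I) u = beta D' w + lam f, diagonal in Fourier domain
        num = beta * np.conj(d_hat) * np.fft.fft(w) + lam * f_hat
        u = np.real(np.fft.ifft(num / (beta * np.abs(d_hat) ** 2 + lam)))
    return u
```

Larger `beta` tightens the half-quadratic approximation to the true TV model; the paper's continuation scheme increases it gradually.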
A variational formulation for framebased inverse problems
Inverse Problems, 2007
Abstract

Cited by 60 (21 self)
A convex variational framework is proposed for solving inverse problems in Hilbert spaces with a priori information on the representation of the target solution in a frame. The objective function to be minimized consists of a separable term penalizing each frame coefficient individually and of a smooth term modeling the data formation model as well as other constraints. Sparsity-constrained and Bayesian formulations are examined as special cases. A splitting algorithm is presented to solve this problem and its convergence is established in infinite-dimensional spaces under mild conditions on the penalization functions, which need not be differentiable. Numerical simulations demonstrate applications to frame-based image restoration.
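For the sparsity-constrained special case (ℓ1 penalty on the coefficients), the splitting algorithm reduces to proximal forward-backward iterations; a minimal finite-dimensional sketch, with all names and parameter choices illustrative:

```python
import numpy as np

def forward_backward_l1(A, y, lam, step=None, iters=200):
    # Proximal forward-backward splitting sketch for 0.5*||Ax - y||^2 + lam*||x||_1,
    # the sparsity-penalized special case of the separable framework.
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                      # gradient of the smooth term
        z = x - step * g                           # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward: prox of l1
    return x
```

The prox step is what changes with the penalization function; other separable penalties swap in their own proximity operators.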
Maximum correntropy criterion for robust face recognition
IEEE Trans. Pattern Anal. Mach. Intell.
Abstract

Cited by 28 (9 self)
Abstract—In this paper, we present a sparse correntropy framework for computing robust sparse representations of face images for recognition. Compared with the state-of-the-art ℓ1-norm-based sparse representation classifier (SRC), which assumes that noise also has a sparse representation, our sparse algorithm is developed based on the maximum correntropy criterion, which is much more insensitive to outliers. To develop a more tractable and practical approach, we impose a nonnegativity constraint on the variables in the maximum correntropy criterion and develop a half-quadratic optimization technique to approximately maximize the objective function in an alternating way, so that the complex optimization problem is reduced to learning a sparse representation through a weighted linear least-squares problem with a nonnegativity constraint at each iteration. Our extensive experiments demonstrate that the proposed method is more robust and efficient in dealing with the occlusion and corruption problems in face recognition than the related state-of-the-art methods. In particular, the proposed method improves both recognition accuracy and receiver operating characteristic (ROC) curves, while its computational cost is much lower than that of the SRC algorithms. Index Terms—Information-theoretic learning, correntropy, linear least squares, half-quadratic optimization, sparse representation, M-estimator, face recognition, occlusion and corruption.
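The half-quadratic step the abstract describes amounts to iteratively reweighted least squares with Welsch (Gaussian-kernel) weights. A sketch of that inner loop follows; the nonnegativity constraint the paper imposes is omitted here for brevity, and the function name and parameters are illustrative:

```python
import numpy as np

def correntropy_regression(A, y, sigma=1.0, iters=20):
    # Half-quadratic sketch of maximum-correntropy fitting: each iteration solves
    # a weighted least-squares problem with Welsch weights
    # w_i = exp(-r_i^2 / (2 sigma^2)), so gross outliers get weight near zero.
    x = np.linalg.lstsq(A, y, rcond=None)[0]       # ordinary LS initialization
    for _ in range(iters):
        r = y - A @ x                              # residuals under current fit
        w = np.exp(-r ** 2 / (2.0 * sigma ** 2))   # small weight => likely outlier
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
    return x
```

Because the weight of a sample decays exponentially in its squared residual, a single gross outlier is effectively removed after one or two reweighting passes.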
A splitting-based iterative algorithm for accelerated statistical X-ray CT reconstruction
IEEE Trans. Med. Imaging, 2012
Abstract

Cited by 27 (8 self)
Abstract—Statistical image reconstruction using penalized weighted least-squares (PWLS) criteria can improve image quality in X-ray CT. However, the huge dynamic range of the statistical weights leads to a highly shift-variant inverse problem, making it difficult to precondition and accelerate existing iterative algorithms that attack the statistical model directly. We propose to alleviate the problem by using a variable-splitting scheme that separates the shift-variant and (“nearly”) invariant components of the statistical data model and also decouples the regularization term. This leads to an equivalent constrained problem that we tackle using the classical method-of-multipliers framework with alternating minimization. The specific form of our splitting yields an alternating direction method of multipliers (ADMM) algorithm with an inner step involving a “nearly” shift-invariant linear system that is suitable for FFT-based preconditioning using cone-type filters. The proposed method can efficiently handle a variety of convex regularization criteria, including smooth edge-preserving regularizers and nonsmooth sparsity-promoting ones based on the ℓ1 norm and total variation. Numerical experiments with synthetic and real in vivo human data illustrate that cone-filter preconditioners accelerate the proposed ADMM, resulting in fast convergence compared to conventional (nonlinear conjugate gradient, ordered subsets) and state-of-the-art (MFISTA, split-Bregman) algorithms that are applicable for CT.
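The splitting structure can be illustrated on a tiny dense problem. Here a quadratic (Tikhonov) regularizer stands in for the paper's edge-preserving and ℓ1/TV choices, and the function name and parameter values are assumptions; the point is that the weights W appear only in a diagonal update, while the inner linear system involves only A'A:

```python
import numpy as np

def pwls_admm(A, y, W, lam=0.1, mu=1.0, iters=100):
    # Variable-splitting sketch for PWLS: split u = A x so the statistical weights W
    # (a vector, i.e. diagonal) act only in an elementwise u-update, while the
    # x-update solves (mu*A'A + lam*I) x = mu*A'(u - eta) -- the "nearly"
    # shift-invariant system that is FFT-preconditionable when A'A is circulant.
    m, n = A.shape
    x, u, eta = np.zeros(n), np.zeros(m), np.zeros(m)
    H = np.linalg.inv(mu * A.T @ A + lam * np.eye(n))
    for _ in range(iters):
        x = H @ (mu * A.T @ (u - eta))           # x-update: inner linear system
        v = A @ x + eta
        u = (W * y + mu * v) / (W + mu)          # u-update: diagonal, elementwise
        eta += A @ x - u                         # scaled dual (multiplier) update
    return x
```

At the fixed point this recovers the PWLS solution of (A'WA + lam·I)x = A'Wy, but no iteration ever has to invert the badly conditioned weighted system directly.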
The Equivalence of Half-Quadratic Minimization and the Gradient Linearization Iteration
Abstract

Cited by 20 (4 self)
A popular way to restore images comprising edges is to minimize a cost function combining a quadratic data-fidelity term and an edge-preserving (possibly nonconvex) regularization term. Mainly because of the latter term, the calculation of the solution is slow and cumbersome. Half-quadratic (HQ) minimization (multiplicative form) was pioneered by Geman and Reynolds (1992) in order to alleviate the computational task in the context of image reconstruction with nonconvex regularization. By promoting the idea of locally homogeneous image models with a continuous-valued line process, they reformulated the optimization problem in terms of an augmented cost function which is quadratic with respect to the image and separable with respect to the line process, hence the name “half quadratic”. Since then, a large number of papers have been dedicated to HQ minimization, and important results, including edge preservation along with convex regularization and convergence, have been obtained. In this paper we show that HQ minimization (multiplicative form) is equivalent to the simplest and most basic method, where the gradient of the cost function is linearized at each iteration step. In fact, both methods give exactly the same iterations. Furthermore, connections of HQ minimization with other methods, such as the quasi-Newton method and the generalized Weiszfeld method, are straightforward.
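The shared iteration can be written down directly: freezing the "line process" weights and solving the resulting linear system is the same as linearizing the gradient of the cost at the current point. A 1-D denoising sketch with an illustrative edge-preserving φ(t) = √(t² + ε), all names and parameters hypothetical:

```python
import numpy as np

def hq_multiplicative(y, lam=1.0, eps=1e-3, iters=30):
    # Gradient-linearization / multiplicative half-quadratic sketch for
    # J(x) = 0.5*||x - y||^2 + lam * sum_i phi((Dx)_i),  phi(t) = sqrt(t^2 + eps).
    # Each iteration freezes the weights b_i = phi'(t_i)/t_i (the line process)
    # and solves the linear system (I + lam * D' diag(b) D) x = y.
    n = y.size
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]        # forward differences, (n-1) x n
    x = y.copy()
    for _ in range(iters):
        t = D @ x
        b = 1.0 / np.sqrt(t ** 2 + eps)             # frozen line-process weights
        x = np.linalg.solve(np.eye(n) + lam * D.T @ (b[:, None] * D), y)
    return x
```

Setting the linearized gradient x − y + λ·D'diag(b)·Dx to zero gives exactly this system, which is the multiplicative-HQ image step; that identity is the paper's main point.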
Robust principal component analysis based on maximum correntropy criterion
IEEE Trans. Image Process., 2011
Abstract

Cited by 10 (2 self)
Abstract—Principal component analysis (PCA) minimizes the mean square error (MSE) and is sensitive to outliers. In this paper, we present a new rotational-invariant PCA based on the maximum correntropy criterion (MCC). A half-quadratic optimization algorithm is adopted to compute the correntropy objective. At each iteration, the complex optimization problem is reduced to a quadratic problem that can be efficiently solved by a standard optimization method. The proposed method exhibits the following benefits: 1) it is robust to outliers through the mechanism of MCC, which is more theoretically solid than a heuristic rule based on MSE; 2) it requires no assumption of zero-mean data and can estimate the data mean during optimization; and 3) its optimal solution consists of principal eigenvectors of a robust covariance matrix corresponding to the largest eigenvalues. In addition, kernel techniques are further introduced in the proposed method to deal with nonlinearly distributed data. Numerical results demonstrate that the proposed method can outperform robust rotational-invariant PCAs based on the ℓ1 norm when outliers occur. Index Terms—Correntropy, half-quadratic optimization, principal component analysis (PCA), robust.
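A simplified sketch of the half-quadratic alternation: Welsch weights on per-sample reconstruction errors, then a weighted mean and covariance eigendecomposition. This is illustrative only (the paper's rotational-invariant formulation and kernel extension are not reproduced), and the function name and parameters are assumptions:

```python
import numpy as np

def mcc_pca(X, k=1, sigma=1.0, iters=20):
    # Half-quadratic sketch of correntropy-based PCA: alternate between
    # (a) Welsch weights w_i = exp(-err_i / (2 sigma^2)) on reconstruction errors
    # and (b) principal eigenvectors of the weighted covariance. The data mean is
    # estimated inside the loop, so no zero-mean preprocessing is assumed.
    n, d = X.shape
    w = np.ones(n)
    for _ in range(iters):
        mu = (w[:, None] * X).sum(0) / w.sum()            # weighted data mean
        Xc = X - mu
        C = (w[:, None] * Xc).T @ Xc / w.sum()            # weighted covariance
        _, vecs = np.linalg.eigh(C)                       # eigenvalues ascending
        U = vecs[:, -k:]                                  # top-k principal directions
        err = ((Xc - Xc @ U @ U.T) ** 2).sum(1)           # per-sample residual
        w = np.exp(-err / (2.0 * sigma ** 2))             # correntropy (Welsch) weight
    return mu, U
```

Samples that the current subspace reconstructs poorly are exponentially downweighted, so a single outlier cannot tilt the principal directions.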
Convergence of Conjugate Gradient Methods with a Closed-Form Stepsize Formula
2008
Abstract

Cited by 10 (5 self)
Conjugate gradient methods are efficient methods for minimizing differentiable objective functions in high-dimensional spaces. However, converging line search strategies are usually neither easy to choose nor easy to implement. Sun and colleagues (Ann. Oper. Res. 103:161–173, 2001; J. Comput. Appl. Math. 146:37–45, 2002) introduced a simple stepsize formula. However, the associated convergence domain happens to be over-restrictive, since it precludes the optimal stepsize in the convex quadratic case. Here, we identify this stepsize formula with one iteration of the Weiszfeld algorithm in the scalar case. More generally, we propose to make use of a finite number of iterates of such an algorithm to compute the stepsize. In this framework, we establish a new convergence domain that incorporates the optimal stepsize in the convex quadratic case.
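A Weiszfeld-type stepsize of this kind can be sketched for a robust cost along a search direction. Here φ(t) = √(t² + ε) is an illustrative choice, `r` holds the current residuals and `s` the direction mapped through the model; the function name and defaults are assumptions, not the paper's exact formula:

```python
import numpy as np

def weiszfeld_stepsize(r, s, eps=1e-6, inner=3):
    # Fixed-point (Weiszfeld-type) stepsize sketch for the scalar cost
    # f(a) = sum_i sqrt((r_i + a*s_i)^2 + eps) along a search direction:
    # a few reweighted iterations  a <- -sum(w*r*s) / sum(w*s^2),
    # with weights w_i = 1/sqrt((r_i + a*s_i)^2 + eps).
    a = 0.0
    for _ in range(inner):
        w = 1.0 / np.sqrt((r + a * s) ** 2 + eps)
        a = -np.sum(w * r * s) / np.sum(w * s ** 2)
    return a
```

Each inner iteration minimizes a quadratic majorant of f, so the cost along the direction never increases; with `inner=1` this collapses to a closed-form stepsize formula.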
A Majorize–Minimize Strategy for Subspace Optimization Applied to Image Restoration
Abstract

Cited by 10 (2 self)
© 2011 IEEE. Abstract—This paper proposes accelerated subspace optimization methods in the context of image restoration. Subspace optimization methods belong to the class of iterative descent algorithms for unconstrained optimization. At each iteration of such methods, a stepsize vector allowing the best combination of several search directions is computed through a multidimensional search. It is usually obtained by an inner iterative second-order method ruled by a stopping criterion that guarantees the convergence of the outer algorithm. As an alternative, we propose an original multidimensional search strategy based on the majorize–minimize principle. It leads to a closed-form stepsize formula that ensures the convergence of the subspace algorithm whatever the number of inner iterations. The practical efficiency of the proposed scheme is illustrated in the context of edge-preserving image restoration. Index Terms—Conjugate gradient, image restoration, memory gradient, quadratic majorization, stepsize strategy, subspace optimization.
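The multidimensional search can be sketched for a robust cost F(x) = Σ φ(Ax − y) with an illustrative φ(t) = √(t² + ε): minimize the quadratic majorant of F over the stepsize vector of a small direction subspace (e.g. memory-gradient directions [−gradient, previous step]). Names and parameters below are assumptions:

```python
import numpy as np

def mm_subspace_step(A, y, x, D, eps=1e-6):
    # Majorize-minimize multidimensional stepsize sketch for F(x) = sum phi(Ax - y),
    # phi(t) = sqrt(t^2 + eps): minimize the quadratic majorant of F(x + D t) over
    # the stepsize vector t, where the columns of D span the search subspace.
    r = A @ x - y
    w = 1.0 / np.sqrt(r ** 2 + eps)            # majorant curvature weights
    g = A.T @ (w * r)                          # gradient of F at x
    S = A @ D                                  # directions mapped through A
    H = S.T @ (w[:, None] * S)                 # majorant's small Hessian D'A'WAD
    t = np.linalg.solve(H, -D.T @ g)           # closed-form stepsize vector
    return x + D @ t
```

Because the surrogate majorizes F and touches it at the current point, a single closed-form solve is enough to guarantee descent, with no inner stopping criterion to tune.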
Robust estimation and wavelet thresholding in partial linear models
2006
Abstract

Cited by 8 (1 self)
This paper is concerned with a semiparametric partially linear regression model with unknown regression coefficients, an unknown nonparametric function for the nonlinear component, and unobservable Gaussian distributed random errors. We present a wavelet-thresholding-based estimation procedure to estimate the components of the partial linear model by establishing a connection between an ℓ1-penalty-based wavelet estimator of the nonparametric component and Huber’s M-estimation of a standard linear model with outliers. Some general results on the large-sample properties of the estimates of both the parametric and the nonparametric parts of the model are established. Simulations and a real example are used to illustrate the general results and to compare the proposed methodology with other methods available in the recent literature. Keywords: Semi-nonparametric models, partly linear models, wavelet thresholding, backfitting, M-estimation, penalized least squares.
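The connection between ℓ1 penalization and Huber's M-estimation rests on a one-dimensional identity worth stating: the soft-thresholded estimate and Huber's clipped influence function split the observation exactly. A minimal illustration (the paper's actual construction lives inside the wavelet domain of the partial linear model):

```python
import numpy as np

def soft_threshold(y, lam):
    # l1-penalized estimate: argmin_t 0.5*(y - t)^2 + lam*|t|
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def huber_psi(y, lam):
    # Huber's influence function: identity near zero, clipped at +/- lam
    return np.clip(y, -lam, lam)
```

For every y, soft_threshold(y, lam) + huber_psi(y, lam) = y: the thresholded part plays the role of the "outlier" (here, the sparse nonparametric component) and the clipped residual is what a Huber M-estimator would retain.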
Sparse Signal Estimation by Maximally Sparse Convex Optimization
IEEE Trans. Signal Process., 2014
Abstract

Cited by 8 (4 self)
This paper addresses the problem of sparsity-penalized least squares for applications in sparse signal processing, e.g., sparse deconvolution. It aims to induce sparsity more strongly than L1-norm regularization, while avoiding nonconvex optimization. For this purpose, the paper describes the design and use of nonconvex penalty functions (regularizers) constrained so as to ensure the convexity of the total cost function, F, to be minimized. The method is based on parametric penalty functions, the parameters of which are constrained to ensure the convexity of F. It is shown that optimal parameters can be obtained by semidefinite programming (SDP). This maximally sparse convex (MSC) approach yields maximally nonconvex sparsity-inducing penalty functions constrained such that the total cost function, F, is convex. It is demonstrated that iterative MSC (IMSC) can yield solutions substantially more sparse than the standard convex sparsity-inducing approach, i.e., L1-norm minimization.
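The design constraint is easiest to see in one dimension: a nonconvex penalty φ with φ″(t) ≥ −a keeps 0.5(x − y)² + λφ(|x|) convex whenever λa ≤ 1. The paper finds such penalties via SDP for the general matrix case; the hand-picked log penalty below is only a 1-D illustration of the same constraint, with all names and values assumptions:

```python
import numpy as np

def log_penalty(t, a):
    # Nonconvex log penalty phi(t) = log(1 + a|t|)/a; phi''(t) >= -a, so the 1-D
    # cost 0.5*(x - y)^2 + lam*phi(|x|) stays convex whenever lam * a <= 1.
    return np.log(1.0 + a * np.abs(t)) / a

def prox_log(y, lam, a):
    # Elementwise minimizer of 0.5*(x - y)^2 + lam*log_penalty(x) for lam*a <= 1
    # (y must be a float array). Thresholds at lam, like soft thresholding, but
    # shrinks large values less (reduced bias from the nonconvex penalty).
    x = np.zeros_like(y)
    big = np.abs(y) > lam
    b = 1.0 - a * np.abs(y[big])
    x[big] = np.sign(y[big]) * (-b + np.sqrt(b ** 2 - 4.0 * a * (lam - np.abs(y[big])))) / (2.0 * a)
    return x
```

Compared with the soft threshold (which would return 1.0 for y = 2, λ = 1), this penalty returns √2 ≈ 1.414: stronger sparsity at the threshold, less shrinkage of large coefficients, yet the cost being minimized is still convex.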