Results 1–10 of 196
Robust principal component analysis?
Journal of the ACM, 2011
"... Abstract This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a lowrank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the lowrank and the ..."
Abstract

Cited by 569 (26 self)
 Add to MetaCart
(Show Context)
This paper is about a curious phenomenon. Suppose we have a data matrix which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit: among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis, since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities from images of faces.
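The alternating scheme behind Principal Component Pursuit is compact enough to sketch. Below is a minimal NumPy implementation via an augmented-Lagrangian iteration; the function names, the default choices of λ and μ, and the fixed iteration count are assumptions of this sketch, not a quotation of the paper's exact algorithm.

```python
import numpy as np

def shrink(X, tau):
    # Entrywise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # Singular value thresholding: the proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def pcp(M, lam=None, mu=None, n_iter=200):
    # Principal Component Pursuit:
    #   min ||L||_* + lam * ||S||_1   subject to   L + S = M,
    # solved by an inexact augmented-Lagrangian iteration.
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)       # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)    # sparse update
        Y = Y + mu * (M - L - S)                # multiplier update
    return L, S
```

Each pass alternates a singular value thresholding step for the low-rank part with an entrywise soft-thresholding step for the sparse part, then updates the Lagrange multiplier.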
A Singular Value Thresholding Algorithm for Matrix Completion
2008
"... This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of reco ..."
Abstract

Cited by 555 (22 self)
 Add to MetaCart
This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications, as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple, first-order, easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices {X^k, Y^k}, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {X^k} is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. …
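The two-step iteration described above is short enough to write out. The sketch below uses a dense SVD and a zero starting point for clarity (the paper exploits the sparsity of Y^k for efficiency); the function name and its parameters are invented for this example.

```python
import numpy as np

def svt_complete(M, mask, tau, delta, n_iter=500):
    # Singular value thresholding iteration for matrix completion:
    #   X_k = D_tau(Y_{k-1});   Y_k = Y_{k-1} + delta * P_Omega(M - X_k),
    # where D_tau soft-thresholds singular values and P_Omega keeps only
    # the observed entries (mask is a boolean array marking Omega).
    Y = np.zeros_like(M)
    X = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        Y = Y + delta * mask * (M - X)                   # feed back the residual
    return X
```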
Curvelet-Wavelet Regularized Split Bregman Iteration for Compressed Sensing
"... Compressed sensing is a new concept in signal processing. Assuming that a signal can be represented or approximated by only a few suitably chosen terms in a frame expansion, compressed sensing allows to recover this signal from much fewer samples than the ShannonNyquist theory requires. Many images ..."
Abstract

Cited by 119 (6 self)
 Add to MetaCart
(Show Context)
Compressed sensing is a new concept in signal processing. Assuming that a signal can be represented or approximated by only a few suitably chosen terms in a frame expansion, compressed sensing makes it possible to recover this signal from far fewer samples than the Shannon-Nyquist theory requires. Many images can be sparsely approximated in expansions of suitable frames such as wavelets, curvelets, wave atoms, and others. Generally, wavelets represent point-like features well, while curvelets represent line-like features well. For a suitable recovery of images, we propose models that contain weighted sparsity constraints in two different frames. Given the incomplete measurements f = Φu + ɛ with the measurement matrix Φ ∈ R^{K×N}, K ≪ N, we consider a joint sparsity-constrained optimization problem of the form argmin_u {‖Λ_c Ψ_c u‖_1 + ‖Λ_w Ψ_w u‖_1 + (1/2)‖f − Φu‖_2^2}. Here Ψ_c and Ψ_w are the transform matrices corresponding to the two frames, and the diagonal matrices Λ_c, Λ_w contain the weights for the frame coefficients. We present efficient iteration methods to solve the optimization problem, based on alternating split Bregman algorithms. The convergence of the proposed iteration schemes is proved by showing that they can be understood as special cases of the Douglas-Rachford splitting algorithm. Numerical experiments for compressed-sensing-based Fourier-domain random imaging show good performance of the proposed curvelet-wavelet regularized split Bregman (CWSpB) methods, where we particularly use a combination of wavelet and curvelet coefficients as sparsity constraints.
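As a concrete illustration of the alternating split Bregman scheme, here is a dense-matrix sketch under two simplifying assumptions: both transforms are Parseval frames (Ψ^T Ψ = I), so the u-subproblem reduces to one fixed linear system, and scalar weights lam_c, lam_w stand in for the diagonal matrices Λ_c, Λ_w. In practice Φ, Ψ_c, Ψ_w would be fast transforms rather than explicit matrices, and all names here are invented for the example.

```python
import numpy as np

def shrink(x, t):
    # Componentwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman_two_frames(f, Phi, Psi_c, Psi_w, lam_c, lam_w, mu=1.0, n_iter=100):
    # Alternating split Bregman for
    #   min_u lam_c*||Psi_c u||_1 + lam_w*||Psi_w u||_1 + 0.5*||f - Phi u||_2^2
    # with splitting variables d_c = Psi_c u and d_w = Psi_w u.
    N = Phi.shape[1]
    u = np.zeros(N)
    d_c = np.zeros(Psi_c.shape[0]); b_c = np.zeros(Psi_c.shape[0])
    d_w = np.zeros(Psi_w.shape[0]); b_w = np.zeros(Psi_w.shape[0])
    # With Psi^T Psi = I for both frames, the u-subproblem matrix is fixed.
    A = Phi.T @ Phi + 2.0 * mu * np.eye(N)
    for _ in range(n_iter):
        rhs = (Phi.T @ f
               + mu * Psi_c.T @ (d_c - b_c)
               + mu * Psi_w.T @ (d_w - b_w))
        u = np.linalg.solve(A, rhs)                 # quadratic u-subproblem
        d_c = shrink(Psi_c @ u + b_c, lam_c / mu)   # curvelet-type coefficients
        d_w = shrink(Psi_w @ u + b_w, lam_w / mu)   # wavelet-type coefficients
        b_c = b_c + Psi_c @ u - d_c                 # Bregman variable updates
        b_w = b_w + Psi_w @ u - d_w
    return u
```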
Fast Linearized Bregman Iteration for Compressed Sensing and Sparse Denoising
UCLA CAM Reports, 2008
"... Abstract. Finding a solution of a linear equation Au = f with various minimization properties arises from many applications. One of such applications is compressed sensing, where an efficient and robusttonoise algorithm to find a minimal ℓ1 norm solution is needed. This means that the algorithm sh ..."
Abstract

Cited by 96 (20 self)
 Add to MetaCart
(Show Context)
Finding a solution of a linear equation Au = f with various minimization properties arises in many applications. One such application is compressed sensing, where an efficient and robust-to-noise algorithm to find a minimal ℓ1-norm solution is needed. This means that the algorithm should be tailored for large-scale and completely dense matrices A, while Au and A^T u can be computed by fast transforms and the solution sought is sparse. Recently, a simple and fast algorithm based on linearized Bregman iteration was proposed in [28, 32] for this purpose. This paper analyzes the convergence of linearized Bregman iterations and the minimization properties of their limit. Based on our analysis, we also derive a new algorithm that is proven to converge with a rate. Furthermore, the new algorithm is as simple and fast as the algorithm given in [28, 32] in approximating a minimal ℓ1-norm solution of Au = f, as shown by numerical simulations. Hence, it can be used as another choice of an efficient tool in compressed sensing.

1. Introduction. Let A ∈ R^{m×n} with n > m and f ∈ R^m be given. The aim of a basis pursuit problem is to find u ∈ R^n by solving the following constrained minimization problem: min …
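The linearized Bregman iteration referred to in [28, 32] is only two lines per step: a gradient step on the constraint residual followed by soft-thresholding. The sketch below is illustrative; the default step size and iteration count are assumptions, and convergence requires a step-size condition depending on ‖A‖.

```python
import numpy as np

def shrink(x, t):
    # Componentwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def linearized_bregman(A, f, mu, delta=None, n_iter=2000):
    # Linearized Bregman iteration for approximating the minimal l1-norm
    # solution of A u = f:
    #   v <- v + A^T (f - A u);   u <- delta * shrink(v, mu).
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
    v = np.zeros(A.shape[1])
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = v + A.T @ (f - A @ u)   # gradient step on the constraint residual
        u = delta * shrink(v, mu)   # thresholding keeps the iterate sparse
    return u
```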
Bregmanized Nonlocal Regularization for Deconvolution and Sparse Reconstruction
2009
"... We propose two algorithms based on Bregman iteration and operator splitting technique for nonlocal TV regularization problems. The convergence of the algorithms is analyzed and applications to deconvolution and sparse reconstruction are presented. ..."
Abstract

Cited by 88 (8 self)
 Add to MetaCart
(Show Context)
We propose two algorithms based on Bregman iteration and an operator splitting technique for nonlocal TV regularization problems. The convergence of the algorithms is analyzed, and applications to deconvolution and sparse reconstruction are presented.
Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing
SIAM J. Imaging Sci., 2008
"... We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖1: Au = f,u ∈ R n}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of 1 insta ..."
Abstract

Cited by 84 (15 self)
 Add to MetaCart
(Show Context)
We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖_1 : Au = f, u ∈ R^n}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min_{u∈R^n} μ‖u‖_1 + (1/2)‖Au − f^k‖_2^2 for a given matrix A and vector f^k. We show analytically that this iterative approach yields exact solutions in a finite number of steps, and present numerical results demonstrating that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and A^⊤ can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is based solely on such operations for solving the above unconstrained subproblem, we were able to quickly solve huge instances of compressed sensing problems on a standard PC.
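The "small number of unconstrained instances" structure is easy to see in code. In the sketch below, a plain ISTA loop is a stand-in for the fast fixed-point continuation solver the authors actually use, and the outer loop is the Bregman iteration in its "add back the residual" form; all function names and counts are assumptions of the example.

```python
import numpy as np

def shrink(x, t):
    # Componentwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, mu, n_inner=200):
    # Stand-in inner solver for min_u mu*||u||_1 + 0.5*||A u - b||_2^2.
    L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the smooth part
    u = np.zeros(A.shape[1])
    for _ in range(n_inner):
        u = shrink(u - A.T @ (A @ u - b) / L, mu / L)
    return u

def bregman_l1(A, f, mu, n_outer=6):
    # Outer Bregman iteration: re-solve the unconstrained problem with the
    # residual added back; a handful of outer steps typically suffices.
    fk = f.copy()
    u = np.zeros(A.shape[1])
    for _ in range(n_outer):
        u = ista(A, fk, mu)
        fk = fk + (f - A @ u)
    return u
```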
Compressed sensing with quantized measurements
IEEE Signal Proc. Lett.
"... Abstract We consider the problem of estimating a sparse signal from a set of quantized, Gaussian noise corrupted measurements, where each measurement corresponds to an interval of values. We give two methods for (approximately) solving this problem, each based on minimizing a differentiable convex ..."
Abstract

Cited by 61 (0 self)
 Add to MetaCart
(Show Context)
We consider the problem of estimating a sparse signal from a set of quantized, Gaussian-noise-corrupted measurements, where each measurement corresponds to an interval of values. We give two methods for (approximately) solving this problem, each based on minimizing a differentiable convex function plus an ℓ1 regularization term. Using a first-order method developed by Yin et al., we demonstrate the performance of the methods through numerical simulation. We find that, using these methods, compressed sensing can be carried out even when the quantization is very coarse, e.g., 1 or 2 bits per measurement.
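To make the "differentiable convex function plus ℓ1 term" concrete, here is a proximal-gradient sketch in which the smooth term is the squared distance of Ax to each quantization interval [lo_i, hi_i]. This loss is a simplified stand-in for the paper's likelihood-based objectives, and the function name and defaults are invented for the example.

```python
import numpy as np

def shrink(x, t):
    # Componentwise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def quantized_cs(A, lo, hi, lam, n_iter=500):
    # Minimize 0.5 * sum_i dist((A x)_i, [lo_i, hi_i])^2 + lam * ||x||_1
    # by proximal gradient descent.
    L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the smooth term
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = A @ x
        # Gradient of the interval loss: zero inside [lo, hi], linear outside.
        r = np.maximum(z - hi, 0.0) + np.minimum(z - lo, 0.0)
        x = shrink(x - A.T @ r / L, lam / L)
    return x
```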
Geometric Applications of the Split Bregman Method: Segmentation and Surface Reconstruction
2009
"... Variational models for image segmentation have many applications, but can be slow to compute. Recently, globally convex segmentation models have been introduced which are very reliable, but contain TVregularizers, making them difficult to compute. The previously introduced Split Bregman method is a ..."
Abstract

Cited by 58 (7 self)
 Add to MetaCart
(Show Context)
Variational models for image segmentation have many applications but can be slow to compute. Recently, globally convex segmentation models have been introduced which are very reliable, but they contain TV regularizers, making them difficult to compute. The previously introduced Split Bregman method is a technique for fast minimization of L1-regularized functionals, and has been applied to denoising and compressed sensing problems. By applying the Split Bregman concept to image segmentation problems, we build fast solvers which can outperform more conventional schemes, such as duality-based methods and graph cuts. We also consider the related problem of surface reconstruction from unorganized data points, which is used for constructing level set representations in three dimensions.
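For context, the globally convex segmentation models referred to here (in the spirit of Chan, Esedoglu, and Nikolova) take roughly the following form; the specific region term r is one common choice, stated as an assumption rather than a quotation from the paper:

```latex
% TV term plus a linear region term over a relaxed label u in [0,1];
% the final segmentation is obtained by thresholding u, e.g. at 1/2.
\min_{0 \le u \le 1} \int_\Omega |\nabla u|\,dx
  + \mu \int_\Omega r(x)\,u(x)\,dx,
\qquad
r(x) = \bigl(c_1 - I(x)\bigr)^2 - \bigl(c_2 - I(x)\bigr)^2,
```

where I is the image and c_1, c_2 are the mean intensities inside and outside the segmented region. The TV term is exactly the L1-regularized structure that the Split Bregman method is designed to minimize quickly.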
Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data
Int. Symp. Biomedical Imaging, 2009
"... Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer kspace samples, thereby reducing scanning time. Previous work has shown that noncon ..."
Abstract

Cited by 51 (2 self)
 Add to MetaCart
(Show Context)
Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization further reduces the number of samples required for reconstruction, while still being tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods. Index Terms — Magnetic resonance imaging, image reconstruction, compressive sensing, nonconvex optimization.
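One way convex Fourier-based algorithms extend to the nonconvex setting is to replace soft-thresholding with a nonconvex shrinkage rule. The sketch below uses a "p-shrinkage" (reducing to soft-thresholding at p = 1) inside an ISTA-style loop for undersampled Fourier data, with sparsity assumed in the image domain for simplicity; real MRI reconstructions use wavelet or TV sparsity, and all names here are invented for the example.

```python
import numpy as np

def p_shrink(x, t, p):
    # Generalized shrinkage: soft-thresholding at p = 1, more aggressive
    # thresholding of small magnitudes for p < 1 (the nonconvex case).
    mag = np.maximum(np.abs(x), 1e-12)
    scale = np.maximum(mag - t * mag ** (p - 1.0), 0.0)
    return x * scale / mag

def ncs_mri(kdata, mask, lam, p=0.5, n_iter=100):
    # kdata: k-space samples on the full grid, zero off the sampling mask.
    # With an orthonormal FFT the forward operator has norm 1, so a unit
    # step size is safe in the gradient step.
    x = np.fft.ifft2(kdata, norm="ortho")          # zero-filled start
    for _ in range(n_iter):
        resid = mask * (np.fft.fft2(x, norm="ortho") - kdata)
        x = x - np.fft.ifft2(resid, norm="ortho")  # data-consistency step
        x = p_shrink(x, lam, p)                    # nonconvex shrinkage
    return x
```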
Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models
 SIAM Journal on Imaging Sciences
"... E. Fatemi, Physica D, 60(1992), pp. 259–268] based on total variation (TV) minimization has proven to be very useful. A lot of efforts have been devoted to obtain fast numerical schemes and overcome the nondifferentiability of the model. Methods considered to be particularly efficient for the ROF m ..."
Abstract

Cited by 51 (10 self)
 Add to MetaCart
(Show Context)
The image restoration model of Rudin, Osher, and Fatemi (ROF) [L. Rudin, S. Osher, and E. Fatemi, Physica D, 60 (1992), pp. 259–268] based on total variation (TV) minimization has proven to be very useful. Much effort has been devoted to obtaining fast numerical schemes and overcoming the nondifferentiability of the model. Methods considered particularly efficient for the ROF model include the dual methods of Chan-Golub-Mulet (CGM) [T.F. Chan, G.H. Golub, and P. Mulet, SIAM J. Sci. Comput., 20 (1999), pp. 1964–1977] and Chambolle [A. Chambolle, J. Math. Imaging …
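Of the dual methods mentioned, Chambolle's projection algorithm is especially compact. Below is a sketch for the ROF model min_u ∫|∇u| + (1/(2λ))‖u − f‖_2^2, assuming unit grid spacing and the standard step size τ = 1/8; the helper names are invented for the example.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary conditions.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Discrete divergence, the negative adjoint of grad above.
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_rof(f, lam, tau=0.125, n_iter=200):
    # Dual projection iteration for min_u ||grad u||_1 + (1/(2*lam))*||u - f||_2^2:
    #   p <- (p + tau*g) / (1 + tau*|g|),  g = grad(div p - f/lam).
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        denom = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / denom
        py = (py + tau * gy) / denom
    return f - lam * div(px, py)   # primal solution from the dual variable
```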