Results 1–10 of 119
An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems
, 2009
"... ..."
(Show Context)
Stable principal component pursuit
 In Proc. of International Symposium on Information Theory
, 2010
"... We consider the problem of recovering a target matrix that is a superposition of lowrank and sparse components, from a small set of linear measurements. This problem arises in compressed sensing of structured highdimensional signals such as videos and hyperspectral images, as well as in the analys ..."
Abstract

Cited by 94 (3 self)
We consider the problem of recovering a target matrix that is a superposition of low-rank and sparse components, from a small set of linear measurements. This problem arises in compressed sensing of structured high-dimensional signals such as videos and hyperspectral images, as well as in the analysis of transformation-invariant low-rank structure recovery. We analyze the performance of the natural convex heuristic for solving this problem, under the assumption that measurements are chosen uniformly at random. We prove that this heuristic exactly recovers low-rank and sparse terms, provided the number of observations exceeds the number of intrinsic degrees of freedom of the component signals by a polylogarithmic factor. Our analysis introduces several ideas that may be of independent interest for the more general problem of compressed sensing and decomposing superpositions of multiple structured signals.
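The "natural convex heuristic" referred to in this abstract is usually stated as follows; the notation (the weight λ, the measurement operator 𝒜, and the data b) is the standard one from this literature, not taken from the listing itself:

\[ \min_{L,S}\; \|L\|_{*} + \lambda \|S\|_{1} \quad \text{subject to} \quad \mathcal{A}(L+S) = b, \]

where \|L\|_* (the nuclear norm, i.e. the sum of singular values) promotes low rank, \|S\|_1 (the entrywise ℓ1 norm) promotes sparsity, and 𝒜 is the random linear measurement operator.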
Phase Retrieval via Matrix Completion
, 2011
"... This paper develops a novel framework for phase retrieval, a problem which arises in Xray crystallography, diffraction imaging, astronomical imaging and many other applications. Our approach, called PhaseLift, combines multiple structured illuminations together with ideas from convex programming to ..."
Abstract

Cited by 71 (10 self)
This paper develops a novel framework for phase retrieval, a problem which arises in X-ray crystallography, diffraction imaging, astronomical imaging and many other applications. Our approach, called PhaseLift, combines multiple structured illuminations together with ideas from convex programming to recover the phase from intensity measurements, typically from the modulus of the diffracted wave. We demonstrate empirically that any complex-valued object can be recovered from the knowledge of the magnitude of just a few diffraction patterns by solving a simple convex optimization problem inspired by the recent literature on matrix completion. More importantly, we also demonstrate that our noise-aware algorithms are stable in the sense that the reconstruction degrades gracefully as the signal-to-noise ratio decreases. Finally, we introduce some theory showing that one can design very simple structured illumination patterns such that three diffraction patterns uniquely determine the phase of the object we wish to recover.
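A sketch of the lifting idea behind PhaseLift, in standard notation (ours, not from this listing): intensity measurements b_i = |⟨a_i, x⟩|² are quadratic in the signal x but linear in the lifted matrix X = xx*, since |⟨a_i, x⟩|² = a_i^* X a_i. One then drops the rank-one constraint and solves a matrix-completion-style convex program such as

\[ \min_{X}\; \operatorname{trace}(X) \quad \text{subject to} \quad a_i^{*} X a_i = b_i \ (i = 1,\dots,m), \quad X \succeq 0, \]

recovering an estimate of x from the leading eigenvector of the solution.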
A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms
, 2013
"... We propose a new firstorder splitting algorithm for solving jointly the primal and dual formulations of largescale convex minimization problems involving the sum of a smooth function with Lipschitzian gradient, a nonsmooth proximable function, and linear composite functions. This is a full splitti ..."
Abstract

Cited by 60 (10 self)
We propose a new first-order splitting algorithm for solving jointly the primal and dual formulations of large-scale convex minimization problems involving the sum of a smooth function with Lipschitzian gradient, a nonsmooth proximable function, and linear composite functions. This is a full splitting approach, in the sense that the gradient and the linear operators involved are applied explicitly without any inversion, while the nonsmooth functions are processed individually via their proximity operators. This work brings together and notably extends several classical splitting schemes, like the forward–backward and Douglas–Rachford methods, as well as the recent primal–dual method of Chambolle and Pock designed for problems with linear composite terms.
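The iteration itself is short. Below is a minimal numpy sketch of a primal–dual splitting of this type (the un-relaxed form), applied to an illustrative three-term problem min_x ½‖x − y‖² + μ‖x‖₁ + λ‖Dx‖₁ with D the first-difference operator; the instance, step sizes, and function names are our own choices for illustration, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # proximity operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def Dt(u):
    # adjoint of the first-difference operator D (where D x = np.diff(x))
    return -np.diff(u, prepend=0.0, append=0.0)

def primal_dual(y, mu, lam, n_iter=500, tau=0.25, sigma=0.8):
    """Smooth term f(x) = 0.5||x - y||^2 used via its gradient,
    proximable term g = mu||.||_1 handled by soft-thresholding, and
    linear composite term h(Dx) = lam||Dx||_1 handled through the dual
    variable u (the prox of h* is a clip onto [-lam, lam]).
    Step sizes satisfy 1/tau - sigma*||D||^2 >= beta/2 with beta = 1
    and ||D||^2 <= 4, so the iteration converges."""
    x = y.copy()
    u = np.zeros(y.size - 1)
    for _ in range(n_iter):
        # primal step: explicit gradient of f, prox of g, adjoint of D
        x_new = soft_threshold(x - tau * (x - y) - tau * Dt(u), tau * mu)
        # dual step on the extrapolated point 2*x_new - x
        u = np.clip(u + sigma * np.diff(2.0 * x_new - x), -lam, lam)
        x = x_new
    return x
```

With μ = λ = 0 the iteration reduces to plain gradient descent on the quadratic and returns y; with λ > 0 it performs total-variation denoising without ever inverting D.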
Square-root lasso: pivotal recovery of sparse signals via conic programming
 Biometrika
, 2011
"... ar ..."
(Show Context)
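The indexer did not capture this abstract. As context only (the formulation below is the usual one from the general literature, not recovered from this listing): the square-root lasso replaces the lasso's squared loss by its square root, which makes a theoretically valid choice of λ pivotal, i.e. independent of the unknown noise level σ:

\[ \hat{\beta} \in \arg\min_{\beta}\; \frac{\|y - X\beta\|_{2}}{\sqrt{n}} + \lambda \|\beta\|_{1}. \]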
Phase Retrieval from Coded Diffraction Patterns
, 2013
"... This paper considers the question of recovering the phase of an object from intensityonly measurements, a problem which naturally appears in Xray crystallography and related disciplines. We study a physically realistic setup where one can modulate the signal of interest and then collect the inten ..."
Abstract

Cited by 21 (5 self)
This paper considers the question of recovering the phase of an object from intensity-only measurements, a problem which naturally appears in X-ray crystallography and related disciplines. We study a physically realistic setup where one can modulate the signal of interest and then collect the intensity of its diffraction pattern, each modulation thereby producing a sort of coded diffraction pattern. We show that PhaseLift, a recent convex programming technique, recovers the phase information exactly from a number of random modulations, which is polylogarithmic in the number of unknowns. Numerical experiments with noiseless and noisy data complement our theoretical analysis and illustrate our approach.
Adaptive Restart for Accelerated Gradient Schemes
, 2012
"... In this paper we demonstrate a simple heuristic adaptive restart technique that can dramatically improve the convergence rate of accelerated gradient schemes. The analysis of the technique relies on the observation that these schemes exhibit two modes of behavior depending on how much momentum is ap ..."
Abstract

Cited by 21 (0 self)
In this paper we demonstrate a simple heuristic adaptive restart technique that can dramatically improve the convergence rate of accelerated gradient schemes. The analysis of the technique relies on the observation that these schemes exhibit two modes of behavior depending on how much momentum is applied. In what we refer to as the ‘high momentum’ regime the iterates generated by an accelerated gradient scheme exhibit a periodic behavior, where the period is proportional to the square root of the local condition number of the objective function. This suggests a restart technique whereby we reset the momentum whenever we observe periodic behaviour. We provide analysis to show that in many cases adaptively restarting allows us to recover the optimal rate of convergence with no prior knowledge of function parameters.
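A toy numpy sketch of the function-value variant of this heuristic (restart whenever the objective increases, a cheap proxy for detecting the periodic behaviour described above) on an ill-conditioned quadratic; the problem instance, constants, and names are illustrative, not from the paper.

```python
import numpy as np

def agd(f, grad, x0, step, n_iter=1000, adaptive_restart=True):
    """Nesterov-style accelerated gradient descent. When
    adaptive_restart is True, the momentum is reset whenever the
    objective increases between consecutive iterates."""
    x, v, theta = x0.copy(), x0.copy(), 1.0
    f_prev = np.inf
    for _ in range(n_iter):
        x_new = v - step * grad(v)
        theta_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta ** 2))
        v = x_new + ((theta - 1.0) / theta_new) * (x_new - x)
        if adaptive_restart and f(x_new) > f_prev:
            theta_new, v = 1.0, x_new.copy()  # kill the momentum
        f_prev, x, theta = f(x_new), x_new, theta_new
    return x

# ill-conditioned quadratic f(x) = 0.5 * x^T diag(d) x, condition number 100
d = np.linspace(1.0, 100.0, 50)
f = lambda x: 0.5 * np.dot(d * x, x)
grad = lambda x: d * x
x_star = agd(f, grad, np.ones(50), step=1.0 / d.max())
```

On this instance the restarted scheme converges linearly at roughly the accelerated rate, even though the strong-convexity parameter is never supplied to the algorithm.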
A significance test for the lasso
"... In the sparse linear regression setting, we consider testing the significance of the predictor variable that enters the current lasso model, in the sequence of models visited along the lasso solution path. We propose a simple test statistic based on lasso fitted values, called the covariance test st ..."
Abstract

Cited by 17 (1 self)
In the sparse linear regression setting, we consider testing the significance of the predictor variable that enters the current lasso model, in the sequence of models visited along the lasso solution path. We propose a simple test statistic based on lasso fitted values, called the covariance test statistic, and show that when the true model is linear, this statistic has an Exp(1) asymptotic distribution under the null hypothesis (the null being that all truly active variables are contained in the current lasso model). Our proof of this result for the special case of the first predictor to enter the model (i.e., testing for a single significant predictor variable against the global null) requires only weak assumptions on the predictor matrix X. On the other hand, our proof for a general step in the lasso path places further technical assumptions on X and the generative model, but still allows for the important high-dimensional case p > n, and does not necessarily require that the current lasso model achieves perfect recovery of the truly active variables. Of course, for testing the significance of an additional variable between two nested linear models, one typically uses the chi-squared test, comparing the drop in residual sum of squares (RSS) to a χ²₁ distribution. But when this additional variable is not fixed, and has been chosen adaptively or greedily, this test is no longer appropriate: adaptivity makes the drop in RSS stochastically much larger than χ²₁ under the null hypothesis. Our analysis explicitly accounts for adaptivity, as it must, since the lasso builds an adaptive sequence of linear models as the tuning parameter λ decreases. In this analysis, shrinkage plays a key role: though additional variables are chosen adaptively, the coefficients of lasso active variables are shrunken due to the ℓ1 penalty. Therefore the test statistic (which is based on lasso fitted values) is in a sense balanced by these two opposing properties, adaptivity and shrinkage, and its null distribution is tractable and asymptotically Exp(1).
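For the orthonormal-design special case of the first step mentioned above (global null, σ = 1), the first two lasso knots are the two largest magnitudes λ₁ ≥ λ₂ among p iid N(0,1) scores, and the covariance statistic reduces to λ₁(λ₁ − λ₂). The Exp(1) limit is then easy to probe by simulation; the function name and simulation constants below are our own, not from the paper.

```python
import numpy as np

def covariance_test_null(p=1000, n_sim=5000, seed=0):
    """Monte Carlo draws of the covariance test statistic for the
    first lasso knot under the global null, orthonormal design,
    sigma = 1: T = lam1 * (lam1 - lam2) where lam1 >= lam2 are the
    two largest of p iid |N(0,1)| values."""
    rng = np.random.default_rng(seed)
    z = np.abs(rng.standard_normal((n_sim, p)))
    z.sort(axis=1)
    lam1, lam2 = z[:, -1], z[:, -2]
    return lam1 * (lam1 - lam2)

T = covariance_test_null()
# For Exp(1): mean = 1 and P(T > 1) = exp(-1), which the empirical
# draws should roughly match for large p.
```

The approximation improves as p grows; for moderate p the empirical mean sits slightly above 1, consistent with the asymptotic nature of the result.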
A Direct Algorithm for 1D Total Variation Denoising
"... Abstract—A very fast noniterative algorithm is proposed for denoising or smoothing onedimensional discrete signals, by solving the total variation regularized leastsquares problem or the related fused lasso problem. A C code implementation is available on the web page of the author. Index Terms—To ..."
Abstract

Cited by 16 (0 self)
Abstract—A very fast non-iterative algorithm is proposed for denoising or smoothing one-dimensional discrete signals, by solving the total variation regularized least-squares problem or the related fused lasso problem. A C code implementation is available on the web page of the author. Index Terms—Total variation, denoising, nonlinear smoothing, fused lasso, regularized least-squares, nonparametric regression, convex nonsmooth optimization, taut string.
Robust Subspace Clustering
, 2013
"... Subspace clustering refers to the task of finding a multisubspace representation that best fits a collection of points taken from a highdimensional space. This paper introduces an algorithm inspired by sparse subspace clustering (SSC) [17] to cluster noisy data, and develops some novel theory demo ..."
Abstract

Cited by 16 (1 self)
Subspace clustering refers to the task of finding a multi-subspace representation that best fits a collection of points taken from a high-dimensional space. This paper introduces an algorithm inspired by sparse subspace clustering (SSC) [17] to cluster noisy data, and develops some novel theory demonstrating its correctness. In particular, the theory uses ideas from geometric functional analysis to show that the algorithm can accurately recover the underlying subspaces under minimal requirements on their orientation, and on the number of samples per subspace. Synthetic as well as real data experiments complement our theoretical study, illustrating our approach and demonstrating its effectiveness.
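The SSC-style program this builds on expresses each data point as a sparse combination of the other points; in the noisy setting this is typically the lasso version below (standard formulation in our notation, with λ a tuning parameter):

\[ \min_{c_i}\; \|c_i\|_{1} + \frac{\lambda}{2}\, \|y_i - Y c_i\|_{2}^{2} \quad \text{subject to} \quad (c_i)_i = 0, \]

where Y collects all the points as columns and the constraint excludes the trivial self-representation; the resulting sparse coefficients define an affinity matrix that is then fed to spectral clustering.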