Results 1–10 of 50
Simultaneously Structured Models with Application to Sparse and Low-rank Matrices
, 2014
Abstract

Cited by 41 (5 self)
The topic of recovery of a structured model given a small number of linear observations has been well-studied in recent years. Examples include recovering sparse or group-sparse vectors, low-rank matrices, and the sum of sparse and low-rank matrices, among others. In various applications in signal processing and machine learning, the model of interest is known to be structured in several ways at the same time, for example, a matrix that is simultaneously sparse and low-rank. Often norms that promote each individual structure are known, and allow for recovery using an order-wise optimal number of measurements (e.g., ℓ1 norm for sparsity, nuclear norm for matrix rank). Hence, it is reasonable to minimize a combination of such norms. We show that, surprisingly, if we use multi-objective optimization with these norms, then we can do no better, order-wise, than an algorithm that exploits only one of the present structures. This result suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation, i.e. not one that is a function of the convex relaxations used for each structure. We then specialize our results to the case of sparse and low-rank matrices. We show that a nonconvex formulation of the problem can recover the model from very few measurements, on the order of the degrees of freedom of the matrix, whereas the convex problem obtained from a combination of the ℓ1 and nuclear norms requires many more measurements. This proves an order-wise gap between the performance of the convex and nonconvex recovery problems in this case. Our framework applies to arbitrary structure-inducing norms as well as to a wide range of measurement ensembles. This allows us to give performance bounds for problems such as sparse phase retrieval and low-rank tensor completion.
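As a concrete illustration of the combined objective discussed in this abstract, the following minimal numpy sketch (function and variable names are ours, not the paper's) evaluates a weighted sum of the ℓ1 and nuclear norms on a matrix that is simultaneously sparse and low-rank:

```python
import numpy as np

def combined_norm(X, lam1, lam2):
    """Weighted combination of the l1 norm (promoting sparsity) and the
    nuclear norm (promoting low rank) of a matrix X."""
    l1 = np.abs(X).sum()
    nuclear = np.linalg.svd(X, compute_uv=False).sum()
    return lam1 * l1 + lam2 * nuclear

# A matrix that is simultaneously sparse and low-rank: rank 1, four nonzeros.
u = np.zeros(10)
u[:2] = 1.0
X = np.outer(u, u)
# l1 norm = 4, nuclear norm = ||u||^2 = 2, so the combined objective is 6.
print(round(combined_norm(X, 1.0, 1.0), 6))
```

The paper's point is that minimizing any such fixed combination still needs order-wise as many measurements as using one of the two norms alone.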
The squared-error of generalized LASSO: A precise analysis
 In 51st Annual Allerton Conference on Communication, Control, and Computing, Allerton Park & Retreat
Abstract

Cited by 14 (6 self)
We consider the problem of estimating an unknown signal x0 from noisy linear observations y = Ax0 + z ∈ R^m. In many practical instances of this problem, x0 has a certain structure that can be captured by a structure-inducing function f(·). For example, the ℓ1 norm can be used to encourage a sparse solution. To estimate x0 with the aid of a convex f(·), we consider three variations of the widely used LASSO estimator and provide sharp characterizations of their performances. Our study falls under a generic framework, where the entries of the measurement matrix A and the noise vector z have zero-mean normal distributions with variances 1 and σ², respectively. For the LASSO estimator x∗, we ask: "What is the precise estimation error as a function of the noise level σ, the number of observations m and the structure of the signal?". In particular, we attempt to calculate the Normalized Square Error (NSE), defined as ‖x∗ − x0‖₂²/σ². We show that the structure of the signal x0 and the choice of the function f(·) enter the error formulae through the summary parameters D_f(x0, R+) and D_f(x0, λ), which are defined as the "Gaussian squared-distances" to the subdifferential cone and to the λ-scaled subdifferential of f at x0, respectively. The first estimator assumes a priori knowledge of f(x0) and is given by arg min_x {‖y − Ax‖₂ subject to f(x) ≤ f(x0)}. We prove that its worst-case NSE is achieved when σ → 0 and concentrates around D_f(x0, R+)/(m − D_f(x0, R+)). Secondly, we consider arg min_x {‖y − Ax‖₂ + λ f(x)}, for
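For intuition, here is a minimal sketch of the closely related squared-loss LASSO, arg min_x {½‖y − Ax‖₂² + λ‖x‖₁}, solved by iterative soft-thresholding (ISTA); the ℓ2-loss estimators analyzed in the paper differ in the loss term, and all names and parameter values below are illustrative:

```python
import numpy as np

def ista_lasso(A, y, lam, iters=2000):
    """ISTA for the squared-loss LASSO: min_x 0.5*||y - Ax||^2 + lam*||x||_1.
    A simple stand-in for the l2-loss estimators analyzed in the paper."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L          # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)   # Gaussian measurement matrix
x0 = np.zeros(100)
x0[[3, 40, 77]] = [2.0, -1.5, 1.0]                 # 3-sparse signal
y = A @ x0 + 0.01 * rng.standard_normal(50)        # noisy observations
x_hat = ista_lasso(A, y, lam=0.02)
```

With m = 50 Gaussian measurements of a 3-sparse signal in dimension 100, the estimate lands close to x0, the regime whose error the paper characterizes precisely.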
Beyond incoherence: stable and robust sampling strategies for compressive imaging, preprint
, 2012
Abstract

Cited by 9 (1 self)
In many signal processing applications, one wishes to acquire images that are sparse in transform domains such as spatial finite differences or wavelets using frequency domain samples. For such applications, overwhelming empirical evidence suggests that superior image reconstruction can be obtained through variable density sampling strategies that concentrate on lower frequencies. The wavelet and Fourier transform domains are not incoherent, because low-order wavelets and low-order frequencies are correlated, so compressed sensing theory does not immediately imply sampling strategies and reconstruction guarantees. In this paper we turn to a more refined notion of coherence – the so-called local coherence – measuring for each sensing vector separately how correlated it is to the sparsity basis. For Fourier measurements and Haar wavelet sparsity, the local coherence can be controlled, so for matrices comprised of frequencies sampled from suitable power-law densities, we can prove the restricted isometry property with near-optimal embedding dimensions. Consequently, the variable-density sampling strategies we provide — which are independent of the ambient dimension up to logarithmic factors — allow for image reconstructions that are stable to sparsity defects and robust to measurement noise. Our results cover both reconstruction by ℓ1-minimization and by total variation minimization.
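The variable density idea can be sketched in a few lines of numpy; the inverse-square density below is a simplified stand-in (the paper's densities carry additional normalization and logarithmic factors), and the function name is ours:

```python
import numpy as np

def variable_density_mask(n, m, rng):
    """Sample m of n frequency indices with probability proportional to an
    inverse-square power law, concentrating samples on low frequencies
    (illustrative density; the paper's exact densities differ)."""
    k = np.arange(n) - n // 2                  # centered frequency indices
    p = 1.0 / np.maximum(np.abs(k), 1) ** 2    # heavier weight near k = 0
    p /= p.sum()                               # normalize to a distribution
    return rng.choice(n, size=m, replace=False, p=p)

rng = np.random.default_rng(1)
idx = variable_density_mask(256, 40, rng)      # 40 of 256 Fourier samples
```

Almost all of the probability mass sits on low frequencies, so the drawn mask concentrates there while still occasionally reaching higher frequencies.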
Stable and robust sampling strategies for compressive imaging
, 2013
Abstract

Cited by 8 (1 self)
In many signal processing applications, one wishes to acquire images that are sparse in transform domains such as spatial finite differences or wavelets using frequency domain samples. For such applications, overwhelming empirical evidence suggests that superior image reconstruction can be obtained through variable density sampling strategies that concentrate on lower frequencies. The wavelet and Fourier transform domains are not incoherent, because low-order wavelets and low-order frequencies are correlated, so compressive sensing theory does not immediately imply sampling strategies and reconstruction guarantees. In this paper we turn to a more refined notion of coherence – the so-called local coherence – measuring for each sensing vector separately how correlated it is to the sparsity basis. For Fourier measurements and Haar wavelet sparsity, the local coherence can be controlled and bounded explicitly, so for matrices comprised of frequencies sampled from a suitable inverse square power-law density, we can prove the restricted isometry property with near-optimal embedding dimensions. Consequently, the variable-density sampling strategy we provide allows for image reconstructions that are stable to sparsity defects and robust to measurement noise. Our results cover both reconstruction by ℓ1-minimization and by total variation minimization. The local coherence framework developed in this paper should be of independent interest in sparse recovery problems more generally, as it implies that for optimal sparse recovery results it suffices to have bounded average coherence from sensing basis to sparsity basis – as opposed to bounded maximal coherence – as long as the sampling strategy is adapted accordingly.
Analysis ℓ1-recovery with frames and Gaussian measurements
, 2013
Abstract

Cited by 4 (0 self)
This paper provides novel results for the recovery of signals from undersampled measurements based on analysis ℓ1-minimization, when the analysis operator is given by a frame. We provide both so-called uniform and non-uniform recovery guarantees for cosparse (analysis-sparse) signals using Gaussian random measurement matrices. The non-uniform result relies on a recovery condition via tangent cones, and the uniform recovery guarantee is based on an analysis version of the null space property. Examining these conditions for Gaussian random matrices leads to precise bounds on the number of measurements required for successful recovery. In the special case of standard sparsity, our result improves a bound due to Rudelson and Vershynin concerning the exact reconstruction of sparse signals from Gaussian measurements with respect to the constant, and extends it to stability under passing to approximately sparse signals and to robustness under noise on the measurements.
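The cosparse (analysis-sparse) model mentioned here can be illustrated with a toy finite-difference analysis operator; the helper below is our own illustration, not the paper's code:

```python
import numpy as np

def cosparsity(x, Omega, tol=1e-10):
    """Cosparsity of x under analysis operator Omega: the number of rows of
    Omega orthogonal to x, i.e. the number of zeros in Omega @ x."""
    return int(np.sum(np.abs(Omega @ x) <= tol))

# Finite-difference analysis operator on R^n (row i is e_{i+1} - e_i).
n = 8
Omega = np.eye(n - 1, n, 1) - np.eye(n - 1, n)
x = np.array([1., 1., 1., 4., 4., 4., 4., 2.])   # piecewise-constant signal
print(cosparsity(x, Omega))  # 5 of the 7 differences vanish
```

A piecewise-constant signal is highly cosparse under this operator even though x itself has no zero entries, which is exactly why the analysis model differs from synthesis sparsity.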
Guarantees of total variation minimization for signal recovery. arXiv preprint arXiv:1301.6791
, 2013
Abstract

Cited by 4 (0 self)
In this paper, we consider using total variation (TV) minimization to recover signals whose gradients have a sparse support from a small number of measurements. We establish a proof for the performance guarantee of TV minimization in recovering one-dimensional signals with sparse gradient support. This answers the open question of proving the fidelity of TV minimization in such a setting. We show that when the number of Gaussian measurements M ≳ √(NK) log N, TV minimization guarantees the exact recovery of any signal of size N with at most K nonzero gradients with high probability; when M ≲ √(NK), TV minimization cannot find the original signal with a moderate probability. Last but not least, we also show that when M grows linearly with the signal dimension, the recoverable sparsity K grows linearly with the signal dimension as well.
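The quantities N and K in this abstract can be made concrete with a toy signal; tv_norm and gradient_sparsity are illustrative helper names of ours:

```python
import numpy as np

def tv_norm(x):
    """Discrete total variation of a 1-D signal: l1 norm of first differences."""
    return np.abs(np.diff(x)).sum()

def gradient_sparsity(x):
    """K: the number of nonzero gradients (jumps) of a 1-D signal."""
    return int(np.count_nonzero(np.diff(x)))

# N = 100 samples, piecewise constant with K = 2 jumps.
x = np.concatenate([np.zeros(40), 2 * np.ones(30), -np.ones(30)])
print(gradient_sparsity(x), tv_norm(x))  # 2 jumps, TV = |2| + |-3| = 5.0
```

TV minimization recovers such an x by minimizing tv_norm subject to the measurement constraints; the paper's bound says roughly √(NK) log N Gaussian measurements suffice.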
Joint image reconstruction and segmentation using the Potts model. submitted, preprint arXiv:1405.5850
, 2014
Abstract

Cited by 3 (2 self)
We propose a new algorithmic approach to the nonsmooth and nonconvex Potts problem (also called the piecewise-constant Mumford–Shah problem) for inverse imaging problems. We derive a suitable splitting into specific subproblems that can all be solved efficiently. Our method requires no a priori knowledge of the gray levels or of the number of segments of the reconstruction. Further, it avoids anisotropic artifacts such as geometric staircasing. We demonstrate the suitability of our method for joint image reconstruction and segmentation from limited data in x-ray and photoacoustic tomography. For instance, our method is able to reconstruct the Shepp–Logan phantom from only 7 angular views. We demonstrate the practical applicability in an experiment with real PET data.
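A one-dimensional analogue of the Potts problem admits an exact dynamic-programming solution, sketched below under our own naming; the paper's splitting method for 2-D imaging problems is considerably more involved:

```python
import numpy as np

def potts_1d(f, gamma):
    """Exact 1-D Potts segmentation by dynamic programming:
    minimize gamma * (number of jumps) + sum_i (u_i - f_i)^2
    over piecewise-constant signals u."""
    n = len(f)
    # Prefix sums give O(1) best-constant-fit error on any interval f[l:r).
    c1 = np.concatenate([[0.0], np.cumsum(f)])
    c2 = np.concatenate([[0.0], np.cumsum(f ** 2)])
    def err(l, r):
        s, q, m = c1[r] - c1[l], c2[r] - c2[l], r - l
        return q - s * s / m
    B = np.full(n + 1, np.inf)      # B[r]: optimal cost for the prefix f[:r]
    B[0] = -gamma                   # so each segment contributes one gamma
    last = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(r):
            c = B[l] + gamma + err(l, r)
            if c < B[r]:
                B[r], last[r] = c, l
    u, r = np.empty(n), n           # backtrack the optimal segmentation
    while r > 0:
        l = last[r]
        u[l:r] = (c1[r] - c1[l]) / (r - l)
        r = l
    return u

f = np.array([0.1, -0.1, 0.0, 5.1, 4.9, 5.0])   # noisy two-level signal
u = potts_1d(f, gamma=1.0)                       # two segments, means 0 and 5
```

The penalty gamma trades data fidelity against the number of segments, with no prior knowledge of the gray levels or segment count, mirroring the property highlighted in the abstract.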
On the Effective Measure of Dimension in the Analysis Cosparse Model
Abstract

Cited by 3 (2 self)
Many applications have benefited remarkably from low-dimensional models in the last decade. The fact that many signals, though high dimensional, are intrinsically low dimensional has made it possible to recover them stably from a relatively small number of their measurements. For example, in compressed sensing with the standard (synthesis) sparsity prior and in matrix completion, the number of measurements needed is proportional (up to a logarithmic factor) to the signal's manifold dimension. Recently, a new natural low-dimensional signal model has been proposed: the cosparse analysis prior. In the noiseless case, it is possible to recover signals from this model, using a combinatorial search, from a number of measurements proportional to the signal's manifold
A consistent histogram estimator for exchangeable graph models.
 Journal of Machine Learning Research Workshop and Conference Proceedings,
, 2014
Abstract

Cited by 3 (0 self)
Exchangeable graph models (ExGM) subsume a number of popular network models. The mathematical object that characterizes an ExGM is termed a graphon. Finding scalable, provably consistent estimators of graphons remains an open issue. In this paper, we propose a histogram estimator of a graphon that is provably consistent and numerically efficient. The proposed estimator is based on a sorting-and-smoothing (SAS) algorithm, which first sorts the empirical degrees of a graph, then smooths the sorted graph using total variation minimization. The consistency of the SAS algorithm is proved by leveraging sparsity concepts from compressed sensing.
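The SAS pipeline can be sketched in a few lines; here a simple block (histogram) average stands in for the paper's total variation smoothing step, and all names and parameters are illustrative:

```python
import numpy as np

def sas_estimate(A, h):
    """Sorting-and-smoothing sketch: sort vertices by empirical degree, then
    smooth the sorted adjacency matrix with h-by-h block averages (a simple
    stand-in for the paper's total variation minimization step)."""
    order = np.argsort(A.sum(axis=1))        # sort vertices by degree
    S = A[np.ix_(order, order)]              # degree-sorted adjacency matrix
    n = S.shape[0]
    W = np.zeros((n, n))
    for i in range(0, n, h):                 # average each h-by-h block
        for j in range(0, n, h):
            W[i:i+h, j:j+h] = S[i:i+h, j:j+h].mean()
    return W

# Sample a graph from the graphon w(x, y) = x*y with latent positions u.
rng = np.random.default_rng(2)
n = 60
u = np.sort(rng.random(n))
P = np.outer(u, u)                           # edge probabilities
U = rng.random((n, n))
A = (np.triu(U, 1) < np.triu(P, 1)).astype(float)
A = A + A.T                                  # symmetric simple graph
W = sas_estimate(A, h=10)                    # piecewise-constant graphon estimate
```

Sorting by degree approximately recovers the latent ordering, and the block averages give the histogram-type estimate whose consistency the paper proves.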
Fundamental performance limits for ideal decoders in high-dimensional linear inverse problems. arXiv:1311.6239
, 2013
Abstract

Cited by 3 (1 self)
The primary challenge in linear inverse problems is to design stable and robust "decoders" to reconstruct high-dimensional vectors from a low-dimensional observation through a linear operator. Sparsity, low rank, and related assumptions are typically exploited to design decoders whose performance is then bounded based on some measure of deviation from the idealized model, typically using a norm. This paper focuses on characterizing the fundamental performance limits that can be expected from an ideal decoder given a general model, i.e., a general subset of "simple" vectors of interest. First, we extend the so-called notion of instance optimality of a decoder to settings where one only wishes to reconstruct some part of the original high-dimensional vector from a low-dimensional observation. This covers practical settings such as medical imaging of a region of interest, or audio source separation when one is only interested in estimating the contribution of a specific instrument to a musical recording. We define instance optimality relative to a model well beyond the traditional framework of sparse recovery, and characterize the existence of an instance optimal decoder in terms of joint properties of the model and the considered linear operator. Noiseless and noise-robust settings are both considered. We show, somewhat surprisingly, that the existence of noise-aware instance optimal decoders for all noise levels implies the existence of a noise-blind decoder. A consequence of our results is that for models rich enough to contain an orthonormal basis, the existence of an ℓ2/ℓ2 instance optimal decoder is only possible when the linear operator is not substantially dimension-reducing. This covers well-known cases (sparse vectors, low-rank matrices) as well as a number of seemingly new situations (structured sparsity and sparse inverse covariance matrices, for instance).
We exhibit an operator-dependent norm which, under a model-specific generalization of the Restricted Isometry Property (RIP), always yields a feasible instance optimality property. This norm can be upper bounded by an atomic norm relative to the considered model.