Results 1–10 of 129
Probing the Pareto frontier for basis pursuit solutions
2008
"... The basis pursuit problem seeks a minimum onenorm solution of an underdetermined leastsquares problem. Basis pursuit denoise (BPDN) fits the leastsquares problem only approximately, and a single parameter determines a curve that traces the optimal tradeoff between the leastsquares fit and the ..."
Abstract

Cited by 365 (5 self)
The basis pursuit problem seeks a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise (BPDN) fits the least-squares problem only approximately, and a single parameter determines a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution. We prove that this curve is convex and continuously differentiable over all points of interest, and show that it gives an explicit relationship to two other optimization problems closely related to BPDN. We describe a root-finding algorithm for finding arbitrary points on this curve; the algorithm is suitable for problems that are large scale and for those that are in the complex domain. At each iteration, a spectral gradient-projection method approximately minimizes a least-squares problem with an explicit one-norm constraint. Only matrix-vector operations are required. The primal-dual solution of this problem gives function and derivative information needed for the root-finding method. Numerical experiments on a comprehensive set of test problems demonstrate that the method scales well to large problems.
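As a concrete illustration of the approach this abstract describes, the following sketch implements the two ingredients: a projected-gradient solver for the one-norm-constrained least-squares subproblem, and a Newton root-finding update on the Pareto curve φ(τ) = ‖Ax_τ − b‖₂ using the duality-based derivative φ'(τ) = −‖Aᵀr‖∞/‖r‖₂. All function names (`proj_l1`, `lasso_tau`, `pareto_root_find`) are our own; this is a toy sketch, not the authors' SPGL1 implementation.

```python
import numpy as np

def proj_l1(v, tau):
    """Euclidean projection of v onto the l1 ball {x : ||x||_1 <= tau}."""
    if tau <= 0:
        return np.zeros_like(v)
    if np.abs(v).sum() <= tau:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.arange(1, u.size + 1)
    rho = np.nonzero(u * k > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lasso_tau(A, b, tau, iters=500):
    """Projected gradient for min ||Ax - b||_2  s.t.  ||x||_1 <= tau."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = proj_l1(x - A.T @ (A @ x - b) / L, tau)
    return x

def pareto_root_find(A, b, sigma, newton_iters=8):
    """Newton root-finding on phi(tau) = ||A x_tau - b||_2 = sigma."""
    tau, x = 0.0, np.zeros(A.shape[1])
    for _ in range(newton_iters):
        x = lasso_tau(A, b, tau)
        r = b - A @ x
        phi = np.linalg.norm(r)
        if abs(phi - sigma) < 1e-6:
            break
        dphi = -np.linalg.norm(A.T @ r, np.inf) / phi   # phi'(tau) via duality
        tau += (sigma - phi) / dphi                     # Newton update on tau
    return x, tau

# Toy BPDN instance: recover a 5-sparse vector from 40 noisy measurements.
rng = np.random.default_rng(0)
m, n, s = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = 3.0 * rng.standard_normal(s)
noise = 0.01 * rng.standard_normal(m)
b = A @ x0 + noise
sigma = np.linalg.norm(noise)
x, tau = pareto_root_find(A, b, sigma)
```

Only matrix-vector products with A and Aᵀ are needed inside the inner solver, matching the abstract's claim of scalability.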
A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers
"... ..."
Structured compressed sensing: From theory to applications
IEEE Trans. Signal Process., 2011
"... Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discretetodiscrete measurement architectures using matrices of randomized nature and signal models based on standard ..."
Abstract

Cited by 104 (16 self)
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field and as a reference for researchers that attempts to put some of the existing ideas in the perspective of practical applications.
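A minimal example of the "structured sensing architecture" theme: a randomly subsampled DFT can be applied matrix-free via the FFT, with an explicit adjoint. The helper name `make_partial_fourier` is our own illustration, not from the paper; the sketch assumes NumPy's orthonormal FFT convention.

```python
import numpy as np

def make_partial_fourier(N, m, rng):
    """Randomly subsampled orthonormal DFT, applied matrix-free via the FFT."""
    idx = rng.choice(N, size=m, replace=False)   # random frequency subset
    def A(x):                                    # forward map: C^N -> C^m
        return np.fft.fft(x, norm="ortho")[idx]
    def At(y):                                   # adjoint: zero-fill, inverse DFT
        z = np.zeros(N, dtype=complex)
        z[idx] = y
        return np.fft.ifft(z, norm="ortho")
    return A, At

rng = np.random.default_rng(0)
N, m = 256, 64
A, At = make_partial_fourier(N, m, rng)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = rng.standard_normal(m) + 1j * rng.standard_normal(m)
```

Both maps cost O(N log N) rather than the O(mN) of an explicit random matrix, which is the practical point of structured operators.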
Shifting Inequality and Recovery of Sparse Signals
 IEEE Transactions on Signal Processing
"... Abstract—In this paper, we present a concise and coherent analysis of the constrained `1 minimization method for stable recovering of highdimensional sparse signals both in the noiseless case and noisy case. The analysis is surprisingly simple and elementary, while leads to strong results. In parti ..."
Abstract

Cited by 64 (12 self)
Abstract—In this paper, we present a concise and coherent analysis of the constrained ℓ1 minimization method for stable recovery of high-dimensional sparse signals in both the noiseless and noisy cases. The analysis is surprisingly simple and elementary, yet leads to strong results. In particular, it is shown that the sparse recovery problem can be solved via ℓ1 minimization under weaker conditions than what is known in the literature. A key technical tool is an elementary inequality, called the Shifting Inequality, which, for a given nonnegative decreasing sequence, bounds the ℓ2 norm of a subsequence in terms of the ℓ1 norm of another subsequence by shifting the elements to the upper end. Index Terms—ℓ1 minimization, restricted isometry property, shifting inequality, sparse recovery.
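For context, here is a numerical check of the classical block-wise bound that the Shifting Inequality sharpens (this is the standard bound used in RIP analyses, not the paper's inequality itself): partition a nonincreasing nonnegative sequence into consecutive size-s blocks; each block's ℓ2 norm is at most the previous block's ℓ1 norm divided by √s.

```python
import numpy as np

rng = np.random.default_rng(1)
a = np.sort(rng.random(40))[::-1]     # nonincreasing, nonnegative sequence
s = 5
blocks = a.reshape(-1, s)             # consecutive blocks of size s

# Every entry of block j is bounded by the mean of block j-1, hence
# ||block_j||_2 <= sqrt(s) * max(block_j) <= ||block_{j-1}||_1 / sqrt(s).
ratios = [np.linalg.norm(blocks[j]) * np.sqrt(s) / blocks[j - 1].sum()
          for j in range(1, blocks.shape[0])]
```

The paper's shifting trick tightens this by bounding a shifted block against an overlapping one, which is what weakens the required RIP conditions.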
Compressed sensing: how sharp is the restricted isometry property?
2009
"... Compressed sensing is a recent technique by which signals can be measured at a rate proportional to their information content, combining the important task of compression directly into the measurement process. Since its introduction in 2004 there have been hundreds of manuscripts on compressed sens ..."
Abstract

Cited by 51 (7 self)
Compressed sensing is a recent technique by which signals can be measured at a rate proportional to their information content, combining the important task of compression directly into the measurement process. Since its introduction in 2004 there have been hundreds of manuscripts on compressed sensing, a large fraction of which have focused on the design and analysis of algorithms to recover a signal from its compressed measurements. The Restricted Isometry Property (RIP) has become a ubiquitous property assumed in their analysis. We present the best known bounds on the RIP, and in the process illustrate the way in which the combinatorial nature of compressed sensing is controlled. Our quantitative bounds on the RIP allow precise statements as to how aggressively a signal can be undersampled, the essential question for practitioners.
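The RIP constant δ_s mentioned above can be computed exactly for toy sizes by exhausting every s-column submatrix; the sketch below (our own illustration, feasible only for tiny problems, since the search is combinatorial) verifies the basic monotonicity δ₁ ≤ δ₂ ≤ δ₃ for a normalized Gaussian matrix.

```python
import numpy as np
from itertools import combinations

def rip_constant(A, s):
    """Exact restricted isometry constant delta_s by exhaustive search.

    Feasible only for tiny problems: it enumerates every s-column submatrix.
    """
    n = A.shape[1]
    delta = 0.0
    for S in combinations(range(n), s):
        w = np.linalg.eigvalsh(A[:, list(S)].T @ A[:, list(S)])  # Gram spectrum
        delta = max(delta, w[-1] - 1.0, 1.0 - w[0])
    return delta

rng = np.random.default_rng(0)
m, n = 20, 12
A = rng.standard_normal((m, n)) / np.sqrt(m)   # columns have unit norm on average
deltas = [rip_constant(A, s) for s in (1, 2, 3)]
```

The combinatorial blow-up of this computation is exactly why the quantitative bounds surveyed in the paper matter: δ_s cannot be certified by enumeration at realistic sizes.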
Stable image reconstruction using total variation minimization
SIAM Journal on Imaging Sciences, 2013
"... This article presents nearoptimal guarantees for accurate and robust image recovery from undersampled noisy measurements using total variation minimization, and our results may be the first of this kind. In particular, we show that from O(s log(N)) nonadaptive linear measurements, an image can be ..."
Abstract

Cited by 50 (2 self)
This article presents near-optimal guarantees for accurate and robust image recovery from undersampled noisy measurements using total variation minimization, and our results may be the first of this kind. In particular, we show that from O(s log(N)) nonadaptive linear measurements, an image can be reconstructed to within the best s-term approximation of its gradient, up to a logarithmic factor. Along the way, we prove a strengthened Sobolev inequality for functions lying in the null space of a suitably incoherent matrix.
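To make the "best s-term approximation of the gradient" notion concrete, a small sketch (illustrative only, not the authors' reconstruction method): for a piecewise-constant image the discrete gradient is exactly sparse, so the best s-term approximation error vanishes once s reaches the gradient's support size.

```python
import numpy as np

N = 16
X = np.zeros((N, N))
X[4:8, 4:8] = 1.0                       # piecewise-constant test image

gh = X[:, 1:] - X[:, :-1]               # horizontal forward differences
gv = X[1:, :] - X[:-1, :]               # vertical forward differences
grad = np.concatenate([gh.ravel(), gv.ravel()])

tv = np.abs(grad).sum()                 # anisotropic total variation
nnz = int(np.count_nonzero(grad))       # gradient sparsity s

def s_term_error(v, s):
    """l1 error of the best s-term approximation of v."""
    mags = np.sort(np.abs(v))[::-1]
    return mags[s:].sum()
```

For this 4×4 block the gradient has 16 unit-magnitude entries (two jumps per crossed row and column), so `tv == 16` and the s-term error is zero at s = 16; the paper's guarantee degrades gracefully as s falls below this.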
Compressive simultaneous full-waveform simulation
2008
"... The fact that the computational complexity of wavefield simulation is proportional to the size of the discretized model and acquisition geometry, and not to the complexity of the simulated wavefield, is a major impediment within seismic imaging. By turning simulation into a compressive sensing prob ..."
Abstract

Cited by 41 (20 self)
The fact that the computational complexity of wavefield simulation is proportional to the size of the discretized model and acquisition geometry, and not to the complexity of the simulated wavefield, is a major impediment within seismic imaging. By turning simulation into a compressive sensing problem—where simulated data is recovered from a relatively small number of independent simultaneous sources—we remove this impediment by showing that compressively sampling a simulation is equivalent to compressively sampling the sources, followed by solving a reduced system. As in compressive sensing, this allows for a reduction in sampling rate and hence in simulation costs. We demonstrate this principle for the time-harmonic Helmholtz solver. The solution is computed by inverting the reduced system, followed by a recovery of the full wavefield with a sparsity-promoting program. Depending on the wavefield’s sparsity, this approach can lead to significant cost reductions, in particular when combined with the implicit preconditioned Helmholtz solver, which is known to converge even for decreasing mesh sizes and increasing angular frequencies. These properties make our scheme a viable alternative to explicit time-domain finite differences.
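The key observation, that compressively sampling a simulation equals compressively sampling the sources, rests on linearity of the solve. A toy check below uses a generic well-conditioned matrix as a stand-in for a discretized Helmholtz operator (that stand-in is this sketch's assumption, not the paper's solver):

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_src, n_sim = 50, 20, 4
H = 5.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stand-in system matrix
Q = rng.standard_normal((n, n_src))      # one column per individual source
W = rng.standard_normal((n_src, n_sim))  # random simultaneous-source weights

U_full = np.linalg.solve(H, Q)           # all individual-source wavefields
U_sim = np.linalg.solve(H, Q @ W)        # only n_sim simultaneous solves
```

Since H⁻¹(QW) = (H⁻¹Q)W, the n_sim simultaneous solves are exactly a compressed sampling of the n_src individual wavefields; the paper then recovers the full wavefield from this reduced data by sparsity promotion.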
Kronecker Compressive Sensing
"... Compressive sensing (CS) is an emerging approach for acquisition of signals having a sparse or compressible representation in some basis. While the CS literature has mostly focused on problems involving 1D signals and 2D images, many important applications involve signals that are multidimensional ..."
Abstract

Cited by 38 (2 self)
Compressive sensing (CS) is an emerging approach for acquisition of signals having a sparse or compressible representation in some basis. While the CS literature has mostly focused on problems involving 1D signals and 2D images, many important applications involve signals that are multidimensional; in this case, CS works best with representations that encapsulate the structure of such signals in every dimension. We propose the use of Kronecker product matrices in CS for two purposes. First, we can use such matrices as sparsifying bases that jointly model the different types of structure present in the signal. Second, the measurement matrices used in distributed settings can be easily expressed as Kronecker product matrices. The Kronecker product formulation in these two settings enables the derivation of analytical bounds for sparse approximation of multidimensional signals and CS recovery performance as well as a means to evaluate novel distributed measurement schemes.
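The identity underlying both uses of Kronecker products described above is (A ⊗ B) vec(X) = vec(B X Aᵀ): one global Kronecker measurement of a 2-D signal equals applying a separate operator along each dimension. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 6))          # operator along the column dimension
B = rng.standard_normal((5, 7))          # operator along the row dimension
X = rng.standard_normal((7, 6))          # 2-D signal

lhs = np.kron(A, B) @ X.flatten(order="F")      # one global Kronecker operator
rhs = (B @ X @ A.T).flatten(order="F")          # separable per-dimension form
```

The Fortran-order flatten implements column-stacking vec; the right-hand side never forms the (20 × 42) Kronecker matrix, which is why distributed and multidimensional measurements stay cheap to apply.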
Confidence intervals and hypothesis testing for high-dimensional regression. arXiv:1306.3171
"... Fitting highdimensional statistical models often requires the use of nonlinear parameter estimation procedures. As a consequence, it is generally impossible to obtain an exact characterization of the probability distribution of the parameter estimates. This in turn implies that it is extremely cha ..."
Abstract

Cited by 31 (3 self)
Fitting high-dimensional statistical models often requires the use of nonlinear parameter estimation procedures. As a consequence, it is generally impossible to obtain an exact characterization of the probability distribution of the parameter estimates. This in turn implies that it is extremely challenging to quantify the uncertainty associated with a certain parameter estimate. Concretely, no commonly accepted procedure exists for computing classical measures of uncertainty and statistical significance, such as confidence intervals or p-values. We consider here a broad class of regression problems, and propose an efficient algorithm for constructing confidence intervals and p-values. The resulting confidence intervals have nearly optimal size. When testing for the null hypothesis that a certain parameter is vanishing, our method has nearly optimal power. Our approach is based on constructing a 'debiased' version of regularized M-estimators. The new construction improves over recent work in the field in that it does not assume a special structure on the design matrix. Furthermore, the proofs are remarkably simple. We test our method on a diabetes prediction problem.
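A minimal sketch of the debiasing idea in the simplest setting: with an i.i.d. standard Gaussian design the precision matrix is the identity, so the correction reduces to θ̂ + Xᵀ(y − Xθ̂)/n. The paper's construction handles general designs; everything below, including the λ choice, the ISTA solver, and the known noise level, is our simplifying assumption.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
n, p = 400, 50
theta_true = np.zeros(p)
theta_true[:3] = 2.0
X = rng.standard_normal((n, p))          # i.i.d. design: precision matrix ~ I
sigma_noise = 0.5
y = X @ theta_true + sigma_noise * rng.standard_normal(n)

# Lasso via ISTA on (1/2n)||y - X theta||^2 + lam * ||theta||_1
lam = 0.1
L = np.linalg.norm(X, 2) ** 2 / n        # Lipschitz constant of the gradient
theta = np.zeros(p)
for _ in range(2000):
    theta = soft(theta - X.T @ (X @ theta - y) / (n * L), lam / L)

# Debiasing step; M = I is a valid choice here because the design covariance is I.
theta_d = theta + X.T @ (y - X @ theta) / n

# Coordinate-wise 95% intervals, treating the noise level as known.
half_width = 1.96 * sigma_noise / np.sqrt(n)
```

The Lasso estimate shrinks the active coefficients toward zero; the correction term removes most of that bias, which is what makes coordinate-wise Gaussian intervals meaningful.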
Time-Frequency Energy Distributions Meet Compressed Sensing
2010
"... Abstract—In the case of multicomponent signals with amplitude and frequency modulations, the idealized representation which consists of weighted trajectories on the timefrequency (TF) plane, is intrinsically sparse. Recent advances in optimal recovery from sparsity constraints thus suggest to revis ..."
Abstract

Cited by 30 (0 self)
Abstract—In the case of multicomponent signals with amplitude and frequency modulations, the idealized representation, which consists of weighted trajectories on the time-frequency (TF) plane, is intrinsically sparse. Recent advances in optimal recovery from sparsity constraints thus suggest revisiting the issue of TF localization by exploiting sparsity, as adapted to the specific context of (quadratic) TF distributions. Based on classical results in TF analysis, it is argued that the relevant information is mostly concentrated in a restricted subset of Fourier coefficients of the Wigner-Ville distribution neighbouring the origin of the ambiguity plane. Using this incomplete information as the primary constraint, the desired distribution follows as the minimum ℓ1-norm solution in the transformed TF domain. Possibilities and limitations of the approach are demonstrated via controlled numerical experiments, its performance is assessed in various configurations, and the results are compared with standard techniques. It is shown that improved representations can be obtained, though at a significantly increased computational cost. Index Terms—time-frequency, localization, sparsity. EDICS Category: SSPNSSP
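A sketch of the discrete Wigner-Ville distribution this abstract builds on: each time slice is the DFT of the instantaneous autocorrelation x[n+m]·conj(x[n−m]), and its 2-D Fourier transform is the ambiguity function, whose coefficients near the origin supply the paper's constraint set. Discretization conventions (windowing, oversampling) vary; this is one simple choice, not the authors' code.

```python
import numpy as np

N = 64
t = np.arange(N)
x = np.exp(2j * np.pi * (0.1 * t + 0.1 * t**2 / N))   # linear chirp, f: 0.1 -> 0.3

W = np.zeros((N, N))
for n in range(N):
    tau_max = min(n, N - 1 - n)          # largest symmetric lag at time n
    r = np.zeros(N, dtype=complex)
    for m in range(-tau_max, tau_max + 1):
        r[m % N] = x[n + m] * np.conj(x[n - m])   # instantaneous autocorrelation
    W[n] = np.fft.fft(r).real            # conjugate-symmetric in m => real DFT

# The ambiguity function is the 2-D Fourier transform of W; the paper's
# sparsity-constrained recovery keeps only its coefficients near the origin.
ambiguity = np.fft.fft2(W)
```

A built-in sanity check: summing each slice over frequency returns N·|x[n]|², the time marginal that quadratic TF distributions are designed to preserve.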