Results 1–10 of 33
Simultaneously Structured Models with Application to Sparse and Low-rank Matrices
, 2014
Abstract
Cited by 41 (5 self)
The topic of recovery of a structured model given a small number of linear observations has been well-studied in recent years. Examples include recovering sparse or group-sparse vectors, low-rank matrices, and the sum of sparse and low-rank matrices, among others. In various applications in signal processing and machine learning, the model of interest is known to be structured in several ways at the same time, for example, a matrix that is simultaneously sparse and low-rank. Often norms that promote each individual structure are known, and allow for recovery using an order-wise optimal number of measurements (e.g., ℓ1 norm for sparsity, nuclear norm for matrix rank). Hence, it is reasonable to minimize a combination of such norms. We show that, surprisingly, if we use multi-objective optimization with these norms, then we can do no better, order-wise, than an algorithm that exploits only one of the present structures. This result suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation, i.e., not one that is a function of the convex relaxations used for each structure. We then specialize our results to the case of sparse and low-rank matrices. We show that a nonconvex formulation of the problem can recover the model from very few measurements, on the order of the degrees of freedom of the matrix, whereas the convex problem obtained from a combination of the ℓ1 and nuclear norms requires many more measurements. This proves an order-wise gap between the performance of the convex and nonconvex recovery problems in this case. Our framework applies to arbitrary structure-inducing norms as well as to a wide range of measurement ensembles. This allows us to give performance bounds for problems such as sparse phase retrieval and low-rank tensor completion.
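The combined-norm relaxation this abstract argues against is easy to state concretely. A minimal sketch of the ℓ1-plus-nuclear-norm objective for a matrix that is simultaneously sparse and low-rank (the weights lam1, lam2 and the test matrix are illustrative, not from the paper):

```python
import numpy as np

def combined_norm(X, lam1=1.0, lam2=1.0):
    """Weighted sum of the entrywise l1 norm (promotes sparsity) and the
    nuclear norm (promotes low rank) of a matrix X."""
    l1 = np.abs(X).sum()                  # entrywise l1 norm
    nuc = np.linalg.norm(X, ord='nuc')    # sum of singular values
    return lam1 * l1 + lam2 * nuc

# A matrix that is simultaneously sparse and rank one:
X = np.zeros((5, 5))
X[0, 0] = 1.0
print(combined_norm(X))   # l1 = 1 and nuclear norm = 1, so the objective is 2
```

The paper's negative result is precisely that minimizing such a weighted combination, for any choice of weights, cannot beat the better of the two individual norms order-wise.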
GESPAR: Efficient phase retrieval of sparse signals
, 2013
Abstract
Cited by 29 (9 self)
Abstract—We consider the problem of phase retrieval, namely, recovery of a signal from the magnitude of its Fourier transform, or of any other linear transform. Due to the loss of Fourier phase information, this problem is ill-posed. Therefore, prior information on the signal is needed in order to enable its recovery. In this work we consider the case in which the signal is known to be sparse, i.e., it consists of a small number of nonzero elements in an appropriate basis. We propose a fast local search method for recovering a sparse signal from measurements of its Fourier transform (or other linear transform) magnitude, which we refer to as GESPAR: GrEedy Sparse PhAse Retrieval. Our algorithm does not require matrix lifting, unlike previous approaches, and is therefore potentially suitable for large-scale problems such as images. Simulation results indicate that GESPAR is fast and more accurate than existing techniques in a variety of settings. Index Terms—Nonconvex optimization, phase retrieval, sparse signal processing.
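The nonconvex least-squares objective that local-search methods of this kind descend can be written in a few lines. A toy sketch of the phaseless measurement model (dimensions and the Gaussian ensemble are illustrative; this sets up the objective only and is not the GESPAR algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 20, 8, 2

# k-sparse ground-truth signal
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n))
y = np.abs(A @ x) ** 2          # intensity-only (phaseless) measurements

def misfit(z):
    """Nonconvex least-squares objective that local-search methods minimize."""
    return np.sum((np.abs(A @ z) ** 2 - y) ** 2)

print(misfit(x), misfit(-x))    # both zero: the global sign is unrecoverable
```

The sign ambiguity in the last line is why sparsity (or other priors) only pins the signal down up to trivial ambiguities.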
Phase Retrieval with Application to Optical Imaging
, 2015
Abstract
Cited by 18 (6 self)
The problem of phase retrieval, i.e., the recovery of a function given the magnitude of its …
Exact and stable covariance estimation from quadratic sampling via convex programming (to appear)
 IEEE Transactions on Information Theory
, 2015
Abstract
Cited by 10 (3 self)
Statistical inference and information processing of high-dimensional data often require efficient and accurate estimation of their second-order statistics. With rapidly changing data, limited processing power and storage at the sensor suite, it is desirable to extract the covariance structure from a single pass over the data stream and a small number of measurements. In this paper, we explore a quadratic random measurement model which imposes a minimal memory requirement and low computational complexity during the sampling process, and is shown to be optimal in preserving low-dimensional covariance structures. Specifically, four popular structural assumptions on covariance matrices, namely low rank, Toeplitz low rank, sparsity, and jointly rank-one and sparse structure, are investigated. We show that a covariance matrix with any of these structures can be perfectly recovered from a near-optimal number of sub-Gaussian quadratic measurements, via efficient convex relaxation algorithms for the respective structure. The proposed algorithm has a variety of potential applications in streaming data processing, high-frequency wireless communication, phase space tomography in optics, non-coherent subspace detection, etc. Our method admits universally accurate covariance estimation in the absence of noise as soon as the number of measurements exceeds the theoretic sampling limits. We also demonstrate the robustness of this approach against noise and imperfect structural assumptions. Our analysis is established upon a novel notion called the mixed-norm restricted isometry property (RIP-ℓ2/ℓ1), as well as the conventional RIP-ℓ2/ℓ2 for near-isotropic and bounded measurements. In addition, our results improve upon the best-known phase retrieval guarantees (for both dense and sparse signals) using PhaseLift, with a significantly simpler approach.
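The quadratic sampling model described above becomes linear in the covariance matrix after lifting, which is what the convex programs exploit. A small numerical sketch (sizes and the Gaussian sketching vectors are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 6, 40, 2

# A rank-r covariance matrix Sigma = U U^T
U = rng.standard_normal((n, r))
Sigma = U @ U.T

# Quadratic measurements: y_i = a_i^T Sigma a_i -- each one needs only the
# scalar sketch a_i^T x per data vector, hence minimal memory at the sensor
A = rng.standard_normal((m, n))
y = np.einsum('mi,ij,mj->m', A, Sigma, A)

# The same measurements are *linear* in Sigma: y_i = trace(a_i a_i^T Sigma)
y_lin = np.array([np.trace(np.outer(a, a) @ Sigma) for a in A])
```

The equality of `y` and `y_lin` is the lifting identity; the structural priors (low rank, Toeplitz, sparsity) then enter as convex penalties on Sigma.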
CPRL – An Extension of Compressive Sensing to the Phase Retrieval Problem
Abstract
Cited by 8 (2 self)
While compressive sensing (CS) has been one of the most vibrant research fields in the past few years, most development only applies to linear models. This limits its application in many areas where CS could make a difference. This paper presents a novel extension of CS to the phase retrieval problem, where intensity measurements of a linear system are used to recover a complex sparse signal. We propose a novel solution using a lifting technique, CPRL, which relaxes the NP-hard problem to a nonsmooth semidefinite program. Our analysis shows that CPRL inherits many desirable properties from CS, such as guarantees for exact recovery. We further provide scalable numerical solvers to accelerate its implementation.
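The lifting step that CPRL builds on can be illustrated directly: intensity measurements are quadratic in the signal x but linear in the rank-one matrix X = x xᵀ, which is what makes a semidefinite relaxation possible. A sketch with illustrative dimensions (the actual CPRL program additionally penalizes the trace and ℓ1 norms of X under a semidefinite constraint, which is not shown here):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 12

x = np.zeros(n)
x[[1, 3]] = [2.0, -1.0]                  # a sparse signal
A = rng.standard_normal((m, n))

# Intensity measurements: quadratic (hence nonlinear) in x ...
y = np.abs(A @ x) ** 2

# ... but linear in the lifted matrix X = x x^T:  y_i = trace(a_i a_i^T X)
X = np.outer(x, x)
y_lifted = np.array([np.trace(np.outer(a, a) @ X) for a in A])
```

Dropping the rank-one constraint on X and keeping only its convex surrogates is exactly the relaxation step; recovering x from X is then a rank-one factorization.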
Distributed sparse signal recovery for sensor networks
 in Proc. IEEE Int. Conf. on Acoust., Speech, and Sig. Proc. (ICASSP)
Abstract
Cited by 7 (3 self)
We propose a distributed algorithm for sparse signal recovery in sensor networks based on Iterative Hard Thresholding (IHT). Every agent has a set of measurements of a signal x, and the objective is for the agents to recover x from their collective measurements at a minimal communication cost and with low computational complexity. A naïve distributed implementation of IHT would require global communication of every agent's full state in each iteration. We find that we can dramatically reduce this communication cost by leveraging solutions to the distributed top-K problem in the database literature. Evaluations show that our algorithm requires up to three orders of magnitude less total bandwidth than the best-known distributed basis pursuit method. Index Terms—compressed sensing, distributed algorithm, iterative hard thresholding, top-K.
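The centralized IHT iteration that this distributed scheme builds on is short: a gradient step on the least-squares residual followed by keeping the K largest-magnitude entries (that per-iteration top-K selection is the piece the authors distribute). A minimal centralized sketch, with illustrative dimensions and a conservative step size:

```python
import numpy as np

def hard_threshold(z, k):
    """Keep the k largest-magnitude entries of z, zero out the rest."""
    out = np.zeros_like(z)
    idx = np.argpartition(np.abs(z), -k)[-k:]
    out[idx] = z[idx]
    return out

def iht(A, y, k, iters=500):
    """Iterative Hard Thresholding for min ||y - Ax||^2 s.t. x is k-sparse."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + step * A.T @ (y - A @ x), k)
    return x

rng = np.random.default_rng(3)
m, n, k = 60, 120, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x_hat = iht(A, y, k)
```

In the distributed setting each agent holds only some rows of A and y; the gradient is a sum of per-agent terms, so only the thresholding step needs global coordination.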
Distributed Compressed Sensing For Static and Time-Varying Networks
Abstract
Cited by 2 (1 self)
Abstract—We consider the problem of in-network compressed sensing from distributed measurements. Every agent has a set of measurements of a signal x, and the objective is for the agents to recover x from their collective measurements using only communication with neighbors in the network. Our distributed approach to this problem is based on the centralized Iterative Hard Thresholding algorithm (IHT). We first present a distributed IHT algorithm for static networks that leverages standard tools from distributed computing to execute in-network computations with minimized bandwidth consumption. Next, we address distributed signal recovery in networks with time-varying topologies. The network dynamics necessarily introduce inaccuracies into our in-network computations. To accommodate these inaccuracies, we show how centralized IHT can be extended to include inexact computations while still providing the same recovery guarantees as the original IHT algorithm. We then leverage these new theoretical results to develop a distributed version of IHT for time-varying networks. Evaluations show that our distributed algorithms for both static and time-varying networks outperform previously proposed solutions in time and bandwidth by several orders of magnitude. Index Terms—compressed sensing, distributed algorithm, iterative hard thresholding, distributed consensus.
A new and improved quantitative recovery analysis for iterative hard thresholding algorithms …
, 2013
Sparse Phase Retrieval from Short-Time Fourier Measurements
Abstract
Cited by 1 (0 self)
Abstract—We consider the classical 1D phase retrieval problem. In order to overcome the difficulties associated with phase retrieval from measurements of the Fourier magnitude, we treat recovery from the magnitude of the short-time Fourier transform (STFT). We first show that the redundancy offered by the STFT enables unique recovery for arbitrary nonvanishing inputs, under mild conditions. An efficient algorithm for recovery of a sparse input from the STFT magnitude is then suggested, based on an adaptation of the recently proposed GESPAR algorithm. We demonstrate through simulations that using the STFT leads to improved performance over recovery from the oversampled Fourier magnitude with the same number of measurements. Index Terms—GESPAR, phase retrieval, short-time Fourier transform, sparsity.
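The STFT-magnitude measurement model is easy to write down: overlapping windowed frames give redundant phaseless views of the same signal. A minimal sketch (window, hop size, and lengths are illustrative; this forms the measurements only and is not the GESPAR-based recovery):

```python
import numpy as np

def stft_magnitude(x, w, hop):
    """Magnitudes of the short-time Fourier transform of x under a
    sliding window w (length L) advanced by `hop` samples per frame."""
    L = len(w)
    frames = [x[i:i + L] * w for i in range(0, len(x) - L + 1, hop)]
    return np.abs(np.fft.fft(frames, axis=1))

rng = np.random.default_rng(4)
n = 32
x = rng.standard_normal(n)

w = np.hanning(12)          # window choice is illustrative
Y = stft_magnitude(x, w, hop=4)

# Overlapping windows yield redundant phaseless views of x; even so, a
# global phase (for real x, a global sign) remains unrecoverable.
```

Because consecutive frames share samples, each entry of x appears in several measurements, and it is this redundancy that the abstract credits with making recovery unique up to trivial ambiguities.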