Results 1–10 of 38
Simultaneously Structured Models with Application to Sparse and Low-rank Matrices
, 2014
Cited by 41 (5 self)
The topic of recovery of a structured model given a small number of linear observations has been well-studied in recent years. Examples include recovering sparse or group-sparse vectors, low-rank matrices, and the sum of sparse and low-rank matrices, among others. In various applications in signal processing and machine learning, the model of interest is known to be structured in several ways at the same time, for example, a matrix that is simultaneously sparse and low-rank. Often norms that promote each individual structure are known, and allow for recovery using an order-wise optimal number of measurements (e.g., ℓ1 norm for sparsity, nuclear norm for matrix rank). Hence, it is reasonable to minimize a combination of such norms. We show that, surprisingly, if we use multi-objective optimization with these norms, then we can do no better, order-wise, than an algorithm that exploits only one of the present structures. This result suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation, i.e., not one that is a function of the convex relaxations used for each structure. We then specialize our results to the case of sparse and low-rank matrices. We show that a nonconvex formulation of the problem can recover the model from very few measurements, which is on the order of the degrees of freedom of the matrix, whereas the convex problem obtained from a combination of the ℓ1 and nuclear norms requires many more measurements. This proves an order-wise gap between the performance of the convex and nonconvex recovery problems in this case. Our framework applies to arbitrary structure-inducing norms as well as to a wide range of measurement ensembles. This allows us to give performance bounds for problems such as sparse phase retrieval and low-rank tensor completion.
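The combined-norm objective the abstract discusses is easy to illustrate numerically. The numpy sketch below (with a hypothetical trade-off weight `lam`; the dimensions are arbitrary) evaluates the ℓ1 norm, the nuclear norm, and a weighted combination of the two on a matrix that is simultaneously sparse and low-rank:

```python
import numpy as np

# A matrix that is simultaneously sparse and low-rank: the outer
# product of a sparse vector with itself (rank 1, four nonzeros).
s = np.zeros(8)
s[[1, 4]] = [3.0, 4.0]
X = np.outer(s, s)

l1 = np.abs(X).sum()                              # l1 norm, promotes sparsity
nuc = np.linalg.svd(X, compute_uv=False).sum()    # nuclear norm, promotes low rank

lam = 0.5                  # hypothetical trade-off weight
combined = l1 + lam * nuc  # the multi-objective combination of the two norms
print(l1, nuc, combined)
```

Minimizing such a combination subject to the measurement constraints is exactly the kind of multi-objective relaxation whose order-wise suboptimality the paper establishes.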
Compressive Phase Retrieval via Generalized Approximate Message Passing
Cited by 30 (8 self)
Abstract—In this paper, we propose a novel approach to compressive phase retrieval based on loopy belief propagation and, in particular, on the generalized approximate message passing (GAMP) algorithm. Numerical results show that the proposed PR-GAMP algorithm has excellent phase-transition behavior, noise robustness, and runtime. In particular, for successful recovery of synthetic Bernoulli-circular-Gaussian signals, PR-GAMP requires ≈ 4 times as many measurements as a phase-oracle version of GAMP and, at moderate to large SNR, the NMSE of PR-GAMP is only ≈ 3 dB worse than that of phase-oracle GAMP. A comparison to the recently proposed convex-relaxation approach known as "CPRL" reveals PR-GAMP's superior phase transition and orders-of-magnitude faster runtimes, especially as the problem dimensions increase. When applied to the recovery of a 65k-pixel grayscale image from 32k randomly masked magnitude measurements, numerical results show a median PR-GAMP runtime of only 13.4 seconds.
GESPAR: Efficient phase retrieval of sparse signals
, 2013
Cited by 29 (9 self)
Abstract—We consider the problem of phase retrieval, namely, recovery of a signal from the magnitude of its Fourier transform, or of any other linear transform. Due to the loss of Fourier phase information, this problem is ill-posed. Therefore, prior information on the signal is needed in order to enable its recovery. In this work we consider the case in which the signal is known to be sparse, i.e., it consists of a small number of nonzero elements in an appropriate basis. We propose a fast local search method for recovering a sparse signal from measurements of its Fourier transform (or other linear transform) magnitude, which we refer to as GESPAR: GrEedy Sparse PhAse Retrieval. Our algorithm does not require matrix lifting, unlike previous approaches, and therefore is potentially suitable for large-scale problems such as images. Simulation results indicate that GESPAR is fast and more accurate than existing techniques in a variety of settings. Index Terms—Nonconvex optimization, phase retrieval, sparse signal processing.
Phase Retrieval via Wirtinger Flow: Theory and Algorithms
, 2014
Cited by 24 (4 self)
We study the problem of recovering the phase from magnitude measurements; specifically, we wish to reconstruct a complex-valued signal x ∈ ℂⁿ about which we have phaseless samples of the form y_r = |⟨a_r, x⟩|², r = 1, …, m (knowledge of the phase of these samples would yield a linear system). This paper develops a nonconvex formulation of the phase retrieval problem as well as a concrete solution algorithm. In a nutshell, this algorithm starts with a careful initialization obtained by means of a spectral method, and then refines this initial estimate by iteratively applying novel update rules, which have low computational complexity, much like in a gradient descent scheme. The main contribution is that this algorithm is shown to rigorously allow the exact retrieval of phase information from a nearly minimal number of random measurements. Indeed, the sequence of successive iterates provably converges to the solution at a geometric rate so that the proposed scheme is efficient both in terms of computational and data resources. In theory, a variation on this scheme leads to a near-linear time algorithm for a physically realizable model based on coded diffraction patterns. We illustrate the effectiveness of our methods with various experiments on image data. Underlying our analysis are insights for the analysis of nonconvex optimization schemes that may have implications for computational problems beyond phase retrieval.
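The two-stage recipe described here (spectral initialization followed by low-cost gradient-style updates) can be sketched in a few lines. The real-valued toy below is not the paper's complex Wirtinger Flow; the step size and iteration count are illustrative guesses:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 400                       # signal length, number of measurements
x = rng.standard_normal(n)           # ground truth (real-valued toy)
A = rng.standard_normal((m, n))      # sensing vectors a_r as rows
y = (A @ x) ** 2                     # phaseless samples y_r = |<a_r, x>|^2

# Spectral initialization: leading eigenvector of (1/m) sum_r y_r a_r a_r^T,
# rescaled to the measured signal energy.
Y = (A * y[:, None]).T @ A / m
z = np.linalg.eigh(Y)[1][:, -1] * np.sqrt(y.mean())

# Gradient refinement of f(z) = (1/4m) sum_r ((a_r^T z)^2 - y_r)^2;
# the step size is a heuristic stand-in for the paper's schedule.
mu = 0.1 / y.mean()
for _ in range(500):
    Az = A @ z
    z -= mu * A.T @ ((Az ** 2 - y) * Az) / m

# The global sign is unrecoverable from magnitudes, so compare up to sign.
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print(err)
```

With this many Gaussian measurements the iterates reach the planted signal to high accuracy, mirroring the geometric convergence the abstract claims.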
Phase retrieval using alternating minimization
 In NIPS
, 2013
Cited by 24 (1 self)
Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. Over the last two decades, a popular generic empirical approach to the many variants of this problem has been one of alternating minimization; i.e., alternating between estimating the missing phase information and the candidate solution. In this paper, we show that a simple alternating minimization algorithm geometrically converges to the solution of one such problem – finding a vector x from y and A, where y = |Aᵀx| and |z| denotes the vector of element-wise magnitudes of z – under the assumption that A is Gaussian. Empirically, our algorithm performs similarly to recently proposed convex techniques for this variant (which are based on "lifting" to a convex matrix problem) in sample complexity and robustness to noise. However, our algorithm is much more efficient and can scale to large problems. Analytically, we show geometric convergence to the solution, and sample complexity that is off by log factors from obvious lower bounds. We also establish close to optimal scaling for the case when the unknown vector is sparse. Our work represents the only known theoretical guarantee for alternating minimization for any variant of phase retrieval problems in the nonconvex setting.
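The alternating scheme the abstract describes is short enough to state in code. Below is a minimal real-valued sketch (signs stand in for complex phases, and a simple spectral initializer of the flavor the paper uses is assumed; dimensions and iteration count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 400
x = rng.standard_normal(n)           # unknown signal (real-valued toy)
A = rng.standard_normal((m, n))      # Gaussian measurement matrix
y = np.abs(A @ x)                    # magnitude-only observations y = |Ax|

# Spectral initialization: leading eigenvector of (1/m) A^T diag(y^2) A,
# rescaled to the measured energy.
Y = (A * (y ** 2)[:, None]).T @ A / m
z = np.linalg.eigh(Y)[1][:, -1] * np.linalg.norm(y) / np.sqrt(m)

for _ in range(50):
    c = np.sign(A @ z)                            # re-estimate the missing signs
    z = np.linalg.lstsq(A, c * y, rcond=None)[0]  # least-squares signal update

# The global sign is unidentifiable, so compare up to sign.
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print(err)
```

Once the estimated signs all match the true ones (up to a global flip), the least-squares step returns the exact signal, which is the fixed point the geometric-convergence analysis targets.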
Phase Retrieval from Coded Diffraction Patterns
, 2013
Cited by 21 (5 self)
This paper considers the question of recovering the phase of an object from intensity-only measurements, a problem which naturally appears in X-ray crystallography and related disciplines. We study a physically realistic setup where one can modulate the signal of interest and then collect the intensity of its diffraction pattern, each modulation thereby producing a sort of coded diffraction pattern. We show that PhaseLift, a recent convex programming technique, recovers the phase information exactly from a number of random modulations, which is polylogarithmic in the number of unknowns. Numerical experiments with noiseless and noisy data complement our theoretical analysis and illustrate our approach.
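The coded-diffraction measurement model is simple to simulate: modulate the signal with a random mask, then record the intensities of its DFT, one pattern per mask. The mask alphabet below (fourth roots of unity) is an illustrative choice, not necessarily one of the distributions the paper analyzes:

```python
import numpy as np

rng = np.random.default_rng(2)
n, L = 64, 6                     # signal length, number of modulation patterns
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# One coded diffraction pattern per mask: intensity of the DFT of the
# pointwise-modulated signal. Mask alphabet is a hypothetical choice.
masks = rng.choice([1, -1, 1j, -1j], size=(L, n))
patterns = np.abs(np.fft.fft(masks * x, axis=1)) ** 2

print(patterns.shape)            # L patterns of n intensities each
```

Since each mask entry has unit modulus, Parseval's theorem ties each pattern's total intensity to the signal energy; PhaseLift's guarantee is that polylogarithmically many such patterns suffice for exact recovery.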
Phase Retrieval with Application to Optical Imaging
, 2015
Cited by 18 (6 self)
The problem of phase retrieval, i.e., the recovery of a function given the magnitude of its …
Exact and stable covariance estimation from quadratic sampling via convex programming
 To appear in IEEE Transactions on Information Theory
, 2015
Cited by 10 (3 self)
Statistical inference and information processing of high-dimensional data often require efficient and accurate estimation of their second-order statistics. With rapidly changing data and limited processing power and storage at the sensor suite, it is desirable to extract the covariance structure from a single pass over the data stream and a small number of measurements. In this paper, we explore a quadratic random measurement model which imposes a minimal memory requirement and low computational complexity during the sampling process, and is shown to be optimal in preserving low-dimensional covariance structures. Specifically, four popular structural assumptions on covariance matrices are investigated: low rank, Toeplitz low rank, sparsity, and jointly rank-one and sparse structure. We show that a covariance matrix with any of these structures can be perfectly recovered from a near-optimal number of sub-Gaussian quadratic measurements, via efficient convex relaxation algorithms for the respective structure. The proposed algorithm has a variety of potential applications in streaming data processing, high-frequency wireless communication, phase space tomography in optics, non-coherent subspace detection, etc. Our method admits universally accurate covariance estimation in the absence of noise, as soon as the number of measurements exceeds the theoretical sampling limits. We also demonstrate the robustness of this approach against noise and imperfect structural assumptions. Our analysis is established upon a novel notion called the mixed-norm restricted isometry property (RIP-ℓ2/ℓ1), as well as the conventional RIP-ℓ2/ℓ2 for near-isotropic and bounded measurements. In addition, our results improve upon the best-known phase retrieval guarantees (for both dense and sparse signals) using PhaseLift, with a significantly simpler approach.
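The quadratic sampling model itself is a one-liner, and the structural fact that makes convex recovery possible, namely that each quadratic measurement y_i = a_iᵀΣa_i is linear in the covariance matrix Σ, is easy to verify numerically (dimensions below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, r, m = 12, 2, 50
U = rng.standard_normal((n, r))
Sigma = U @ U.T                               # rank-2 covariance matrix
A = rng.choice([-1.0, 1.0], size=(m, n))      # sub-Gaussian (Rademacher) sketches

# Quadratic measurements y_i = a_i^T Sigma a_i: each needs only the
# scalar sketch a_i^T x per data sample, hence the small memory footprint.
y = np.einsum('ij,jk,ik->i', A, Sigma, A)

# The same measurements are linear in Sigma: y_i = <a_i a_i^T, Sigma>,
# which is what allows convex relaxation directly over Sigma.
y_lin = np.array([np.trace(np.outer(a, a) @ Sigma) for a in A])
print(np.allclose(y, y_lin))
```

The convex programs the paper proposes then search over structured Σ (low-rank, Toeplitz, sparse, etc.) consistent with these linear functionals.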
CPRL – An Extension of Compressive Sensing to the Phase Retrieval Problem
Cited by 8 (2 self)
While compressive sensing (CS) has been one of the most vibrant research fields in the past few years, most development only applies to linear models. This limits its application in many areas where CS could make a difference. This paper presents a novel extension of CS to the phase retrieval problem, where intensity measurements of a linear system are used to recover a complex sparse signal. We propose a novel solution using a lifting technique – CPRL – which relaxes the NP-hard problem to a nonsmooth semidefinite program. Our analysis shows that CPRL inherits many desirable properties from CS, such as guarantees for exact recovery.
We further provide scalable numerical solvers to accelerate its implementation.
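The lifting step at the heart of CPRL can be checked in two lines: with X = xx*, each quadratic intensity measurement |⟨a, x⟩|² equals a linear functional a*Xa of the lifted matrix X. The sketch below only verifies this identity; it does not solve the resulting semidefinite program:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # complex signal
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # one sensing vector

X = np.outer(x, x.conj())            # the lifted rank-1 matrix X = x x*

# Lifting: the quadratic measurement of x becomes linear in X, so the
# problem relaxes to a semidefinite program over X (with trace and l1
# penalties standing in for the rank and sparsity constraints).
intensity = np.abs(np.vdot(a, x)) ** 2
lifted = np.real(a.conj() @ X @ a)
print(np.isclose(intensity, lifted))
```

Dropping the rank-1 constraint on X (keeping only positive semidefiniteness) is what turns the NP-hard problem into the convex program the abstract describes.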