Results 1–10 of 263
Democracy in Action: Quantization, Saturation, and Compressive Sensing
Abstract

Cited by 59 (22 self)
Recent theoretical developments in the area of compressive sensing (CS) have the potential to significantly extend the capabilities of digital data acquisition systems such as analog-to-digital converters and digital imagers in certain applications. A key hallmark of CS is that it enables sub-Nyquist sampling for signals, images, and other data. In this paper, we explore and exploit another heretofore relatively unexplored hallmark: the fact that certain CS measurement systems are democratic, which means that each measurement carries roughly the same amount of information about the signal being acquired. Using the democracy property, we rethink how to quantize the compressive measurements in practical CS systems. If we were to apply the conventional wisdom gained from Shannon-Nyquist uniform sampling, then we would scale down the analog signal amplitude (and therefore increase the quantization error) to avoid the gross saturation errors that occur when the signal amplitude exceeds the quantizer's dynamic range. In stark contrast, we demonstrate that a CS system achieves the best performance when it operates at a significantly nonzero saturation rate. We develop two methods to recover signals from saturated CS measurements. The first directly exploits the democracy property by simply discarding the saturated measurements. The second integrates saturated measurements as constraints into standard linear programming and greedy recovery techniques. Finally, we develop a simple automatic gain control system that uses the saturation rate to optimize the input gain.
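The first recovery method lends itself to a small numerical illustration. The sketch below is our own toy setup, not the paper's: it uses a dense, unstructured signal and ordinary least squares rather than sparse recovery, and the saturation level and problem sizes are illustrative choices. It simulates measurements clipped by the quantizer and recovers the signal from the unsaturated rows alone:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 80                       # signal length, number of measurements
x = rng.normal(size=n)              # dense toy signal (not sparse, for lstsq)
A = rng.normal(size=(m, n)) / np.sqrt(m)

T = 0.8                             # quantizer saturation level (hypothetical)
z = A @ x
y = np.clip(z, -T, T)               # measurements after saturation

keep = np.abs(z) < T                # discard measurements that hit the rails
x_hat, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
```

Because the measurements are democratic, the surviving rows still determine the signal; the paper's actual algorithms handle sparse signals with far fewer measurements than unknowns.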
Two proposals for robust PCA using semidefinite programming
, 2010
Abstract

Cited by 47 (2 self)
The performance of principal component analysis (PCA) suffers badly in the presence of outliers. This paper proposes two novel approaches for robust PCA based on semidefinite programming. The first method, maximum mean absolute deviation rounding (MDR), seeks directions of large spread in the data while damping the effect of outliers. The second method produces a low-leverage decomposition (LLD) of the data that attempts to form a low-rank model for the data by separating out corrupted observations. This paper also presents efficient computational methods for solving these SDPs. Numerical experiments confirm the value of these new techniques.
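The failure mode that motivates both methods is easy to reproduce. The snippet below is a synthetic illustration only, not an implementation of MDR or LLD (which require solving SDPs): a single gross outlier rotates the leading principal component far away from the data's true direction of spread.

```python
import numpy as np

rng = np.random.default_rng(1)
# inliers lie near the direction (1, 0) with small isotropic noise
X = np.outer(rng.normal(size=200), [1.0, 0.0]) + 0.05 * rng.normal(size=(200, 2))

def top_pc(X):
    # leading principal component via SVD of the centered data
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[0]

clean_pc = top_pc(X)                       # close to (1, 0)
X_bad = np.vstack([X, [0.0, 50.0]])        # one gross outlier
bad_pc = top_pc(X_bad)                     # swings toward the outlier
```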
SpaRCS: Recovering low-rank and sparse matrices from compressive measurements
, 2011
Abstract

Cited by 46 (4 self)
We consider the problem of recovering a matrix M that is the sum of a low-rank matrix L and a sparse matrix S from a small set of linear measurements of the form y = A(M) = A(L + S). This model subsumes three important classes of signal recovery problems: compressive sensing, affine rank minimization, and robust principal component analysis. We propose a natural optimization problem for signal recovery under this model and develop a new greedy algorithm called SpaRCS to solve it. Empirically, SpaRCS inherits a number of desirable properties from the state-of-the-art CoSaMP and ADMiRA algorithms, including exponential convergence and efficient implementation. Simulation results with video compressive sensing, hyperspectral imaging, and robust matrix completion data sets demonstrate both the accuracy and efficacy of the algorithm.
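As a rough illustration of the low-rank-plus-sparse model, the sketch below runs a simple alternating projection (truncated SVD for L, hard thresholding for S) on a fully observed matrix. This is the robust-PCA special case with an identity measurement operator, not the SpaRCS algorithm itself; the sizes and spike magnitudes are deliberately well separated so the toy iteration settles on the true decomposition.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, k = 40, 1, 3
L = np.outer(rng.normal(size=n), rng.normal(size=n))     # rank-1 component
S = np.zeros((n, n))
rows = rng.permutation(n)[:k]
cols = rng.permutation(n)[:k]
S[rows, cols] = 20.0 * rng.choice([-1.0, 1.0], size=k)   # isolated large spikes
M = L + S                                                # fully observed here

S_hat = np.zeros_like(M)
for _ in range(50):
    # best rank-r fit of the de-sparsified matrix via truncated SVD
    U, s, Vt = np.linalg.svd(M - S_hat, full_matrices=False)
    L_hat = (U[:, :r] * s[:r]) @ Vt[:r]
    # keep the k largest-magnitude residual entries as the sparse part
    R = M - L_hat
    S_hat = np.zeros_like(M)
    big = np.argsort(np.abs(R), axis=None)[-k:]
    S_hat.flat[big] = R.flat[big]

rel_err = np.linalg.norm(L_hat - L) / np.linalg.norm(L)
```

SpaRCS proper interleaves analogous low-rank and sparse greedy steps while working through a general linear operator A.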
ANGULAR SYNCHRONIZATION BY EIGENVECTORS AND SEMIDEFINITE PROGRAMMING: ANALYSIS AND APPLICATION TO CLASS AVERAGING IN CRYO-ELECTRON MICROSCOPY
, 2009
Abstract

Cited by 46 (18 self)
The angular synchronization problem is to obtain an accurate estimation (up to a constant additive phase) for a set of unknown angles θ1,..., θn from m noisy measurements of their offsets θi − θj mod 2π. Of particular interest is angle recovery in the presence of many outlier measurements that are uniformly distributed in [0, 2π) and carry no information on the true offsets. We introduce an efficient recovery algorithm for the unknown angles from the top eigenvector of a specially designed Hermitian matrix. The eigenvector method is extremely stable and succeeds even when the number of outliers is exceedingly large. For example, we successfully estimate n = 400 angles from a full set of m = (400 choose 2) offset measurements of which 90% are outliers in less than a second on a commercial laptop. We use random matrix theory to prove that the eigenvector method gives ...
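The eigenvector method is short enough to sketch directly. The toy below (problem size and outlier rate are our own choices, smaller than the paper's example) builds the Hermitian offset matrix with a fraction of uniformly random outlier phases and reads the angle estimates off its top eigenvector:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p_out = 100, 0.2                     # number of angles, outlier fraction
theta = rng.uniform(0, 2 * np.pi, n)    # true angles

# H[i, j] = exp(i(theta_i - theta_j)), with some entries replaced by
# uniformly random (outlier) phases carrying no offset information
H = np.exp(1j * (theta[:, None] - theta[None, :]))
bad = np.triu(rng.random((n, n)) < p_out, 1)
H[bad] = np.exp(1j * rng.uniform(0, 2 * np.pi, bad.sum()))
H = np.triu(H, 1)
H = H + H.conj().T + np.eye(n)          # keep H Hermitian

# the top eigenvector approximates (exp(i*theta_1), ..., exp(i*theta_n))
# up to one global phase
_, V = np.linalg.eigh(H)
est = np.angle(V[:, -1])

# alignment is 1 for perfect recovery and is invariant to the global phase
alignment = abs(np.mean(np.exp(1j * (est - theta))))
```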
ACADO Toolkit - An Open-Source Framework for Automatic Control and Dynamic Optimization
, 2012
Is face recognition really a compressive sensing problem?
in CVPR
, 2011
Abstract

Cited by 30 (3 self)
Compressive Sensing has become one of the standard methods of face recognition within the literature. We show, however, that the sparsity assumption which underpins much of this work is not supported by the data. This lack of sparsity in the data means that the compressive sensing approach cannot be guaranteed to recover the exact signal, and therefore that sparse approximations may not deliver the robustness or performance desired. In this vein we show that a simple ℓ2 approach to the face recognition problem is not only significantly more accurate than the state-of-the-art approach but also more robust, and much faster. These results are demonstrated on the publicly available YaleB and AR face datasets but have implications for the application of Compressive Sensing more broadly.
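The ℓ2 approach the authors advocate amounts to per-class least squares followed by a minimum-residual decision. The sketch below illustrates the idea on synthetic subspace data standing in for face images; the data model, sizes, and names are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
d, per_class, classes = 60, 8, 5
# synthetic "faces": each class lies in its own low-dimensional subspace
bases = [rng.normal(size=(d, 3)) for _ in range(classes)]
train = {c: B @ rng.normal(size=(3, per_class)) for c, B in enumerate(bases)}

def classify_l2(y, train):
    # least-squares fit of y against each class's training set; the
    # predicted class is the one with the smallest reconstruction residual
    best, best_err = None, np.inf
    for c, X in train.items():
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        err = np.linalg.norm(y - X @ coef)
        if err < best_err:
            best, best_err = c, err
    return best

# a noisy new sample drawn from class 2's subspace
y = bases[2] @ rng.normal(size=3) + 0.01 * rng.normal(size=d)
pred = classify_l2(y, train)
```

Unlike the sparse-coding pipeline, each classification here is a handful of small least-squares solves, which is where the speed advantage comes from.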
Joint-sparse recovery from multiple measurements
, 2009
Abstract

Cited by 27 (0 self)
The joint-sparse recovery problem aims to recover, from sets of compressed measurements, unknown sparse matrices with nonzero entries restricted to a subset of rows. This is an extension of the single-measurement-vector (SMV) problem widely studied in compressed sensing. We analyze the recovery properties for two types of recovery algorithms. First, we show that recovery using sum-of-norm minimization cannot exceed the uniform recovery rate of sequential SMV using ℓ1 minimization, and that there are problems that can be solved with one approach but not with the other. Second, we analyze the performance of the ReMBo algorithm [M. Mishali and Y. Eldar, IEEE Trans. Sig. Proc., 56 (2008)] in combination with ℓ1 minimization, and show how recovery improves as more measurements are taken. From this analysis it follows that having more measurements than the number of nonzero rows does not improve the potential theoretical recovery rate.
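The joint-sparse (row-sparse) measurement model can be illustrated with a greedy solver. The sketch below uses simultaneous OMP rather than the ℓ1-based programs analyzed in the paper, purely to show how a row support shared across channels is exploited; all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, k, C = 64, 32, 3, 6          # dictionary size, measurements, row sparsity, channels
A = rng.normal(size=(m, n)) / np.sqrt(m)
rows = rng.choice(n, size=k, replace=False)
X = np.zeros((n, C))
X[rows] = rng.normal(size=(k, C))  # nonzeros confined to k shared rows
Y = A @ X                          # one measurement vector per channel

# simultaneous OMP: at each step pick the row whose correlation with the
# residual, aggregated over all channels, is largest
support, R = [], Y.copy()
for _ in range(k):
    scores = np.linalg.norm(A.T @ R, axis=1)
    scores[support] = 0
    support.append(int(np.argmax(scores)))
    coef, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
    R = Y - A[:, support] @ coef
```

Aggregating correlations across channels is what lets multiple measurement vectors help; with a single channel (C = 1) this reduces to ordinary OMP on one SMV problem.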
On the implementation and usage of SDPT3 - a Matlab software package for semidefinite-quadratic-linear programming, version 4.0
, 2006
Eigenvector synchronization, graph rigidity and the molecule problem
, 2012
NONPARAMETRIC ANALYSIS OF RANDOM UTILITY MODELS: TESTING
, 2012
Abstract

Cited by 14 (0 self)
This paper aims at formulating econometric tools for investigating stochastic rationality, using Random Utility Models (RUM) to deal with unobserved heterogeneity nonparametrically. Theoretical implications of the RUM have been studied in the literature; in particular, this paper utilizes the axiomatic treatment by McFadden and Richter (McFadden and Richter, 1991; McFadden, 2005). A set of econometric methods to test stochastic rationality given cross-sectional data is developed. This also provides means to conduct policy analysis with minimal assumptions. In terms of econometric methodology, it offers a procedure to deal with nonstandard features implied by inequality restrictions, which might be of interest in its own right, both theoretically and practically.