Results 1–10 of 103
Democracy in Action: Quantization, Saturation, and Compressive Sensing
"... Recent theoretical developments in the area of compressive sensing (CS) have the potential to significantly extend the capabilities of digital data acquisition systems such as analogtodigital converters and digital imagers in certain applications. A key hallmark of CS is that it enables subNyquis ..."
Abstract

Cited by 59 (22 self)
Recent theoretical developments in the area of compressive sensing (CS) have the potential to significantly extend the capabilities of digital data acquisition systems such as analog-to-digital converters and digital imagers in certain applications. A key hallmark of CS is that it enables sub-Nyquist sampling for signals, images, and other data. In this paper, we explore and exploit another, heretofore relatively unexplored, hallmark: the fact that certain CS measurement systems are democratic, meaning that each measurement carries roughly the same amount of information about the signal being acquired. Using the democracy property, we rethink how to quantize the compressive measurements in practical CS systems. If we were to apply the conventional wisdom gained from Shannon–Nyquist uniform sampling, we would scale down the analog signal amplitude (and therefore increase the quantization error) to avoid the gross saturation errors that occur when the signal amplitude exceeds the quantizer's dynamic range. In stark contrast, we demonstrate that a CS system achieves the best performance when it operates at a significantly nonzero saturation rate. We develop two methods to recover signals from saturated CS measurements. The first directly exploits the democracy property by simply discarding the saturated measurements. The second integrates saturated measurements as constraints into standard linear programming and greedy recovery techniques. Finally, we develop a simple automatic gain control system that uses the saturation rate to optimize the input gain.
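As a rough illustration of the first recovery method (discard the saturated measurements and rely on democracy), the following sketch simulates a saturating quantizer, drops the clipped rows, and solves a least-squares problem restricted to the true support. All dimensions, the saturation level, and the known-support assumption are hypothetical simplifications; the paper uses full sparse-recovery algorithms that need no known support.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 128, 5      # signal length, measurements, sparsity (hypothetical)
sat = 0.25                 # quantizer saturation level (hypothetical)

# Sparse test signal and a Gaussian CS measurement matrix.
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

y = np.clip(Phi @ x, -sat, sat)    # measurements after a saturating quantizer
keep = np.abs(y) < sat             # democracy: simply drop the saturated rows

# Recover on the (assumed known) support from the surviving measurements;
# the paper instead uses l1 and greedy recovery over all coordinates.
x_hat = np.zeros(n)
x_hat[support] = np.linalg.lstsq(Phi[keep][:, support], y[keep], rcond=None)[0]
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

Because the unsaturated measurements are exact and far outnumber the sparsity level, the restricted least-squares solve recovers the signal essentially perfectly even though a sizable fraction of rows were discarded.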
Xampling: Signal acquisition and processing in union of subspaces
, 2011
"... We introduce Xampling, a unified framework for signal acquisition and processing of signals in a union of subspaces. The main functions of this framework are two: Analog compression that narrows down the input bandwidth prior to sampling with commercial devices followed by a nonlinear algorithm that ..."
Abstract

Cited by 43 (21 self)
We introduce Xampling, a unified framework for signal acquisition and processing of signals in a union of subspaces. The framework has two main functions: analog compression that narrows down the input bandwidth prior to sampling with commercial devices, followed by a nonlinear algorithm that detects the input subspace prior to conventional signal processing. A representative union model of spectrally sparse signals serves as a test case to study these Xampling functions. We adopt three metrics for the choice of analog compression: robustness to model mismatch, required hardware accuracy, and software complexity. We conduct a comprehensive comparison between two sub-Nyquist acquisition strategies for spectrally sparse signals, the random demodulator and the modulated wideband converter (MWC), in terms of these metrics and draw operative conclusions regarding the choice of analog compression. We then address low-rate signal processing and develop an algorithm that enables convenient signal processing at sub-Nyquist rates from samples obtained by the MWC. We conclude by showing that a variety of other sampling approaches for different union classes fit nicely into our framework.
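For context, the random demodulator front end compared here can be sketched in a few lines: multiply the Nyquist-rate signal by a ±1 chipping sequence, then integrate-and-dump at a low rate. All sizes below are made up, and the MWC itself is not modeled.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 400, 40             # Nyquist-rate samples, sub-Nyquist samples (made up)
R = n // m                 # integrate-and-dump (decimation) ratio

# Random demodulator: chip-sign multiplication followed by block averaging.
# Both steps are linear, so the whole front end is a single m x n matrix.
chips = rng.choice([-1.0, 1.0], size=n)
H = np.kron(np.eye(m), np.ones(R)) / R     # averages each block of R samples
Phi = H * chips                            # scales column j by chips[j]

# A spectrally sparse input: three active tones out of n possible.
t = np.arange(n)
x = sum(np.cos(2 * np.pi * f * t / n)
        for f in rng.choice(n // 2, 3, replace=False))

y = Phi @ x                                # the sub-Nyquist sample stream
```

Collapsing the front end into one matrix Phi is what lets standard CS recovery machinery be applied to the low-rate samples y.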
A Short Note on Compressed Sensing with Partially Known Signal Support
, 2010
"... This short note studies a variation of the Compressed Sensing paradigm introduced recently by Vaswani et al., i.e. the recovery of sparse signals from a certain number of linear measurements when the signal support is partially known. The reconstruction method is based on a convex minimization progr ..."
Abstract

Cited by 27 (0 self)
This short note studies a variation of the Compressed Sensing paradigm introduced recently by Vaswani et al., i.e., the recovery of sparse signals from a certain number of linear measurements when the signal support is partially known. The reconstruction method is based on a convex minimization program coined innovative Basis Pursuit DeNoise (or i-BPDN). Under the common ℓ2-fidelity constraint made on the available measurements, this optimization promotes the (ℓ1) sparsity of the candidate signal over the complement of the known part of the support. In particular, this paper extends the results of Vaswani et al. to the cases of compressible signals and noisy measurements. Our proof relies on a small adaptation of the 2008 results of Candès characterizing the stability of the Basis Pursuit DeNoise (BPDN) program. We also emphasize an interesting link between our method and the recent work of Davenport et al. on δ-stable embeddings and the cancel-then-recover strategy applied to our problem. For both approaches, reconstructions are indeed stabilized when the sensing matrix respects the Restricted Isometry Property for the same sparsity order. We conclude by sketching an easy numerical method relying on monotone operator splitting and proximal methods that iteratively solves i-BPDN.
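The core idea (penalize ℓ1 only off the known part of the support) can be sketched with a plain ISTA proximal-gradient loop in which coordinates in the known set are never soft-thresholded. This is a toy stand-in for the monotone-splitting solver the note describes; the sizes and penalty weight are chosen arbitrarily.

```python
import numpy as np

def ista_partial_support(A, y, T, lam=0.01, iters=800):
    """Minimize 0.5*||Ax - y||^2 + lam * sum_{i not in T} |x_i| by ISTA.
    Coordinates in the known support set T are left unpenalized."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L gradient step size
    x = np.zeros(A.shape[1])
    thresh = np.full(A.shape[1], lam * step)
    thresh[T] = 0.0                            # no shrinkage on the known part
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))     # gradient step on the quadratic
        x = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(2)
m, n, k = 80, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
S = rng.choice(n, k, replace=False)
x_true[S] = rng.standard_normal(k) + np.sign(rng.standard_normal(k))
T = S[: k // 2]                                # half of the support is known
x_hat = ista_partial_support(A, A @ x_true, T)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Leaving the known coordinates unshrunk removes the usual ℓ1 bias on that part of the signal, which is exactly the benefit the partially-known-support formulation is after.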
On the Observability of Linear Systems from Random, Compressive Measurements
"... Abstract — Recovering or estimating the initial state of a highdimensional system can require a potentially large number of measurements. In this paper, we explain how this burden can be significantly reduced for certain linear systems when randomized measurement operators are employed. Our work bui ..."
Abstract

Cited by 18 (6 self)
Abstract — Recovering or estimating the initial state of a high-dimensional system can require a potentially large number of measurements. In this paper, we explain how this burden can be significantly reduced for certain linear systems when randomized measurement operators are employed. Our work builds upon recent results from the field of Compressive Sensing (CS), in which a high-dimensional signal containing few nonzero entries can be efficiently recovered from a small number of random measurements. In particular, we develop concentration of measure bounds for the observability matrix and explain circumstances under which this matrix can satisfy the Restricted Isometry Property (RIP), which is central to much analysis in CS. We also illustrate our results with a simple case study of a diffusion system. Aside from permitting recovery of sparse initial states, our analysis has potential applications in solving inference problems such as detection and classification of more general initial states.
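A quick numerical illustration, with a made-up diffusion-like system and made-up sizes: stack random compressive measurement operators against powers of A into an observability matrix, and check empirically that it roughly preserves the norms of sparse initial states, which is the concentration behavior the paper quantifies.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, K, s = 100, 8, 6, 5      # state dim, meas./step, steps, sparsity (made up)

# A simple diffusion-like system matrix (illustrative, not from the paper).
A = 0.95 * np.eye(n) + 0.02 * (np.eye(n, k=1) + np.eye(n, k=-1))

# Observability matrix with an independent random C_k at each time step,
# scaled so that E||O x||^2 is comparable to ||x||^2.
blocks, Ak = [], np.eye(n)
for _ in range(K):
    Ck = rng.standard_normal((p, n)) / np.sqrt(p * K)
    blocks.append(Ck @ Ak)
    Ak = A @ Ak
O = np.vstack(blocks)          # shape (p*K, n)

# Empirical norm preservation on random sparse initial states.
ratios = []
for _ in range(200):
    x0 = np.zeros(n)
    x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    ratios.append(np.linalg.norm(O @ x0) / np.linalg.norm(x0))
ratios = np.array(ratios)
```

Even though O has far fewer rows (p·K = 48) than the state dimension (n = 100), the norm ratios cluster, which is the empirical signature of the RIP-type behavior analyzed in the paper.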
Concentration of Measure for Block Diagonal Matrices with Applications to Compressive Sensing
, 2010
"... Theoretical analysis of randomized, compressive operators often depends on a concentration of measure inequality for the operator in question. Typically, such inequalities quantify the likelihood that a random matrix will preserve the norm of a signal after multiplication. When this likelihood is ve ..."
Abstract

Cited by 16 (7 self)
Theoretical analysis of randomized, compressive operators often depends on a concentration of measure inequality for the operator in question. Typically, such inequalities quantify the likelihood that a random matrix will preserve the norm of a signal after multiplication. When this likelihood is very high for any signal, the random matrices have a variety of known uses in dimensionality reduction and Compressive Sensing. Concentration of measure results are well-established for unstructured compressive matrices, populated with independent and identically distributed (i.i.d.) random entries. Many real-world acquisition systems, however, are subject to architectural constraints that make such matrices impractical. In this paper we derive concentration of measure bounds for two types of block diagonal compressive matrices, one in which the blocks along the main diagonal are random and independent, and one in which the blocks are random but equal. For both types of matrices, we show that the likelihood of norm preservation depends on certain properties of the signal being measured, but that for the best-case signals, both types of block diagonal matrices can offer concentration performance on par with their unstructured, i.i.d. counterparts. We support our theoretical results with illustrative simulations as well as analytical and empirical investigations of several signal classes that are highly amenable to measurement using block diagonal matrices. Finally, we discuss applications of these results in establishing performance guarantees for solving signal processing tasks in the compressed domain (e.g., signal detection), and in establishing the Restricted Isometry Property for the Toeplitz matrices that arise in compressive channel sensing.
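A minimal empirical sketch of the two matrix types, with arbitrary sizes: independent blocks versus one repeated block on the diagonal, each scaled so that a dense signal (whose energy is spread across all blocks, the favorable case described above) has its norm preserved on average.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(4)
J, mb, nb = 8, 16, 64          # number of blocks and per-block size (made up)

# Type 1: distinct, independent Gaussian blocks along the diagonal.
Phi_distinct = block_diag(*[rng.standard_normal((mb, nb)) / np.sqrt(mb)
                            for _ in range(J)])
# Type 2: the same random block repeated in every diagonal position.
B = rng.standard_normal((mb, nb)) / np.sqrt(mb)
Phi_repeated = block_diag(*([B] * J))

def norm_ratios(Phi, trials=300):
    """Empirical ||Phi x|| / ||x|| for dense Gaussian signals, whose energy
    is spread roughly evenly across all J blocks."""
    X = rng.standard_normal((Phi.shape[1], trials))
    return np.linalg.norm(Phi @ X, axis=0) / np.linalg.norm(X, axis=0)

rd = norm_ratios(Phi_distinct)
rr = norm_ratios(Phi_repeated)
```

For these spread-energy signals both structured matrices behave like norm-preserving maps, consistent with the paper's best-case comparison against dense i.i.d. matrices.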
Two are better than one: Fundamental parameters of frame coherence
, 2011
"... This paper investigates two parameters that measure the coherence of a frame: worstcase and average coherence. We first use worstcase and average coherence to derive nearoptimal probabilistic guarantees on both sparse signal detection and reconstruction in the presence of noise. Next, we provide ..."
Abstract

Cited by 16 (8 self)
This paper investigates two parameters that measure the coherence of a frame: worst-case and average coherence. We first use worst-case and average coherence to derive near-optimal probabilistic guarantees on both sparse signal detection and reconstruction in the presence of noise. Next, we provide a catalog of nearly tight frames with small worst-case and average coherence. We then derive a new lower bound on worst-case coherence, compare it to the Welch bound, and use it to interpret recently reported signal reconstruction results. Finally, we give an algorithm that transforms frames in a way that decreases average coherence without changing the spectral norm or worst-case coherence.
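Both parameters can be computed directly from the Gram matrix of a unit-norm frame. The sketch below uses the standard definitions (worst-case: largest off-diagonal Gram magnitude; average: largest row average of off-diagonal Gram entries) on a randomly row-sampled DFT frame; the frame choice and sizes are purely illustrative.

```python
import numpy as np

def frame_coherences(F):
    """Worst-case and average coherence of a frame (one vector per column).
    Columns are normalized first; both follow the usual Gram-matrix forms."""
    F = F / np.linalg.norm(F, axis=0)
    G = F.conj().T @ F                            # Gram matrix of the frame
    N = G.shape[0]
    off = G - np.eye(N)
    mu = np.abs(off).max()                        # worst-case coherence
    nu = np.abs(off.sum(axis=1)).max() / (N - 1)  # average coherence
    return mu, nu

# Example frame: a random selection of rows of the N x N DFT matrix.
rng = np.random.default_rng(5)
mdim, N = 16, 64
rows = rng.choice(N, mdim, replace=False)
F = np.exp(2j * np.pi * np.outer(rows, np.arange(N)) / N) / np.sqrt(mdim)
mu, nu = frame_coherences(F)

# Welch lower bound on worst-case coherence (the comparison point above).
welch = np.sqrt((N - mdim) / (mdim * (N - 1)))
```

By the triangle inequality the average coherence never exceeds the worst-case coherence, and any unit-norm frame's worst-case coherence sits at or above the Welch bound.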
Random observations on random observations: Sparse signal acquisition and processing
 RICE UNIVERSITY
, 2010
"... ..."
Concentration of Measure Inequalities for Compressive Toeplitz Matrices with Applications to Detection and System Identification
"... Abstract — In this paper, we derive concentration of measure inequalities for compressive Toeplitz matrices (having fewer rows than columns) with entries drawn from an independent and identically distributed (i.i.d.) Gaussian random sequence. These inequalities show that the norm of a vector mapped ..."
Abstract

Cited by 11 (7 self)
Abstract — In this paper, we derive concentration of measure inequalities for compressive Toeplitz matrices (having fewer rows than columns) with entries drawn from an independent and identically distributed (i.i.d.) Gaussian random sequence. These inequalities show that the norm of a vector mapped by a Toeplitz matrix to a lower-dimensional space concentrates around its mean, with a tail probability bound that decays exponentially in the dimension of the range space divided by a factor that is a function of the sample covariance of the vector. Motivated by the emerging field of Compressive Sensing (CS), we apply these inequalities to problems involving the analysis of high-dimensional systems from convolution-based compressive measurements. We discuss applications such as system identification, namely the estimation of the impulse response of a system, in cases where one can assume that the impulse response is high-dimensional but sparse. We also consider the problem of detecting a change in the dynamic behavior of a system, where the change itself can be modeled by a system with a sparse impulse response.
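The object of study is easy to construct: an m×n Toeplitz matrix whose diagonals come from a single i.i.d. Gaussian sequence. The sketch below builds it and looks at the empirical concentration of ||Ax|| for a fixed unit vector; sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 64, 256                 # fewer rows than columns (compressive)

def compressive_toeplitz(rng, m, n):
    """m x n Toeplitz matrix from an i.i.d. N(0, 1/m) sequence, scaled so
    that E||Ax||^2 = ||x||^2 for any fixed x."""
    seq = rng.standard_normal(m + n - 1) / np.sqrt(m)
    idx = np.arange(m)[:, None] + np.arange(n)[None, :]
    return seq[idx][:, ::-1]   # reverse columns so diagonals are constant

# Draw many independent Toeplitz matrices and record ||Ax|| / ||x||.
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
ratios = np.array([np.linalg.norm(compressive_toeplitz(rng, m, n) @ x)
                   for _ in range(500)])
```

For a generic dense vector the ratios cluster tightly around 1; the paper shows how the spread depends on the sample covariance of x, so highly correlated vectors would concentrate more slowly.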
Anomaly detection and reconstruction from random projections
 IEEE Transactions on Image Processing
"... Abstract—Compressedsensing methodology typically employs random projections simultaneously with signal acquisition to accomplish dimensionality reduction within a sensor device. The effect of such random projections on the preservation of anomalous data is investigated. The popular RX anomaly det ..."
Abstract

Cited by 9 (2 self)
Abstract—Compressed-sensing methodology typically employs random projections simultaneously with signal acquisition to accomplish dimensionality reduction within a sensor device. The effect of such random projections on the preservation of anomalous data is investigated. The popular RX anomaly detector is derived for the case in which global anomalies are to be identified directly in the random-projection domain, and it is determined via both random simulation and empirical observation that strongly anomalous vectors are likely to be identifiable by the projection-domain RX detector even in low-dimensional projections. Finally, a reconstruction procedure for hyperspectral imagery is developed wherein projection-domain anomaly detection is employed to partition the data set, permitting anomaly and normal pixel classes to be separately reconstructed in order to improve the representation of the anomaly pixels. Index Terms—Anomaly detection, compressed sensing (CS), hyperspectral data, principal component analysis (PCA).
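A compact sketch of projection-domain RX on synthetic data: project correlated background pixels plus a few strong anomalies to a low dimension, then score each pixel with the usual Mahalanobis statistic computed entirely from the projected data. The data model and all sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
bands, pixels, m = 50, 2000, 10    # spectral bands, pixels, projected dim

# Correlated Gaussian background plus three strong synthetic anomalies.
L = np.eye(bands) + 0.1 * rng.standard_normal((bands, bands))
X = L @ rng.standard_normal((bands, pixels))
anomalies = np.array([0, 1, 2])
X[:, anomalies] += 8.0 * rng.standard_normal((bands, anomalies.size))

# Random projection to m dimensions, then RX in the projection domain.
P = rng.standard_normal((m, bands)) / np.sqrt(m)
Y = P @ X
d = Y - Y.mean(axis=1, keepdims=True)
C = np.cov(Y)                       # projected covariance estimate
rx = (d * np.linalg.solve(C, d)).sum(axis=0)   # Mahalanobis score per pixel
```

Even at m = 10 projected dimensions the strongly anomalous pixels dominate the RX scores, matching the abstract's observation that strong anomalies survive low-dimensional projections.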
Matched Filtering from Limited Frequency Samples
, 2011
"... In this paper, we study a simple correlationbased strategy for estimating the unknown delay and amplitude of a signal based on a small number of noisy, randomly chosen frequencydomain samples. We model the output of this “compressive matched filter ” as a random process whose mean equals the scale ..."
Abstract

Cited by 9 (5 self)
In this paper, we study a simple correlation-based strategy for estimating the unknown delay and amplitude of a signal based on a small number of noisy, randomly chosen frequency-domain samples. We model the output of this “compressive matched filter” as a random process whose mean equals the scaled, shifted autocorrelation function of the template signal. Using tools from the theory of empirical processes, we prove that the expected maximum deviation of this process from its mean decreases sharply as the number of measurements increases, and we also derive a probabilistic tail bound on the maximum deviation. Putting all of this together, we bound the minimum number of measurements required to guarantee that the empirical maximum of this random process occurs sufficiently close to the true peak of its mean function. We conclude that for broad classes of signals, this compressive matched filter will successfully estimate the unknown delay (with high probability, and within a prescribed tolerance) using a number of random frequency-domain samples that scales inversely with the signal-to-noise ratio and only logarithmically in the observation bandwidth and the possible range of delays.
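A toy version of the estimator is easy to simulate: observe a delayed template at a few random DFT frequencies, correlate against every candidate delay, and take the peak. This sketch restricts to an integer delay on the DFT grid and a broadband random template, whereas the paper treats continuous delays and derives the supporting theory; all sizes and the noise level are made up.

```python
import numpy as np

rng = np.random.default_rng(8)
n, m = 512, 80                 # Nyquist grid length, random frequency samples
true_delay = 123               # delay in samples (integer toy case)

# Broadband random template and its DFT.
s = rng.standard_normal(n)
S = np.fft.fft(s)

# Noisy observations of the delayed template at m random DFT frequencies;
# a delay by d multiplies the DFT by the phase ramp exp(-2*pi*i*f*d/n).
freqs = rng.choice(n, m, replace=False)
Y = (S[freqs] * np.exp(-2j * np.pi * freqs * true_delay / n)
     + 2.0 * (rng.standard_normal(m) + 1j * rng.standard_normal(m)))

# Compressive matched filter: correlate the samples against every candidate
# delay of the template and take the peak of the correlation magnitude.
cands = np.arange(n)
R = ((Y * np.conj(S[freqs]))[:, None]
     * np.exp(2j * np.pi * np.outer(freqs, cands) / n)).sum(axis=0)
d_hat = int(np.argmax(np.abs(R)))
```

At the true delay all m terms add coherently (the mean of the process peaks there), while at other candidates the random phases largely cancel, so the peak stands well above the sidelobe floor for moderate m.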