Results 1–10 of 216
Structured compressed sensing: From theory to applications
 IEEE Trans. Signal Process.
, 2011
Abstract

Cited by 98 (15 self)
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field, and as a reference for researchers that attempts to put some of the existing ideas in perspective of practical applications.
Localization of Frames, Banach Frames, and the Invertibility of the Frame Operator
Abstract

Cited by 79 (9 self)
We introduce a new concept to describe the localization of frames. In our main result we show that the frame operator preserves this localization and that the dual frame possesses the same localization property. As an application we show that certain frames for Hilbert spaces extend automatically to Banach frames. Using this abstract theory, we derive new results on the construction of non-uniform Gabor frames and solve a problem about non-uniform sampling in shift-invariant spaces.
Compressed Sensing of Analog Signals in Shift-Invariant Spaces
, 2009
Abstract

Cited by 74 (41 self)
A traditional assumption underlying most data converters is that the signal should be sampled at a rate exceeding twice the highest frequency. This statement is based on a worst-case scenario in which the signal occupies the entire available bandwidth. In practice, many signals are sparse so that only part of the bandwidth is used. In this paper, we develop methods for low-rate sampling of continuous-time sparse signals in shift-invariant (SI) spaces, generated by m kernels with period T. We model sparsity by treating the case in which only k out of the m generators are active; however, we do not know which k are chosen. We show how to sample such signals at a rate much lower than m/T, which is the minimal sampling rate without exploiting sparsity. Our approach combines ideas from analog sampling in a subspace with a recently developed block diagram that converts an infinite set of sparse equations to a finite counterpart. Using these two components we formulate our problem within the framework of finite compressed sensing (CS) and then rely on algorithms developed in that context. The distinguishing feature of our results is that, in contrast to standard CS, which treats finite-length vectors, we consider sampling of analog signals for which no underlying finite-dimensional model exists. The proposed framework allows us to extend much of the recent literature on CS to the analog domain.
Democracy in Action: Quantization, Saturation, and Compressive Sensing
Abstract

Cited by 59 (22 self)
Recent theoretical developments in the area of compressive sensing (CS) have the potential to significantly extend the capabilities of digital data acquisition systems such as analog-to-digital converters and digital imagers in certain applications. A key hallmark of CS is that it enables sub-Nyquist sampling for signals, images, and other data. In this paper, we explore and exploit another heretofore relatively unexplored hallmark: the fact that certain CS measurement systems are democratic, which means that each measurement carries roughly the same amount of information about the signal being acquired. Using the democracy property, we rethink how to quantize the compressive measurements in practical CS systems. If we were to apply the conventional wisdom gained from Shannon-Nyquist uniform sampling, we would scale down the analog signal amplitude (and therefore increase the quantization error) to avoid the gross saturation errors that occur when the signal amplitude exceeds the quantizer's dynamic range. In stark contrast, we demonstrate that a CS system achieves the best performance when it operates at a significantly nonzero saturation rate. We develop two methods to recover signals from saturated CS measurements. The first directly exploits the democracy property by simply discarding the saturated measurements. The second integrates saturated measurements as constraints into standard linear programming and greedy recovery techniques. Finally, we develop a simple automatic gain control system that uses the saturation rate to optimize the input gain.
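The discard-and-recover strategy of the first method can be illustrated with a toy sketch (my illustration, not the authors' algorithm: the signal here is dense and the surviving measurements remain overdetermined, so plain least squares suffices; the names `Phi` and `sat` are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 64                               # unknowns, measurements
x = rng.standard_normal(n)                 # signal being acquired
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x                                # compressive measurements

sat = 1.0                                  # quantizer range is +/- sat
y_sat = np.clip(y, -sat, sat)              # saturated measurements

# "Democracy": every row of Phi carries roughly equal information,
# so we can simply drop the saturated measurements and recover
# from the survivors.
keep = np.abs(y_sat) < sat
x_hat, *_ = np.linalg.lstsq(Phi[keep], y_sat[keep], rcond=None)
```

With no quantization noise on the surviving measurements, the least-squares solve recovers `x` exactly as long as enough rows survive.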
Multichannel sampling of pulse streams at the rate of innovation
 IEEE Trans. Signal Process.
, 2011
Abstract

Cited by 51 (9 self)
We consider minimal-rate sampling schemes for infinite streams of delayed and weighted versions of a known pulse shape. The minimal sampling rate for these parametric signals is referred to as the rate of innovation and is equal to the number of degrees of freedom per unit time. Although sampling of infinite pulse streams was treated in previous works, either the rate of innovation was not achieved, or the pulse shape was limited to Diracs. In this paper we propose a multichannel architecture for sampling pulse streams with arbitrary shape, operating at the rate of innovation. Our approach is based on modulating the input signal with a set of properly chosen waveforms, followed by a bank of integrators. This architecture is motivated by recent work on sub-Nyquist sampling of multiband signals. We show that the pulse stream can be recovered from the proposed minimal-rate samples using standard tools taken from spectral estimation, in a stable way even at high rates of innovation. In addition, we address practical implementation issues, such as reduction of hardware complexity and immunity to failure in the sampling channels. The resulting scheme is flexible and exhibits better noise robustness than previous approaches.
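The "standard tools taken from spectral estimation" are, in the noiseless finite case, Prony-type annihilating-filter methods. A minimal sketch (my illustration, not the paper's multichannel scheme; the function name `prony` and the test parameters are made up):

```python
import numpy as np

def prony(y, k):
    """Recover the k parameters z_l from exact moments
    y[n] = sum_l a_l * z_l**n, n = 0..len(y)-1 (noiseless Prony)."""
    m = len(y) - k                    # requires len(y) >= 2k
    # Annihilation equations: sum_j h[j] * y[i + k - j] = 0.
    T = np.array([[y[i + k - j] for j in range(k + 1)] for i in range(m)])
    _, _, vh = np.linalg.svd(T)
    h = vh[-1].conj()                 # annihilating filter taps (null vector)
    return np.roots(h)                # its roots are the z_l

# Two exponentials standing in for two pulse delays/amplitudes.
z = np.array([0.9 * np.exp(0.5j), 0.7 * np.exp(-1.2j)])
a = np.array([1.0, 2.0])
y = np.array([np.sum(a * z**n) for n in range(6)])
z_hat = prony(y, 2)
```

Once the `z_l` are known, the amplitudes follow from a linear (Vandermonde) solve; with noisy moments one would replace the exact null vector by a total-least-squares or subspace variant.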
Shannon sampling and function reconstruction from point values
 Bull. Am. Math. Soc.
, 2004
Optimal tight frames and quantum measurement
 IEEE Trans. Inform. Theory
, 2002
Abstract

Cited by 44 (7 self)
Tight frames and rank-one quantum measurements are shown to be intimately related. In fact, the family of normalized tight frames for the space in which a quantum mechanical system lies is precisely the family of rank-one generalized quantum measurements (POVMs) on that space. Using this relationship, frame-theoretical analogues of various quantum-mechanical concepts and results are developed. The analogue of a least-squares quantum measurement is a tight frame that is closest in a least-squares sense to a given set of vectors. The least-squares tight frame is found for both the case in which the scaling of the frame is specified (constrained least-squares frame (CLSF)) and the case in which the scaling is free (unconstrained least-squares frame (ULSF)). The well-known canonical frame is shown to be proportional to the ULSF and to coincide with the CLSF with a certain scaling. Finally, the canonical frame vectors corresponding to a geometrically uniform vector set are shown to be geometrically uniform and to have the same symmetries as the original vector set.
Generalized smoothing splines and the optimal discretization of the Wiener filter
 IEEE Trans. Signal Process
, 2005
Abstract

Cited by 43 (24 self)
We introduce an extended class of cardinal L*L-splines, where L is a pseudodifferential operator satisfying some admissibility conditions. We show that the L*L-spline signal interpolation problem is well posed and that its solution is the unique minimizer of the spline energy functional ‖Ls‖², subject to the interpolation constraint. Next, we consider the corresponding regularized least-squares estimation problem, which is more appropriate for dealing with noisy data. The criterion to be minimized is the sum of a quadratic data term, which forces the solution to be close to the input samples, and a "smoothness" term that privileges solutions with small spline energies. Here, too, we find that the optimal solution, among all possible functions, is a cardinal L*L-spline. We show that this smoothing spline estimator has a stable representation in a B-spline-like basis and that its coefficients can be computed by digital filtering of the input signal. We describe an efficient recursive filtering algorithm that is applicable whenever the transfer function of L is rational (which corresponds to the case of exponential splines). We justify these algorithms statistically by establishing an equivalence between L*L smoothing splines and the minimum mean-square error (MMSE) estimation of a stationary signal corrupted by white Gaussian noise. In this model-based formulation, the optimum operator L is the whitening filter of the process, and the regularization parameter is proportional to the noise variance. Thus, the proposed formalism yields the optimal discretization of the classical Wiener filter, together with a fast recursive algorithm. It extends the standard Wiener solution by providing the optimal interpolation space. We also present a Bayesian interpretation of the algorithm. Index Terms—Nonparametric estimation, recursive filtering, smoothing splines, splines (polynomial and exponential), stationary processes, variational principle, Wiener filter.
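In discrete form, the criterion described above (a quadratic data term plus a spline-energy penalty) reduces to a small regularized least-squares solve. A minimal sketch (my illustration: a second-difference matrix stands in for the operator L, and a dense closed-form solve replaces the paper's recursive filter):

```python
import numpy as np

def smoothing_spline(y, lam):
    """Minimize ||y - s||^2 + lam * ||D s||^2, with D a
    second-difference stand-in for the whitening operator L."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)    # (n-2) x n second differences
    # Normal equations: (I + lam * D^T D) s = y.
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

t = np.linspace(0, 1, 50)
rng = np.random.default_rng(1)
noisy = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(50)
smooth = smoothing_spline(noisy, 10.0)
```

At `lam = 0` the estimator interpolates the samples; increasing `lam` trades fidelity for smoothness, mirroring the noise-variance-proportional regularization in the MMSE interpretation.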
Nonideal sampling and interpolation from noisy observations in shift-invariant spaces
 IEEE Trans. Signal Processing
, 2006
Abstract

Cited by 43 (22 self)
Digital analysis and processing of signals inherently relies on the existence of methods for reconstructing a continuous-time signal from a sequence of corrupted discrete-time samples. In this paper, a general formulation of this problem is developed that treats the interpolation problem from ideal, noisy samples, and the deconvolution problem in which the signal is filtered prior to sampling, in a unified way. The signal reconstruction is performed in a shift-invariant subspace spanned by the integer shifts of a generating function, where the expansion coefficients are obtained by processing the noisy samples with a digital correction filter. Several alternative approaches to designing the correction filter are suggested, which differ in their assumptions on the signal and noise. The classical deconvolution solutions (least-squares, Tikhonov, and Wiener) are adapted to our particular situation, and new methods that are optimal in a minimax sense are also proposed. The solutions often have a similar structure and can be computed simply and efficiently by digital filtering. Some concrete examples of reconstruction filters are presented, as well as simple guidelines for selecting the free parameters (e.g., regularization) of the various algorithms. Index Terms—Deconvolution, interpolation, minimax reconstruction, sampling.
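The digital correction filter can be sketched in its simplest Tikhonov form under a circular (DFT) model (my illustration with made-up names; the paper's filters are derived in the full shift-invariant-space setting, which is not modeled here):

```python
import numpy as np

def tikhonov_correction(samples, h, lam):
    """Estimate expansion coefficients from samples acquired through
    a prefilter h, by regularized inversion in the DFT domain."""
    H = np.fft.fft(h, len(samples))
    C = np.conj(H) / (np.abs(H) ** 2 + lam)   # digital correction filter
    return np.real(np.fft.ifft(C * np.fft.fft(samples)))
```

With an identity prefilter and `lam = 0` the filter is transparent; for an ill-conditioned prefilter, `lam` limits noise amplification at frequencies where `H` is small, at the cost of some bias.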
History and evolution of the Density Theorem for Gabor frames
, 2007
Abstract

Cited by 39 (6 self)
The Density Theorem for Gabor Frames is one of the fundamental results of time-frequency analysis. This expository survey attempts to reconstruct the long and very involved history of this theorem and to present its context and evolution, from the one-dimensional rectangular lattice setting, to arbitrary lattices in higher dimensions, to irregular Gabor frames, and most recently beyond the setting of Gabor frames to abstract localized frames. Related fundamental principles in Gabor analysis are also surveyed, including the Wexler–Raz biorthogonality relations, the Duality Principle, the Balian–Low Theorem, the Walnut and Janssen representations, and the Homogeneous Approximation Property. An extended bibliography is included.