Results 1–10 of 158
Structured compressed sensing: From theory to applications
 IEEE Trans. Signal Process.
, 2011
Abstract

Cited by 104 (16 self)
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles on CS limit their scope to standard discrete-to-discrete measurement architectures using randomized matrices and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random measurement matrix must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice, that is, pinpointing the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field and as a reference that attempts to put some of the existing ideas in the perspective of practical applications.
Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning
 IEEE J. Sel. Topics Signal Process
, 2011
Abstract

Cited by 60 (15 self)
Abstract — We address the sparse signal recovery problem in the context of multiple measurement vectors (MMV) when the elements in each nonzero row of the solution matrix are temporally correlated. Existing algorithms do not consider such temporal correlation, and thus their performance degrades significantly as the correlation grows. In this work, we propose a block sparse Bayesian learning framework that models the temporal correlation. We derive two sparse Bayesian learning (SBL) algorithms that have superior recovery performance compared to existing algorithms, especially in the presence of high temporal correlation. Furthermore, our algorithms are better at handling highly underdetermined problems and require less row-sparsity of the solution matrix. We also analyze the global and local minima of their cost function, and show that the SBL cost function has the very desirable property that the global minimum is at the sparsest solution to the MMV problem. Extensive experiments also provide some interesting results that motivate future theoretical research on the MMV model.
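The MMV setup described in this abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's SBL algorithm: it builds a row-sparse coefficient matrix whose nonzero rows follow an AR(1) temporal-correlation model, and recovers the support with a plain Simultaneous OMP baseline. All dimensions, the AR(1) parameters, and the choice of greedy solver are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, L, k = 30, 60, 4, 3          # measurements, atoms, snapshots, row-sparsity

A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)      # unit-norm dictionary columns

# Row-sparse X whose nonzero rows are temporally correlated (AR(1) model).
support = rng.choice(n, k, replace=False)
X = np.zeros((n, L))
for i in support:
    x = rng.uniform(1.0, 2.0) * rng.choice([-1.0, 1.0])
    for t in range(L):
        X[i, t] = x
        x = 0.9 * x + 0.1 * rng.standard_normal()
Y = A @ X                           # noiseless MMV measurements

# Simultaneous OMP: greedily pick the row whose atom best matches all snapshots.
R, est = Y.copy(), []
for _ in range(k):
    j = int(np.argmax(np.linalg.norm(A.T @ R, axis=1)))
    est.append(j)
    As = A[:, est]
    R = Y - As @ np.linalg.lstsq(As, Y, rcond=None)[0]

print(sorted(est) == sorted(support.tolist()))
```

With these sizes the greedy baseline recovers the support; the paper's point is that SBL variants keep working where such correlation-blind methods degrade.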
Xampling: Signal acquisition and processing in union of subspaces
, 2011
Abstract

Cited by 43 (21 self)
We introduce Xampling, a unified framework for the acquisition and processing of signals in a union of subspaces. The framework has two main functions: analog compression that narrows down the input bandwidth prior to sampling with commercial devices, followed by a nonlinear algorithm that detects the input subspace prior to conventional signal processing. A representative union model of spectrally sparse signals serves as a test case for studying these Xampling functions. We adopt three metrics for the choice of analog compression: robustness to model mismatch, required hardware accuracy, and software complexity. We conduct a comprehensive comparison between two sub-Nyquist acquisition strategies for spectrally sparse signals, the random demodulator and the modulated wideband converter (MWC), in terms of these metrics, and draw operative conclusions regarding the choice of analog compression. We then address low-rate signal processing and develop an algorithm that enables convenient signal processing at sub-Nyquist rates from samples obtained by the MWC. We conclude by showing that a variety of other sampling approaches for different union classes fit nicely into our framework.
Sparse Recovery from Combined Fusion Frame Measurements
 IEEE Trans. Inf. Theory
Abstract

Cited by 43 (12 self)
Sparse representations have emerged as a powerful tool in signal and information processing, culminating in the success of new acquisition and processing techniques such as compressed sensing (CS). Fusion frames are rich new signal representation methods that use collections of subspaces instead of vectors to represent signals. This work combines these exciting fields to introduce a new sparsity model for fusion frames. Signals that are sparse under the new model can be compressively sampled and uniquely reconstructed in ways similar to sparse signals using standard CS. The combination provides a promising new set of mathematical tools and signal models useful in a variety of applications. Under the new model, a sparse signal has energy in very few of the subspaces of the fusion frame, although it does not need to be sparse within each of the subspaces it occupies. This sparsity model is captured using a mixed ℓ1/ℓ2 norm for fusion frames. A signal sparse in a fusion frame can be sampled using very few random projections and exactly reconstructed using a convex optimization that minimizes this mixed ℓ1/ℓ2 norm. The provided sampling conditions generalize the coherence and RIP conditions used in standard CS theory, and are shown to be sufficient to guarantee sparse recovery of any signal sparse in our model. Moreover, an average-case analysis is provided using a probability model on the sparse signal, showing that under very mild conditions the probability of recovery failure decays exponentially with increasing dimension of the subspaces.
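The mixed ℓ1/ℓ2 norm at the center of this model is easy to state concretely: an ℓ1 sum over subspaces of per-subspace ℓ2 energies, so few subspaces are active but energy within a subspace is unpenalized. The sketch below evaluates this norm on block coefficients; the block layout and numbers are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def mixed_l1_l2(c, blocks):
    """Mixed l1/l2 norm: sum over blocks of the Euclidean norm of each block.

    Minimizing this favors few active blocks (subspaces) while leaving the
    energy distribution inside an active block unconstrained.
    """
    return sum(np.linalg.norm(c[b]) for b in blocks)

c = np.array([3.0, 4.0, 0.0, 0.0, 0.0, 12.0])
blocks = [slice(0, 2), slice(2, 4), slice(4, 6)]   # three 2-D "subspaces"
print(mixed_l1_l2(c, blocks))                      # 5 + 0 + 12 = 17
```

The convex recovery program in the paper minimizes this quantity subject to the measurement constraints.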
C-HiLasso: A collaborative hierarchical sparse modeling framework
, 2010
Abstract

Cited by 36 (6 self)
Sparse modeling is a powerful framework for data analysis and processing. Traditionally, encoding in this framework is performed by solving an ℓ1-regularized linear regression problem, commonly referred to as Lasso or basis pursuit. In this work we combine the sparsity-inducing property of the Lasso model at the individual feature level with the block-sparsity property of the Group Lasso model, where sparse groups of features are jointly encoded, obtaining a hierarchically structured sparsity pattern. This results in the Hierarchical Lasso (HiLasso), which shows important practical modeling advantages. We then extend this approach to the collaborative case, where a set of simultaneously coded signals share the same sparsity pattern at the higher (group) level but not necessarily at the lower (within-group) level, obtaining the collaborative HiLasso model (C-HiLasso). Such signals then share the same active groups, or classes, but not necessarily the same active set. This model is very well suited for applications such as source identification and separation. An efficient optimization procedure, which guarantees convergence to the global optimum, is developed for these new models. The presentation of the new framework and optimization approach is complemented with experimental examples and theoretical results regarding recovery guarantees for the proposed models.
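The HiLasso regularizer described here is the sum of a within-group ℓ1 term and a Group-Lasso ℓ2 term over groups. A minimal evaluation sketch follows; the group layout, weights, and function name are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def hilasso_penalty(x, groups, lam1, lam2):
    """HiLasso regularizer: lam1*||x||_1 + lam2 * sum_g ||x_g||_2.

    lam1 promotes sparsity of individual features; lam2 promotes sparsity
    of whole groups, yielding the hierarchical sparsity pattern.
    """
    l1 = lam1 * np.abs(x).sum()
    grp = lam2 * sum(np.linalg.norm(x[g]) for g in groups)
    return l1 + grp

x = np.array([1.0, -2.0, 0.0, 0.0])
groups = [slice(0, 2), slice(2, 4)]
print(hilasso_penalty(x, groups, lam1=0.5, lam2=1.0))   # 1.5 + sqrt(5)
```

In C-HiLasso, several signals additionally share the same set of active groups, which couples their group-level terms.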
Rank Awareness in Joint Sparse Recovery
, 2010
Abstract

Cited by 35 (7 self)
In this paper we revisit the sparse multiple measurement vector (MMV) problem, where the aim is to recover a set of jointly sparse multichannel vectors from incomplete measurements. This problem has received increasing interest as an extension of single-channel sparse recovery, which lies at the heart of the emerging field of compressed sensing. However, MMV approximation also has origins in the field of array signal processing, as we discuss in this paper. Inspired by these links, we introduce a new family of MMV algorithms based on the well-known MUSIC method from array processing. We particularly highlight the role of the rank of the unknown signal matrix X in determining the difficulty of the recovery problem. We begin by deriving necessary and sufficient conditions for the uniqueness of the sparse MMV solution, which indicate that the larger the rank of X, the less sparse X needs to be to ensure uniqueness. We also show that as the rank of X increases, the computational effort required to solve the MMV problem through a combinatorial search is reduced. In the second part of the paper we consider practical suboptimal algorithms for MMV recovery. We examine the rank awareness of popular methods such as Simultaneous Orthogonal Matching Pursuit (SOMP) and mixed-norm minimization techniques and show them to be rank blind in terms of worst-case analysis. We then consider a family of greedy algorithms that are rank aware. The simplest such method is a discrete version of the MUSIC algorithm popular in array signal processing. This approach is guaranteed to recover the sparse vectors in the full-rank MMV setting under mild conditions. We then extend this idea to develop a rank-aware pursuit algorithm that naturally reduces to Order Recursive Matching Pursuit (ORMP) in the single measurement case. This approach also provides guaranteed recovery in the full-rank setting. Numerical simulations demonstrate that the rank-aware techniques are significantly better than existing methods at dealing with multiple measurements.
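The discrete-MUSIC idea for the full-rank case can be sketched compactly: when rank(X) equals the row-sparsity k, the range of the measurement matrix Y coincides with the span of the k active dictionary atoms, so the support is found by scoring each atom's projection onto that subspace. The dimensions and Gaussian dictionary below are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, L, k = 20, 40, 6, 3           # measurements, atoms, snapshots, sparsity

A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)       # unit-norm atoms
support = np.sort(rng.choice(n, k, replace=False))
X = np.zeros((n, L))
X[support] = rng.standard_normal((k, L))   # generically rank-k coefficient rows
Y = A @ X

# Discrete MUSIC: atoms in the support lie exactly in range(Y) when rank(X)=k.
U = np.linalg.svd(Y, full_matrices=False)[0][:, :k]   # signal-subspace basis
score = np.linalg.norm(U.T @ A, axis=0)               # ||P_U a_j|| per atom
est = np.sort(np.argsort(score)[-k:])                 # top-k scores = support
print(np.array_equal(est, support))
```

Supported atoms score exactly 1 (they lie in the signal subspace), while generic off-support atoms score strictly less, which is why this method is rank aware where SOMP is rank blind.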
Joint Base Station Clustering and Beamformer Design for Partial Coordinated Transmission in Heterogeneous Networks
, 2012
Asymptotic analysis of complex LASSO via complex approximate message passing
 IEEE Trans. Inf. Theory
, 2011
Abstract

Cited by 27 (9 self)
Recovering a sparse signal from an undersampled set of random linear measurements is the main problem of interest in compressed sensing. In this paper, we consider the case where both the signal and the measurements are complex-valued. We study the popular reconstruction method of ℓ1-regularized least squares, or LASSO. While several studies have shown that the LASSO algorithm offers desirable solutions under certain conditions, the precise asymptotic performance of this algorithm in the complex setting is not yet known. In this paper, we extend the approximate message passing (AMP) algorithm to complex-valued signals and measurements to obtain the complex approximate message passing algorithm (CAMP). We then generalize the state evolution framework, recently introduced for the analysis of AMP, to the complex setting. Using state evolution, we derive accurate formulas for the phase transition and noise sensitivity of both LASSO and CAMP. Our results are proved theoretically for the case of i.i.d. Gaussian sensing matrices, but we confirm through simulations that they hold for a larger class of random matrices.
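The elementwise nonlinearity that distinguishes the complex setting is complex soft thresholding, which shrinks the magnitude of each entry while preserving its phase; it is the proximal operator of the complex ℓ1 norm used inside AMP-style iterations. The sketch below shows only this operator, not the full CAMP iteration; the function name and test values are illustrative.

```python
import numpy as np

def complex_soft_threshold(z, tau):
    """Complex soft threshold: shrink |z| by tau, keep the phase of z.

    Proximal operator of tau * sum_i |z_i| for complex vectors; the
    elementwise nonlinearity at the heart of AMP-style LASSO solvers.
    """
    mag = np.abs(z)
    scale = np.maximum(mag - tau, 0.0) / np.where(mag > 0, mag, 1.0)
    return scale * z

z = np.array([3 + 4j, 0.5j, -1.0])
print(complex_soft_threshold(z, 1.0))   # [2.4+3.2j, 0, 0]
```

Entries with magnitude below the threshold are zeroed, which is how sparsity is enforced at each iteration.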
Exploiting Statistical Dependencies in Sparse Representations for Signal Recovery
, 2012
Abstract

Cited by 24 (6 self)
Signal modeling lies at the core of numerous signal and image processing applications. A recent approach that has drawn considerable attention is sparse representation modeling, in which the signal is assumed to be generated as a combination of a few atoms from a given dictionary. In this work we consider a Bayesian setting and go beyond the classic assumption of independence between the atoms. The main goal of this paper is to introduce a statistical model that takes such dependencies into account and to show how this model can be used for sparse signal recovery. We follow the suggestion of two recent works and assume that the sparsity pattern is modeled by a Boltzmann machine, a commonly used graphical model. For general dependency models, exact MAP and MMSE estimation of the sparse representation becomes computationally complex. To simplify the computations, we propose greedy approximations of the MAP and MMSE estimators. We then consider a special case in which exact MAP is feasible, by assuming that the dictionary is unitary and the dependency model corresponds to a certain sparse graph. Exploiting this structure, we develop an efficient message passing algorithm that recovers the underlying signal. When the model parameters defining the underlying graph are unknown, we suggest an algorithm that learns these parameters directly from the data, leading to an iterative scheme for adaptive sparse signal recovery. The effectiveness of our approach is demonstrated on real-life signals (patches of natural images), where we compare the denoising performance to that of previous recovery methods that do not exploit the statistical dependencies.
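The Boltzmann-machine prior on the sparsity pattern can be made concrete with a tiny sketch: a pattern s in {-1,+1}^n marks which atoms are active, a bias vector controls overall sparsity, and a symmetric coupling matrix lets correlated atoms switch on together. The function name, the ±1 encoding, and all numbers below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def bm_log_score(s, b, W):
    """Unnormalized Boltzmann-machine log-probability of a support pattern.

    s in {-1,+1}^n encodes active atoms; b biases atoms toward (in)activity,
    and the symmetric, zero-diagonal W couples atoms so that statistically
    dependent atoms are encouraged to activate jointly.
    """
    return b @ s + s @ W @ s

b = np.array([-1.0, -1.0, -1.0])            # bias toward inactivity (sparsity)
W = np.array([[0.0, 0.8, 0.0],
              [0.8, 0.0, 0.0],
              [0.0, 0.0, 0.0]])             # atoms 0 and 1 tend to co-activate

together = np.array([1, 1, -1])             # atoms 0 and 1 both active
apart = np.array([1, -1, -1])               # only atom 0 active
print(bm_log_score(together, b, W) > bm_log_score(apart, b, W))
```

Despite the sparsity-favoring biases, the coupling term makes the jointly active pattern more probable, which is exactly the kind of dependency the independence assumption cannot express.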
A NonUniform Sampler for Wideband SpectrallySparse Environments
, 2009
Abstract

Cited by 20 (7 self)
We present the first custom integrated-circuit implementation of the compressed-sensing-based non-uniform sampler (NUS). By sampling signals non-uniformly, the average sample rate can be more than an order of magnitude lower than the Nyquist rate, provided that these signals have relatively low information content as measured by the sparsity of their spectrum. The hardware design combines a wideband Indium Phosphide (InP) heterojunction bipolar transistor (HBT) sample-and-hold with a commercial off-the-shelf (COTS) analog-to-digital converter (ADC) to digitize an 800 MHz to 2 GHz band (having 100 MHz of non-contiguous spectral content) at an average sample rate of 236 Msps. Signal reconstruction is performed via a nonlinear compressed sensing algorithm, and an efficient GPU implementation is discussed. Measured bit-error-rate (BER) data for a GSM channel is presented, and comparisons to a conventional wideband 4.4 Gsps ADC are made.
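The non-uniform sampling principle can be illustrated with a toy simulation, not the paper's hardware chain or GPU solver: keep a random subset of the Nyquist-grid samples of a spectrally sparse signal, so the average rate drops well below Nyquist, then recover the active tones with a simple OMP baseline. All sizes, the on-grid tone model, and the solver choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, k = 256, 32, 3            # Nyquist grid length, kept samples, active tones

# Spectrally sparse signal: k random tones on the DFT grid.
freqs = rng.choice(N, k, replace=False)
amps = rng.uniform(1, 2, k) * np.exp(2j * np.pi * rng.random(k))
t = np.arange(N)
x = sum(a * np.exp(2j * np.pi * f * t / N) for a, f in zip(amps, freqs))

# Non-uniform sampler: keep a random subset of the Nyquist-grid samples,
# so the average rate is M/N (here 1/8) of the Nyquist rate.
times = np.sort(rng.choice(N, M, replace=False))
y = x[times]

# Sensing matrix: rows of the tone dictionary at the kept time instants.
A = np.exp(2j * np.pi * np.outer(times, np.arange(N)) / N)

# OMP reconstruction of the active tones (a stand-in for the paper's
# nonlinear compressed sensing solver).
R, est = y.copy(), []
for _ in range(k):
    est.append(int(np.argmax(np.abs(A.conj().T @ R))))
    As = A[:, est]
    R = y - As @ np.linalg.lstsq(As, y, rcond=None)[0]
print(sorted(est) == sorted(freqs.tolist()))
```

The random time grid plays the role that the InP sample-and-hold plays in hardware: it turns sparse spectral occupancy into a well-posed recovery problem at a fraction of the Nyquist rate.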