Results 1–10 of 24
Structured compressed sensing: From theory to applications
 IEEE Trans. Signal Process.
, 2011
Abstract

Cited by 104 (16 self)
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles on CS limit their scope to standard discrete-to-discrete measurement architectures using randomized matrices and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random measurement matrix must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, pinpointing the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field, and as a reference for researchers, attempting to put some of the existing ideas in the perspective of practical applications.
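As an illustration of the structured measurement operators this survey discusses, the sketch below replaces a dense random Gaussian matrix with randomly selected rows of a unitary DFT (a partial-Fourier operator, one of the hardware-feasible architectures) and recovers a sparse signal with orthogonal matching pursuit. All dimensions and the choice of OMP are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4               # signal length, measurements, sparsity

# k-sparse signal with well-separated nonzero magnitudes
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)

# structured sensing: m randomly selected rows of the unitary DFT,
# in place of a dense random Gaussian matrix
rows = rng.choice(n, m, replace=False)
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
A = F[rows]
y = A @ x

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily grow the support."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    xhat = np.zeros(A.shape[1], dtype=complex)
    xhat[idx] = coef
    return xhat

xhat = omp(A, y, k)
```

In the noiseless setting, once OMP identifies the correct support the least-squares step recovers the coefficients exactly; the structured operator also admits fast (FFT-based) application, which is the practical point of the survey.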
Dictionary Learning for Noisy and Incomplete Hyperspectral Images
, 2011
Abstract

Cited by 14 (4 self)
We consider analysis of noisy and incomplete hyperspectral imagery, with the objective of removing the noise and inferring the missing data. The noise statistics may be wavelength-dependent, and the fraction of data missing (at random) may be substantial, including potentially entire bands, offering the potential to significantly reduce the quantity of data that needs to be measured. To achieve this objective, the imagery is divided into contiguous three-dimensional (3D) spatio-spectral blocks, of spatial dimension much less than the image dimension. It is assumed that each such 3D block may be represented as a linear combination of dictionary elements of the same dimension, plus noise, and the dictionary elements are learned in situ based on the observed data (no a priori training). The number of dictionary elements needed for representation of any particular block is typically small relative to the block dimensions, and all the image blocks are processed jointly ("collaboratively") to infer the underlying dictionary. We address dictionary learning from a Bayesian perspective, considering two distinct means of imposing sparse dictionary usage. These models allow inference of the number of dictionary elements needed as well as the underlying wavelength-dependent noise statistics. It is demonstrated that drawing the dictionary elements from a Gaussian process prior, imposing structure on the wavelength dependence of the dictionary elements, yields significant advantages relative to the more conventional approach of using an i.i.d. Gaussian prior for the dictionary elements; this advantage is particularly evident in the presence of noise. The framework is demonstrated by processing hyperspectral imagery with a significant number of voxels missing uniformly at random, with imagery at specific wavelengths missing entirely, and in the presence of substantial additive noise.
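A minimal sketch of the inpainting idea described above, under strong simplifications: the dictionary is fixed and known rather than learned in situ with a Bayesian model, and the set of active atoms is given (an oracle). It shows why observing only a random subset of a block's voxels still determines the sparse coefficients, and hence the missing voxels.

```python
import numpy as np

rng = np.random.default_rng(1)
p, K, s = 64, 32, 3                # block size, dictionary size, sparsity

D = rng.standard_normal((p, K))
D /= np.linalg.norm(D, axis=0)     # unit-norm dictionary atoms

# a block synthesized as a sparse combination of atoms (the model assumption)
atoms = rng.choice(K, s, replace=False)
w = rng.uniform(1.0, 2.0, s)
block = D[:, atoms] @ w

# observe only ~60% of the voxels (missing at random)
mask = rng.random(p) < 0.6
y = block[mask]

# oracle-support least squares on the observed rows only; the paper instead
# infers both support and dictionary with Bayesian nonparametric machinery
w_hat, *_ = np.linalg.lstsq(D[mask][:, atoms], y, rcond=None)
block_hat = D[:, atoms] @ w_hat    # reconstruction fills the missing voxels
```

Because the number of active atoms (3) is far below the number of observed voxels (about 38 here), the observed rows over-determine the coefficients, which is the mechanism that lets substantial missing data be tolerated.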
Compressed sensing for energy-efficient wireless telemonitoring of non-invasive fetal ECG via block sparse Bayesian learning
 IEEE Transactions on Biomedical Engineering
, 2013
Sparse and Redundant Representation Modeling - What Next?
, 2012
Abstract

Cited by 8 (1 self)
Signal processing relies heavily on data models; these are mathematical constructions imposed on the data source that force a dimensionality reduction of some sort. The vast activity in signal processing during the past several decades is essentially driven by the evolution of these models and their use in practice. In that respect, the past decade has certainly been the era of sparse and redundant representations, a popular and highly effective data model. This very appealing model has led to a long series of intriguing theoretical and numerical questions, and to many innovative ideas that harness this model for real engineering problems. The new entries recently added to the IEEE SPL EDICS reflect the popularity of this model and its impact on signal processing research and practice. Despite the huge success of this model so far, this field ...
Learning efficient sparse and low rank models
 CoRR
Abstract

Cited by 6 (3 self)
Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low-rank models, derived from the iterations of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing, with several orders of magnitude speedup compared to the exact optimization algorithms.
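The process-centric view described above can be sketched by unrolling ISTA into a fixed number of feed-forward "layers" of the form x ← soft(Wy + Sx, θ). In a learned (LISTA-style) version, which this sketch does not perform, W, S, and θ would be trained; here they are simply initialized to their plain-ISTA values to show the fixed-complexity architecture.

```python
import numpy as np

def soft(z, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(2)
m, n, lam, T = 32, 64, 0.05, 500   # measurements, signal dim, l1 weight, layers

D = rng.standard_normal((m, n)) / np.sqrt(m)
L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the quadratic term

# the unrolled network: x <- soft(W y + S x, theta), repeated T times;
# learning would adapt W, S, theta -- here they are the plain-ISTA values
W = D.T / L
S = np.eye(n) - D.T @ D / L
theta = lam / L

x_true = np.zeros(n)
x_true[rng.choice(n, 4, replace=False)] = 1.5
y = D @ x_true

x = np.zeros(n)
for _ in range(T):                 # T fixed "layers" of identical cost
    x = soft(W @ y + S @ x, theta)
```

The point of the paper's construction is that, after training, far fewer layers than the T used here approximate the optimum, with deterministic latency that iterative solvers cannot guarantee.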
A Statistical Prediction Model Based on Sparse Representations for Single Image Super-Resolution
Abstract

Cited by 5 (0 self)
We address single image super-resolution using a statistical prediction model based on sparse representations of low- and high-resolution image patches. The suggested model allows us to avoid any invariance assumption, which is a common practice in sparsity-based approaches treating this task. Prediction of high-resolution patches is obtained via MMSE estimation, and the resulting scheme has the useful interpretation of a feed-forward neural network. To further enhance performance, we suggest data clustering and cascading several levels of the basic algorithm. We suggest a training scheme for the resulting network and demonstrate the capabilities of our algorithm, showing its advantages over existing methods based on a low- and high-resolution dictionary pair, in terms of computational complexity, numerical criteria, and visual appearance. The suggested approach offers a desirable compromise between low computational complexity and reconstruction quality when compared with state-of-the-art methods for single image super-resolution.
Index Terms: Dictionary learning, feed-forward neural networks, MMSE estimation, nonlinear prediction, single image super-resolution, sparse representations, statistical models, restricted Boltzmann machine, zooming, deblurring
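The feed-forward-network interpretation mentioned above can be made concrete under a simplified jointly-Gaussian assumption (not the paper's sparse-representation model): the MMSE predictor of a high-resolution patch from a low-resolution one is then affine, i.e. a single linear layer plus bias. The degradation operator P below is a hypothetical stand-in for blur and decimation.

```python
import numpy as np

rng = np.random.default_rng(3)
dl, dh, N = 16, 64, 5000           # low-res dim, high-res dim, training pairs

# toy training set: high-res patches and degraded low-res counterparts
H = rng.standard_normal((N, dh))
P = rng.standard_normal((dh, dl)) / np.sqrt(dh)   # hypothetical degradation
Lp = H @ P + 0.01 * rng.standard_normal((N, dl))

# jointly-Gaussian MMSE predictor: h_hat = mu_h + C_hl C_ll^{-1} (l - mu_l),
# a single affine "layer" mapping low-res patches to high-res ones
mu_h, mu_l = H.mean(0), Lp.mean(0)
Hc, Lc = H - mu_h, Lp - mu_l
A = (Hc.T @ Lc / N) @ np.linalg.inv(Lc.T @ Lc / N)

def predict(l):
    return mu_h + A @ (l - mu_l)

h_hat = predict(Lp[0])
```

The paper's clustering and cascading steps can be read as making this single layer piecewise and deep, respectively, while keeping the feed-forward (fixed-cost) character of the predictor.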
Structured Sparsity Models for Reverberant Speech Separation
, 2010
Abstract

Cited by 5 (5 self)
We tackle the speech separation problem through modeling the acoustics of reverberant chambers. Our approach exploits structured sparsity models to perform speech recovery and room acoustic modeling from recordings of concurrent unknown sources. The speakers are assumed to lie on a two-dimensional plane, and the multipath channel is characterized using the image model. We propose an algorithm for room geometry estimation relying on localization of the early images of the speakers by sparse approximation of the spatial spectrum of the virtual sources in a free-space model. The images are then clustered by exploiting the low-rank structure of the spectro-temporal components belonging to each source. This enables us to identify the early support of the room impulse response function and its unique map to the room geometry. To further tackle the ambiguity of the reflection ratios, we propose a novel formulation of the reverberation model and estimate the absorption coefficients through a convex optimization exploiting a joint sparsity model formulated upon the spatio-spectral sparsity of concurrent speech representation. The acoustic parameters are then incorporated for separating individual speech signals through either structured sparse recovery or inverse filtering of the acoustic channels. The experiments, conducted on real data recordings of spatially stationary sources, demonstrate the effectiveness of the proposed approach for multi-party speech recovery and recognition.
Denoising of image patches via sparse representations with learned statistical dependencies
 in ICASSP
, 2011
Abstract

Cited by 3 (2 self)
We address the problem of denoising for image patches. The approach taken is based on Bayesian modeling of sparse representations, which takes into account dependencies between the dictionary atoms. Following recent work, we use a Boltzmann machine to model the sparsity pattern. In this work we focus on the special case of a unitary dictionary and obtain the exact MAP estimate for the sparse representation using an efficient message passing algorithm. We present an adaptive model-based scheme for sparse signal recovery, which is based on sparse coding via message passing and on learning the model parameters from the data. This adaptive approach is applied to noisy image patches in order to recover their sparse representations over a fixed unitary dictionary. We compare the denoising performance to that of previous sparse recovery methods, which do not exploit the statistical dependencies, and show the effectiveness of our approach.
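For intuition on why the unitary case is special: with a unitary dictionary and an independent prior on the coefficients, MAP denoising decouples into elementwise thresholding of the transform coefficients, as sketched below. The paper's Boltzmann machine prior couples the coefficients, which is what calls for message passing instead; the threshold value here is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # unitary dictionary

# clean patch: sparse in the dictionary domain; then add Gaussian noise
a = np.zeros(n)
a[rng.choice(n, 5, replace=False)] = rng.uniform(2.0, 3.0, 5)
x = Q @ a
y = x + 0.1 * rng.standard_normal(n)

# unitary dictionary + independent coefficient prior => MAP denoising
# decouples: transform, threshold each coefficient, transform back
tau = 0.5                                          # arbitrary threshold
b = Q.T @ y                                        # exact transform coefficients
b[np.abs(b) < tau] = 0.0                           # elementwise (hard) threshold
x_hat = Q @ b
```

All the noise energy outside the five active coefficients is discarded by the threshold, which is why even this independent-prior baseline denoises well; the Boltzmann prior improves on it when active atoms co-occur in structured patterns.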
On MAP and MMSE Estimators for the Cosparse Analysis Model
Abstract

Cited by 3 (0 self)
The sparse synthesis model for signals has become very popular in the last decade, leading to improved performance in many signal processing applications. This model assumes that a signal may be described as a linear combination of a few columns (atoms) of a given synthesis matrix (dictionary). The cosparse analysis model is a recently introduced counterpart, whereby signals are assumed to be orthogonal to many rows of a given analysis dictionary. These rows are called the cosupport. The analysis model has already led to a series of contributions that address the pursuit problem: identifying the cosupport of a corrupted signal in order to restore it. While all the existing work adopts a deterministic point of view towards the design of such pursuit algorithms, this paper introduces a Bayesian estimation point of view, starting with a random generative model for cosparse analysis signals. This is followed by a derivation of oracle, Minimum Mean Squared Error (MMSE), and Maximum A-Posteriori Probability (MAP) estimators. We present a comparison between the deterministic formulations and these estimators, drawing some connections between the two. We develop practical approximations to the MAP and MMSE estimators, and demonstrate the proposed reconstruction algorithms in several synthetic and real image experiments, showing their potential and applicability.
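A small sketch of the cosparse analysis model itself: a signal whose analysis coefficients Ωx vanish on a chosen cosupport lies in the null space of the corresponding rows of Ω, and oracle denoising (cosupport given) is orthogonal projection onto that null space. The dimensions are illustrative; the paper's contribution is approximating the MAP/MMSE estimators when no oracle is available.

```python
import numpy as np

rng = np.random.default_rng(5)
d, p, c = 20, 30, 15               # signal dim, analysis rows, cosupport size
Omega = rng.standard_normal((p, d))

# a cosparse signal: orthogonal to the rows of Omega indexed by the cosupport,
# i.e. it lies in the null space of Omega[cosupport]
cosupport = rng.choice(p, c, replace=False)
_, _, Vt = np.linalg.svd(Omega[cosupport])
null_basis = Vt[c:].T              # orthonormal null-space basis, dim d - c
x = null_basis @ rng.standard_normal(d - c)

# oracle denoising: given the cosupport, project the noisy signal
# back onto the null space of the selected analysis rows
y = x + 0.05 * rng.standard_normal(d)
x_hat = null_basis @ (null_basis.T @ y)
```

The projection keeps only the d − c = 5 noise components lying inside the model set, which is why the oracle estimator here strictly reduces the error; the synthesis model's oracle plays the symmetric role with a support instead of a cosupport.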
Learning feature selection dependencies in multi-task learning
 in Proceedings of Advances in Neural Information Processing Systems 26 (NIPS '13)
, 2013
Abstract

Cited by 3 (0 self)
A probabilistic model based on the horseshoe prior is proposed for learning dependencies in the process of identifying relevant features for prediction. Exact inference is intractable in this model; however, expectation propagation offers an approximate alternative. Because the process of estimating feature selection dependencies may suffer from overfitting in the proposed model, additional data from a multi-task learning scenario are considered for induction. The same model can be used in this setting with few modifications. Furthermore, the assumptions made are less restrictive than in other multi-task methods: the different tasks must share feature selection dependencies, but can have different relevant features and model coefficients. Experiments with real and synthetic data show that this model performs better than other multi-task alternatives from the literature. The experiments also show that the model is able to induce suitable feature selection dependencies for the problems considered, using only the training data.