Results 1-10 of 55
Statistical Compressive Sensing of Gaussian Mixture Models
, 2010
Abstract

Cited by 14 (3 self)
A new framework of compressive sensing (CS), namely statistical compressive sensing (SCS), is introduced; it aims at efficiently sampling a collection of signals that follow a statistical distribution and achieving accurate reconstruction on average. For signals following a Gaussian distribution, with Gaussian or Bernoulli sensing matrices of O(k) measurements, considerably fewer than the O(k log(N/k)) required by conventional CS, where N is the signal dimension, and with an optimal decoder implemented with linear filtering, significantly faster than the pursuit decoders applied in conventional CS, the error of SCS is shown to be tightly upper bounded by a constant times the best k-term approximation error, with overwhelming probability. The failure probability is also significantly smaller than that of conventional CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is upper bounded by a constant times the best k-term approximation error with probability one, and the bound constant can be efficiently calculated. For signals following Gaussian mixture models, SCS with a piecewise linear decoder is introduced and shown to produce better results for real images than conventional CS based on sparse models. Index Terms — Compressive sensing, Gaussian mixture models
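The linear-filtering decoder described in this abstract admits a compact numerical sketch: for a zero-mean Gaussian prior, the optimal decoder is a single Wiener-type filter rather than an iterative pursuit. The dimensions, prior covariance, and seed below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 64, 16  # signal dimension and number of measurements (illustrative)

# Hypothetical Gaussian prior whose eigenvalues decay fast, mimicking
# a compressible signal class.
eigvals = 1.0 / np.arange(1, N + 1) ** 2
U = np.linalg.qr(rng.standard_normal((N, N)))[0]
Sigma = U @ np.diag(eigvals) @ U.T            # prior covariance

A = rng.standard_normal((M, N)) / np.sqrt(M)  # Gaussian sensing matrix

x = U @ (np.sqrt(eigvals) * rng.standard_normal(N))  # draw x ~ N(0, Sigma)
y = A @ x                                            # noiseless measurements

# The optimal decoder for a Gaussian prior is a single linear filter:
# x_hat = Sigma A^T (A Sigma A^T)^{-1} y -- no iterative pursuit needed.
x_hat = Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T, y)

rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Because the filter is a fixed matrix, decoding costs a single matrix-vector product per signal, which is the source of the speed advantage over pursuit decoders.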
Video Compressive Sensing Using Gaussian Mixture Models
Abstract

Cited by 11 (5 self)
A Gaussian mixture model (GMM) based algorithm is proposed for video reconstruction from temporally compressed video measurements. The GMM is used to model spatiotemporal video patches, and the reconstruction can be efficiently computed from analytic expressions. The GMM-based inversion method benefits from online adaptive learning and parallel computation. We demonstrate the efficacy of the proposed inversion method with videos reconstructed from simulated compressive video measurements, and from a real compressive video camera. We also use the GMM as a tool to investigate adaptive video compressive sensing, i.e., an adaptive rate of temporal compression.
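The "analytic expressions" behind GMM-based inversion come from the fact that the posterior of x given y = Ax + n under a GMM prior is again a GMM, so the MMSE estimate is a posterior-weighted sum of per-component Wiener filters. A minimal sketch, with a toy two-component prior whose parameters are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def gmm_mmse_reconstruct(y, A, weights, means, covs, noise_var):
    """Analytic MMSE reconstruction of x from y = A x + n under a GMM prior.

    The posterior is again a GMM, so the estimate is a weighted sum of
    per-component linear (Wiener) estimates -- no iterative optimization.
    """
    M = A.shape[0]
    log_w, comp_means = [], []
    for pi_k, mu_k, Sig_k in zip(weights, means, covs):
        S = A @ Sig_k @ A.T + noise_var * np.eye(M)  # cov. of y under comp. k
        r = y - A @ mu_k
        S_inv_r = np.linalg.solve(S, r)
        # log N(y; A mu_k, S) up to an additive constant shared by all k
        log_w.append(np.log(pi_k) - 0.5 * (r @ S_inv_r + np.linalg.slogdet(S)[1]))
        comp_means.append(mu_k + Sig_k @ A.T @ S_inv_r)  # per-component Wiener
    log_w = np.array(log_w)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    x_hat = sum(wk * mk for wk, mk in zip(w, comp_means))
    return x_hat, w

# Toy example: two hypothetical components in dimension 8, 4 measurements.
N, M = 8, 4
means = [np.zeros(N), np.ones(N)]
covs = [np.eye(N), 0.5 * np.eye(N)]
weights = [0.5, 0.5]
A = rng.standard_normal((M, N)) / np.sqrt(M)
x = means[1] + np.sqrt(0.5) * rng.standard_normal(N)  # drawn from component 2
y = A @ x + 0.01 * rng.standard_normal(M)

x_hat, post_w = gmm_mmse_reconstruct(y, A, weights, means, covs, 1e-4)
print("posterior component weights:", np.round(post_w, 3))
```

Since every step is a solve against a small M×M matrix, the per-patch reconstructions are independent, which is what makes the method amenable to parallel computation.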
Task-Driven Adaptive Statistical Compressive Sensing of Gaussian Mixture Models
 IEEE Transactions on Signal Processing
, 2012
Abstract

Cited by 10 (5 self)
A framework of online adaptive statistical compressed sensing is introduced for signals following a mixture model. The scheme first uses non-adaptive measurements, from which an online decoding scheme estimates the model selection. As soon as a candidate model has been selected, an optimal sensing scheme for that model is applied for the remaining measurements. The final signal reconstruction is calculated from the ensemble of both the non-adaptive and the adaptive measurements. For signals generated from a Gaussian mixture model, the online adaptive sensing algorithm is given and its performance is analyzed. On both synthetic and real image data, the proposed adaptive scheme considerably reduces the average reconstruction error with respect to standard statistical compressed sensing that uses fully random measurements, at a marginally increased computational complexity. Index Terms — Statistical compressed sensing, adaptive sensing, Gaussian mixture models, model selection.
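The two-phase scheme can be sketched numerically: a few random measurements drive a likelihood-based model selection, after which sensing switches to the principal directions of the selected Gaussian component (the optimal measurements for a single Gaussian model). All mixture parameters and sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

N = 16
# Two hypothetical Gaussian components with well-separated means and
# different covariance profiles (purely illustrative).
means = [-2.0 * np.ones(N), 2.0 * np.ones(N)]
covs = [np.diag(np.linspace(1.0, 0.01, N)), np.diag(np.linspace(0.01, 1.0, N))]

truth = 1
x = means[truth] + np.sqrt(np.diag(covs[truth])) * rng.standard_normal(N)

# Phase 1: non-adaptive random measurements, then model selection by
# comparing Gaussian log-likelihoods of y1 under each candidate model.
A1 = rng.standard_normal((4, N)) / 2.0
y1 = A1 @ x
scores = []
for mu, Sig in zip(means, covs):
    S = A1 @ Sig @ A1.T
    r = y1 - A1 @ mu
    scores.append(-0.5 * (r @ np.linalg.solve(S, r) + np.linalg.slogdet(S)[1]))
k = int(np.argmax(scores))  # selected component

# Phase 2: sense along the top principal directions of the selected
# component -- the optimal measurements for a Gaussian model.
evals, evecs = np.linalg.eigh(covs[k])
A2 = evecs[:, np.argsort(evals)[::-1][:4]].T
y2 = A2 @ x

# Final reconstruction from the ensemble of both measurement sets.
A = np.vstack([A1, A2])
y = np.r_[y1, y2]
mu, Sig = means[k], covs[k]
x_hat = mu + Sig @ A.T @ np.linalg.solve(A @ Sig @ A.T, y - A @ mu)
print("selected component index:", k)
```

The decoder never discards the phase-1 measurements; the final Wiener filter is applied to the stacked ensemble, matching the scheme described in the abstract.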
Sparse and Redundant Representation Modeling – What Next?
, 2012
Abstract

Cited by 8 (1 self)
Signal processing relies heavily on data models; these are mathematical constructions imposed on the data source that force a dimensionality reduction of some sort. The vast activity in signal processing during the past several decades is essentially driven by an evolution of these models and their use in practice. In that respect, the past decade has certainly been the era of sparse and redundant representations, a popular and highly effective data model. This very appealing model led to a long series of intriguing theoretical and numerical questions, and to many innovative ideas that harness this model for real engineering problems. The new entries recently added to the IEEE SPL EDICS reflect the popularity of this model and its impact on signal processing research and practice. Despite the huge success of this model so far, this field
Reconstruction of Signals Drawn from a Gaussian Mixture via Noisy Compressive Measurements
, 2014
Abstract

Cited by 8 (5 self)
This paper determines, to within a single measurement, the minimum number of measurements required to successfully reconstruct a signal drawn from a Gaussian mixture model in the low-noise regime. The method is to develop upper and lower bounds that are a function of the maximum dimension of the linear subspaces spanned by the Gaussian mixture components. The method not only reveals the existence or absence of a minimum mean-squared error (MMSE) floor (phase transition) but also provides insight into the MMSE decay via multivariate generalizations of the MMSE dimension and the MMSE power offset, which are a function of the interaction between the geometrical properties of the kernel and the Gaussian mixture. These results apply not only to standard linear random Gaussian measurements but also to linear kernels that minimize the MMSE. It is shown that optimal kernels do not change the number of measurements associated with the MMSE phase transition; rather, they affect the sensed power required to achieve a target MMSE in the low-noise regime. Overall, our bounds are tighter and sharper than standard bounds on the minimum number of measurements needed to recover sparse signals associated with a union-of-subspaces model, as they are not asymptotic in the signal dimension or signal sparsity.
Efficient Matrix Completion with Gaussian Models
 In ICASSP
, 2011
Abstract

Cited by 7 (4 self)
A general framework based on Gaussian models and a MAP-EM algorithm is introduced in this paper for solving matrix/table completion problems. Numerical experiments with the standard and challenging movie-ratings data show that the proposed approach, based on probably one of the simplest probabilistic models, leads to results in the same ballpark as the state of the art, at a lower computational cost. Index Terms — Matrix completion, inverse problems, collaborative filtering, Gaussian mixture models, MAP estimation, EM algorithm
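The core per-row computation in Gaussian-model completion is the conditional mean of the missing entries given the observed ones; an EM loop would alternate this imputation with re-estimation of the model parameters. A minimal sketch of that single step, with a hypothetical correlated "ratings" model (the function name and all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def gaussian_complete(row, mask, mu, Sig):
    """Fill the missing entries of one row by the Gaussian conditional mean.

    mask is True for observed entries. For a Gaussian model,
    E[x_m | x_o] = mu_m + Sig_mo Sig_oo^{-1} (x_o - mu_o).
    """
    o, m = mask, ~mask
    cond = Sig[np.ix_(m, o)] @ np.linalg.solve(Sig[np.ix_(o, o)], row[o] - mu[o])
    filled = row.copy()
    filled[m] = mu[m] + cond
    return filled

# Toy model with strongly correlated columns (all parameters illustrative).
D = 5
mu = np.full(D, 3.0)
C = 0.8 * np.ones((D, D)) + 0.2 * np.eye(D)
row = mu + np.linalg.cholesky(C) @ rng.standard_normal(D)
mask = np.array([True, True, True, False, False])

completed = gaussian_complete(row, mask, mu, C)
print("observed:", np.round(row[mask], 2), "imputed:", np.round(completed[~mask], 2))
```

The strong inter-column correlation is what lets three observed entries say something useful about the two missing ones; with an identity covariance the imputation would fall back to the prior mean.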
Image Transformation based on Learning Dictionaries across Image Spaces
, 2012
Abstract

Cited by 6 (0 self)
In this paper, we propose a framework for transforming images from a source image space to a target image space, based on learning coupled dictionaries from a training set of paired images. The framework can be used for applications such as image super-resolution and estimation of image intrinsic components (shading and albedo). It is based on a local parametric regression approach, using sparse feature representations over learned coupled dictionaries across the source and target image spaces. After coupled dictionary learning, sparse coefficient vectors of training image patch pairs are partitioned into easily retrievable local clusters. For any test image patch, we can quickly index into its closest local cluster and perform a local parametric regression between the learned sparse feature spaces. The obtained sparse representation (together with the learned target-space dictionary) provides multiple constraints for each pixel of the target image to be estimated. The final target image is reconstructed from these constraints. The contributions of our proposed framework are threefold. (1) We propose a concept of coupled dictionary learning based on coupled sparse coding, which requires that the sparse coefficient vectors of a pair of corresponding source and target image patches have the same support, i.e., the same indices of nonzero elements. (2) We devise a space-partitioning scheme to divide the high-dimensional but sparse feature space into local clusters. The partitioning facilitates extremely fast retrieval of the closest local clusters for query patches. (3) Benefiting from sparse-feature-based image transformation, our method is more robust to corrupted input data, and can be considered a simultaneous image restoration and transformation process. Experiments on intrinsic image estimation and super-resolution demonstrate the effectiveness and efficiency of our proposed method.
A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems
 Sensors
, 2012
LowCost Compressive Sensing for Color Video and Depth
Abstract

Cited by 5 (4 self)
A simple and inexpensive (low-power and low-bandwidth) modification is made to a conventional off-the-shelf color video camera, from which we recover multiple color frames for each of the original measured frames, and each of the recovered frames can be focused at a different depth. The recovery of multiple frames for each measured frame is made possible via high-speed coding, manifested via translation of a single coded aperture; the inexpensive translation is achieved by mounting the binary code on a piezoelectric device. To simultaneously recover depth information, a liquid lens is modulated at high speed via a variable voltage. Consequently, during the aforementioned coding process, the liquid lens allows the camera to sweep the focus through multiple depths. In addition to designing and implementing the camera, fast recovery is achieved by an anytime algorithm exploiting the group-sparsity of wavelet/DCT coefficients.
On MMSE Estimation: A Linear Model under Gaussian Mixture Statistics
 IEEE Transactions on Signal Processing
Abstract

Cited by 5 (0 self)
In a Bayesian linear model, suppose the observation y = Hx + n stems from independent inputs x and n which are Gaussian mixture (GM) distributed. With a known matrix H, the minimum mean square error (MMSE) estimator for x has an analytical form. However, its performance measure, the MMSE itself, has no such closed form. Because existing Bayesian MMSE bounds prove to have limited practical value under these settings, we instead seek analytical bounds for the MMSE, both upper and lower. This paper provides such bounds and relates them to the signal-to-noise ratio (SNR).
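The gap described here (analytic estimator, non-analytic MMSE) is easy to see numerically: the conditional mean E[x | y] has a closed form for a GM prior, but evaluating its mean-squared error requires integration, for example by Monte Carlo. A sketch with a hypothetical two-component mixture (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo evaluation of the MMSE for y = H x + n, where x follows a
# two-component Gaussian mixture and n is Gaussian noise.
N, M, trials = 4, 3, 2000
H = rng.standard_normal((M, N))
weights = np.array([0.5, 0.5])
means = [-np.ones(N), np.ones(N)]
covs = [0.2 * np.eye(N), 0.2 * np.eye(N)]
noise_var = 0.05

def conditional_mean(y):
    """E[x | y]: posterior-weighted combination of per-component Wiener filters."""
    log_w, comp = [], []
    for pi_k, mu_k, Sig_k in zip(weights, means, covs):
        S = H @ Sig_k @ H.T + noise_var * np.eye(M)
        r = y - H @ mu_k
        S_inv_r = np.linalg.solve(S, r)
        log_w.append(np.log(pi_k) - 0.5 * (r @ S_inv_r + np.linalg.slogdet(S)[1]))
        comp.append(mu_k + Sig_k @ H.T @ S_inv_r)
    w = np.exp(np.array(log_w) - max(log_w))
    w /= w.sum()
    return sum(wk * ck for wk, ck in zip(w, comp))

# The MMSE itself has no closed form, so estimate it by sampling.
sq_err = 0.0
for _ in range(trials):
    k = rng.choice(2, p=weights)
    x = means[k] + np.sqrt(0.2) * rng.standard_normal(N)
    y = H @ x + np.sqrt(noise_var) * rng.standard_normal(M)
    sq_err += np.sum((x - conditional_mean(y)) ** 2)
mmse_mc = sq_err / trials
print(f"Monte Carlo MMSE: {mmse_mc:.3f}")
```

Analytical upper and lower bounds of the kind the paper develops would bracket this Monte Carlo value without any sampling.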