Results 11–20 of 167
Exploiting Statistical Dependencies in Sparse Representations for Signal Recovery
, 2012
Abstract

Cited by 24 (6 self)
Signal modeling lies at the core of numerous signal and image processing applications. A recent approach that has drawn considerable attention is sparse representation modeling, in which the signal is assumed to be generated as a combination of a few atoms from a given dictionary. In this work we consider a Bayesian setting and go beyond the classic assumption of independence between the atoms. The main goal of this paper is to introduce a statistical model that takes such dependencies into account and show how this model can be used for sparse signal recovery. We follow the suggestion of two recent works and assume that the sparsity pattern is modeled by a Boltzmann machine, a commonly used graphical model. For general dependency models, exact MAP and MMSE estimation of the sparse representation becomes computationally complex. To simplify the computations, we propose greedy approximations of the MAP and MMSE estimators. We then consider a special case in which exact MAP is feasible, by assuming that the dictionary is unitary and the dependency model corresponds to a certain sparse graph. Exploiting this structure, we develop an efficient message passing algorithm that recovers the underlying signal. When the model parameters defining the underlying graph are unknown, we suggest an algorithm that learns these parameters directly from the data, leading to an iterative scheme for adaptive sparse signal recovery. The effectiveness of our approach is demonstrated on real-life signals (patches of natural images), where we compare the denoising performance to that of previous recovery methods that do not exploit the statistical dependencies.
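As a rough illustration of the Boltzmann-machine sparsity prior described above, the sketch below scores a sparsity pattern by its unnormalized log-probability and greedily activates atoms. The interaction matrix `W`, bias `b`, and the data-free greedy rule are illustrative stand-ins; the paper's greedy MAP/MMSE estimators also score the data fit under the dictionary.

```python
import numpy as np

def bm_log_prior(s, W, b):
    """Unnormalized log-probability of a sparsity pattern s in {-1,+1}^m
    under a Boltzmann machine: log p(s) = s^T W s / 2 + b^T s + const."""
    return 0.5 * s @ W @ s + b @ s

def greedy_pattern_map(W, b, k):
    """Greedily activate the k atoms giving the largest increase in the
    Boltzmann-machine log-prior, starting from the all-inactive pattern."""
    m = len(b)
    s = -np.ones(m)
    for _ in range(k):
        base = bm_log_prior(s, W, b)
        gains = []
        for i in range(m):
            if s[i] < 0:                     # atom i still inactive
                t = s.copy()
                t[i] = 1.0
                gains.append((bm_log_prior(t, W, b) - base, i))
        _, best = max(gains)
        s[best] = 1.0
    return s
```

With `W = 0` the prior factorizes and the greedy rule simply picks the atoms with the largest biases; a nonzero `W` makes atoms encourage or inhibit each other, which is exactly the dependency structure the paper exploits.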
Phase transitions for greedy sparse approximation algorithms, 2009 (available at arXiv)
Abstract

Cited by 23 (10 self)
This paper applies the phase transition framework to three greedy algorithms: CoSaMP, iterative hard thresholding, and subspace pursuit. Index Terms—Compressed sensing, greedy algorithms, CoSaMP, iterative hard thresholding, subspace pursuit, sparsity, sparse approximation, sparse solutions to underdetermined systems, restricted isometry property, restricted isometry constants, phase transitions, convex relaxation, random matrices, Gaussian matrices, Wishart matrices, singular values of random matrices, eigenvalues of random matrices.
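Of the three greedy algorithms compared, iterative hard thresholding is the simplest to sketch: alternate a gradient step on the least-squares residual with keeping only the k largest-magnitude coefficients. The step size and iteration count below are illustrative choices, not the paper's.

```python
import numpy as np

def iht(A, y, k, iters=300, step=None):
    """Iterative hard thresholding: take a gradient step on ||y - Ax||^2,
    then keep only the k largest-magnitude entries of the iterate."""
    if step is None:
        step = 0.9 / np.linalg.norm(A, 2) ** 2  # safe step below 1/||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))
        keep = np.argsort(np.abs(x))[-k:]       # indices of k largest entries
        pruned = np.zeros_like(x)
        pruned[keep] = x[keep]
        x = pruned
    return x
```

On a noiseless Gaussian test problem with a handful of nonzeros, this recovers the signal to high accuracy; the phase transition framework maps out exactly when such recovery succeeds as the sparsity and undersampling ratios vary.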
A Non-Uniform Sampler for Wideband Spectrally-Sparse Environments
, 2009
Abstract

Cited by 20 (7 self)
We present the first custom integrated circuit implementation of the compressed sensing based non-uniform sampler (NUS). By sampling signals non-uniformly, the average sample rate can be more than an order of magnitude lower than the Nyquist rate, provided that these signals have a relatively low information content as measured by the sparsity of their spectrum. The hardware design combines a wideband Indium-Phosphide (InP) heterojunction bipolar transistor (HBT) sample-and-hold with a commercial off-the-shelf (COTS) analog-to-digital converter (ADC) to digitize an 800 MHz to 2 GHz band (having 100 MHz of non-contiguous spectral content) at an average sample rate of 236 Msps. Signal reconstruction is performed via a nonlinear compressed sensing algorithm, and an efficient GPU implementation is discussed. Measured bit-error-rate (BER) data for a GSM channel is presented, and comparisons to a conventional wideband 4.4 Gsps ADC are made.
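A toy model of the sampling strategy, ignoring the hardware's timing constraints: draw sample instants at random from the Nyquist grid so that the average rate sits well below the Nyquist rate. The function and parameter names here are hypothetical, not from the paper.

```python
import numpy as np

def nus_pattern(f_nyq, T, f_avg, rng):
    """Choose sample instants at random from the Nyquist grid so the
    average rate f_avg is far below the Nyquist rate f_nyq (a toy model;
    the actual chip also enforces constraints on sample spacing)."""
    n_grid = int(f_nyq * T)      # Nyquist-rate sample slots in [0, T)
    n_keep = int(f_avg * T)      # samples actually taken
    idx = np.sort(rng.choice(n_grid, size=n_keep, replace=False))
    return idx / f_nyq           # sample times in seconds
```

For the band in the abstract (up to 2 GHz, so a 4 GHz Nyquist rate) and a 236 Msps average rate, only about 6% of the Nyquist-grid slots are ever sampled.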
Block-Based Compressed Sensing of Images and Video
 Foundations and Trends in Signal Processing
, 2012
Abstract

Cited by 20 (4 self)
A number of techniques for the compressed sensing of imagery are surveyed. Various imaging media are considered, including still images, motion video, as well as multiview image sets and multiview video. A particular emphasis is placed on block-based compressed sensing due to its advantages in terms of both lightweight reconstruction complexity as well as a reduced memory burden for the random-projection measurement operator. For multiple-image scenarios, including video and multiview imagery, motion and disparity compensation is employed to exploit frame-to-frame redundancies due to object motion and parallax, resulting in residual frames which are more compressible and thus more easily reconstructed from compressed-sensing measurements. Extensive experimental comparisons evaluate various prominent reconstruction algorithms for still-image, motion-video, and multiview scenarios in terms of both reconstruction quality as well as computational complexity.
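The memory advantage mentioned above comes from reusing one small random projection for every block. A minimal sketch, with an assumed block size `B` and subsampling ratio `r`, and image dimensions assumed divisible by `B`:

```python
import numpy as np

def block_cs_measure(img, B, r, rng):
    """Apply one (r*B^2) x B^2 random projection to every non-overlapping
    B x B block, so only a small measurement operator is ever stored."""
    H, W = img.shape                      # assumes H and W divisible by B
    M = max(1, int(r * B * B))            # measurements per block
    Phi = rng.standard_normal((M, B * B)) / np.sqrt(M)
    blocks = (img.reshape(H // B, B, W // B, B)
                 .transpose(0, 2, 1, 3)   # gather each B x B block
                 .reshape(-1, B * B))
    return Phi, blocks @ Phi.T            # one measurement row per block
```

Because `Phi` has only `M * B * B` entries regardless of image size, the storage cost stays fixed as images grow, which is the reduced memory burden the survey highlights.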
Analysis operator learning and its application to image reconstruction
 IEEE Trans. Image Process
, 2013
Random access compressed sensing for energy-efficient underwater sensor networks
 IEEE Journal on Selected Areas in Communications
, 2011
Abstract

Cited by 16 (1 self)
Inspired by the theory of compressed sensing and employing random channel access, we propose a distributed energy-efficient sensor network scheme denoted by Random Access Compressed Sensing (RACS). The proposed scheme is suitable for long-term deployment of large underwater networks, in which saving energy and bandwidth is of crucial importance. During each frame, a randomly chosen subset of nodes participates in the sensing process and then shares the channel using random access. Due to the nature of random access, packets may collide at the fusion center. To account for the packet loss that occurs due to collisions, the network design employs the concept of sufficient sensing probability. With this probability, sufficiently many data packets – as required for field reconstruction based on compressed sensing – are received. The RACS scheme prolongs network lifetime while employing a simple and distributed scheme which eliminates the need for scheduling. Index Terms—Sensor networks, compressed sensing, wireless communications, underwater acoustic networks, random access.
Online Group-Structured Dictionary Learning
Abstract

Cited by 15 (3 self)
We develop a dictionary learning method which is (i) online, (ii) enables overlapping group structures with (iii) non-convex sparsity-inducing regularization, and (iv) handles the partially observable case. Structured sparsity and the related group norms have recently gained widespread attention in group-sparsity regularized problems in the case when the dictionary is assumed to be known and fixed. However, when the dictionary also needs to be learned, the problem is much more difficult. Only a few methods have been proposed to solve this problem, and they can handle at most two of these four desirable properties. To the best of our knowledge, our proposed method is the first one that possesses all of these properties. We investigate several interesting special cases of our framework, such as online structured sparse non-negative matrix factorization, and demonstrate the efficiency of our algorithm with several numerical experiments.
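For intuition about the group norms mentioned above, here is the proximal (block soft-thresholding) step for the simplest case of non-overlapping groups with a convex group norm; the overlapping, non-convex setting treated in the paper requires additional machinery.

```python
import numpy as np

def group_soft_threshold(a, groups, lam):
    """Prox of lam * sum_g ||a_g||_2 for non-overlapping groups:
    each group is either zeroed out entirely or shrunk toward zero
    while keeping its direction."""
    out = np.zeros_like(a, dtype=float)
    for g in groups:
        ng = np.linalg.norm(a[g])
        if ng > lam:                       # group survives, shrunk by lam
            out[g] = (1.0 - lam / ng) * a[g]
    return out
```

The all-or-nothing behavior per group is what makes these norms select structured supports rather than isolated coefficients.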
Optimal quantization for compressive sensing under message passing reconstruction
 in Proc. IEEE Int. Symp. Inf. Theory
, 2011
Abstract

Cited by 13 (2 self)
We consider the optimal quantization of compressive sensing measurements along with estimation from quantized samples using generalized approximate message passing (GAMP). GAMP is an iterative reconstruction scheme inspired by the belief propagation algorithm on bipartite graphs, which generalizes approximate message passing (AMP) to arbitrary measurement channels. Its asymptotic error performance can be accurately predicted and tracked through the state evolution formalism. We utilize these results to design mean-square optimal scalar quantizers for GAMP signal reconstruction and empirically demonstrate the superior error performance of the resulting quantizers.
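For contrast with the paper's approach, the classical mean-square-optimal scalar quantizer can be fitted to samples by plain Lloyd iteration; the paper instead optimizes the quantizer against GAMP's state-evolution MSE rather than the raw measurement distribution.

```python
import numpy as np

def lloyd_max(samples, levels, iters=50):
    """Lloyd iteration for an MSE-optimal scalar quantizer: alternate
    nearest-level assignment and centroid (conditional-mean) update."""
    c = np.quantile(samples, np.linspace(0.05, 0.95, levels))  # spread init
    for _ in range(iters):
        idx = np.argmin(np.abs(samples[:, None] - c[None, :]), axis=1)
        for j in range(levels):
            if np.any(idx == j):            # skip empty cells
                c[j] = samples[idx == j].mean()
    return np.sort(c)
```

On standard-normal data with four levels this lands near the known optimal Gaussian quantizer (reconstruction points around ±0.45 and ±1.5); the paper's point is that this "obvious" quantizer is not the best choice once reconstruction is done by GAMP.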
A proximal-gradient homotopy method for the sparse least-squares problem
 SIAM Journal on Optimization
, 2013
Abstract

Cited by 12 (2 self)
We consider solving the ℓ1-regularized least-squares (ℓ1-LS) problem in the context of sparse recovery, for applications such as compressed sensing. The standard proximal gradient method, also known as iterative soft-thresholding when applied to this problem, has low computational cost per iteration but a rather slow convergence rate. Nevertheless, when the solution is sparse, it often exhibits fast linear convergence in the final stage. We exploit the local linear convergence using a homotopy continuation strategy, i.e., we solve the ℓ1-LS problem for a sequence of decreasing values of the regularization parameter, and use an approximate solution at the end of each stage to warm start the next stage. Although similar strategies have been studied in the literature, there has been no theoretical analysis of their global iteration complexity. This paper shows that under suitable assumptions for sparse recovery, the proposed homotopy strategy ensures that all iterates along the homotopy solution path are sparse. Therefore the objective function is effectively strongly convex along the solution path, and geometric convergence at each stage can be established. As a result, the overall iteration complexity of our method is O(log(1/ε)) for finding an ε-optimal solution, which can be interpreted as a global geometric rate of convergence. We also present empirical results to support our theoretical analysis.
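The homotopy continuation strategy described above can be sketched in a few lines: run ISTA for a geometrically decreasing sequence of regularization parameters, warm-starting each stage from the previous solution. The stage count and schedule below are illustrative choices, not the paper's tuned parameters.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: prox of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_homotopy(A, y, lam_target, n_stages=6, iters=100):
    """ISTA on 0.5*||Ax - y||^2 + lam*||x||_1, with a homotopy over a
    geometrically decreasing sequence of lam values and warm starts."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    lam_max = np.abs(A.T @ y).max()      # above this the solution is x = 0
    x = np.zeros(A.shape[1])
    for lam in np.geomspace(lam_max, lam_target, n_stages):
        for _ in range(iters):
            x = soft(x - A.T @ (A @ x - y) / L, lam / L)
    return x
```

Warm-starting keeps every stage's iterates sparse and near the next stage's solution, which is precisely the mechanism behind the paper's geometric-convergence argument.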
Directional sparsity in optimal control of partial differential equations
 SIAM J. Control Optim
, 2012
Abstract

Cited by 12 (1 self)
We study optimal control problems in which controls with certain sparsity patterns are preferred. For time-dependent problems the approach can be used to find locations for control devices that allow controlling the system in an optimal way over the entire time interval. The approach uses a non-differentiable cost functional to implement the sparsity requirements; additionally, bound constraints for the optimal controls can be included. We study the resulting problem in appropriate function spaces and present two solution methods of Newton type, based on different formulations of the optimality system. Using elliptic and parabolic test problems we examine the sparsity properties of the optimal controls and analyze the behavior of the proposed solution algorithms.
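A representative form of such a problem, written as a sketch (the paper's exact functional and norms may differ): the non-differentiable term penalizes, for each spatial point, the whole time trajectory of the control, so that the optimal control vanishes identically at most locations.

\[
\min_{u_a \le u \le u_b} \;\; \frac{1}{2}\,\| S u - y_d \|_{L^2}^2
\;+\; \frac{\alpha}{2}\,\| u \|_{L^2(\Omega \times (0,T))}^2
\;+\; \beta \int_{\Omega} \| u(x,\cdot) \|_{L^2(0,T)} \, dx ,
\]

where \(S\) denotes the control-to-state map of the PDE and \(y_d\) a desired state. The integral term is the directional (spatial) sparsity penalty: it is an \(L^1\)-type norm over space of an \(L^2\) norm over time, and the spatial support of the minimizer identifies good actuator locations.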