Results 1–10 of 36
K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation
2006
Cited by 930 (41 self)
Abstract
In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method, the K-SVD algorithm, generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data.
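The dictionary-update step described in the abstract (a rank-one refit of each atom jointly with its active coefficients) can be sketched as follows. This is an illustrative helper, not code from the paper; the sparse-coding stage is assumed to have been run separately with any pursuit method, producing the codes `X`:

```python
import numpy as np

def ksvd_dictionary_update(Y, D, X):
    """One K-SVD dictionary-update sweep (illustrative sketch).

    Y: (n, N) training signals; D: (n, m) dictionary with unit-norm columns;
    X: (m, N) sparse codes from any pursuit (e.g., OMP). Each atom and its
    nonzero coefficients are updated jointly via a rank-1 SVD approximation.
    """
    for k in range(D.shape[1]):
        omega = np.flatnonzero(X[k])           # signals that use atom k
        if omega.size == 0:
            continue
        X[k, omega] = 0.0                      # strip atom k's contribution
        E = Y[:, omega] - D @ X[:, omega]      # residual restricted to omega
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                      # best rank-1 atom ...
        X[k, omega] = s[0] * Vt[0]             # ... and matching coefficients
    return D, X
```

Because each atom update solves its restricted rank-one problem exactly, the representation error ‖Y − DX‖ cannot increase over a sweep, which is the source of the accelerated convergence mentioned above.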
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
2007
Cited by 423 (37 self)
Abstract
A full-rank matrix A ∈ ℝ^{n×m} with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena; in particular, the existence of easily verifiable conditions under which optimally sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energize research on such signal and image processing problems, to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical ...
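As an illustration of the "concrete, effective computational methods" the survey reviews, here is a minimal Orthogonal Matching Pursuit sketch for approximating the sparsest solution of Ax = b. The function name and structure are illustrative, and unit-norm columns of A are assumed:

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal Matching Pursuit sketch: greedily build a solution of
    A x ≈ b with at most k nonzeros (columns of A assumed unit-norm)."""
    x = np.zeros(A.shape[1])
    support, r = [], b.astype(float).copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))     # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ coef            # residual after re-projection
    x[support] = coef
    return x
```

Under the coherence conditions discussed in the survey, such greedy pursuits provably recover the unique sparsest solution; basis pursuit replaces the greedy loop with an ℓ1-minimization program.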
Dictionaries for Sparse Representation Modeling
Cited by 108 (3 self)
Abstract
Sparse and redundant representation modeling of data assumes an ability to describe signals as linear combinations of a few atoms from a prespecified dictionary. As such, the choice of the dictionary that sparsifies the signals is crucial for the success of this model. In general, a proper dictionary can be chosen in one of two ways: (i) building a sparsifying dictionary based on a mathematical model of the data, or (ii) learning a dictionary to perform best on a training set. In this paper we describe the evolution of these two paradigms. As manifestations of the first approach, we cover topics such as wavelets, wavelet packets, contourlets, and curvelets, all aiming to exploit 1-D and 2-D mathematical models for constructing effective dictionaries for signals and images. Dictionary learning takes a different route, attaching the dictionary to a set of examples it is supposed to serve. From the seminal work of Field and Olshausen, through the MOD, the K-SVD, the Generalized PCA, and others, this paper surveys the various options such training has to offer, up to the most recent contributions and structures.
On the uniqueness of overcomplete dictionaries, and a practical way to retrieve them
2006
Dictionary Learning for Sparse Approximations with the Majorization Method
Cited by 50 (10 self)
Abstract
In order to find sparse approximations of signals, an appropriate generative model for the signal class has to be known. If the model is unknown, it can be adapted using a set of training samples. This paper presents a novel method for dictionary learning and extends the learning problem by introducing different constraints on the dictionary. The convergence of the proposed method to a fixed point is guaranteed, unless the accumulation points form a continuum. This holds for different sparsity measures. The majorization method is an optimization method that substitutes the original objective function with a surrogate function that is updated in each optimization step. This method has been used successfully in sparse approximation and statistical estimation (e.g., Expectation-Maximization (EM)) problems. This paper shows that the majorization method can be used for the dictionary learning problem as well. The proposed method is compared with other methods on both synthetic and real data, and different constraints on the dictionary are compared. Simulations show the advantages of the proposed method over other currently available dictionary learning methods, not only in terms of average performance but also in terms of computation time.
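The majorization idea mentioned above can be illustrated on the sparse-coding subproblem alone: majorizing the coupled quadratic data term by a separable surrogate turns each update into a closed-form soft-thresholding step (the classic ISTA iteration). This is a sketch of the principle, not the paper's dictionary-learning algorithm:

```python
import numpy as np

def ista(A, b, lam, n_iter=200):
    """Majorization-minimization sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Each iteration majorizes the quadratic term by a separable surrogate
    with curvature L = ||A||_2^2; minimizing the surrogate reduces to a
    gradient step followed by soft-thresholding.
    """
    L = np.linalg.norm(A, 2) ** 2               # bounds the curvature of 0.5||Ax-b||^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L           # gradient step on the surrogate
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x
```

Because each surrogate upper-bounds the objective and touches it at the current iterate, the objective value is non-increasing, which is the same mechanism the paper exploits for its dictionary-side updates.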
Blind compressed sensing
IEEE Trans. Inf. Theory
2011
Cited by 15 (3 self)
Abstract
The fundamental principle underlying compressed sensing is that a signal which is sparse under some basis representation can be recovered from a small number of linear measurements. However, prior knowledge of the sparsity basis is essential for the recovery process. This work introduces the concept of blind compressed sensing, which avoids the need to know the sparsity basis in both the sampling and the recovery process. We suggest three possible constraints on the sparsity basis that can be added to the problem in order to guarantee a unique solution. For each constraint, we prove conditions for uniqueness and suggest a simple method to retrieve the solution. We demonstrate through simulations that our methods can achieve results similar to those of standard compressed sensing, which relies on prior knowledge of the sparsity basis, as long as the signals are sparse enough. This offers a general sampling and reconstruction system that fits all sparse signals, regardless of the sparsity basis, under the conditions and constraints presented in this work.
Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling
Cited by 12 (1 self)
Abstract
We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary to provide good approximations of the training samples using sparse synthesis coefficients. This famous sparse model has a less well known counterpart, in analysis form, called the cosparse analysis model. In this new model, signals are characterised by their parsimony in a transformed domain using an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimisation framework based on L1 optimisation. The reason for introducing a constraint in the optimisation framework is to exclude trivial solutions. Although there is no definitive answer as to which constraint is the most relevant, we investigate some conventional constraints in the model adaptation field and use the uniformly normalised tight frame (UNTF) for this purpose. We then derive a practical learning algorithm, based on projected subgradients and the Douglas-Rachford splitting technique, and demonstrate its ability to robustly recover a ground-truth analysis operator when provided with a clean training set of sufficient size. We also find an analysis operator for images using some noisy cosparse signals, which is indeed a more realistic experiment. As the derived optimisation problem is not a convex program, we often find a local minimum using such variational methods. For two different settings, we provide preliminary theoretical support for the well-posedness of the learning problem, which can be practically used to test the local identifiability conditions of learnt operators.
Dictionary optimization for block-sparse representations
IEEE Trans. Signal Process.
2010
Cited by 9 (1 self)
Abstract
Recent work has demonstrated that using a carefully designed dictionary instead of a predefined one can improve the sparsity in jointly representing a class of signals. This has motivated the derivation of learning methods for designing a dictionary which leads to the sparsest representation for a given set of signals. In some applications, the signals of interest can have further structure, so that they can be well approximated by a union of a small number of subspaces (e.g., face recognition and motion segmentation). This implies the existence of a dictionary which enables block-sparse representations of the input signals once its atoms are properly sorted into blocks. In this paper, we propose an algorithm for learning a block-sparsifying dictionary for a given set of signals. We do not require prior knowledge of the association of signals into groups (subspaces). Instead, we develop a method that automatically detects the underlying block structure. This is achieved by iteratively alternating between updating the block structure of the dictionary and updating the dictionary atoms to better fit the data. Our experiments show that for block-sparse data the proposed algorithm significantly improves the dictionary recovery ability and lowers the representation error compared to dictionary learning methods that do not employ block structure.
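Once a block structure is fixed, the block-sparse coding that such a dictionary enables can be sketched with a greedy block variant of matching pursuit. This is an illustrative sketch under an assumed known partition into blocks; the paper's contribution is precisely that it also learns this partition:

```python
import numpy as np

def block_omp(D, blocks, y, k):
    """Greedy block-sparse coding sketch: select k blocks of atoms whose
    joint correlation with the residual is largest, then re-fit by least
    squares. blocks: list of index arrays partitioning the columns of D."""
    chosen, r = [], y.astype(float).copy()
    x = np.zeros(D.shape[1])
    for _ in range(k):
        scores = [np.linalg.norm(D[:, b].T @ r) for b in blocks]
        j = int(np.argmax(scores))              # block best matching the residual
        if j not in chosen:
            chosen.append(j)
        idx = np.concatenate([blocks[j] for j in chosen])
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef                # residual after block re-fit
    x[idx] = coef
    return x
```

Selecting whole blocks rather than individual atoms is what lets a properly sorted dictionary exploit the union-of-subspaces structure described above.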
Improved M-FOCUSS algorithm with overlapping blocks for locally smooth sparse signals
IEEE Trans. Signal Process.
Cited by 8 (0 self)
Abstract
The FOCal Underdetermined System Solver (FOCUSS) algorithm has already found many applications in signal processing and data analysis, whereas the regularized M-FOCUSS algorithm has been recently proposed by Cotter et al. for finding sparse solutions to an underdetermined system of linear equations with multiple measurement vectors. In this paper, we propose three modifications to the M-FOCUSS algorithm to make it more efficient for sparse and locally smooth solutions. First, motivated by the simultaneously autoregressive (SAR) model, we incorporate an additional weighting (smoothing) matrix into the Tikhonov regularization term. Next, the entire set of measurement vectors is divided into blocks, and the solution is updated sequentially, based on the overlapping of data blocks. The last modification is based on an alternating minimization technique to provide data-driven (simultaneous) estimation of the regularization parameter with the generalized cross-validation (GCV) approach. Finally, the simulation results demonstrating the benefits of the proposed modifications support the analysis.
Index Terms: FOCal Underdetermined System Solver (FOCUSS), generalized cross-validation (GCV), smooth signals, sparse solutions, underdetermined systems.
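For reference, the core FOCUSS reweighting that these variants build on can be sketched for a single measurement vector, with no regularization or smoothing. This is the basic iteration only, not the proposed regularized M-FOCUSS:

```python
import numpy as np

def focuss(A, b, n_iter=30):
    """Basic FOCUSS sketch (single measurement vector, unregularized):
    each step computes a re-weighted minimum-norm solution of A x = b,
    with weights taken from the previous iterate so that energy
    progressively concentrates on a few entries."""
    x = np.ones(A.shape[1])                       # neutral starting weights
    for _ in range(n_iter):
        W2 = np.diag(x ** 2)                      # squared weights from last iterate
        x = W2 @ A.T @ np.linalg.pinv(A @ W2 @ A.T) @ b
    return x
```

Each iterate remains a feasible solution of A x = b (whenever the weighted Gram matrix has full rank), so the algorithm trades the minimum-ℓ2-norm solution for a progressively sparser one; the paper's modifications reshape the weighting to also favor local smoothness.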
Learning sparse dictionaries for sparse signal representation
IEEE Trans. Signal Process., 2008 (submitted)
Cited by 7 (1 self)
Abstract
An efficient and flexible dictionary structure is proposed for sparse and redundant signal representation. The structure is based on a sparsity model of the dictionary atoms over a base dictionary. The sparse dictionary provides efficient forward and adjoint operators, has a compact representation, and can be effectively trained from given example data. In this way, the sparse structure bridges the gap between implicit dictionaries, which have efficient implementations yet lack adaptability, and explicit dictionaries, which are fully adaptable but inefficient and costly to deploy. In this report we discuss the advantages of sparse dictionaries and present an efficient algorithm for training them. We demonstrate the advantages of the proposed structure for 3-D image denoising.
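The structure amounts to D = B·A with a fixed base dictionary B and a column-sparse matrix A, so D and its adjoint can be applied without ever forming B·A explicitly. A minimal sketch with assumed names follows; a real implementation would use a fast transform for B and a true sparse-matrix type for A:

```python
import numpy as np

def make_sparse_dict(n, atoms_per_column=3, seed=0):
    """Build a column-sparse A for a 'sparse dictionary' D = B @ A
    (illustrative: A is stored densely here, with few nonzeros per column
    standing in for a true sparse matrix)."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n))
    for j in range(n):
        rows = rng.choice(n, size=atoms_per_column, replace=False)
        A[rows, j] = rng.standard_normal(atoms_per_column)
    return A

def apply_D(B, A, x):
    # forward operator: one sparse multiply, then one base-dictionary multiply
    return B @ (A @ x)

def apply_Dt(B, A, y):
    # adjoint operator, likewise without forming B @ A
    return A.T @ (B.T @ y)
```

With only a few nonzeros per column of A, both operators cost roughly one application of B plus a handful of scaled atom additions, which is the efficiency/adaptability trade-off the report describes.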