Results 1–10 of 50
M.: Parametric Dictionary Design for Sparse Coding
IEEE Trans. on Signal Processing, 2009
Cited by 19 (5 self)
Abstract—This paper introduces a new dictionary design method for sparse coding of a class of signals. It has been shown that one can sparsely approximate some natural signals using an overcomplete set of parametric functions, e.g. [1], [2]. A problem in using these parametric dictionaries is how to choose the parameters. In practice, these parameters have been chosen by an expert or through a set of experiments. In the sparse approximation context, it has been shown that an incoherent dictionary is appropriate for sparse approximation methods. In this paper we first characterize the dictionary design problem, subject to a constraint on the dictionary. We then briefly explain that equiangular tight frames have minimum coherence. The complexity of the problem does not allow it to be solved exactly, so we introduce a practical method to find an approximate solution. Some experiments show the advantages one gains by using these dictionaries.
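Note: the mutual coherence this design method minimises, and the Welch bound that equiangular tight frames attain, are easy to compute directly; a minimal numpy sketch (the dictionary dimensions are arbitrary illustrative choices, not taken from the paper):

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct unit-norm atoms."""
    Dn = D / np.linalg.norm(D, axis=0)   # normalize columns (atoms)
    G = np.abs(Dn.T @ Dn)                # absolute Gram matrix
    np.fill_diagonal(G, 0.0)             # ignore self-correlations
    return G.max()

def welch_bound(d, K):
    """Lower bound on the coherence of any d x K dictionary;
    attained exactly by equiangular tight frames."""
    return np.sqrt((K - d) / (d * (K - 1)))

# a random overcomplete dictionary sits well above the bound
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
print(mutual_coherence(D), welch_bound(16, 32))
```

Comparing the two printed numbers makes the motivation for principled design concrete: random dictionaries are far from the minimum-coherence bound.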
Blind compressed sensing
IEEE Trans. Inf. Theory, 2011
Cited by 15 (3 self)
The fundamental principle underlying compressed sensing is that a signal, which is sparse under some basis representation, can be recovered from a small number of linear measurements. However, prior knowledge of the sparsity basis is essential for the recovery process. This work introduces the concept of blind compressed sensing, which avoids the need to know the sparsity basis in both the sampling and the recovery process. We suggest three possible constraints on the sparsity basis that can be added to the problem in order to guarantee a unique solution. For each constraint, we prove conditions for uniqueness, and suggest a simple method to retrieve the solution. We demonstrate through simulations that our methods can achieve results similar to those of standard compressed sensing, which relies on prior knowledge of the sparsity basis, as long as the signals are sparse enough. This offers a general sampling and reconstruction system that fits all sparse signals, regardless of the sparsity basis, under the conditions and constraints presented in this work.
Online Group-Structured Dictionary Learning
Cited by 14 (3 self)
We develop a dictionary learning method which (i) is online, (ii) enables overlapping group structures with (iii) nonconvex sparsity-inducing regularization and (iv) handles the partially observable case. Structured sparsity and the related group norms have recently gained widespread attention in group-sparsity regularized problems in the case when the dictionary is assumed to be known and fixed. However, when the dictionary also needs to be learned, the problem is much more difficult. Only a few methods have been proposed to solve this problem, and they can handle at most two of these four desirable properties. To the best of our knowledge, our proposed method is the first one that possesses all of these properties. We investigate several interesting special cases of our framework, such as online, structured, sparse nonnegative matrix factorization, and demonstrate the efficiency of our algorithm with several numerical experiments.
Simultaneous codeword optimization (SimCO) for dictionary update and learning
IEEE Trans. Signal Process., 2012
Cited by 13 (7 self)
Abstract—We consider the data-driven dictionary learning problem. The goal is to seek an overcomplete dictionary from which every training signal can be best approximated by a linear combination of only a few codewords. This task is often achieved by iteratively executing two operations: sparse coding and dictionary update. In the literature, there are two benchmark mechanisms to update a dictionary. The first approach, for example the MOD algorithm, is characterized by searching for the optimal codewords while fixing the sparse coefficients. In the second approach, represented by the K-SVD method, one codeword and the related sparse coefficients are simultaneously updated while all other codewords and coefficients remain unchanged. We propose a novel framework that generalizes the aforementioned two methods. The unique feature of our approach is that one can update an arbitrary set of codewords and the corresponding sparse coefficients simultaneously: when the sparse coefficients are fixed, the underlying optimization problem is the same as that in the MOD algorithm; when only one codeword is selected for update, the proposed algorithm can be proved equivalent to the K-SVD method; and, more importantly, our method allows all codewords and all sparse coefficients to be updated simultaneously, hence the term simultaneous codeword optimization (SimCO). Under the proposed framework, we design two algorithms, namely primitive SimCO and regularized SimCO. Simulations demonstrate that our approach outperforms the benchmark K-SVD in terms of both learning performance and running speed.
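Note: the MOD-style update that SimCO generalises reduces to a single least-squares solve; a minimal numpy sketch (this is the classical MOD step, not the authors' SimCO code, and the demo dimensions are illustrative):

```python
import numpy as np

def mod_update(Y, A):
    """MOD dictionary update with the sparse codes A held fixed:
    argmin_D ||Y - D A||_F^2  =>  D = Y A^T (A A^T)^+  (pinv for safety)."""
    return Y @ A.T @ np.linalg.pinv(A @ A.T)

# tiny noiseless demo: the least-squares dictionary reproduces Y exactly
rng = np.random.default_rng(1)
D_true = rng.standard_normal((8, 12))
A = rng.standard_normal((12, 50)) * (rng.random((12, 50)) < 0.2)  # sparse codes
Y = D_true @ A
D_hat = mod_update(Y, A)   # zero-residual fit: D_hat @ A equals Y
```

In practice the updated atoms are usually renormalised to unit norm (with the codes rescaled accordingly); that convention is omitted here to keep the least-squares identity visible.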
Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling
Cited by 12 (1 self)
We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary to provide good approximations of the training samples using sparse synthesis coefficients. This famous sparse model has a less well known counterpart, in analysis form, called the cosparse analysis model. In this new model, signals are characterised by their parsimony in a transformed domain using an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimisation framework based on ℓ1 optimisation. The reason for introducing a constraint in the optimisation framework is to exclude trivial solutions. Although there is no definitive answer as to which constraint is the most relevant, we investigate some conventional constraints from the model adaptation field and use the uniformly normalised tight frame (UNTF) for this purpose. We then derive a practical learning algorithm, based on projected subgradients and the Douglas-Rachford splitting technique, and demonstrate its ability to robustly recover a ground truth analysis operator when provided with a clean training set of sufficient size. We also find an analysis operator for images using some noisy cosparse signals, which is indeed a more realistic experiment. As the derived optimisation problem is not a convex program, we often find a local minimum using such variational methods. For two different settings, we provide preliminary theoretical support for the well-posedness of the learning problem, which can be practically used to test the local identifiability conditions of learnt operators.
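Note: the UNTF constraint mentioned above is commonly handled by projecting onto the constraint set. A standard heuristic (our sketch, not the authors' exact scheme) alternates between the nearest tight frame, obtained from the polar factor of the SVD, and unit-norm rows:

```python
import numpy as np

def nearest_untf(Omega, n_iters=500):
    """Heuristic projection of a p x d analysis operator towards the set of
    uniformly normalised tight frames (UNTF), by alternating:
      (a) nearest tight frame, Omega^T Omega = (p/d) I, via the polar factor;
      (b) row normalisation to unit l2 norm."""
    p, d = Omega.shape
    for _ in range(n_iters):
        U, _, Vt = np.linalg.svd(Omega, full_matrices=False)
        Omega = np.sqrt(p / d) * U @ Vt                        # tight-frame step
        Omega = Omega / np.linalg.norm(Omega, axis=1, keepdims=True)  # row norms
    return Omega

# illustrative sizes: a 2x overcomplete operator on 6-dimensional signals
Omega = nearest_untf(np.random.default_rng(2).standard_normal((12, 6)))
```

At a fixed point both properties hold simultaneously, which is exactly the UNTF condition; neither set is convex, so this is a heuristic rather than a true projection.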
Blind calibration for compressed sensing by convex optimization
In Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing
Cited by 12 (6 self)
We consider the problem of calibrating a compressed sensing measurement system under the assumption that the decalibration consists of unknown gains on each measure. We focus on blind calibration, using measures performed on a few unknown (but sparse) signals. A naive formulation of this blind calibration problem, using ℓ1 minimization, is reminiscent of blind source separation and dictionary learning, which are known to be highly non-convex and riddled with local minima. In the considered context, we show that this formulation can in fact be expressed exactly as a convex optimization problem, which can be solved using off-the-shelf algorithms. Numerical simulations demonstrate the effectiveness of the approach even for highly uncalibrated measures, provided a sufficient number of (unknown, but sparse) calibrating signals. We observe that the success/failure of the approach seems to obey sharp phase transitions.
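Note: one way to see the convexity is via the change of variable d = 1/g (assuming strictly positive gains, an extra assumption for this sketch): the constraints A x_i = diag(y_i) d become linear, and ℓ1 minimisation turns into a linear program. The encoding below is ours, not the paper's:

```python
import numpy as np
from scipy.optimize import linprog

def blind_calibrate(A, Y):
    """Blind calibration sketch, assuming positive gains g.
    Model: Y[:, i] = diag(g) A x_i with x_i sparse.  With d = 1/g, solve
        min sum_i ||x_i||_1  s.t.  A x_i = diag(Y[:, i]) d,  sum(d) = m,
    an LP after the split x = u - v with u, v >= 0.  Returns (d, X)."""
    m, n = A.shape
    L = Y.shape[1]
    nvar = 2 * n * L + m                 # layout: [u_0, v_0, ..., u_{L-1}, v_{L-1}, d]
    c = np.zeros(nvar)
    c[: 2 * n * L] = 1.0                 # objective: sum of all |x| entries
    Aeq = np.zeros((m * L + 1, nvar))
    beq = np.zeros(m * L + 1)
    for i in range(L):
        rows = slice(i * m, (i + 1) * m)
        Aeq[rows, 2 * n * i : 2 * n * i + n] = A          #  A u_i
        Aeq[rows, 2 * n * i + n : 2 * n * (i + 1)] = -A   # -A v_i
        Aeq[rows, 2 * n * L :] = -np.diag(Y[:, i])        # -y_i ⊙ d
    Aeq[-1, 2 * n * L :] = 1.0           # sum(d) = m fixes the global scale
    beq[-1] = float(m)
    res = linprog(c, A_eq=Aeq, b_eq=beq, bounds=(0, None), method="highs")
    x = res.x
    d = x[2 * n * L :]
    X = np.stack([x[2 * n * i : 2 * n * i + n] - x[2 * n * i + n : 2 * n * (i + 1)]
                  for i in range(L)], axis=1)
    return d, X
```

The sum(d) = m constraint removes the trivial scale ambiguity (x, d) → (αx, αd); gains are recovered up to that global scale as g = 1/d.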
Noise Aware Analysis Operator Learning for Approximately Cosparse Signals
2012
Cited by 9 (1 self)
This paper investigates analysis operator learning for the recently introduced cosparse signal model, which is a natural analysis complement to the more traditional sparse signal model. Previous work on such analysis operator learning has relied on access to a set of clean training samples. Here we introduce a new learning framework which can use training data that is corrupted by noise and/or is only approximately cosparse. The new model assumes that a p-cosparse signal exists in an epsilon neighborhood of each data point. The operator is assumed to be a uniformly normalized tight frame (UNTF) to exclude some trivial operators. In this setting, an alternating optimization algorithm is introduced to learn a suitable analysis operator.
Local Identification of Overcomplete Dictionaries
arXiv, 2014
Cited by 4 (0 self)
This paper presents the first theoretical results showing that stable identification of overcomplete µ-coherent dictionaries Φ ∈ R^(d×K) is locally possible from training signals with sparsity levels S up to the order O(µ^-2) and signal-to-noise ratios up to O(√d). In particular, the dictionary is recoverable as the local maximum of a new maximization criterion that generalizes the K-means criterion. For this maximization criterion, results for asymptotic exact recovery for sparsity levels up to O(µ^-1) and stable recovery for sparsity levels up to O(µ^-2), as well as signal-to-noise ratios up to O(√d), are provided. These asymptotic results translate to finite sample size recovery results with high probability as long as the sample size N scales as O(K^3 d S ε̃^-2), where the recovery precision ε̃ can go down to the asymptotically achievable precision. Further, to actually find the local maxima of the new criterion, a very simple Iterative Thresholding and K (signed) Means algorithm (ITKM), which has complexity O(dKN) in each iteration, is presented and its local efficiency is demonstrated in several experiments.
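Note: read literally, one ITKM iteration is short enough to sketch in numpy (this is our reading of the description above, not the author's reference implementation; dimensions in the docstring follow the abstract's notation):

```python
import numpy as np

def itkm_step(Y, D, S):
    """One Iterative Thresholding and K signed Means (ITKM) step.
    Y: d x N training signals; D: d x K unit-norm dictionary; S: sparsity.
    Cost per iteration is O(dKN), dominated by the correlation D.T @ Y."""
    C = D.T @ Y                                      # K x N correlations
    Dnew = np.zeros_like(D)
    for n in range(Y.shape[1]):
        # thresholding: the S atoms most correlated (in magnitude) with signal n
        idx = np.argpartition(-np.abs(C[:, n]), S)[:S]
        # signed mean: accumulate the signal onto its selected atoms
        Dnew[:, idx] += np.outer(Y[:, n], np.sign(C[idx, n]))
    norms = np.linalg.norm(Dnew, axis=0)
    norms[norms == 0] = 1.0                          # guard atoms never selected
    return Dnew / norms
```

On exactly 1-sparse, noiseless data this step is a fixed point at the true dictionary, which is the local-maximum behaviour the abstract describes.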
Compressible Dictionary Learning for Fast Sparse Approximations
Cited by 4 (3 self)
By solving a linear inverse problem under a sparsity constraint, one can successfully recover the coefficients, provided such a sparse approximation exists for the proposed class of signals. In this framework the dictionary can be adapted to a given set of signals using dictionary learning methods. The learned dictionary often lacks structures useful for a fast implementation, i.e. fast matrix-vector multiplication, which prevents such a dictionary from being used in real applications or large-scale problems. Structure can be induced on the dictionary throughout the learning process. Examples of such structures are shift-invariance and a multiscale form; these dictionaries can be efficiently implemented using a filter bank. In this paper a well-known structure, called compressibility, is adapted for use in the dictionary learning problem. As a result, the complexity of implementing a compressible dictionary can be reduced by wisely choosing a generative model. Simulations show that the learned dictionary provides sparser approximations without increasing the computational complexity of the algorithms, relative to predesigned fast structured dictionaries.
Learning a common dictionary over a sensor network
In Proc. IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Saint, 2013
Cited by 4 (0 self)
Abstract—We consider the problem of distributed dictionary learning, where a set of nodes is required to collectively learn a common dictionary from noisy measurements. This approach may be useful in several contexts, including sensor networks. Diffusion cooperation schemes have been proposed to solve the distributed linear regression problem. In this work we focus on a diffusion-based adaptive dictionary learning strategy: each node records independent observations and cooperates with its neighbors by sharing its local dictionary. The resulting algorithm corresponds to a distributed alternating optimization. Beyond dictionary learning, this strategy could be adapted to many matrix factorization problems in various settings. We illustrate its efficiency with numerical experiments.
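Note: the diffusion cooperation described above boils down to each node averaging its neighbours' dictionaries after a local adaptation step; a toy numpy sketch of the combine step (the weight matrix and network are our illustrative choices, not the paper's):

```python
import numpy as np

def diffusion_combine(dicts, W):
    """Combine step of a diffusion strategy: node k replaces its local
    dictionary with a W-weighted average over its neighbourhood.
    dicts: list of d x K arrays; W: row-stochastic combination matrix
    with W[k, l] > 0 only when node l is a neighbour of node k."""
    n_nodes = len(dicts)
    return [sum(W[k, l] * dicts[l] for l in range(n_nodes))
            for k in range(n_nodes)]

# three nodes on a line graph: 0 -- 1 -- 2
W = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
dicts = [np.full((4, 6), c) for c in (1.0, 2.0, 3.0)]
combined = diffusion_combine(dicts, W)
```

A shared dictionary is a fixed point of this step (row-stochastic averaging leaves consensus unchanged), which is what makes the alternation with local updates converge towards a common dictionary.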