Results 11–20 of 110
Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling
Abstract

Cited by 12 (1 self)
We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary to provide good approximations of the training samples using sparse synthesis coefficients. This famous sparse model has a less well known counterpart, in analysis form, called the cosparse analysis model. In this new model, signals are characterised by their parsimony in a transformed domain using an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimisation framework based on ℓ1 optimisation. The reason for introducing a constraint in the optimisation framework is to exclude trivial solutions. Although there is no final answer here for which constraint is the most relevant, we investigate some conventional constraints in the model adaptation field and use the uniformly normalised tight frame (UNTF) for this purpose. We then derive a practical learning algorithm, based on projected subgradients and the Douglas-Rachford splitting technique, and demonstrate its ability to robustly recover a ground truth analysis operator when provided with a clean training set of sufficient size. We also find an analysis operator for images, using some noisy cosparse signals, which is indeed a more realistic experiment. As the derived optimisation problem is not a convex program, we often find a local minimum using such variational methods. For two different settings, we provide preliminary theoretical support for the well-posedness of the learning problem, which can be practically used to test the local identifiability conditions of learnt operators.
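As a minimal illustration of the cosparse analysis model described above (a toy sketch, not the paper's learning algorithm), the snippet below applies a finite-difference analysis operator, a classic example of such an operator, to a piecewise-constant signal and counts the zero analysis coefficients, i.e. the cosparsity:

```python
import numpy as np

# Toy illustration (assumed example, not from the paper): a signal x is
# "cosparse" when Omega @ x has many zeros.
n = 8
x = np.array([2.0, 2.0, 2.0, 5.0, 5.0, 5.0, 5.0, 1.0])  # piecewise constant

# First-order finite-difference operator, shape (n-1, n): (Omega @ x)[i] = x[i+1] - x[i]
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

z = Omega @ x                           # analysis coefficients: [0, 0, 3, 0, 0, 0, -4]
cosparsity = np.sum(np.isclose(z, 0))   # number of vanishing coefficients
print(cosparsity)                       # 5 of the 7 coefficients vanish
```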
Sparse representations, compressive sensing and dictionaries for pattern recognition
in Asian Conference on Pattern Recognition (ACPR), 2011
Abstract

Cited by 8 (6 self)
Abstract—In recent years, the theories of Compressive Sensing
Bilinear Generalized Approximate Message Passing
, 2013
Abstract

Cited by 8 (2 self)
Abstract—We extend the generalized approximate message passing (GAMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of compressive sensing, to the generalized-bilinear case, which enables its application to matrix completion, robust PCA, dictionary learning, and related matrix-factorization problems. In the first part of the paper, we derive our Bilinear GAMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm in the high-dimensional limit, where central-limit theorem arguments and Taylor-series approximations apply, and under the assumption of statistically independent matrix entries with known priors. In addition, we propose an adaptive damping mechanism that aids convergence under finite problem sizes, an expectation-maximization (EM)-based method to automatically tune the parameters of the assumed priors, and two rank-selection strategies. In the second part of the paper, we discuss the specializations of EM-BiG-AMP to the problems of matrix completion, robust PCA, and dictionary learning, and present the results of an extensive empirical study comparing EM-BiG-AMP to state-of-the-art algorithms on each problem. Our numerical results, using both synthetic and real-world datasets, demonstrate that EM-BiG-AMP yields excellent reconstruction accuracy (often best in class) while maintaining competitive runtimes and avoiding the need to tune algorithmic parameters.
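The damping mentioned above typically takes the form of a convex combination of the proposed update and the previous iterate; BiG-AMP's schedule is adaptive, but the basic step can be sketched as follows (the fixed `beta` below is an assumption for illustration, not the paper's adaptive rule):

```python
import numpy as np

def damp(proposed, previous, beta=0.3):
    """Damped update: move only a fraction beta toward the proposed estimate,
    which stabilises message-passing iterations at finite problem sizes."""
    return beta * proposed + (1.0 - beta) * previous

x_prev = np.array([1.0, -1.0])
x_prop = np.array([3.0, 1.0])
x_next = damp(x_prop, x_prev)
# 0.3*3 + 0.7*1 = 1.6 and 0.3*1 + 0.7*(-1) = -0.4
print(x_next)
```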
Classification of Human Epithelial Type 2 Cell Indirect Immunofluorescence Images via Codebook Based Descriptors
Abstract

Cited by 8 (2 self)
The Anti-Nuclear Antibody (ANA) clinical pathology test is commonly used to identify the existence of various diseases. A hallmark method for identifying the presence of ANAs is the Indirect Immunofluorescence method on Human Epithelial (HEp-2) cells, due to its high sensitivity and the large range of antigens that can be detected. However, the method suffers from numerous shortcomings, such as being subjective as well as time and labour intensive. Computer Aided Diagnostic (CAD) systems have been developed to address these problems, which automatically classify a HEp-2 cell image into one of its known patterns (e.g., speckled, homogeneous). Most of the existing CAD systems use hand-picked features to represent a HEp-2 cell image, which may only work in limited scenarios. In this paper, we propose a cell classification system comprised of a dual-region codebook-based descriptor, combined with the Nearest Convex Hull Classifier. We evaluate the performance of several variants of the descriptor on two publicly available datasets: the ICPR HEp-2 cell classification contest dataset and the new SNPHEp-2 dataset. To our knowledge, this is the first time codebook-based descriptors are applied and studied in this domain. Experiments show that the proposed system has consistently high performance and is more robust than two recent CAD systems.
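A codebook-based descriptor of the kind discussed above can be sketched, in a much simplified form, as hard assignment of local patch features to their nearest codeword followed by histogram pooling (the codebook size, feature dimension, and random data below are toy assumptions, not the paper's dual-region pipeline):

```python
import numpy as np

# Toy codebook-based descriptor: quantise local features against a codebook
# and pool into a normalised histogram describing one cell image.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(4, 16))      # 4 codewords (toy size), 16-dim features
patches = rng.normal(size=(100, 16))     # local features extracted from one image

# Hard assignment: nearest codeword (squared Euclidean distance) per patch.
d = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
assign = d.argmin(axis=1)

# Pool assignments into a normalised occurrence histogram — the descriptor.
hist = np.bincount(assign, minlength=len(codebook)).astype(float)
hist /= hist.sum()
print(hist.sum())   # 1.0 — the descriptor is a probability histogram
```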
Social sparsity! neighborhood systems enrich structured shrinkage operators
 IEEE Trans. Signal Processing
, 2013
Abstract

Cited by 8 (2 self)
Abstract—Sparse and structured signal expansions on dictionaries can be obtained through explicit modeling in the coefficient domain. The originality of the present article lies in the construction and the study of generalized shrinkage operators, whose goal is to identify structured significance maps and give rise to structured thresholding. These generalize Group Lasso and the previously introduced Elitist Lasso by introducing more flexibility in the coefficient domain modeling, and lead to the notion of social sparsity. The proposed operators are studied theoretically and embedded in iterative thresholding algorithms. Moreover, a link between these operators and a convex functional is established. Numerical studies on both simulated and real signals confirm the benefits of such an approach.
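One simple instance of such a neighbourhood-aware shrinkage operator (a sketch in the windowed-group-lasso spirit, with an assumed window and threshold, not the paper's exact operators) can be written as:

```python
import numpy as np

def social_shrink(x, lam, radius=1):
    """Neighbourhood-weighted shrinkage: each coefficient survives according
    to the energy of its local neighbourhood, not its own magnitude alone."""
    out = np.zeros_like(x)
    for k in range(len(x)):
        lo, hi = max(0, k - radius), min(len(x), k + radius + 1)
        energy = np.linalg.norm(x[lo:hi])          # local neighbourhood energy
        if energy > 0:
            out[k] = x[k] * max(0.0, 1.0 - lam / energy)
    return out

x = np.array([0.0, 3.0, 0.1, 0.0, 0.05, 0.0])
y = social_shrink(x, lam=1.0)
# The small coefficient at index 2 is partly kept because its neighbour at
# index 1 is large; the isolated one at index 4 is killed.
```

This illustrates the "social" effect: significance is decided by a significance map over neighbourhoods rather than coefficient-wise thresholding.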
KERNEL DICTIONARY LEARNING
Abstract

Cited by 8 (5 self)
In this paper, we present dictionary learning methods for sparse and redundant signal representations in high dimensional feature space. Using the kernel method, we describe how well-known dictionary learning approaches such as the method of optimal directions and K-SVD can be made nonlinear. We analyze these constructions and demonstrate their improved performance through several experiments on classification problems. It is shown that nonlinear dictionary learning approaches can provide better discrimination compared to their linear counterparts and kernel PCA, especially when the data is corrupted by noise. Index Terms: kernel methods, dictionary learning, method of optimal directions, K-SVD.
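The key computational point, that the feature-space inner products needed by MOD- or K-SVD-style updates reduce to entries of the kernel Gram matrix once atoms are kept as combinations of mapped training samples, can be sketched as follows (the RBF kernel choice and all sizes are assumptions for illustration):

```python
import numpy as np

# Sketch of the kernel trick behind kernel dictionary learning: atoms are
# represented as D = Phi(Y) @ A, so inner products <Phi(y_i), d_j> reduce to
# entries of (K @ A) with K[i, j] = k(y_i, y_j) — no explicit feature map.
rng = np.random.default_rng(1)
Y = rng.normal(size=(5, 20))          # 20 training samples in R^5

def rbf_gram(Y, gamma=0.5):
    """Gram matrix of the (assumed) RBF kernel over the columns of Y."""
    sq = ((Y[:, :, None] - Y[:, None, :]) ** 2).sum(axis=0)
    return np.exp(-gamma * sq)

K = rbf_gram(Y)                        # (20, 20), symmetric PSD
A = rng.normal(size=(20, 8))           # coefficients defining 8 feature-space atoms

correlations = K @ A                   # <Phi(y_i), d_j> for all i, j
print(correlations.shape)              # (20, 8)
```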
Sparsity Based Feedback Design: A New Paradigm in Opportunistic Sensing
Abstract

Cited by 8 (0 self)
Abstract — We introduce the concept of using compressive sensing techniques to provide feedback in order to control dynamical systems. Compressive sensing algorithms use ℓ1-regularization for reconstructing data from a few measurement samples. These algorithms provide highly efficient reconstruction for sparse data. For data that is not sparse enough, the reconstruction technique produces a bounded error in the estimate. In a dynamical system, such erroneous state estimation can lead to undesirable effects in the output of the plant. In this work, we present some techniques to overcome the aforementioned restriction. Our efforts fall into two main categories. First, we present some techniques to design feedback systems that sparsify the state in order to perfectly reconstruct it using compressive sensing algorithms. We study the effect of such sparsification schemes on the stability and regulation of the plant. Second, we study the characteristics of dynamical systems that produce sparse states so that compressive sensing techniques can be used for feedback in such scenarios without any additional modification in the feedback loop.
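As a stand-in for the sparse-state reconstruction step in such a feedback loop (using greedy orthogonal matching pursuit rather than the ℓ1 solvers the abstract refers to; all sizes below are toy assumptions), a k-sparse state can be estimated from a few linear measurements like this:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then re-fit the coefficients on the selected support."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
Phi = rng.normal(size=(12, 32)) / np.sqrt(12.0)  # 12 measurements of a 32-dim state
x_true = np.zeros(32)
x_true[[3, 17]] = [1.0, -2.0]                    # a 2-sparse plant state
x_hat = omp(Phi, Phi @ x_true, k=2)              # estimate from y = Phi @ x_true
```

By construction the estimate has at most k nonzero entries, which is the property the sparsifying feedback designs above rely on.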
Sparsity Averaging for Compressive Imaging
Abstract

Cited by 8 (3 self)
We propose a novel regularization method for sparse image reconstruction from compressive measurements. The approach relies on the conjecture that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted ℓ1 scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We test our prior and the associated algorithm through extensive numerical simulations for spread spectrum and Gaussian acquisition schemes suggested by the recent theory of compressed sensing with coherent and redundant dictionaries. Our results show that average sparsity outperforms state-of-the-art priors that promote sparsity in a single orthonormal basis or redundant frame, or that promote gradient sparsity. We also illustrate the performance of SARA in the context of Fourier imaging, for particular applications in astronomy and medicine.
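A reweighted ℓ1 scheme of the kind mentioned above iterates weighted ℓ1 problems in which coefficients that were large in the previous solve are penalised less; a common form of the weight update (the exact form used by SARA may differ) is:

```python
import numpy as np

def reweighted_l1_weights(x, eps=1e-3):
    """One reweighting step: w_i = 1 / (|x_i| + eps), so large coefficients
    from the previous solve get small weights (weak penalty) next iteration."""
    return 1.0 / (np.abs(x) + eps)

x = np.array([0.0, 2.0, 0.01])   # coefficients from the previous weighted solve
w = reweighted_l1_weights(x)
# w[1] is far smaller than w[0] and w[2]: the large coefficient is barely
# penalised in the next weighted l1 problem.
```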
"On the local correctness of ℓ1 minimization for dictionary learning," arXiv preprint arXiv:1101.5672
, 2011
Abstract

Cited by 8 (1 self)
The idea that many important classes of signals can be well-represented by linear combinations of a small set of atoms selected from a given dictionary has had dramatic impact on the theory and practice of signal processing. For practical problems in which an appropriate sparsifying dictionary is not known ahead of time, a very popular and successful heuristic is to search for a dictionary that minimizes an appropriate sparsity surrogate over a given set of sample data. While this idea is appealing, the behavior of these algorithms is largely a mystery; although there is a body of empirical evidence suggesting they do learn very effective representations, there is little theory to guarantee when they will behave correctly, or when the learned dictionary can be expected to generalize. In this paper, we take a step towards such a theory. We show that under mild hypotheses, the dictionary learning problem is locally well-posed: the desired solution is indeed a local minimum of the ℓ1 norm. Namely, if A ∈ R^{m×n} is an incoherent (and possibly overcomplete) dictionary, and the coefficients X ∈ R^{n×p} follow a random sparse model, then with high probability (A, X) is a local minimum of the ℓ1 norm over the manifold of factorizations (A′, X′) satisfying A′X′ = Y, provided the number of samples p = Ω(n^3 k). For overcomplete A, this is the first result showing that the dictionary learning problem is locally solvable. Our analysis draws on tools developed for the problem of completing a low-rank matrix from a small subset of its entries, which allow us to overcome a number of technical obstacles; in particular, the absence of the restricted isometry property.
On the Identifiability of Overcomplete Dictionaries via the Minimisation Principle Underlying K-SVD
, 2013
Abstract

Cited by 8 (1 self)
This article gives theoretical insights into the performance of K-SVD, a dictionary learning algorithm that has gained significant popularity in practical applications. The particular question studied here is when a dictionary Φ ∈ R^{d×K} can be recovered as a local minimum of the minimisation criterion underlying K-SVD from a set of N training signals yn = Φxn. A theoretical analysis of the problem leads to two types of identifiability results, assuming the training signals are generated from a tight frame with coefficients drawn from a random symmetric distribution. First, asymptotic results show that in expectation the generating dictionary can be recovered exactly as a local minimum of the K-SVD criterion if the coefficient distribution exhibits sufficient decay. This decay can be characterised by the coherence of the dictionary and the ℓ1-norm of the coefficients. Based on the asymptotic results, it is further demonstrated that given a finite number of training samples N, such that N/log N = O(K^3 d), except with probability O(N^{-Kd}) there is a local minimum of the K-SVD criterion within distance O(K N^{-1/4}) of the generating dictionary.
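The coherence that appears in the decay condition above is the largest absolute inner product between distinct unit-norm atoms; it can be computed directly (the random Gaussian dictionary below is a toy example, not from the paper):

```python
import numpy as np

# Mutual coherence of a dictionary: max |<phi_i, phi_j>| over distinct
# unit-norm columns.  Small coherence is what the decay condition leans on.
rng = np.random.default_rng(4)
Phi = rng.normal(size=(16, 32))
Phi /= np.linalg.norm(Phi, axis=0)     # normalise atoms to unit norm

gram = np.abs(Phi.T @ Phi)             # absolute inner products between atoms
np.fill_diagonal(gram, 0.0)            # ignore the trivial <phi_i, phi_i> = 1
mu = gram.max()                        # mutual coherence, lies in (0, 1]
print(0.0 < mu <= 1.0)                 # True
```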