Results 1–10 of 14
1 Direct Optimization of the Dictionary Learning Problem
Cited by 1 (0 self)
A novel way of solving the dictionary learning problem is proposed in this paper. It is based on a so-called direct optimization, as it avoids the usual technique of alternately optimizing the coefficients of a sparse decomposition and the dictionary atoms. The algorithm we advocate simply performs a joint proximal gradient descent step over the dictionary atoms and the coefficient matrix. As such, we call the algorithm a one-step block-coordinate proximal gradient descent, and we show that it can be applied to a broader class of non-convex optimization problems than dictionary learning alone. After deriving the algorithm, we also provide an in-depth discussion of how the step sizes of the proximal gradient descent are chosen. In addition, we uncover the connection between our direct approach and the alternating optimization method for dictionary learning. The main advantage of our novel algorithm is that, as suggested by our simulation study, it is far more efficient than alternating optimization algorithms. Index Terms—dictionary learning, non-convex proximal methods, one-step block-coordinate descent.
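The abstract describes one joint proximal gradient step over both the dictionary and the coefficient matrix. A minimal numpy sketch of such a step is given below; it is not the paper's exact algorithm — the ℓ1 penalty on the coefficients, the unit-norm prox on the atoms, and the fixed step size are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def joint_prox_step(Y, D, X, lam, step):
    """One joint proximal gradient step on 0.5*||Y - D X||_F^2 + lam*||X||_1,
    with dictionary atoms renormalised to unit length (assumed prox)."""
    R = D @ X - Y                        # shared residual
    D_new = D - step * (R @ X.T)         # gradient step in D ...
    X_new = X - step * (D.T @ R)         # ... and in X, at the same point
    # proximal maps: renormalise atoms, shrink coefficients
    D_new /= np.maximum(np.linalg.norm(D_new, axis=0, keepdims=True), 1e-12)
    X_new = soft_threshold(X_new, step * lam)
    return D_new, X_new
```

Both gradients are evaluated at the same point before either block is updated, which is what distinguishes this one-step scheme from alternating optimization.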
Analysis SimCO: A new algorithm for analysis dictionary learning
 in Proc. Int. Conf. Acoust., Speech, and Signal Process
Cited by 1 (1 self)
We consider the dictionary learning problem for the analysis model based sparse representation. A novel algorithm is proposed by adapting the synthesis model based simultaneous codeword optimisation (SimCO) algorithm to the analysis model. This algorithm assumes that the analysis dictionary contains unit ℓ2-norm atoms and trains the dictionary by optimisation on manifolds. This framework allows one to update multiple dictionary atoms in each iteration, leading to a computationally efficient optimisation process. We demonstrate the competitive performance of the proposed algorithm in experiments on both synthetic and real data, as compared with three baseline algorithms: Analysis K-SVD, analysis operator learning (AOL), and learning overcomplete sparsifying transforms (LOST). Index Terms—Analysis model, SimCO, analysis dictionary learning.
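Optimising unit-norm atoms "on manifolds" typically means taking gradient steps in the tangent space of the unit sphere and retracting back onto it. The sketch below shows this for an analysis operator whose rows are the atoms; the simple normalisation retraction and fixed step size are assumptions, not details taken from the paper.

```python
import numpy as np

def sphere_step(Omega, grad, step):
    """Riemannian gradient step for row atoms constrained to the unit sphere:
    project the Euclidean gradient onto the tangent space, step, retract."""
    radial = np.sum(Omega * grad, axis=1, keepdims=True)
    tangent = grad - radial * Omega                     # drop radial component
    new = Omega - step * tangent
    return new / np.linalg.norm(new, axis=1, keepdims=True)  # retraction
```

Because every row is updated at once, this kind of step updates multiple atoms per iteration, as the abstract describes.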
Joint image separation and dictionary learning
In Proc. 18th International Conference on Digital Signal Processing, 2013
Cited by 1 (1 self)
Blind source separation (BSS) aims to estimate unknown sources from their mixtures. Methods to address this include the benchmark ICA, SCA, MMCA, and, more recently, a dictionary learning based algorithm, BMMCA. In this paper, we solve the separation problem using the recently proposed SimCO optimization framework. Our approach not only allows us to unify the two sub-problems arising in the separation problem, but also mitigates the singularity issue reported in the dictionary learning literature. Another unique feature is that only one dictionary is used to sparsely represent the source signals, whereas the literature typically assumes multiple dictionaries (one dictionary per source). Numerical experiments show that our scheme significantly improves performance, especially the accuracy of the mixing matrix estimation. Index Terms—Blind source separation, dictionary learning, image processing, optimization.
Convergence of Gradient Descent for Low-Rank Matrix Approximation
This paper provides a proof of global convergence of gradient search for low-rank matrix approximation. Such approximations have recently been of interest for large-scale problems, as well as for dictionary learning for sparse signal representations and matrix completion. The proof is based on interpreting the problem as an optimization on the Grassmann manifold and on the Fubini–Study distance on this space.
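The plain gradient search whose convergence the paper analyses can be sketched in a few lines: factor Y ≈ UV and descend on the squared Frobenius error in both factors. The small random initialisation, step size, and iteration count below are illustrative choices, not values from the paper.

```python
import numpy as np

def lowrank_gd(Y, r, step=0.01, iters=500, seed=0):
    """Gradient descent on f(U, V) = 0.5 * ||Y - U V||_F^2
    over the rank-r factors U (m x r) and V (r x n)."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    U = 0.1 * rng.standard_normal((m, r))   # small random start,
    V = 0.1 * rng.standard_normal((r, n))   # away from the saddle at zero
    for _ in range(iters):
        R = U @ V - Y                       # residual
        U, V = U - step * (R @ V.T), V - step * (U.T @ R)
    return U, V
```

The factored parameterisation makes the problem non-convex, which is exactly why a global-convergence proof (here via the Grassmann-manifold view) is of interest.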
LEARNING OVERCOMPLETE DICTIONARIES BASED ON PARALLEL ATOM-UPDATING
In this paper we propose a fast and efficient algorithm for learning overcomplete dictionaries. The proposed algorithm is an alternative to the well-known K-Singular Value Decomposition (K-SVD) algorithm. The main drawback of K-SVD is its high computational load, especially in high-dimensional problems, because in the dictionary update stage an SVD is performed to update each column of the dictionary. Our proposed algorithm avoids performing SVDs and instead uses a special form of alternating minimization. As our simulations on both synthetic and real data show, our algorithm outperforms K-SVD in both computational load and the quality of the results. Index Terms—Sparse approximation, compressive sensing, dictionary learning, alternating minimization
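One generic way to replace K-SVD's per-atom SVD with alternating minimization is to treat each atom update as a rank-1 least-squares problem and solve it with closed-form alternating steps: update the atom for the current coefficients, then the coefficients for the new atom, on the fixed support. This is a sketch of that general idea, not the paper's specific parallel scheme.

```python
import numpy as np

def update_atoms(Y, D, X):
    """SVD-free dictionary update: for each atom, one pass of alternating
    least squares on the rank-1 subproblem min ||E_k - d x||_F^2,
    restricted to the columns where atom k is used (fixed support)."""
    for k in range(D.shape[1]):
        idx = np.flatnonzero(X[k])
        if idx.size == 0:
            continue                       # unused atom: leave unchanged
        # error matrix excluding atom k's contribution, on its support
        E = Y[:, idx] - D @ X[:, idx] + np.outer(D[:, k], X[k, idx])
        d = E @ X[k, idx]                  # best direction for current coeffs
        D[:, k] = d / (np.linalg.norm(d) + 1e-12)
        X[k, idx] = D[:, k] @ E            # best coeffs for the new atom
    return D, X
```

Each closed-form substep only involves matrix–vector products, which is what removes the per-atom SVD cost that the abstract identifies as K-SVD's bottleneck.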
RESEARCH Open Access
"... activation and tumor progression in hepatocellular carcinoma ..."
unknown title
... knowing A and V. A classical example for BSS is the so-called “cocktail party problem”: a number of people talk simultaneously at a cocktail party, and each person can distinguish the others’ speech in this mixed sound environment, but it is difficult for machines to replicate such capabilities.
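The underlying model in this kind of BSS problem is a linear instantaneous mixture: the observations are Y = A S for an unknown mixing matrix A and unknown sources S (the fragment's A and V play these roles). A tiny numpy illustration, with an arbitrary example mixing matrix:

```python
import numpy as np

# linear instantaneous BSS model: observed mixtures Y = A S
rng = np.random.default_rng(0)
S = rng.standard_normal((2, 1000))     # two unknown source signals
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])             # unknown mixing matrix (example values)
Y = A @ S                              # what the machine actually observes

# if A were known, recovery would be trivial; the whole difficulty of BSS
# is estimating both A and S from Y alone
S_hat = np.linalg.solve(A, Y)
```

The "cocktail party" difficulty is precisely that neither A nor S is available, so the `solve` step above is not an option and extra assumptions (independence, sparsity, learned dictionaries) must stand in for the missing knowledge.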
Analysis SimCO Algorithms for Sparse Analysis Model Based Dictionary Learning
In this paper, we consider the dictionary learning problem for the sparse analysis model. A novel algorithm is proposed by adapting the simultaneous codeword optimization (SimCO) algorithm, based on the sparse synthesis model, to the sparse analysis model. This algorithm assumes that the analysis dictionary contains unit ℓ2-norm atoms and learns the dictionary by optimization on manifolds. This framework allows multiple dictionary atoms to be updated simultaneously in each iteration. However, as with several existing analysis dictionary learning algorithms, dictionaries learned by the proposed algorithm may contain similar atoms, leading to a degenerate (coherent) dictionary. To address this problem, we also consider restricting the coherence of the learned dictionary and propose Incoherent Analysis SimCO, which introduces an atom decorrelation step after the dictionary update. We demonstrate the competitive performance of the proposed algorithms in experiments with synthetic data and image denoising, as compared with existing algorithms. Index Terms—Sparse representation, analysis model, SimCO, analysis dictionary learning.
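A common building block for the kind of atom decorrelation step the abstract mentions is a pairwise rotation: two overly similar unit atoms are rotated symmetrically in their common plane until their inner product equals a target coherence μ. The sketch below shows that geometric step (a generic construction, not necessarily the paper's exact decorrelation procedure; it assumes the two atoms are not identical).

```python
import numpy as np

def decorrelate_pair(d1, d2, mu):
    """Rotate two unit-norm atoms symmetrically in their plane so their
    inner product drops to the coherence target mu (0 <= mu < |<d1,d2>|)."""
    if d1 @ d2 < 0:
        d2 = -d2                       # work with a positive inner product
    u = d1 + d2; u /= np.linalg.norm(u)    # bisector direction
    v = d1 - d2; v /= np.linalg.norm(v)    # orthogonal direction in the plane
    t = 0.5 * np.arccos(mu)               # <d1', d2'> = cos(2t) = mu
    return np.cos(t) * u + np.sin(t) * v, np.cos(t) * u - np.sin(t) * v
```

Applying this to every atom pair whose coherence exceeds μ, after each dictionary update, keeps the learned dictionary away from the degenerate (coherent) configurations described above.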
Weighted SimCO: A Novel Algorithm for Dictionary Update
Algorithms for the dictionary learning problem usually iterate between two stages: sparse coding and dictionary update. In the dictionary update stage, codewords are updated based on a given sparsity pattern. In the ideal case, where there is no noise and the true sparsity pattern is known a priori, the dictionary update should produce a dictionary that precisely represents the training samples. However, we analytically show that benchmark algorithms, including MOD, K-SVD and regularized SimCO, cannot always guarantee this property: they may fail to converge to a global minimum. The key cause of the failure is singularity in the objective function. To address this problem, we propose a weighting technique based on the SimCO optimization framework, hence the term weighted SimCO. The overall objective function is decomposed as a sum of atomic functions, and the crux of weighted SimCO is to apply weighting coefficients to the atomic functions so that singular points are zeroed out. A second-order method is implemented to solve the corresponding optimization problem. We numerically compare the proposed algorithm with the benchmark algorithms in noiseless and noisy scenarios. The empirical results demonstrate a significant improvement in performance.
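One plausible reading of "decompose the overall objective function as a sum of atomic functions" is a per-training-sample (per-column) split of the Frobenius-norm fitting error, with a weight attached to each term so that problematic terms can be damped toward zero. The sketch below shows only that weighting skeleton; the actual choice of atomic functions and weights in weighted SimCO is defined in the paper itself.

```python
import numpy as np

def weighted_objective(Y, D, X, w):
    """Column-wise decomposition of ||Y - D X||_F^2 into atomic terms,
    each scaled by a weight w[i]; setting w[i] -> 0 zeroes out term i."""
    R = Y - D @ X
    per_sample = np.sum(R * R, axis=0)   # one atomic term per training sample
    return float(w @ per_sample)
```

With all weights equal to one this reduces to the ordinary objective; the weighting only matters where some atomic terms must be suppressed, which is how singular points are handled in the scheme described above.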
A Fast Dictionary Learning Algorithm via Codeword Clustering and Hierarchical Sparse Coding
Dictionary learning algorithms, which aim to learn a sparsifying transform from training data, are often built on an optimization process iterating between two stages: sparse coding and dictionary update. In practice, however, these algorithms are often computationally demanding, especially when dealing with large-scale data or high-dimensional signals. In this paper, we propose new methods for improving the computational efficiency of dictionary learning algorithms. Specifically, we develop a tree-structured multilevel representation of the dictionary based on clustering, which is used to derive a hierarchical algorithm for the sparse coding stage. The proposed idea is then applied to the simultaneous codeword optimisation (SimCO) algorithm, a dictionary learning algorithm that we developed recently, resulting in a new algorithm: fast SimCO. Numerical examples are provided to show its computational efficiency and its performance in image denoising.