Results

### Convolutional Trees for Fast Transform Learning

"... Abstract—Dictionary learning is a powerful approach for sparse representation. However, the numerical complexity of classical dictionary learning methods restricts their use to atoms with small supports such as patches. In a previous work, we introduced a model based on a composition of convolutions ..."

Abstract
- Add to MetaCart

(Show Context)
Abstract—Dictionary learning is a powerful approach for sparse representation. However, the numerical complexity of classical dictionary learning methods restricts their use to atoms with small supports such as patches. In a previous work, we introduced a model based on a composition of convolutions with sparse kernels to build large dictionary atoms with a low computational cost. The subject of this work is to consider this model at the next level, i.e., to build a full dictionary of atoms from convolutions of sparse kernels. Moreover, we further reduce the size of the representation space by organizing the convolution kernels used to build atoms into a tree structure. The performance of the method is tested for the construction of a curvelet dictionary with a known code.
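The tree organization described above can be made concrete: each node of a tree holds a small sparse kernel, and each root-to-leaf path yields one dictionary atom by convolving the kernels along that path, so kernels near the root are shared by many atoms. The following NumPy sketch illustrates the idea only; the binary-tree layout, kernel width, and sparsity level are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_kernel(width=8, nnz=2):
    """Random sparse convolution kernel (illustrative placeholder)."""
    k = np.zeros(width)
    k[rng.choice(width, nnz, replace=False)] = rng.standard_normal(nnz)
    return k

# Binary tree of depth 3: 2**3 = 8 leaves, hence 8 atoms, built from
# only 2 + 4 + 8 = 14 kernels instead of 8 * 3 = 24 unshared ones.
depth = 3
levels = [[sparse_kernel() for _ in range(2 ** (d + 1))] for d in range(depth)]

def leaf_atom(leaf, size=128):
    """Compose the kernels on the root-to-leaf path into one atom."""
    atom = np.zeros(size)
    atom[0] = 1.0  # start from a unit impulse
    for d in range(depth):
        node = leaf >> (depth - 1 - d)  # ancestor index at level d
        atom = np.convolve(atom, levels[d][node])[:size]
    return atom

dictionary = np.stack([leaf_atom(leaf) for leaf in range(2 ** depth)])
print(dictionary.shape)  # (8, 128)
```

Sharing the upper-level kernels is what shrinks the representation: atoms that descend from a common ancestor reuse the same coarse filters and differ only in the kernels closer to the leaves.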

### Learning a fast transform with a dictionary

"... Abstract — A powerful approach to sparse representation, dic-tionary learning consists in finding a redundant frame in which the representation of a particular class of images is sparse. In practice, all algorithms performing dictionary learning iteratively estimate the dictionary and a sparse repre ..."

Abstract
- Add to MetaCart

(Show Context)
Abstract—A powerful approach to sparse representation, dictionary learning consists in finding a redundant frame in which the representation of a particular class of images is sparse. In practice, all algorithms performing dictionary learning iteratively estimate the dictionary and a sparse representation of the images using this dictionary. However, the numerical complexity of dictionary learning restricts its use to atoms with a small support. A way to alleviate these issues is introduced in this paper, consisting in dictionary atoms obtained by translating the composition of K convolutions with S-sparse kernels of known support. The dictionary update step associated with this strategy is a non-convex optimization problem, which we study here. A block-coordinate descent or Gauss-Seidel algorithm is proposed to solve this problem, whose search space is of dimension KS, which is much smaller than the size of the image. Moreover, the complexity of the algorithm is linear with respect to the size of the image, allowing larger atoms to be learned (as opposed to small patches). An experiment is presented that shows the approximation of a large cosine atom with K = 7 sparse kernels, demonstrating very good accuracy.
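The key point, an atom parameterized by only KS values yet reaching a much larger support, can be illustrated with a small NumPy sketch. This is not the paper's optimization algorithm; the kernel supports and values below are random placeholders chosen only to show the parameter count versus the resulting support.

```python
import numpy as np

def compose_atom(kernels, size):
    """Build a large atom as the successive convolution of sparse kernels,
    embedded in a signal of length `size`."""
    atom = np.zeros(size)
    atom[0] = 1.0  # start from a unit impulse
    for k in kernels:
        atom = np.convolve(atom, k)[:size]
    return atom

rng = np.random.default_rng(0)
K, S, width = 7, 3, 16  # K kernels, S nonzeros each, support width (assumed)
kernels = []
for _ in range(K):
    k = np.zeros(width)
    k[rng.choice(width, S, replace=False)] = rng.standard_normal(S)
    kernels.append(k)

atom = compose_atom(kernels, size=256)
# Only K*S = 21 free parameters, but the atom's support can reach
# K*(width-1) + 1 = 106 samples.
print(np.count_nonzero(atom))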

### Learning computationally efficient dictionaries and their implementation as fast transforms

"... Dictionary learning is a branch of signal processing and machine learning that aims at finding a frame (called dictionary) in which some training data admits a sparse representation. The sparser the representation, the better the dictionary. The resulting dictionary is in general a dense matrix, and ..."

Abstract
- Add to MetaCart

(Show Context)
Dictionary learning is a branch of signal processing and machine learning that aims at finding a frame (called a dictionary) in which some training data admits a sparse representation. The sparser the representation, the better the dictionary. The resulting dictionary is in general a dense matrix, and its manipulation can be computationally costly both at the learning stage and later in the usage of this dictionary, for tasks such as sparse coding. Dictionary learning is thus limited to relatively small-scale problems. In this paper, inspired by usual fast transforms, we consider a general dictionary structure that allows cheaper manipulation, and propose an algorithm to learn such dictionaries, together with their fast implementation, over training data. The approach is demonstrated experimentally with the factorization of the Hadamard matrix and with synthetic dictionary learning experiments.
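The Hadamard case gives a concrete feel for what such a structured dictionary looks like: the dense 2^n x 2^n Hadamard matrix factors exactly into n sparse matrices with only two nonzero entries per row, so applying it costs O(N log N) operations instead of O(N^2). The NumPy sketch below shows this known butterfly factorization directly; it is not the paper's learning algorithm, which recovers such sparse factors from data rather than constructing them analytically.

```python
import numpy as np

H2 = np.array([[1.0, 1.0], [1.0, -1.0]])

def hadamard(n):
    """Sylvester construction of the 2^n x 2^n Hadamard matrix (dense)."""
    H = np.array([[1.0]])
    for _ in range(n):
        H = np.kron(H, H2)
    return H

def sparse_factors(n):
    """n sparse factors (2 nonzeros per row) whose product is H_{2^n},
    via the Kronecker mixed-product identity."""
    return [
        np.kron(np.kron(np.eye(2 ** k), H2), np.eye(2 ** (n - 1 - k)))
        for k in range(n)
    ]

n = 3
H = hadamard(n)
P = np.linalg.multi_dot(sparse_factors(n))
print(np.allclose(H, P))  # True
```

Each factor applies in O(N) time (two nonzeros per row), so the chain of n = log2(N) factors reproduces the fast Walsh–Hadamard transform cost; a learned dictionary with the same multi-sparse-factor structure inherits the same speedup.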