Results 1 - 10 of 79
Visual classification with multi-task joint sparse representation
In CVPR, 2010
"... Abstract — We address the problem of visual classification with multiple features and/or multiple instances. Motivated by the recent success of multitask joint covariate selection, we formulate this problem as a multitask joint sparse representation model to combine the strength of multiple features ..."
Abstract
-
Cited by 66 (1 self)
- Add to MetaCart
(Show Context)
Abstract — We address the problem of visual classification with multiple features and/or multiple instances. Motivated by the recent success of multitask joint covariate selection, we formulate this problem as a multitask joint sparse representation model to combine the strength of multiple features and/or instances for recognition. A joint sparsity-inducing norm is utilized to enforce class-level joint sparsity patterns among the multiple representation vectors. The proposed model can be efficiently optimized by a proximal gradient method. Furthermore, we extend our method to the setup where features are described by kernel matrices. We then investigate two applications of our method to visual classification: 1) fusing multiple kernel features for object categorization and 2) robust face recognition in video with an ensemble of query images. Extensive experiments on challenging real-world data sets demonstrate that the proposed method is competitive with the state-of-the-art methods in the respective applications. Index Terms — Feature fusion, multitask learning, sparse representation, visual classification.
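The class-level joint sparsity described above is typically imposed with an l1/l2 mixed norm, whose proximal operator is a row-wise group soft-threshold. Below is a minimal proximal-gradient sketch of that idea, assuming a least-squares data term per feature/task; the dimensions, the lambda value, and the synthetic data are illustrative assumptions, not the paper's settings.

```python
# Sketch: joint sparse coding across tasks via proximal gradient.
# One representation vector per feature/task is a column of W; the
# l1/l2 norm over rows couples their supports (class-level sparsity).
import numpy as np

def prox_l12(W, thresh):
    """Row-wise group soft-thresholding: prox of sum_i ||W[i, :]||_2."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - thresh / np.maximum(norms, 1e-12))
    return scale * W

def joint_sparse_coding(Xs, y_s, lam=0.1, n_iter=200):
    """min sum_k ||y_k - X_k w_k||^2 / 2 + lam * sum_i ||W[i, :]||_2."""
    W = np.zeros((Xs[0].shape[1], len(Xs)))
    step = 1.0 / max(np.linalg.norm(X, 2) ** 2 for X in Xs)  # 1 / Lipschitz
    for _ in range(n_iter):
        G = np.column_stack([X.T @ (X @ W[:, k] - y)
                             for k, (X, y) in enumerate(zip(Xs, y_s))])
        W = prox_l12(W - step * G, step * lam)
    return W

rng = np.random.default_rng(0)
Xs = [rng.standard_normal((50, 30)) for _ in range(3)]   # 3 feature types
y_s = [X[:, :4] @ rng.standard_normal(4) for X in Xs]    # shared support
W = joint_sparse_coding(Xs, y_s)
print("rows with joint energy:", np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-6))
```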
Image deblurring and superresolution by adaptive sparse domain selection and adaptive regularization
IEEE Trans. Image Process., 2011
"... Abstract—As a powerful statistical image modeling technique, sparse representation has been successfully used in various image restoration applications. The success of sparse representation owes to the development of the-norm optimization techniques and the fact that natural images are intrinsically ..."
Abstract
-
Cited by 59 (11 self)
- Add to MetaCart
(Show Context)
Abstract—As a powerful statistical image modeling technique, sparse representation has been successfully used in various image restoration applications. The success of sparse representation owes to the development of l1-norm optimization techniques and to the fact that natural images are intrinsically sparse in some domains. The image restoration quality largely depends on whether the employed sparse domain can represent the underlying image well. Considering that the contents can vary significantly across different images or different patches within a single image, we propose to learn various sets of bases from a precollected dataset of example image patches and then, for a given patch to be processed, adaptively select one set of bases to characterize the local sparse domain. We further introduce two adaptive regularization terms into the sparse representation framework. First, a set of autoregressive (AR) models is learned from the dataset of example image patches. The AR models that best fit a given patch are adaptively selected to regularize the local image structures. Second, image nonlocal self-similarity is introduced as another regularization term. In addition, the sparsity regularization parameter is adaptively estimated for better image restoration performance. Extensive experiments on image deblurring and super-resolution validate that, by using adaptive sparse domain selection and adaptive regularization, the proposed method achieves much better results than many state-of-the-art algorithms in terms of both PSNR and visual perception. Index Terms—Deblurring, image restoration (IR), regularization, sparse representation, super-resolution.
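As a rough illustration of the adaptive sparse domain selection step, the sketch below clusters example patches offline, fits one compact basis per cluster, and picks the best-fitting basis for each incoming patch. The use of k-means plus PCA sub-bases, the patch size, and the cluster count are assumptions for illustration; the paper's AR and nonlocal regularization terms are omitted.

```python
# Sketch: learn one sub-basis per patch cluster, then adaptively select
# the local sparse domain for a new patch by nearest cluster centroid.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def learn_sub_bases(train_patches, n_clusters=8, n_components=16):
    km = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit(train_patches)
    bases = []
    for c in range(n_clusters):
        members = train_patches[km.labels_ == c]
        bases.append(PCA(n_components=min(n_components, len(members))).fit(members))
    return km, bases

def code_patch(patch, km, bases):
    c = km.predict(patch[None, :])[0]      # adaptively selected local domain
    return c, bases[c].transform(patch[None, :])[0]

rng = np.random.default_rng(1)
train = rng.standard_normal((2000, 64))    # stand-ins for 8x8 example patches
km, bases = learn_sub_bases(train)
cluster, coeffs = code_patch(train[0], km, bases)
print(cluster, coeffs.shape)
```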
Nonlocally Centralized Sparse Representation for Image Restoration
2011
"... The sparse representation models code an image patch as a linear combination of a few atoms chosen out from an over-complete dictionary, and they have shown promising results in various image restoration applications. However, due to the degradation of the observed image (e.g., noisy, blurred and/o ..."
Abstract
-
Cited by 25 (8 self)
- Add to MetaCart
Sparse representation models code an image patch as a linear combination of a few atoms chosen from an over-complete dictionary, and they have shown promising results in various image restoration applications. However, due to the degradation of the observed image (e.g., noisy, blurred, and/or downsampled), the sparse representations obtained by conventional models may not be accurate enough for a faithful reconstruction of the original image. To improve the performance of sparse-representation-based image restoration, this paper introduces the concept of sparse coding noise, and the goal of image restoration becomes suppressing this sparse coding noise. To this end, we exploit image nonlocal self-similarity to obtain good estimates of the sparse coding coefficients of the original image, and then centralize the sparse coding coefficients of the observed image toward those estimates. The resulting nonlocally centralized sparse representation (NCSR) model is as simple as the standard sparse representation model, yet extensive experiments on various types of image restoration problems, including denoising, deblurring, and super-resolution, validate its generality and state-of-the-art performance.
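A compact way to read the NCSR idea is as an l1 penalty on the deviation of the coefficients from their nonlocal estimates beta, rather than from zero. The ISTA-style sketch below implements that shifted shrinkage; the dictionary, the stand-in beta, and lambda are illustrative assumptions, not the paper's construction of the nonlocal estimates.

```python
# Sketch: centralized sparse coding, shrinking coefficients toward the
# nonlocal estimates beta instead of toward zero.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ncsr_code(y, D, beta, lam=0.05, n_iter=300):
    """ISTA for min ||y - D a||^2 / 2 + lam * ||a - beta||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    a = beta.copy()
    for _ in range(n_iter):
        z = a - step * (D.T @ (D @ a - y))      # gradient step
        a = beta + soft(z - beta, step * lam)   # centralize toward beta
    return a

rng = np.random.default_rng(2)
D = rng.standard_normal((64, 128))
a_true = np.zeros(128); a_true[[3, 40, 77]] = [1.0, -0.8, 0.5]
y = D @ a_true + 0.05 * rng.standard_normal(64)
beta = a_true + 0.02 * rng.standard_normal(128)  # stand-in nonlocal estimate
print(np.round(ncsr_code(y, D, beta)[[3, 40, 77]], 2))
```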
Learning sparse codes for hyperspectral imagery
IEEE Journal of Selected Topics in Signal Processing, 2011
"... The spectral features in hyperspectral imagery (HSI) contain significant structure that, if properly characterized could enable more efficient data acquisition and improved data analysis. Because most pixels contain reflectances of just a few materials, we propose that a sparse coding model is well- ..."
Abstract
-
Cited by 19 (2 self)
- Add to MetaCart
(Show Context)
The spectral features in hyperspectral imagery (HSI) contain significant structure that, if properly characterized, could enable more efficient data acquisition and improved data analysis. Because most pixels contain reflectances of just a few materials, we propose that a sparse coding model is well matched to HSI data. Sparsity models consider each pixel as a combination of just a few elements from a larger dictionary, and this approach has proven effective in a wide range of applications. Furthermore, previous work has shown that optimal sparse coding dictionaries can be learned from a dataset with no other a priori information (in contrast to many HSI “endmember” discovery algorithms that assume the presence of pure spectra or side information). We modified an existing unsupervised learning approach and applied it to HSI data (with significant ground-truth labeling) to learn an optimal sparse coding dictionary. Using this learned dictionary, we demonstrate three main findings: i) the sparse coding model learns spectral signatures of materials in the scene and locally approximates nonlinear manifolds for individual materials; ii) the learned dictionary can be used to infer HSI-resolution data with very high accuracy from simulated imagery collected at multispectral-level resolution; and iii) the learned dictionary improves the performance of a supervised classification algorithm, both in terms of classifier complexity and generalization from very small training sets.
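To make the pipeline concrete, the sketch below learns a dictionary from synthetic mixed-material "spectra" with scikit-learn and then sparse-codes a single pixel with OMP. The synthetic data, dictionary size, and sparsity level are assumptions; this is not the authors' modified learning approach.

```python
# Sketch: unsupervised dictionary learning on pixel spectra, then sparse
# coding of one pixel as a combination of a few atoms.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(3)
endmembers = np.abs(rng.standard_normal((5, 100)))     # 5 materials, 100 bands
abund = rng.dirichlet(np.ones(5) * 0.2, size=4000)     # few materials per pixel
pixels = abund @ endmembers + 0.01 * rng.standard_normal((4000, 100))

# Learned with no pure-pixel assumption or side information.
dl = MiniBatchDictionaryLearning(n_components=20, alpha=0.5, random_state=0)
dl.fit(pixels)
D = dl.components_.T                                   # bands x atoms

code = orthogonal_mp(D, pixels[0], n_nonzero_coefs=5)  # code one pixel
print("active atoms:", np.flatnonzero(code))
```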
Block-Sparse Recovery via Convex Optimization
2012
"... Given a dictionary that consists of multiple blocks and a signal that lives in the range space of only a few blocks, we study the problem of finding a block-sparse representation of the signal, i.e., a representation that uses the minimum number of blocks. Motivated by signal/image processing and co ..."
Abstract
-
Cited by 19 (1 self)
- Add to MetaCart
Given a dictionary that consists of multiple blocks and a signal that lives in the range space of only a few blocks, we study the problem of finding a block-sparse representation of the signal, i.e., a representation that uses the minimum number of blocks. Motivated by signal/image processing and computer vision applications, such as face recognition, we consider the block-sparse recovery problem in the case where the number of atoms in each block is arbitrary, possibly much larger than the dimension of the underlying subspace. To find a block-sparse representation of a signal, we propose two classes of non-convex optimization programs, which aim to minimize the number of nonzero coefficient blocks and the number of nonzero reconstructed vectors from the blocks, respectively. Since both classes of problems are NP-hard, we propose convex relaxations and derive conditions under which each class of convex programs is equivalent to the original non-convex formulation. Our conditions depend on the notions of mutual and cumulative subspace coherence of a dictionary, which are natural generalizations of the existing notions of mutual and cumulative coherence. We evaluate the performance of the proposed convex programs through simulations as well as real experiments on face recognition. We show that treating the face recognition problem as a block-sparse recovery problem improves the state-of-the-art results by 10% while using only 25% of the training data.
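A standard convex relaxation of counting nonzero blocks replaces the count with the sum of block l2 norms (a group-lasso penalty), solvable by proximal gradient with block-wise shrinkage. The sketch below illustrates this relaxation on synthetic data; the block layout, lambda, and dictionary are illustrative assumptions rather than the paper's exact programs.

```python
# Sketch: block-sparse recovery via the l2/l1 convex relaxation.
import numpy as np

def block_prox(c, blocks, t):
    out = c.copy()
    for idx in blocks:                       # shrink each block as a unit
        n = np.linalg.norm(c[idx])
        out[idx] = 0.0 if n <= t else (1 - t / n) * c[idx]
    return out

def block_sparse_recover(A, y, blocks, lam=0.1, n_iter=500):
    """Proximal gradient for min ||y - A c||^2 / 2 + lam * sum_b ||c_b||_2."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        c = block_prox(c - step * (A.T @ (A @ c - y)), blocks, step * lam)
    return c

rng = np.random.default_rng(4)
A = rng.standard_normal((40, 60))
blocks = [np.arange(i, i + 10) for i in range(0, 60, 10)]   # 6 blocks of 10
c_true = np.zeros(60); c_true[blocks[2]] = rng.standard_normal(10)
c_hat = block_sparse_recover(A, A @ c_true, blocks)
print("recovered blocks:", [b[0] // 10 for b in blocks
                            if np.linalg.norm(c_hat[b]) > 1e-3])
```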
Analysis operator learning and its application to image reconstruction
IEEE Trans. Image Process., 2013
"... ..."
(Show Context)
Close the Loop: Joint Blind Image Restoration and Recognition with Sparse Representation Prior
"... Most previous visual recognition systems simply assume ideal inputs without real-world degradations, such as low resolution, motion blur and out-of-focus blur. In presence of such unknown degradations, the conventional approach first resorts to blind image restoration and then feeds the restored ima ..."
Abstract
-
Cited by 15 (2 self)
- Add to MetaCart
(Show Context)
Most previous visual recognition systems simply assume ideal inputs without real-world degradations such as low resolution, motion blur, and out-of-focus blur. In the presence of such unknown degradations, the conventional approach first resorts to blind image restoration and then feeds the restored image into a classifier. By treating restoration and recognition separately, however, such a straightforward approach suffers greatly from the defective output of the ill-posed blind image restoration. In this paper, we present a joint blind image restoration and recognition method based on a sparse representation prior to handle the challenging problem of face recognition from low-quality images, where the degradation model is realistic and totally unknown. The sparse representation prior states that the degraded input image, if correctly restored, will have a good sparse representation in terms of the training set, which indicates the identity of the test image. The proposed algorithm achieves simultaneous restoration and recognition by iteratively solving the blind image restoration problem in pursuit of the sparsest representation for recognition. Based on this sparse representation prior, we demonstrate that the image restoration task and the recognition task can benefit greatly from each other. Extensive experiments on face datasets under various degradations show that our joint model yields significant improvements over conventional methods that treat the two tasks independently.
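One way to emulate the joint idea in miniature is to search over a small family of candidate degradations, sparse-code the degraded probe against an equally degraded training set, and keep the candidate whose code scores best. The 1-D sketch below does that with a Gaussian-blur family and a lasso solver; this grid search is a simplification of the paper's iterative blind restoration, and every size and parameter is an illustrative assumption.

```python
# Sketch: jointly pick a degradation hypothesis and an identity by the
# quality (residual + l1) of the sparse code it yields.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
train = rng.standard_normal((80, 30))         # 30 training "faces" of dim 80
probe = gaussian_filter1d(train[:, 7], sigma=2.0) + 0.01 * rng.standard_normal(80)

best = None
for sigma in [0.5, 1.0, 2.0, 4.0]:            # candidate degradations
    D = gaussian_filter1d(train, sigma=sigma, axis=0)  # degrade the gallery
    lasso = Lasso(alpha=0.01, max_iter=5000).fit(D, probe)
    score = np.sum((probe - lasso.predict(D)) ** 2) \
            + 0.01 * np.abs(lasso.coef_).sum()
    if best is None or score < best[0]:
        best = (score, sigma, np.argmax(np.abs(lasso.coef_)))
print("estimated blur sigma:", best[1], "identity (column):", best[2])
```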
Nonlocal hierarchical dictionary learning using wavelets for image denoising
IEEE Transactions on Image Processing
"... Abstract — Exploiting the sparsity within representation models for images is critical for image denoising. The best currently available denoising methods take advantage of the sparsity from image self-similarity, pre-learned, and fixed representations. Most of these methods, however, still have dif ..."
Abstract
-
Cited by 9 (0 self)
- Add to MetaCart
(Show Context)
Abstract — Exploiting the sparsity within representation models for images is critical for image denoising. The best currently available denoising methods take advantage of the sparsity arising from image self-similarity and from pre-learned, fixed representations. Most of these methods, however, still have difficulty tackling high noise levels or noise models other than Gaussian. In this paper, the multiresolution structure and sparsity of wavelets are exploited by performing nonlocal dictionary learning within each wavelet decomposition level. Experimental results show that the proposed method outperforms two state-of-the-art image denoising algorithms at higher noise levels. Furthermore, our approach is more adaptive to the less extensively researched uniform noise. Index Terms — Image denoising, wavelets, sparse coding, multi-scale, nonlocal.
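A bare-bones version of the wavelet-domain learning loop might decompose the image with PyWavelets and fit a separate patch dictionary inside each detail subband of each level, as sketched below. All sizes and the pywt/scikit-learn usage are illustrative assumptions, and the nonlocal grouping central to the paper is omitted here.

```python
# Sketch: per-subband dictionary learning in the wavelet domain.
import numpy as np
import pywt
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(6)
noisy = rng.standard_normal((128, 128))       # stand-in for a noisy image

coeffs = pywt.wavedec2(noisy, "db2", level=2) # multiresolution structure
dicts = {}
for lvl, details in enumerate(coeffs[1:], start=1):   # coarsest to finest
    for name, band in zip(("H", "V", "D"), details):
        patches = extract_patches_2d(band, (6, 6), max_patches=500,
                                     random_state=0)
        X = patches.reshape(len(patches), -1)
        dicts[(lvl, name)] = MiniBatchDictionaryLearning(
            n_components=32, alpha=1.0, random_state=0).fit(X)
print(sorted(dicts))   # one learned dictionary per level/orientation
```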
Sparse representations, compressive sensing and dictionaries for pattern recognition
In Asian Conference on Pattern Recognition (ACPR), 2011
"... Abstract—In recent years, the theories of Compressive Sensing ..."
Abstract
-
Cited by 8 (6 self)
- Add to MetaCart
(Show Context)
Abstract—In recent years, the theories of Compressive Sensing ...
Block sparse representations of tensors using Kronecker bases
In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2012
"... In this paper, we consider sparse representations of multidimensional signals (tensors) by generalizing the onedimensional case (vectors). A new greedy algorithm, namely the Tensor-OMP algorithm, is proposed to compute a blocksparse representation of a tensor with respect to a Kronecker basis where ..."
Abstract
-
Cited by 7 (1 self)
- Add to MetaCart
(Show Context)
In this paper, we consider sparse representations of multidimensional signals (tensors) by generalizing the one-dimensional case (vectors). A new greedy algorithm, the Tensor-OMP algorithm, is proposed to compute a block-sparse representation of a tensor with respect to a Kronecker basis, where the nonzero coefficients are restricted to lie within a sub-tensor (block). Simulation examples demonstrate the advantage of considering the Kronecker structure together with the block-sparsity property: faster and more precise sparse representations of tensors compared to applying the classical OMP algorithm.
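The computational benefit of the Kronecker basis is that, for a separable dictionary D = D1 ⊗ D2, all atom correlations with a matrix residual are D1^T R D2, so the full Kronecker product is never formed. The 2-D greedy sketch below uses that trick; it selects single atom pairs rather than full blocks, and all sizes are illustrative assumptions.

```python
# Sketch: greedy sparse coding over a separable (Kronecker) dictionary.
import numpy as np

def kron_omp_2d(Y, D1, D2, n_atoms=3):
    """Greedy coding of matrix Y over D1 (x) D2, never forming the kron."""
    R, support = Y.copy(), []
    for _ in range(n_atoms):
        corr = D1.T @ R @ D2                 # all pair correlations at once
        i, j = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        support.append((i, j))
        # Least squares on the (tiny) vectorized support.
        A = np.column_stack([np.kron(D1[:, i], D2[:, j]) for i, j in support])
        coef, *_ = np.linalg.lstsq(A, Y.ravel(), rcond=None)
        R = Y - (A @ coef).reshape(Y.shape)
    return support, coef

rng = np.random.default_rng(7)
D1 = rng.standard_normal((16, 24)); D1 /= np.linalg.norm(D1, axis=0)
D2 = rng.standard_normal((16, 24)); D2 /= np.linalg.norm(D2, axis=0)
Y = 2.0 * np.outer(D1[:, 5], D2[:, 9]) - 1.5 * np.outer(D1[:, 5], D2[:, 2])
print(kron_omp_2d(Y, D1, D2, n_atoms=2)[0])
```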