Results 1–10 of 246
The Cosparse Analysis Model and Algorithms
, 2011
Abstract

Cited by 64 (14 self)
After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations and appealing applications. Alongside this approach there is an analysis counterpart model, which, despite its similarity to the synthesis alternative, is markedly different. Surprisingly, the analysis model has not received similar attention, and its understanding today is shallow and partial. In this paper we take a closer look at the analysis approach, better define it as a generative model for signals, and contrast it with the synthesis one. This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance. We demonstrate the effectiveness of the analysis model in several experiments.
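As a toy illustration of the contrast drawn in this abstract (not the paper's algorithms), the sketch below builds one signal from each model, with hypothetical small dimensions: a synthesis signal combining a few dictionary atoms, and an analysis signal orthogonal to a chosen set of rows of the operator Ω.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p, ell = 8, 12, 12, 6   # signal dim, atoms, analysis rows, co-support size

# Synthesis model: x = D @ alpha with a sparse coefficient vector alpha.
D = rng.standard_normal((n, m))
alpha = np.zeros(m)
alpha[[1, 7]] = rng.standard_normal(2)      # only two active atoms
x_syn = D @ alpha

# Analysis model: Omega @ x is sparse; the signal is characterized by the
# rows of Omega it is orthogonal to (its co-support, here rows 0..ell-1).
Omega = rng.standard_normal((p, n))
null_basis = np.linalg.svd(Omega[:ell])[2][ell:].T   # basis of null(Omega[:ell])
x_ana = null_basis @ rng.standard_normal(n - ell)

print("synthesis sparsity:", np.count_nonzero(alpha))
print("analysis cosparsity:", int(np.sum(np.abs(Omega @ x_ana) < 1e-10)))
```

Note the asymmetry: the synthesis signal is described by which atoms it *uses*, the analysis signal by which analysis rows it *annihilates*.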
Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning
 IEEE J. Sel. Topics Signal Process
, 2011
Abstract

Cited by 54 (14 self)
Abstract — We address the sparse signal recovery problem in the context of multiple measurement vectors (MMV) when elements in each nonzero row of the solution matrix are temporally correlated. Existing algorithms do not consider such temporal correlation, and thus their performance degrades significantly as the correlation increases. In this work, we propose a block sparse Bayesian learning framework which models the temporal correlation. We derive two sparse Bayesian learning (SBL) algorithms, which have superior recovery performance compared to existing algorithms, especially in the presence of high temporal correlation. Furthermore, our algorithms are better at handling highly underdetermined problems and require less row-sparsity of the solution matrix. We also provide an analysis of the global and local minima of their cost function, and show that the SBL cost function has the very desirable property that the global minimum is at the sparsest solution to the MMV problem. Extensive experiments also provide some interesting results that motivate future theoretical research on the MMV model.
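The MMV setup described above can be sketched as follows; the sizes, AR(1) correlation model, and seed are illustrative choices, not the paper's experimental settings. The solution matrix X has a few nonzero rows, each a temporally correlated sequence:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, L, K = 10, 30, 5, 3     # measurements, sources, snapshots, active rows
beta = 0.9                    # AR(1) correlation within each nonzero row

A = rng.standard_normal((m, n))
X = np.zeros((n, L))
active = sorted(rng.choice(n, size=K, replace=False).tolist())
for i in active:
    x = rng.standard_normal()
    for t in range(L):
        X[i, t] = x           # unit-variance AR(1) sequence
        x = beta * x + np.sqrt(1 - beta**2) * rng.standard_normal()

Y = A @ X                     # MMV observations (noiseless, for clarity)
print("active rows:", active, "Y shape:", Y.shape)
```

The paper's point is visible in this setup: recovering the K active rows from Y is harder when beta is close to 1, because the L snapshots then carry nearly redundant information.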
Coherence-Based Performance Guarantees for Estimating a Sparse Vector Under Random Noise
Abstract

Cited by 43 (15 self)
We consider the problem of estimating a deterministic sparse vector x0 from underdetermined measurements Ax0 + w, where w represents white Gaussian noise and A is a given deterministic dictionary. We analyze the performance of three sparse estimation algorithms: basis pursuit denoising (BPDN), orthogonal matching pursuit (OMP), and thresholding. These algorithms are shown to achieve near-oracle performance with high probability, assuming that x0 is sufficiently sparse. Our results are non-asymptotic and are based only on the coherence of A, so that they are applicable to arbitrary dictionaries. Differences in the precise conditions required for the performance guarantees of each algorithm are manifested in the observed performance at high and low signal-to-noise ratios. This provides insight on the advantages and drawbacks of ℓ1 relaxation techniques such as BPDN as opposed to greedy approaches such as OMP and thresholding.
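Of the three algorithms analyzed, OMP is the easiest to sketch. The following is a minimal textbook implementation (not the authors' code), run on a hypothetical noiseless instance where exact recovery is expected:

```python
import numpy as np

def omp(A, y, k):
    """Textbook OMP: greedily pick the column most correlated with the
    residual, then re-fit the selected columns by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 50))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns
x0 = np.zeros(50)
x0[[3, 17, 41]] = [1.0, -2.0, 1.5]
x_hat = omp(A, A @ x0, k=3)               # noiseless measurements
print("recovery error:", np.linalg.norm(x_hat - x0))
```

Thresholding is the degenerate one-shot variant (pick the k largest |Aᵀy| at once), which is why its guarantees in the paper are the most demanding.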
BM3D frames and variational image deblurring
, 2011
Abstract

Cited by 28 (8 self)
Abstract—A family of Block-Matching 3D (BM3D) algorithms for various imaging problems has recently been proposed within the framework of nonlocal patchwise image modeling [1], [2]. In this paper we construct analysis and synthesis frames formalizing the BM3D image modeling, and use these frames to develop novel iterative deblurring algorithms. We consider two different formulations of the deblurring problem: one given by minimization of a single objective function, and another based on the Nash equilibrium balance of two objective functions. The latter results in an algorithm where the denoising and deblurring operations are decoupled. The convergence of the developed algorithms is proved. Simulation experiments show that the decoupled algorithm derived from the Nash equilibrium formulation demonstrates the best numerical and visual results and shows superiority with respect to the state of the art in the field, confirming the valuable potential of BM3D-frames as an advanced image modeling tool.
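The decoupling idea can be imitated in a few lines. The sketch below is not BM3D: it substitutes an orthonormal DCT basis for the BM3D frames and plain soft-thresholding for BM3D denoising, alternating a deblurring (gradient) step with a denoising step on a hypothetical 1-D circular-blur problem:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis, a stand-in for the BM3D analysis frame."""
    k, t = np.arange(n)[:, None], np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (t + 0.5) * k / n)
    C[0] /= np.sqrt(2)
    return C

rng = np.random.default_rng(3)
n, tau = 64, 0.02
C = dct_matrix(n)
x_true = C.T @ np.r_[rng.standard_normal(5), np.zeros(n - 5)]  # DCT-sparse

def blur(v):   # symmetric circular blur, so H^T = H and ||H|| <= 1
    return 0.25 * np.roll(v, -1) + 0.5 * v + 0.25 * np.roll(v, 1)

y = blur(x_true) + 0.01 * rng.standard_normal(n)

x = np.zeros(n)
for _ in range(200):
    z = x + blur(y - blur(x))             # deblurring step: gradient on data fit
    c = C @ z
    x = C.T @ (np.sign(c) * np.maximum(np.abs(c) - tau, 0.0))  # denoising step
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Each iteration touches the blur operator and the denoiser separately, which is the structural point of the decoupled formulation; the paper replaces the soft-thresholding step with BM3D-frame shrinkage.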
On single image scale-up using sparse-representations, Curves and Surfaces
, 2012
Abstract

Cited by 26 (2 self)
Abstract. This paper deals with the single image scale-up problem using sparse-representation modeling. The goal is to recover an original image from its blurred and down-scaled noisy version. Since this problem is highly ill-posed, a prior is needed in order to regularize it. The literature offers various ways to address this problem, ranging from simple linear space-invariant interpolation schemes (e.g., bicubic interpolation) to spatially adaptive and nonlinear filters of various sorts. We start from the recently proposed successful algorithm by Yang et al. [1,2], and similarly assume a local Sparse-Land model on image patches, serving as regularization. Several important modifications to the above-mentioned solution are introduced and are shown to lead to improved results. These modifications include a major simplification of the overall process, both in terms of computational complexity and algorithm architecture; a different training approach for the dictionary pair; and the ability to operate without a training set by bootstrapping the scale-up task from the given low-resolution image itself. We demonstrate the results on true images, showing both visual and PSNR improvements.
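A 1-D analogue of the degradation model above, with a linear-interpolation baseline standing in for the simple interpolation schemes the abstract mentions (sizes, blur kernel, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n, scale = 64, 2
x = np.cumsum(rng.standard_normal(n))          # a smooth-ish 1-D "image" row
kernel = np.array([0.25, 0.5, 0.25])           # toy blur

# Observation: blur, down-scale, add noise -- the ill-posed forward model.
y = np.convolve(x, kernel, mode="same")[::scale] \
    + 0.01 * rng.standard_normal(n // scale)

# Baseline reconstruction by linear interpolation back to full size;
# sparse-representation methods replace this step with learned dictionary pairs.
x_up = np.interp(np.arange(n), np.arange(0, n, scale), y)
print("low-res length:", y.size,
      "baseline relative error:", np.linalg.norm(x_up - x) / np.linalg.norm(x))
```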
The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing
 IEEE Trans. Inf. Theory
, 2014
Analysis K-SVD: A Dictionary-Learning Algorithm for the Analysis Sparse Model
, 2012
Abstract

Cited by 21 (6 self)
The synthesis-based sparse representation model for signals has drawn considerable interest in the past decade. Such a model assumes that the signal of interest can be decomposed as a linear combination of a few atoms from a given dictionary. In this paper we concentrate on an alternative, analysis-based model, where an analysis operator – hereafter referred to as the analysis dictionary – multiplies the signal, leading to a sparse outcome. Our goal is to learn the analysis dictionary from a set of examples. The approach taken is parallel and similar to the one adopted by the K-SVD algorithm that serves the corresponding problem in the synthesis model. We present the development of the algorithm steps: this includes tailored pursuit algorithms – the Backward Greedy and the Optimized Backward Greedy algorithms – and a penalty function that defines the objective for the dictionary update stage. We demonstrate the effectiveness of the proposed dictionary learning in several experiments, treating synthetic data and real images, and showing a successful and meaningful recovery of the analysis dictionary.
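A much-simplified, thresholding-style stand-in for the paper's Backward Greedy pursuit (illustrative sizes and noise level; the real algorithm removes candidate rows one at a time): estimate the co-support as the rows of Ω with the smallest responses on the noisy signal, then project onto their null space.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, ell = 8, 12, 5          # signal dim, analysis rows, co-support size
Omega = rng.standard_normal((p, n))

# Ground truth: a signal orthogonal to the first ell rows, lightly noised.
null_basis = np.linalg.svd(Omega[:ell])[2][ell:].T
x = null_basis @ rng.standard_normal(n - ell)
y = x + 1e-4 * rng.standard_normal(n)

# Co-support estimate: the ell rows where |Omega y| is smallest.
Lam = np.sort(np.argsort(np.abs(Omega @ y))[:ell])
sub = Omega[Lam]
x_hat = y - np.linalg.pinv(sub) @ (sub @ y)   # projection onto null(sub)
print("estimated co-support:", Lam.tolist())
print("recovery error:", np.linalg.norm(x_hat - x))
```

In dictionary learning this pursuit runs per example inside the loop, alternating with the dictionary update that the paper's penalty function defines.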
Efficient Sparse Modeling with Automatic Feature Grouping
Abstract

Cited by 20 (0 self)
The grouping of features is highly beneficial in learning with high-dimensional data. It reduces the variance in the estimation and improves the stability of feature selection, leading to improved generalization. Moreover, it can also help in data understanding and interpretation. OSCAR is a recent sparse modeling tool that achieves this by using an ℓ1-regularizer and a pairwise ℓ∞-regularizer. However, its optimization is computationally expensive. In this paper, we propose an efficient solver based on accelerated gradient methods. We show that its key projection step can be solved by a simple iterative group merging algorithm. It is highly efficient and reduces the empirical time complexity from the O(d³)–O(d⁵) of existing solvers to just O(d), where d is the number of features. Experimental results on toy and real-world data sets demonstrate that OSCAR is a competitive sparse modeling approach with the added ability of automatic feature grouping.
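The OSCAR penalty itself is easy to state in code. The sketch below (illustrative names and weights, not the paper's solver) evaluates both the literal pairwise definition and the equivalent sorted form, where the weight on the i-th smallest |β| grows linearly with its rank, which is what makes fast solvers possible:

```python
import numpy as np
from itertools import combinations

def oscar_pairwise(beta, lam1, lam2):
    """OSCAR penalty, literal definition: an l1 term plus a pairwise
    l-infinity term that pushes coefficients toward equal magnitudes."""
    a = np.abs(np.asarray(beta, dtype=float))
    pair = sum(max(a[i], a[j]) for i, j in combinations(range(a.size), 2))
    return lam1 * a.sum() + lam2 * pair

def oscar_sorted(beta, lam1, lam2):
    """Equivalent O(d log d) form: sort |beta| ascending; the entry of
    rank i (0-based) is the maximum in exactly i of the pairs."""
    a = np.sort(np.abs(np.asarray(beta, dtype=float)))
    return float((lam1 + lam2 * np.arange(a.size)) @ a)

beta = [3.0, -1.0, 2.0, 2.0]
print(oscar_pairwise(beta, 1.0, 0.5), oscar_sorted(beta, 1.0, 0.5))
```

Note how the tied coefficients 2.0 and 2.0 are penalized no more than a single pair with the same maximum, which is the mechanism behind the automatic grouping.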
Stable Restoration and Separation of Approximately Sparse Signals
Abstract

Cited by 17 (10 self)
This paper develops new theory and algorithms to recover signals that are approximately sparse in some general (i.e., basis, frame, overcomplete, or incomplete) dictionary but corrupted by a combination of measurement noise and interference having a sparse representation in a second general dictionary. Particular applications covered by our framework include the restoration of signals impaired by impulse noise, narrowband interference, or saturation, as well as image inpainting, super-resolution, and signal separation. We develop efficient recovery algorithms and deterministic conditions that guarantee stable restoration and separation. Two application examples demonstrate the efficacy of our approach.
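The stacked-dictionary view behind this framework can be sketched as follows; the impulse-noise setup and the oracle support are illustrative assumptions, standing in for the paper's actual recovery algorithms:

```python
import numpy as np

rng = np.random.default_rng(6)
m = 32
A = np.linalg.qr(rng.standard_normal((m, m)))[0]   # signal dictionary (a basis)
B = np.eye(m)                                      # impulses are sparse here

x = np.zeros(m)
x[[2, 9]] = [1.5, -1.0]                            # sparse signal coefficients
e = np.zeros(m)
e[20] = 4.0                                        # one impulse corruption
y = A @ x + B @ e

# Stacking [A | B] turns restoration-and-separation into a single sparse
# recovery problem.  With the true joint support (oracle, for illustration;
# the paper's algorithms estimate it), least squares separates exactly.
D = np.hstack([A, B])
support = [2, 9, m + 20]
coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
x_hat = np.zeros(m)
x_hat[[2, 9]] = coef[:2]
e_hat = np.zeros(m)
e_hat[20] = coef[2]
print("signal error:", np.linalg.norm(x_hat - x),
      "interference error:", np.linalg.norm(e_hat - e))
```

The deterministic conditions in the paper essentially guarantee that the columns of the stacked dictionary restricted to any such joint support stay well-conditioned, so this separation step remains stable under noise.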