Results 1–10 of 66
Compressed Sensing: Theory and Applications
, 2012
Cited by 120 (30 self)
Abstract: Compressed sensing is a novel research area, introduced in 2006, which has since become a key concept in various areas of applied mathematics, computer science, and electrical engineering. It surprisingly predicts that high-dimensional signals that admit a sparse representation by a suitable basis or, more generally, a frame can be recovered from what was previously considered highly incomplete linear measurements, using efficient algorithms. This article serves as an introduction to and a survey of compressed sensing. Key words: dimension reduction; frames; greedy algorithms; ill-posed inverse problems; ℓ1 minimization; random matrices; sparse approximation; sparse recovery.
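As a concrete illustration of the ℓ1-minimization recovery this survey discusses, the basis-pursuit problem min ||x||_1 subject to Ax = y can be recast as a linear program over the split x = u − v with u, v ≥ 0 and handed to an off-the-shelf LP solver. This is a minimal sketch, not code from the survey; the problem sizes and the use of scipy's `linprog` are our own choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, s = 40, 20, 3                       # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x0                                # "highly incomplete" measurements

# Basis pursuit as an LP: minimize 1^T u + 1^T v = ||x||_1 over u, v >= 0,
# subject to A(u - v) = y.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
```

With s much smaller than m, the LP solution typically coincides with the sparse generator x0; by construction its ℓ1 norm can never exceed that of x0, since x0 itself is feasible.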
Stable image reconstruction using total variation minimization
SIAM Journal on Imaging Sciences, 2013
Cited by 50 (2 self)
Abstract: This article presents near-optimal guarantees for accurate and robust image recovery from undersampled noisy measurements using total variation minimization; our results may be the first of this kind. In particular, we show that from O(s log(N)) nonadaptive linear measurements, an image can be reconstructed to within the best s-term approximation of its gradient, up to a logarithmic factor. Along the way, we prove a strengthened Sobolev inequality for functions lying in the null space of a suitably incoherent matrix.
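The flavor of total variation minimization can be sketched in one dimension with a smoothed TV objective minimized by plain gradient descent. This is a toy denoising stand-in, not the paper's reconstruction method (which recovers images from undersampled measurements); the smoothing parameter `eps`, weight `lam`, and step size are assumptions chosen so the iteration is provably stable.

```python
import numpy as np

def tv_denoise_smoothed(y, lam=0.5, eps=1e-3, step=2e-3, iters=2000):
    """Gradient descent on 0.5||x - y||^2 + lam * sum_j sqrt((x[j+1]-x[j])^2 + eps),
    a smoothed 1-D total-variation objective. The step size is below 1/L for
    this objective's gradient Lipschitz constant, so the objective decreases."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        g_tv = d / np.sqrt(d**2 + eps)     # derivative of the smoothed |d|
        grad = x - y
        grad[:-1] -= lam * g_tv            # adjoint of the difference operator
        grad[1:] += lam * g_tv
        x -= step * grad
    return x

# Usage: a noisy step signal; TV regularization favors piecewise-constant output.
rng = np.random.default_rng(1)
y = np.concatenate([np.zeros(30), np.ones(30)]) + 0.1 * rng.standard_normal(60)
x = tv_denoise_smoothed(y)
```

The denoised `x` keeps the jump between the two plateaus while flattening the noise, which is exactly why TV is a natural prior for images with sharp edges.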
Robust sparse analysis regularization
IEEE Transactions on Information Theory
Cited by 38 (15 self)
Abstract: This work studies some properties of ℓ1 analysis regularization for the resolution of linear inverse problems. Analysis regularization minimizes the ℓ1 norm of the correlations between the signal and the atoms in the dictionary. The corresponding variational problem includes several well-known regularizations such as the discrete total variation and the fused lasso. We give sufficient conditions under which analysis regularization is robust to noise.

Analysis versus synthesis. Regularization through variational analysis is a popular way to compute an approximation of x0 ∈ R^N from the measurements y ∈ R^Q defined by an inverse problem y = Φ x0 + w, where w is some additive noise and Φ is a linear operator, for instance a super-resolution or an inpainting operator. A dictionary D of P atoms in R^N is used to synthesize a signal x = Dα; common examples of dictionaries in signal processing include the wavelet transform and the finite-difference operator. Synthesis regularization corresponds to the minimization problem min_α (1/2) ||y − Ψα||² + λ ||α||_1, where Ψ = Φ D and x = Dα; the properties of this synthesis prior have been studied intensively. Analysis regularization corresponds instead to the minimization problem min_x (1/2) ||y − Φx||² + λ ||D* x||_1. In the noiseless case, w = 0, one uses the constrained optimization min_{x ∈ R^N} ||D* x||_1 subject to Φx = y. This prior has been studied less than the synthesis prior.

Union of subspaces model. It is natural to keep track of the support of the correlation vector D* x, as done in the following definition. A signal x such that D* x is sparse lives in a cospace G_J of small dimension, where G_J is defined as follows. Definition 2. Given a dictionary D and a subset J of {1, ..., P}, the cospace is G_J = Ker(D_J*), where D_J is the subdictionary whose columns are indexed by J. The signal space can thus be decomposed as a union of subspaces Θ_k of increasing dimensions, where Θ_k collects the cospaces G_J of dimension k. For the 1-D total variation prior, Θ_k is the set of piecewise-constant signals with k − 1 steps.
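The cospace construction in Definition 2 is easy to make concrete for the 1-D total variation prior mentioned at the end: D* is the finite-difference operator, the cosupport J collects the rows orthogonal to x, and a piecewise-constant signal with k − 1 steps lives in a k-dimensional cospace G_J = Ker(D_J*). A small numpy check (our own illustration, not code from the paper):

```python
import numpy as np

n = 8
# D* for the 1-D total-variation prior: forward finite differences,
# (D* x)_j = x[j+1] - x[j], giving P = n - 1 rows.
Dstar = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]

# A piecewise-constant signal with 3 pieces (2 steps) => D* x has 2 nonzeros.
x = np.array([1., 1., 1., 4., 4., -2., -2., -2.])
corr = Dstar @ x
J = np.flatnonzero(np.abs(corr) < 1e-12)     # cosupport: rows orthogonal to x

# Cospace G_J = Ker(D*_J): dimension n minus the rank of the subdictionary rows.
G_dim = n - np.linalg.matrix_rank(Dstar[J])
```

Here the cosparsity |J| is 5 and G_dim comes out to 3, matching the claim that a signal with k − 1 = 2 steps lies in Θ_k for k = 3.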
Analysis K-SVD: A Dictionary-Learning Algorithm for the Analysis Sparse Model
, 2012
Cited by 21 (6 self)
Abstract: The synthesis-based sparse representation model for signals has drawn considerable interest in the past decade. Such a model assumes that the signal of interest can be decomposed as a linear combination of a few atoms from a given dictionary. In this paper we concentrate on an alternative, analysis-based model, in which an analysis operator, hereafter referred to as the analysis dictionary, multiplies the signal, leading to a sparse outcome. Our goal is to learn the analysis dictionary from a set of examples. The approach taken is parallel and similar to the one adopted by the K-SVD algorithm that serves the corresponding problem in the synthesis model. We present the development of the algorithm steps, including tailored pursuit algorithms (the Backward Greedy and the Optimized Backward Greedy algorithms) and a penalty function that defines the objective for the dictionary-update stage. We demonstrate the effectiveness of the proposed dictionary learning in several experiments, treating synthetic data and real images, and showing a successful and meaningful recovery of the analysis dictionary.
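The Backward Greedy pursuit mentioned in the abstract can be caricatured as: repeatedly add to the cosupport the analysis row least correlated with the current estimate, then re-project the signal onto the corresponding cospace. The sketch below is a simplification under our own assumptions (fixed target cosparsity, no stopping rule, no Optimized variant), not the authors' exact algorithm.

```python
import numpy as np

def backward_greedy(Omega, x, cosparsity):
    """Simplified backward-greedy analysis pursuit: grow a cosupport J by
    adding the row of Omega least correlated with the current estimate,
    then project the input onto Ker(Omega_J)."""
    J = []
    x_hat = x.copy()
    for _ in range(cosparsity):
        corr = np.abs(Omega @ x_hat)
        corr[J] = np.inf                    # never pick the same row twice
        J.append(int(np.argmin(corr)))
        # Projection onto Ker(Omega_J) via the null-space basis from the SVD.
        _, s, Vt = np.linalg.svd(Omega[J], full_matrices=True)
        rank = int((s > 1e-10).sum())
        N = Vt[rank:].T                     # columns span Ker(Omega_J)
        x_hat = N @ (N.T @ x)
    return x_hat, sorted(J)

# Usage: finite differences as the analysis dictionary, a piecewise-constant signal.
Omega = np.eye(8, k=1)[:-1] - np.eye(8)[:-1]
x = np.array([1., 1., 1., 4., 4., -2., -2., -2.])
x_hat, J = backward_greedy(Omega, x, cosparsity=5)
```

On a clean signal whose true cosparsity matches the target, the pursuit recovers the correct cosupport and leaves the signal unchanged.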
Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling
Cited by 12 (1 self)
Abstract: We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary that provides good approximations of the training samples using sparse synthesis coefficients. This famous sparse model has a less well-known counterpart, in analysis form, called the cosparse analysis model. In this model, signals are characterised by their parsimony in a transformed domain, using an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimisation framework based on ℓ1 optimisation; the constraint is introduced to exclude trivial solutions. Although there is no definitive answer as to which constraint is the most relevant, we investigate some constraints conventional in the model adaptation field and use the uniformly normalised tight frame (UNTF) for this purpose. We then derive a practical learning algorithm, based on projected subgradients and the Douglas-Rachford splitting technique, and demonstrate its ability to robustly recover a ground-truth analysis operator when provided with a clean training set of sufficient size. We also find an analysis operator for images using noisy cosparse signals, which is a more realistic experiment. As the derived optimisation problem is not a convex program, such variational methods often find a local minimum. For two different settings, we provide preliminary theoretical support for the well-posedness of the learning problem, which can be used in practice to test the local identifiability conditions of learnt operators.
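The projected-subgradient part of such a learning scheme alternates a subgradient step on the ℓ1 objective with a projection onto the constraint set. For brevity, the sketch below projects only onto unit-norm rows, i.e. the "uniformly normalised" half of the UNTF constraint; the tight-frame half is omitted, so this is an assumption-laden simplification of the idea, not the paper's algorithm.

```python
import numpy as np

def project_unit_rows(Omega):
    """Project each row of the analysis operator onto the unit sphere
    (the tight-frame part of the UNTF constraint is omitted here)."""
    norms = np.linalg.norm(Omega, axis=1, keepdims=True)
    return Omega / np.maximum(norms, 1e-12)

def learning_step(Omega, X, step=0.01):
    """One projected-subgradient step on sum_i ||Omega x_i||_1, where the
    columns of X are the training signals."""
    G = np.sign(Omega @ X) @ X.T            # subgradient of the l1 objective
    return project_unit_rows(Omega - step * G)

# Usage: a random overcomplete operator (P = 12 rows, n = 8) and 50 signals.
rng = np.random.default_rng(0)
Omega = project_unit_rows(rng.standard_normal((12, 8)))
X = rng.standard_normal((8, 50))
for _ in range(10):
    Omega = learning_step(Omega, X)
```

Without the constraint, the subgradient steps would shrink Omega toward the trivial zero operator, which is exactly why the projection is needed.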
Local behavior of sparse analysis regularization: Applications to risk estimation
Applied and Computational Harmonic Analysis, 2013
Cited by 9 (5 self)
Abstract: In this paper we aim at recovering an unknown signal x0 from noisy measurements y = Φx0 + w, where Φ is an ill-conditioned or singular linear operator and w accounts for some noise. To regularize such an ill-posed inverse problem, we impose an analysis sparsity prior. More precisely, the recovery is cast as a convex optimization program whose objective is the sum of a quadratic data-fidelity term and a regularization term formed of the ℓ1 norm of the correlations between the sought-after signal and the atoms of a given (generally overcomplete) dictionary. The ℓ1 sparsity analysis prior is weighted by a regularization parameter λ > 0. We prove that any minimizer of this problem is a piecewise-affine function of the observations y and of the regularization parameter λ. As a by-product, we exploit these properties to obtain an objectively guided choice of λ. In particular, we develop an extension of the Generalized Stein Unbiased Risk Estimator (GSURE) and show that it is an unbiased and reliable estimator of an appropriately defined risk, which encompasses several special cases ...
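The "objectively guided choice of λ" can be made concrete in the simplest special case Φ = Id with a soft-thresholding estimator, where the GSURE idea reduces to the classical SURE formula for the soft-thresholder. This toy case is our own illustration of risk-based parameter selection, not the paper's general estimator.

```python
import numpy as np

def soft(y, lam):
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def sure_soft(y, lam, sigma):
    """Stein Unbiased Risk Estimate of E||soft(y, lam) - x0||^2 for
    y = x0 + N(0, sigma^2 I): residual + 2 sigma^2 * divergence - n sigma^2,
    where the divergence of the soft-thresholder is #{i : |y_i| > lam}."""
    n = y.size
    df = np.count_nonzero(np.abs(y) > lam)
    return np.minimum(y**2, lam**2).sum() + 2 * sigma**2 * df - n * sigma**2

# Usage: pick lambda by minimizing the estimated risk over a grid.
rng = np.random.default_rng(0)
x0 = np.zeros(100); x0[:5] = 5.0
sigma = 1.0
y = x0 + sigma * rng.standard_normal(100)
lams = np.linspace(0.1, 4.0, 40)
risks = [sure_soft(y, l, sigma) for l in lams]
lam_star = lams[int(np.argmin(risks))]      # data-driven choice of lambda
```

At λ = 0 the estimator is the identity, and the formula returns nσ², the risk of doing no denoising at all; minimizing over λ trades this off against bias.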
K-SVD Dictionary-Learning for the Analysis Sparse Model
Cited by 8 (0 self)
Abstract: The synthesis-based sparse representation model for signals has drawn considerable interest in the past decade. Such a model assumes that the signal of interest can be decomposed as a linear combination of a few atoms from a given dictionary. In this paper we concentrate on an alternative, analysis-based model, in which an analysis dictionary multiplies the signal, leading to a sparse outcome. Our goal is to learn the analysis dictionary from a set of signal examples, and the approach taken is parallel and similar to the one adopted by the K-SVD algorithm that serves the corresponding problem in the synthesis model. We present the development of the algorithm steps, which include two greedy tailored pursuit algorithms and a penalty function for the dictionary-update stage. We demonstrate its effectiveness in several experiments, showing a successful and meaningful recovery of the analysis dictionary.
Generalized approximate message passing for the cosparse analysis model
arXiv:1312.3968, 2013 (MATLAB code at http://www2.ece.ohiostate.edu/~schniter/GrAMPA)
Cited by 5 (3 self)
Abstract: In cosparse analysis compressive sensing (CS), one seeks to estimate a non-sparse signal vector from noisy sub-Nyquist linear measurements by exploiting the knowledge that a given linear transform of the signal is cosparse, i.e., has sufficiently many zeros. We propose a novel approach to cosparse analysis CS based on the generalized approximate message passing (GAMP) algorithm. Unlike other AMP-based approaches to this problem, ours works with a wide range of analysis operators and regularizers. In addition, we propose a novel ℓ0-like soft-thresholder based on MMSE denoising for a spike-and-slab distribution with an infinite-variance slab. Numerical experiments on synthetic and practical datasets demonstrate advantages over existing AMP-based, greedy, and reweighted-ℓ1 approaches. Index Terms: approximate message passing, belief propagation, compressed sensing.
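An ℓ0-like thresholder of this flavor can be illustrated by the MMSE denoiser for a spike-and-slab prior whose slab is flat (the infinite-variance limit): the posterior weight of the slab multiplies the observation, shrinking small inputs sharply toward zero while leaving large inputs almost untouched. The slab constant c below is a free parameter of this sketch, and the formula is our reading of that construction, not necessarily the paper's exact rule.

```python
import numpy as np

def l0_like_thresh(y, sigma, gamma=0.5, c=1.0):
    """MMSE denoiser for x ~ (1-gamma) delta_0 + gamma * (flat slab),
    observed through Gaussian noise of std sigma. Under a flat slab the
    posterior mean given 'slab' is y itself, so the estimate is y weighted
    by the posterior probability of the slab. c stands in for the improper
    slab density and is a tuning parameter of this sketch."""
    spike = (1 - gamma) * np.exp(-y**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    slab = gamma * c
    w = slab / (slab + spike)         # posterior probability that x != 0
    return w * y

def soft(y, lam):
    """Classical soft-thresholder, for comparison: shifts large inputs by lam."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

out = l0_like_thresh(np.array([0.0, 0.05, 100.0]), sigma=1.0)
```

Unlike `soft`, which biases every surviving coefficient by λ, the spike-and-slab weighting leaves large coefficients essentially unbiased, which is the "ℓ0-like" behavior the abstract refers to.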
A joint intensity and depth cosparse analysis model for depth map super-resolution
In: Proceedings of the IEEE International Conference on Computer Vision, ICCV
, 2013
Cited by 4 (2 self)
Abstract: High-resolution depth maps can be inferred from low-resolution depth measurements and an additional high-resolution intensity image of the same scene. To that end, we introduce a bimodal cosparse analysis model able to capture the interdependency of registered intensity and depth information. The model is based on the assumption that the cosupports of corresponding bimodal image structures are aligned when computed by a suitable pair of analysis operators. No analytic form of such operators exists, and we propose a method for learning them from a set of registered training signals. This learning process is done offline and returns a bimodal analysis operator that is universally applicable to natural scenes. We use the bimodal cosparse analysis model as a prior for solving inverse problems, which leads to an efficient algorithm for depth map super-resolution.