Results 1–10 of 53
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
, 2007
Abstract

Cited by 423 (37 self)
A full-rank matrix A ∈ ℝ^{n×m} with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena; in particular, the existence of easily verifiable conditions under which optimally sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energize research on such signal and image processing problems – to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical ...
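One of the "concrete, effective computational methods" the abstract alludes to is basis pursuit: replacing the combinatorial sparsity objective with the ℓ1 norm turns the search for a sparse solution of Ax = b into a linear program. A minimal sketch, assuming numpy and scipy are available; the function name `basis_pursuit` and the toy problem sizes are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 s.t. Ax = b by splitting x = u - v with u, v >= 0,
    which makes the objective linear: minimize sum(u) + sum(v)."""
    n, m = A.shape
    c = np.ones(2 * m)                # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])         # enforces A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    u, v = res.x[:m], res.x[m:]
    return u - v

# Toy underdetermined system: 20 equations, 50 unknowns, 3-sparse solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = basis_pursuit(A, b)
print(np.allclose(x_hat, x_true, atol=1e-4))
```

With these dimensions the ℓ1 minimizer typically coincides with the sparsest solution, illustrating the equivalence phenomena the paper surveys.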
Image denoising via learned dictionaries and sparse representation
 In CVPR
, 2006
Abstract

Cited by 70 (7 self)
We address the image denoising problem, where zero-mean, white, homogeneous Gaussian additive noise must be removed from a given image. The approach taken is based on sparse and redundant representations over a trained dictionary. The proposed algorithm denoises the image while simultaneously training a dictionary on its (corrupted) content using the K-SVD algorithm. As the dictionary training algorithm is limited to handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches at every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm with state-of-the-art performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.
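The sparse coding step over a trained dictionary is typically carried out with a greedy pursuit such as Orthogonal Matching Pursuit, which K-SVD uses internally. A rough, self-contained sketch of denoising a single flattened patch (numpy only; the dictionary here is random rather than K-SVD-trained, and the names and sizes are illustrative assumptions):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms of D to fit y,
    re-solving the least-squares fit on the chosen support each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

# Denoise one 8x8 patch (flattened to 64-dim) with an overcomplete dictionary.
rng = np.random.default_rng(1)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
clean = D[:, [5, 40]] @ np.array([3.0, -2.0])
noisy = clean + 0.05 * rng.standard_normal(64)
denoised = D @ omp(D, noisy, k=2)
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))
```

Reconstructing each patch from a few atoms suppresses the noise, which cannot be represented sparsely over the dictionary.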
Image sequence denoising via sparse and redundant representations, submitted to the
 IEEE Trans. on Image Processing
, 2007
Abstract

Cited by 47 (9 self)
Image sequences, such as old archive movies, webcam signals and TV broadcasts, are often corrupted by additive noise that can be assumed white and Gaussian. Relative to single-image denoising techniques, denoising of sequences aims to also utilize the temporal dimension. This assists in getting both faster algorithms and better output quality. This paper focuses on utilizing sparse and redundant representations for image sequence denoising, extending the work reported in [1, 2]. In the single-image setting, the K-SVD algorithm is used to train a sparsifying dictionary for the corrupted image. This paper generalizes the above algorithm by offering several extensions: (i) the atoms used are three-dimensional; (ii) the dictionary is propagated from one frame to the next, reducing the number of required iterations; and (iii) averaging is done on patches in both spatial and temporal neighboring locations. These modifications lead to substantial benefits in complexity and denoising performance, compared to simply running the single-image algorithm sequentially. The state-of-the-art denoising performance exhibited by the proposed method does not rely at all on explicit motion estimation, which makes the overall algorithm simpler and clearer.
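Extension (i), three-dimensional atoms, amounts to sparse-coding spatio-temporal patches instead of 2-D ones. A minimal sketch of collecting such patches from a video volume (numpy; the function name and the sizes are illustrative, not the paper's):

```python
import numpy as np

def spatiotemporal_patches(frames, p, t):
    """Collect every t-frame, p x p spatio-temporal patch from a video array
    of shape (T, H, W) and return the flattened patches as columns."""
    T, H, W = frames.shape
    cols = [frames[k:k + t, i:i + p, j:j + p].ravel()
            for k in range(T - t + 1)
            for i in range(H - p + 1)
            for j in range(W - p + 1)]
    return np.array(cols).T

video = np.zeros((5, 12, 12))             # 5 frames of a 12x12 clip
P = spatiotemporal_patches(video, p=4, t=3)
print(P.shape)                            # (48, 243): 3*4*4-dim patches
```

Each column is then sparse-coded exactly as a 2-D patch would be, with the temporal extent baked into the atoms.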
Exact Recovery of Sparsely-Used Dictionaries
 25TH ANNUAL CONFERENCE ON LEARNING THEORY
, 2012
Abstract

Cited by 37 (2 self)
We consider the problem of learning sparsely used dictionaries with an arbitrary square dictionary and a random, sparse coefficient matrix. We prove that O(n log n) samples are sufficient to uniquely determine the coefficient matrix. Based on this proof, we design a polynomial-time algorithm, called Exact Recovery of Sparsely-Used Dictionaries (ER-SpUD), and prove that it probably recovers the dictionary and coefficient matrix when the coefficient matrix is sufficiently sparse. Simulation results show that ER-SpUD recovers the true dictionary as well as the coefficients with higher probability than many state-of-the-art algorithms.
Example-based regularization deployed to super-resolution reconstruction of a single image. The Computer Journal
, 2007
Abstract

Cited by 30 (4 self)
In super-resolution (SR) reconstruction of images, regularization becomes crucial when an insufficient number of measured low-resolution images is supplied. Beyond making the problem algebraically well posed, a properly chosen regularization can direct the solution toward a better-quality outcome. Even the extreme case, an SR reconstruction from a single measured image, can be made successful with a well-chosen regularization. Much of the progress made in the past two decades on inverse problems in image processing can be attributed to advances in forming or choosing the way to practice the regularization. A Bayesian point of view interprets this as a way of including the prior distribution of images, which sheds some light on the complications involved. This paper reviews an emerging, powerful family of regularization techniques that has drawn attention in recent years: the example-based approach. We describe how examples can be and have been used effectively for regularization of inverse problems, reviewing the main contributions along these lines in the literature and organizing this information into major trends and directions. A description of the state of the art in this field, along with supporting simulation results on the image scale-up problem, is given. The paper concludes with an outline of the outstanding challenges this field faces today.
Dictionary Identification – Sparse Matrix Factorisation via ℓ1-Minimisation
, 2009
Abstract

Cited by 29 (4 self)
This article treats the problem of learning a dictionary providing sparse representations for a given signal class via ℓ1-minimisation. The problem can also be seen as factorising a d×N matrix Y = (y1 ... yN), yn ∈ ℝ^d, of training signals into a d×K dictionary matrix Φ and a K×N coefficient matrix X = (x1 ... xN), xn ∈ ℝ^K, which is sparse. The exact question studied here is when a dictionary-coefficient pair (Φ, X) can be recovered as a local minimum of a (nonconvex) ℓ1-criterion with input Y = ΦX. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialised to the case when the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples N grows, up to a logarithmic factor, only linearly with the signal dimension, i.e. N ≈ CK log K, in contrast to previous approaches requiring combinatorially many samples.
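In the basis case discussed above, the sparse coefficients are simply X = Φ^{-1}Y, so the ℓ1-criterion evaluated at a basis B reduces to ||B^{-1}Y||_1. A toy check (numpy; the dimensions and sparsity level are illustrative assumptions) that, under a Bernoulli-Gaussian sparse model, the generating orthonormal basis scores lower than an unrelated one:

```python
import numpy as np

rng = np.random.default_rng(2)
d, N, p = 8, 200, 0.2                     # dimension, samples, sparsity level

# Bernoulli-Gaussian model: each coefficient is nonzero with probability p.
X = rng.standard_normal((d, N)) * (rng.random((d, N)) < p)
Phi = np.linalg.qr(rng.standard_normal((d, d)))[0]   # orthonormal dictionary
Y = Phi @ X

def l1_criterion(B, Y):
    """For a basis B, min ||X||_1 s.t. BX = Y is simply ||B^{-1} Y||_1."""
    return np.abs(np.linalg.solve(B, Y)).sum()

Q = np.linalg.qr(rng.standard_normal((d, d)))[0]     # an unrelated basis
print(l1_criterion(Phi, Y) < l1_criterion(Q, Y))
```

Rotating the sparse coefficients into a wrong basis spreads their mass across entries, inflating the ℓ1 value, which is the intuition behind local identifiability.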
Sparse and redundant modeling of image content using an image-signature-dictionary
, 2007
Abstract

Cited by 26 (2 self)
Modeling signals by sparse and redundant representations has been drawing considerable attention in recent years. Coupled with the ability to train the dictionary using signal examples, these techniques have been shown to lead to state-of-the-art results in a series of recent applications. In this paper we propose a novel structure for such a model for representing image content. The new dictionary is itself a small image, such that every patch in it (at varying locations and sizes) is a possible atom in the representation. We refer to this as the Image-Signature-Dictionary (ISD), and show how it can be trained from image examples. This novel structure enjoys several important features, such as shift and scale flexibility, and smaller memory and computational requirements, compared to the classical dictionary approach. As a demonstration of these benefits, we present high-quality image denoising results based on this new model.
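The defining feature of the ISD is that every patch of one small signature image serves as an atom, so the whole dictionary is stored as a single image. A minimal sketch of enumerating such atoms (numpy; the cyclic indexing, the 16×16 signature size and 8×8 patch size are illustrative assumptions, and the scale flexibility is omitted):

```python
import numpy as np

def signature_atoms(sig, patch):
    """Extract every patch x patch block of the signature image (cyclically,
    so every pixel is some patch's top-left corner) and stack the flattened,
    unit-normalized patches as dictionary atoms."""
    s = sig.shape[0]
    atoms = []
    for i in range(s):
        for j in range(s):
            block = np.roll(np.roll(sig, -i, axis=0),
                            -j, axis=1)[:patch, :patch]
            atoms.append(block.ravel())
    D = np.array(atoms, dtype=float).T
    return D / np.linalg.norm(D, axis=0)

rng = np.random.default_rng(3)
sig = rng.random((16, 16))                # a tiny stand-in "signature" image
D = signature_atoms(sig, patch=8)
print(D.shape)                            # (64, 256): one atom per pixel
```

A 16×16 signature thus yields 256 overlapping 64-dimensional atoms while storing only 256 pixels, which is the memory saving the abstract points to.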
Optimal nonlinear models for sparsity and sampling
 JOURNAL OF FOURIER ANALYSIS AND APPLICATIONS, SPECIAL ISSUE ON COMPRESSED SAMPLING
, 2008
Abstract

Cited by 16 (5 self)
Given a set of vectors (the data) in a Hilbert space H, we prove the existence of an optimal collection of subspaces minimizing the sum of the squared distances between each vector and its closest subspace in the collection. This collection of subspaces gives the best sparse representation for the given data, in a sense defined in the paper, and provides an optimal model for sampling in unions of subspaces. The results are proved in a general setting and then applied to the case of low-dimensional subspaces of ℝ^N and to infinite-dimensional shift-invariant spaces in L²(ℝ^d). We also present an iterative search algorithm for finding the solution subspaces. These results are tightly connected to the newly emergent theories of compressed sensing and dictionary design, signal models for signals with finite rate of innovation, and the subspace segmentation problem.
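In the finite-dimensional case, an iterative search of the kind the abstract mentions can alternate between assigning each data vector to its nearest subspace and refitting each subspace by PCA on its assigned vectors; this alternation is commonly known as K-subspaces. A hedged sketch under that interpretation (numpy; not necessarily the paper's exact algorithm, and all parameters are illustrative):

```python
import numpy as np

def k_subspaces(Y, K, dim, iters=30, seed=0):
    """Alternate: assign each column of Y to the subspace with the smallest
    projection residual, then refit each subspace's orthonormal basis via SVD."""
    rng = np.random.default_rng(seed)
    d, N = Y.shape
    bases = [np.linalg.qr(rng.standard_normal((d, dim)))[0] for _ in range(K)]
    for _ in range(iters):
        resid = np.stack([np.linalg.norm(Y - B @ (B.T @ Y), axis=0)
                          for B in bases])
        labels = resid.argmin(axis=0)
        for k in range(K):
            Yk = Y[:, labels == k]
            if Yk.shape[1] >= dim:
                U, _, _ = np.linalg.svd(Yk, full_matrices=False)
                bases[k] = U[:, :dim]
    return bases, labels

# Data drawn from two 1-D subspaces (lines) in R^5, plus mild noise.
rng = np.random.default_rng(4)
u, v = rng.standard_normal(5), rng.standard_normal(5)
Y = np.hstack([np.outer(u, rng.standard_normal(40)),
               np.outer(v, rng.standard_normal(40))])
Y += 0.01 * rng.standard_normal(Y.shape)
bases, labels = k_subspaces(Y, K=2, dim=1)
print(labels)
```

As with K-means, each iteration does not increase the sum of squared residuals, but the algorithm may stop at a local minimum; the paper's existence result concerns the global optimum.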
Closed-Form MMSE Estimation for Signal Denoising Under Sparse Representation Modeling Over a Unitary Dictionary
Abstract

Cited by 15 (1 self)
This paper deals with the Bayesian signal denoising problem, assuming a prior based on sparse representation modeling over a unitary dictionary. It is well known that the Maximum A-posteriori Probability (MAP) estimator in such a case has a closed-form solution based on simple shrinkage. The focus in this paper is on the better-performing and less familiar Minimum-Mean-Squared-Error (MMSE) estimator. We show that this estimator also leads to a simple formula, in the form of a plain recursive expression for evaluating the contribution of every atom in the solution. An extension of the model to real-world signals is also offered, considering heteroscedastic nonzero entries in the representation and allowing varying probabilities for the chosen atoms and the overall cardinality of the sparse representation. The MAP and MMSE estimators are redeveloped for this extended model, again resulting in closed-form, simple algorithms. Finally, the superiority of the MMSE estimator is demonstrated both on synthetically generated signals and on real-world signals (image patches).
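The "simple shrinkage" form of the MAP estimator under a unitary dictionary means the transform coefficients can be shrunk independently. The sketch below uses a hard threshold (matching an ℓ0-style prior) as one such shrinkage rule; this is an illustration of the shrinkage structure only, not the paper's MMSE recursion, and the threshold and sizes are assumed values (numpy only):

```python
import numpy as np

def unitary_denoise(y, U, lam):
    """Shrinkage denoising over a unitary dictionary U: transform to the
    coefficient domain, shrink each coefficient independently, transform back."""
    c = U.T @ y                               # analysis coefficients
    shrunk = c * (np.abs(c) > lam)            # hard-threshold shrinkage rule
    return U @ shrunk

rng = np.random.default_rng(5)
n = 32
U = np.linalg.qr(rng.standard_normal((n, n)))[0]   # random unitary dictionary
x = np.zeros(n)
x[[2, 9]] = [4.0, -3.0]                            # sparse representation
clean = U @ x
noisy = clean + 0.1 * rng.standard_normal(n)
denoised = unitary_denoise(noisy, U, lam=0.5)
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))
```

Because U is unitary, the noise stays white in the coefficient domain, which is exactly why the estimator decouples coefficient by coefficient.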
Blind compressed sensing
 IEEE TRANS. INF. THEORY
, 2011
Abstract

Cited by 15 (3 self)
The fundamental principle underlying compressed sensing is that a signal that is sparse under some basis representation can be recovered from a small number of linear measurements. However, prior knowledge of the sparsity basis is essential for the recovery process. This work introduces the concept of blind compressed sensing, which avoids the need to know the sparsity basis in both the sampling and the recovery process. We suggest three possible constraints on the sparsity basis that can be added to the problem in order to guarantee a unique solution. For each constraint, we prove conditions for uniqueness and suggest a simple method to retrieve the solution. We demonstrate through simulations that our methods can achieve results similar to those of standard compressed sensing, which relies on prior knowledge of the sparsity basis, as long as the signals are sparse enough. This offers a general sampling and reconstruction system that fits all sparse signals, regardless of the sparsity basis, under the conditions and constraints presented in this work.