Results 11 – 20 of 427
Double Sparsity: Learning Sparse Dictionaries for Sparse Signal Approximation
"... An efficient and flexible dictionary structure is proposed for sparse and redundant signal representation. The proposed sparse dictionary is based on a sparsity model of the dictionary atoms over a base dictionary, and takes the form D = ΦA where Φ is a fixed base dictionary and A is sparse. The spa ..."
Abstract

Cited by 65 (3 self)
 Add to MetaCart
An efficient and flexible dictionary structure is proposed for sparse and redundant signal representation. The proposed sparse dictionary is based on a sparsity model of the dictionary atoms over a base dictionary, and takes the form D = ΦA, where Φ is a fixed base dictionary and A is sparse. The sparse dictionary provides efficient forward and adjoint operators, has a compact representation, and can be effectively trained from given example data. In this respect, the sparse structure bridges the gap between implicit dictionaries, which have efficient implementations yet lack adaptability, and explicit dictionaries, which are fully adaptable but inefficient and costly to deploy. In this paper we discuss the advantages of sparse dictionaries, and present an efficient algorithm for training them. We demonstrate the advantages of the proposed structure for 3-D image denoising.
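The practical appeal of the D = ΦA structure is that applying D never requires the dense matrix: one fast base transform plus one sparse multiply suffices. A minimal sketch, assuming an orthonormal DCT as the fixed base dictionary Φ and a random sparse A (both illustrative choices, not the paper's trained dictionary):

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.sparse import random as sparse_random

n, m = 64, 128                                        # signal dim, number of atoms
A = sparse_random(n, m, density=0.1, format="csc", random_state=0)

def D_forward(x):
    # D x = Phi (A x): sparse multiply, then one fast inverse DCT (the base Phi).
    return idct(A @ x, norm="ortho")

def D_adjoint(y):
    # D^T y = A^T (Phi^T y): fast forward DCT, then sparse-transpose multiply.
    return A.T @ dct(y, norm="ortho")

rng = np.random.default_rng(0)
x, y = rng.standard_normal(m), rng.standard_normal(n)
print(np.allclose(D_forward(x) @ y, x @ D_adjoint(y)))  # adjoint identity holds
```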
Alternating minimization and projection methods for nonconvex problems
 arXiv:0801.1780v2 [math.OC]
, 2008
"... Abstract We study the convergence properties of alternating proximal minimization algorithms for (nonconvex) functions of the following type: L(x,y) = f(x) + Q(x,y) + g(y) where f: R n → R∪{+∞} and g: R m → R∪{+∞} are proper lower semicontinuous functions and Q: R n ×R m → R is a smooth C 1 (finite ..."
Abstract

Cited by 62 (2 self)
 Add to MetaCart
We study the convergence properties of alternating proximal minimization algorithms for (nonconvex) functions of the following type: L(x, y) = f(x) + Q(x, y) + g(y), where f: ℝ^n → ℝ ∪ {+∞} and g: ℝ^m → ℝ ∪ {+∞} are proper lower semicontinuous functions and Q: ℝ^n × ℝ^m → ℝ is a smooth C¹ (finite-valued) function which couples the variables x and y. The algorithm is defined by: (x_0, y_0) ∈ ℝ^n × ℝ^m given, (x_k, y_k) → (x_{k+1}, y_k) → (x_{k+1}, y_{k+1}).
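To make the scheme concrete, the sketch below instantiates f and g as ℓ1 penalties and Q(x, y) = ½‖By − x − c‖² with illustrative B and c, and replaces each exact subproblem minimization by a single forward-backward (proximal-gradient) pass; this is a simplification in the spirit of the paper's alternating proximal algorithm, not a faithful implementation:

```python
import numpy as np

def soft(v, t):                          # prox of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
n, m = 20, 15
B = rng.standard_normal((n, m))
c = rng.standard_normal(n)               # data term inside the coupling Q
lam = 0.1
step = 1.0 / np.linalg.norm(B, 2) ** 2   # safe step for both partial gradients

x, y = np.zeros(n), np.zeros(m)
for _ in range(300):
    # x-update with y frozen: one forward-backward pass on f(x) + Q(x, y).
    x = soft(x - step * (x + c - B @ y), step * lam)
    # y-update with the new x frozen: one forward-backward pass on g(y) + Q(x, y).
    y = soft(y - step * (B.T @ (B @ y - x - c)), step * lam)

print("objective:", 0.5 * np.linalg.norm(B @ y - x - c) ** 2
      + lam * (np.abs(x).sum() + np.abs(y).sum()))
```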
Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization
 IEEE Trans. Image Process
, 2011
"... Abstract—As a powerful statistical image modeling technique, sparse representation has been successfully used in various image restoration applications. The success of sparse representation owes to the development of thenorm optimization techniques and the fact that natural images are intrinsically ..."
Abstract

Cited by 59 (11 self)
 Add to MetaCart
(Show Context)
As a powerful statistical image modeling technique, sparse representation has been successfully used in various image restoration applications. The success of sparse representation owes to the development of the ℓ1-norm optimization techniques and the fact that natural images are intrinsically sparse in some domains. The image restoration quality largely depends on whether the employed sparse domain can represent the underlying image well. Considering that the contents can vary significantly across different images or different patches in a single image, we propose to learn various sets of bases from a pre-collected dataset of example image patches, and then, for a given patch to be processed, one set of bases is adaptively selected to characterize the local sparse domain. We further introduce two adaptive regularization terms into the sparse representation framework. First, a set of autoregressive (AR) models are learned from the dataset of example image patches. The AR models best fitted to a given patch are adaptively selected to regularize the image local structures. Second, the image nonlocal self-similarity is introduced as another regularization term. In addition, the sparsity regularization parameter is adaptively estimated for better image restoration performance. Extensive experiments on image deblurring and super-resolution validate that, by using adaptive sparse domain selection and adaptive regularization, the proposed method achieves much better results than many state-of-the-art algorithms in terms of both PSNR and visual perception. Index Terms—Deblurring, image restoration (IR), regularization, sparse representation, super-resolution.
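The core selection step, picking one pre-learned basis per patch so that the patch is as sparse as possible in it, can be sketched as follows; the bases here are random orthonormal matrices standing in for the PCA sub-dictionaries the paper learns from clustered example patches, and the energy-based selection rule is an illustrative stand-in for the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
patch_dim, n_bases, keep = 64, 5, 8
# Random orthonormal bases stand in for the paper's learned PCA sub-dictionaries.
bases = [np.linalg.qr(rng.standard_normal((patch_dim, patch_dim)))[0]
         for _ in range(n_bases)]

def select_basis(patch):
    # Score each basis by how much energy its `keep` largest coefficients capture.
    scores = [np.sum(np.sort(np.abs(P.T @ patch))[-keep:] ** 2) for P in bases]
    return int(np.argmax(scores))

patch = rng.standard_normal(patch_dim)
print("selected basis:", select_basis(patch))
```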
Compressed sensing: how sharp is the restricted isometry property?
, 2009
"... Compressed sensing is a recent technique by which signals can be measured at a rate proportional to their information content, combining the important task of compression directly into the measurement process. Since its introduction in 2004 there have been hundreds of manuscripts on compressed sens ..."
Abstract

Cited by 51 (7 self)
 Add to MetaCart
(Show Context)
Compressed sensing is a recent technique by which signals can be measured at a rate proportional to their information content, combining the important task of compression directly into the measurement process. Since its introduction in 2004 there have been hundreds of manuscripts on compressed sensing, a large fraction of which have focused on the design and analysis of algorithms to recover a signal from its compressed measurements. The Restricted Isometry Property (RIP) has become a ubiquitous property assumed in their analysis. We present the best known bounds on the RIP, and in the process illustrate the way in which the combinatorial nature of compressed sensing is controlled. Our quantitative bounds on the RIP allow precise statements as to how aggressively a signal can be undersampled, the essential question for practitioners.
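Since checking the RIP exactly requires examining every size-k support, a simple experiment can only lower-bound the constant δ_k by sampling supports; the sketch below does this for a Gaussian matrix and is an illustration of the definition, not of the paper's analytic bounds:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k, trials = 40, 100, 5, 2000
A = rng.standard_normal((m, n)) / np.sqrt(m)   # typical CS normalization

delta_lb = 0.0
for _ in range(trials):
    S = rng.choice(n, size=k, replace=False)
    # Extreme singular values of the submatrix bound (1 - delta) and (1 + delta).
    s = np.linalg.svd(A[:, S], compute_uv=False)
    delta_lb = max(delta_lb, abs(s[0] ** 2 - 1), abs(s[-1] ** 2 - 1))
print(f"Monte-Carlo lower bound on delta_{k}: {delta_lb:.3f}")
```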
Sparse unmixing of hyperspectral data
 IEEE Transactions on Geoscience and Remote Sensing
, 2011
"... Linear spectral unmixing is a popular tool in remotely sensed hyperspectral data interpretation. It aims at estimating the fractional abundances of pure spectral signatures (also called as endmembers) in each mixed pixel collected by an imaging spectrometer. In many situations, the identification o ..."
Abstract

Cited by 51 (15 self)
 Add to MetaCart
Linear spectral unmixing is a popular tool in remotely sensed hyperspectral data interpretation. It aims at estimating the fractional abundances of pure spectral signatures (also called endmembers) in each mixed pixel collected by an imaging spectrometer. In many situations, the identification of the endmember signatures in the original data set may be challenging due to insufficient spatial resolution, mixtures happening at different scales, and unavailability of completely pure spectral signatures in the scene. However, the unmixing problem can also be approached in a semi-supervised fashion, i.e., by assuming that the observed image signatures can be expressed in the form of linear combinations of a number of pure spectral signatures known in advance (e.g., spectra collected on the ground by a field spectroradiometer). Unmixing then amounts to finding the optimal subset of signatures in a (potentially very large) spectral library that can best model each mixed pixel in the scene.
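As a toy version of this semi-supervised setting, the sketch below recovers nonnegative sparse abundances from a (random, illustrative) spectral library with a projected ISTA on min ½‖Ax − y‖² + λ‖x‖₁ subject to x ≥ 0; the paper's actual solver and any sum-to-one abundance constraint are omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
bands, library = 100, 300
A = rng.random((bands, library))                  # toy nonnegative spectral library
x_true = np.zeros(library)
x_true[rng.choice(library, 4, replace=False)] = rng.random(4)
y = A @ x_true

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(library)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    x = np.maximum(x - step * (grad + lam), 0.0)  # gradient step + prox on {x >= 0}
print("estimated support:", np.flatnonzero(x > 1e-3))
print("true support:     ", np.flatnonzero(x_true))
```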
Almost optimal unrestricted fast Johnson-Lindenstrauss transform
Cited by 48 (1 self)
The problems of random projections and sparse reconstruction have much in common and have individually received much attention. Surprisingly, until now they progressed in parallel and remained mostly separate. Here, we employ new tools from probability in Banach spaces that were successfully used in the context of sparse reconstruction to advance on an open problem in random projection. In particular, we generalize and use an intricate result by Rudelson and Vershynin for sparse reconstruction which uses Dudley’s theorem for bounding Gaussian processes. Our main result states that any set of N = exp(Õ(n)) real vectors in n-dimensional space can be linearly mapped to a space of dimension k = O(log N · polylog(n)), while (1) preserving the pairwise distances among the vectors to within any constant distortion and (2) being able to apply the transformation in time O(n log n) on each vector. This improves on the best known N = exp(Õ(n^{1/2})) achieved by Ailon and Liberty and N = exp(Õ(n^{1/3})) by Ailon and Chazelle. The dependence on the distortion constant, however, is believed to be suboptimal and subject to further investigation. For constant distortion, this settles the open question posed by these authors up to a polylog(n) factor while considerably simplifying their constructions.
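For orientation, here is a sketch of a classical fast JL-type map, the subsampled randomized Hadamard transform in the spirit of the earlier Ailon-Chazelle construction, rather than this paper's transform; dimensions are illustrative, and the dense Hadamard matrix stands in for a fast Walsh-Hadamard transform:

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(4)
n, k = 256, 64                          # n must be a power of two for hadamard()
D = rng.choice([-1.0, 1.0], size=n)     # random sign flips
H = hadamard(n) / np.sqrt(n)            # orthonormal Hadamard; a fast
                                        # Walsh-Hadamard transform would give the
                                        # O(n log n) per-vector runtime
rows = rng.choice(n, size=k, replace=False)

def fjlt(x):
    # y = sqrt(n/k) * S H D x: sign flip, mix with Hadamard, subsample rows.
    return np.sqrt(n / k) * (H @ (D * x))[rows]

u, v = rng.standard_normal(n), rng.standard_normal(n)
print("distance ratio:", np.linalg.norm(fjlt(u) - fjlt(v)) / np.linalg.norm(u - v))
```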
Dense error correction via ℓ1-minimization
 IEEE Trans. Inf. Theory
, 2010
"... We study the problem of recovering a nonnegative sparse signal x ∈ Rn from highly corrupted linear measurements y = Ax+e ∈ Rm, where e is an unknown (and unbounded) error. Motivated by an observation from computer vision, we prove that for highly correlated dictionaries A, any nonnegative, suffic ..."
Abstract

Cited by 45 (0 self)
 Add to MetaCart
(Show Context)
We study the problem of recovering a non-negative sparse signal x ∈ ℝ^n from highly corrupted linear measurements y = Ax + e ∈ ℝ^m, where e is an unknown (and unbounded) error. Motivated by an observation from computer vision, we prove that for highly correlated dictionaries A, any non-negative, sufficiently sparse signal x can be recovered by solving an ℓ1-minimization problem: min ‖x‖₁ + ‖e‖₁ subject to y = Ax + e. If the fraction ρ of errors is bounded away from one and the support of x grows sublinearly in the dimension m of the observation, then for large m, the above ℓ1-minimization recovers all sparse signals x from almost all sign-and-support patterns of e. This suggests that accurate and efficient recovery of sparse signals is possible even with nearly 100% of the observations corrupted. Index Terms—Error correction, signal representation, signal reconstruction
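With x ≥ 0 and e split into positive and negative parts, the program min ‖x‖₁ + ‖e‖₁ subject to y = Ax + e is a linear program; the sketch below feeds it to scipy's LP solver on a toy Gaussian dictionary (the paper's guarantees concern highly correlated dictionaries, which this example does not reproduce):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
m, n = 60, 30
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:3] = rng.random(3)                       # nonnegative sparse signal
e_true = np.zeros(m)
e_true[rng.choice(m, 10, replace=False)] = rng.standard_normal(10)  # gross errors
y = A @ x_true + e_true

# Variables z = [x, e_plus, e_minus], all >= 0; for x >= 0, ||x||_1 = sum(x).
c = np.ones(n + 2 * m)
A_eq = np.hstack([A, np.eye(m), -np.eye(m)])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```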
Sparse Recovery from Combined Fusion Frame Measurements
 IEEE Trans. Inform. Theory
"... Sparse representations have emerged as a powerful tool in signal and information processing, culminated by the success of new acquisition and processing techniques such as Compressed Sensing (CS). Fusion frames are very rich new signal representation methods that use collections of subspaces instead ..."
Abstract

Cited by 43 (12 self)
 Add to MetaCart
(Show Context)
Sparse representations have emerged as a powerful tool in signal and information processing, culminating in the success of new acquisition and processing techniques such as Compressed Sensing (CS). Fusion frames are very rich new signal representation methods that use collections of subspaces instead of vectors to represent signals. This work combines these exciting fields to introduce a new sparsity model for fusion frames. Signals that are sparse under the new model can be compressively sampled and uniquely reconstructed in ways similar to sparse signals using standard CS. The combination provides a promising new set of mathematical tools and signal models useful in a variety of applications. With the new model, a sparse signal has energy in very few of the subspaces of the fusion frame, although it does not need to be sparse within each of the subspaces it occupies. This sparsity model is captured using a mixed ℓ1/ℓ2 norm for fusion frames. A signal sparse in a fusion frame can be sampled using very few random projections and exactly reconstructed using a convex optimization that minimizes this mixed ℓ1/ℓ2 norm. The provided sampling conditions generalize coherence and RIP conditions used in standard CS theory. It is demonstrated that they are sufficient to guarantee sparse recovery of any signal sparse in our model. Moreover, an average-case analysis is provided using a probability model on the sparse signal that shows that, under very mild conditions, the probability of recovery failure decays exponentially with increasing dimension of the subspaces.
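The mixed ℓ1/ℓ2 recovery can be approximated with a group-ISTA in which equal-size coordinate blocks stand in for the fusion-frame subspaces; this simplified sketch solves min ½‖Ax − y‖² + λ Σ_g ‖x_g‖₂ and is not the paper's sampling or reconstruction setup:

```python
import numpy as np

rng = np.random.default_rng(6)
n_groups, gsize, m = 30, 4, 60
n = n_groups * gsize
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
for g in rng.choice(n_groups, 2, replace=False):   # 2 active "subspaces"
    x_true[g * gsize:(g + 1) * gsize] = rng.standard_normal(gsize)
y = A @ x_true

lam, step = 0.05, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    v = x - step * (A.T @ (A @ x - y))             # gradient step on the data fit
    for g in range(n_groups):                      # group soft-threshold (prox of
        blk = v[g * gsize:(g + 1) * gsize]         # the mixed l1/l2 penalty)
        shrink = max(0.0, 1 - step * lam / (np.linalg.norm(blk) + 1e-12))
        v[g * gsize:(g + 1) * gsize] = shrink * blk
    x = v
found = [g for g in range(n_groups)
         if np.linalg.norm(x[g * gsize:(g + 1) * gsize]) > 1e-2]
print("active groups found:", found)
```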
Coherence-Based Performance Guarantees for Estimating a Sparse Vector Under Random Noise
"... We consider the problem of estimating a deterministic sparse vector x0 from underdetermined measurements Ax0 + w, where w represents white Gaussian noise and A is a given deterministic dictionary. We analyze the performance of three sparse estimation algorithms: basis pursuit denoising (BPDN), orth ..."
Abstract

Cited by 43 (15 self)
 Add to MetaCart
(Show Context)
We consider the problem of estimating a deterministic sparse vector x₀ from underdetermined measurements Ax₀ + w, where w represents white Gaussian noise and A is a given deterministic dictionary. We analyze the performance of three sparse estimation algorithms: basis pursuit denoising (BPDN), orthogonal matching pursuit (OMP), and thresholding. These algorithms are shown to achieve near-oracle performance with high probability, assuming that x₀ is sufficiently sparse. Our results are non-asymptotic and are based only on the coherence of A, so that they are applicable to arbitrary dictionaries. Differences in the precise conditions required for the performance guarantees of each algorithm are manifested in the observed performance at high and low signal-to-noise ratios. This provides insight into the advantages and drawbacks of ℓ1 relaxation techniques such as BPDN as opposed to greedy approaches such as OMP and thresholding.
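Of the three algorithms analyzed, OMP is the easiest to state compactly; a minimal sketch on a toy normalized dictionary (the noise level and problem sizes are illustrative assumptions):

```python
import numpy as np

def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))      # atom best matching residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # re-fit on support
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(7)
m, n, k = 50, 120, 4
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                          # unit-norm atoms
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0 + 0.01 * rng.standard_normal(m)
print("support recovered:",
      set(np.flatnonzero(omp(A, y, k))) == set(np.flatnonzero(x0)))
```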