Results 1–10 of 167
Templates for Convex Cone Problems with Applications to Sparse Signal Recovery
, 2010
"... This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, app ..."
Abstract

Cited by 122 (6 self)
 Add to MetaCart
This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, apply smoothing; and fourth, solve using an optimal first-order method. A merit of this approach is its flexibility: for example, all compressed sensing problems can be solved via this approach. These include models with objective functionals such as the total-variation norm, ‖Wx‖1 where W is arbitrary, or a combination thereof. In addition, the paper introduces a number of technical contributions such as a novel continuation scheme, a novel approach for controlling the step size, and some new results showing that the smoothed and unsmoothed problems are sometimes formally equivalent. Combined with our framework, these lead to novel, stable and computationally efficient algorithms. For instance, our general implementation is competitive with state-of-the-art methods for solving intensively studied problems such as the LASSO. Further, numerical experiments show that one can solve the Dantzig selector problem, for which no efficient large-scale solvers exist, in a few hundred iterations. Finally, the paper is accompanied by a software release. This software is not a single, monolithic solver; rather, it is a suite of programs and routines designed to serve as building blocks for constructing complete algorithms. Keywords: optimal first-order methods, Nesterov’s accelerated descent algorithms, proximal algorithms, conic duality, smoothing by conjugation, the Dantzig selector, the LASSO, nuclear-norm minimization.
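The optimal first-order methods this abstract builds on can be illustrated on its own LASSO example. Below is a minimal accelerated proximal-gradient (FISTA) sketch for min ½‖Ax−b‖² + λ‖x‖₁; it is a generic textbook instance of such methods, not the paper's TFOCS release, and the function names are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (element-wise shrinkage).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=500):
    """Accelerated proximal gradient (FISTA) for the LASSO:
        minimize 0.5 * ||A x - b||^2 + lam * ||x||_1
    A minimal sketch of the optimal first-order methods the framework
    builds on -- not the paper's TFOCS implementation."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

The momentum sequence `t` is what upgrades the O(1/k) rate of plain proximal gradient to O(1/k²).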
The Cosparse Analysis Model and Algorithms
, 2011
"... After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations, and appealing applications. Alongside this approach, there is an analysis counterpart model, which, despite its similarity to ..."
Abstract

Cited by 66 (14 self)
 Add to MetaCart
After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations and appealing applications. Alongside this approach there is an analysis counterpart model, which, despite its similarity to the synthesis alternative, is markedly different. Surprisingly, the analysis model has not received similar attention, and its understanding today is shallow and partial. In this paper we take a closer look at the analysis approach, better define it as a generative model for signals, and contrast it with the synthesis one. This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance. We demonstrate the effectiveness of the analysis model in several experiments.
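The synthesis/analysis distinction this abstract draws can be made concrete in a few lines: synthesis sparsity counts the nonzeros of a representation x = Dα, while analysis cosparsity counts the zeros of Ωx. The dimensions and operators below are arbitrary illustrations, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthesis model: x = D @ alpha with alpha sparse (few columns of D used).
D = rng.standard_normal((20, 30))             # overcomplete dictionary
alpha = np.zeros(30)
alpha[[2, 11]] = 1.0                          # 2-sparse representation
x_syn = D @ alpha

# Analysis model: Omega @ x is sparse; the *cosparsity* is the number of
# zero entries, i.e. how many rows of Omega are orthogonal to x.
Omega = rng.standard_normal((30, 20))         # analysis operator
# Construct x orthogonal to the first 5 rows of Omega (cosparsity 5):
null_basis = np.linalg.svd(Omega[:5])[2][5:]  # basis of their null space
x_ana = null_basis.T @ rng.standard_normal(null_basis.shape[0])
cosparsity = int(np.sum(np.abs(Omega @ x_ana) < 1e-10))
```

In the synthesis model the signal lives in a union of low-dimensional column spans of D; in the analysis model it lives in a union of null spaces of row subsets of Ω, which is a genuinely different geometry.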
Stable image reconstruction using total variation minimization
 SIAM Journal on Imaging Sciences
, 2013
"... This article presents nearoptimal guarantees for accurate and robust image recovery from undersampled noisy measurements using total variation minimization, and our results may be the first of this kind. In particular, we show that from O(s log(N)) nonadaptive linear measurements, an image can be ..."
Abstract

Cited by 50 (2 self)
 Add to MetaCart
(Show Context)
This article presents near-optimal guarantees for accurate and robust image recovery from undersampled noisy measurements using total variation minimization, and our results may be the first of this kind. In particular, we show that from O(s log(N)) nonadaptive linear measurements, an image can be reconstructed to within the best s-term approximation of its gradient, up to a logarithmic factor. Along the way, we prove a strengthened Sobolev inequality for functions lying in the null space of a suitably incoherent matrix.
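The object being minimized here, the total-variation seminorm, is simply the ℓ1 norm of the discrete image gradient; the "best s-term approximation of its gradient" refers to keeping the s largest gradient entries. A minimal sketch of the anisotropic version:

```python
import numpy as np

def tv_norm(img):
    # Anisotropic total-variation seminorm: l1 norm of the discrete gradient.
    dx = np.diff(img, axis=1)   # horizontal finite differences
    dy = np.diff(img, axis=0)   # vertical finite differences
    return np.abs(dx).sum() + np.abs(dy).sum()
```

A piecewise-constant image has a sparse gradient, which is exactly why TV minimization is the natural regularizer in the guarantee above.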
Spectral Compressive Sensing
, 2010
"... Compressive sensing (CS) is a new approach to simultaneous sensing and compression of sparse and compressible signals. A great many applications feature smooth or modulated signals that can be modeled as a linear combination of a small number of sinusoids; such signals are sparse in the frequency do ..."
Abstract

Cited by 39 (5 self)
 Add to MetaCart
(Show Context)
Compressive sensing (CS) is a new approach to simultaneous sensing and compression of sparse and compressible signals. A great many applications feature smooth or modulated signals that can be modeled as a linear combination of a small number of sinusoids; such signals are sparse in the frequency domain. In practical applications, the standard frequency domain signal representation is the discrete Fourier transform (DFT). Unfortunately, the DFT coefficients of a frequency-sparse signal are themselves sparse only in the contrived case where the sinusoid frequencies are integer multiples of the DFT’s fundamental frequency. As a result, practical DFT-based CS acquisition and recovery of smooth signals does not perform nearly as well as one might expect. In this paper, we develop a new spectral compressive sensing (SCS) theory for general frequency-sparse signals. The key ingredients are an oversampled DFT frame, a signal model that inhibits closely spaced sinusoids, and classical sinusoid parameter estimation algorithms from the field of spectrum estimation. Using periodogram- and eigenanalysis-based spectrum estimates (e.g., MUSIC), our new SCS algorithms significantly outperform the current state-of-the-art CS algorithms while providing provable bounds on the number of measurements required for stable recovery.
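The "contrived case" the abstract mentions is easy to demonstrate: a sinusoid on a DFT bin has exactly two nonzero coefficients, while one falling half a bin off-grid leaks energy across the whole spectrum. The signal length and frequencies below are arbitrary illustrations.

```python
import numpy as np

N = 256
n = np.arange(N)

# On-grid sinusoid: frequency is an integer multiple of the DFT bin width.
on_grid = np.cos(2 * np.pi * 10 * n / N)
# Off-grid sinusoid: frequency falls between DFT bins -> spectral leakage.
off_grid = np.cos(2 * np.pi * 10.5 * n / N)

def dft_support(x, rel_tol=1e-3):
    # Count DFT coefficients above a small fraction of the peak magnitude.
    X = np.abs(np.fft.fft(x))
    return int(np.sum(X > rel_tol * X.max()))
```

This leakage is the failure mode that motivates replacing the orthonormal DFT basis with an oversampled DFT frame plus parametric spectrum estimation.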
Cosparse analysis modeling – uniqueness and algorithms
 in Proceedings of ICASSP
, 2010
"... In the past decade there has been a great interest in a synthesisbased model for signals, based on sparse and redundant representations. Such a model assumes that the signal of interest can be composed as a linear combination of few columns from a given matrix (the dictionary). An alternative analy ..."
Abstract

Cited by 25 (11 self)
 Add to MetaCart
(Show Context)
In the past decade there has been great interest in a synthesis-based model for signals, based on sparse and redundant representations. Such a model assumes that the signal of interest can be composed as a linear combination of a few columns from a given matrix (the dictionary). An alternative analysis-based model can be envisioned, where an analysis operator multiplies the signal, leading to a cosparse outcome. In this paper, we consider this analysis model in the context of a generic missing-data problem (e.g., compressed sensing, inpainting, source separation, etc.). Our work proposes a uniqueness result for the solution of this problem, based on properties of the analysis operator and the measurement matrix. This paper also considers two pursuit algorithms for solving the missing-data problem: an ℓ1-based method and a new greedy method. Our simulations demonstrate the appeal of the analysis model and the success of the pursuit techniques presented.
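The ℓ1-based pursuit mentioned here, min ‖Ωx‖₁ subject to Mx = y, is a linear program after the standard variable split. Below is a generic sketch of that formulation via `scipy.optimize.linprog`; the function name and solver choice are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def analysis_l1_pursuit(M, y, Omega):
    """Solve  min_x ||Omega x||_1  s.t.  M x = y  as a linear program.
    Variables are [x; t] with -t <= Omega x <= t; minimize sum(t).
    A generic l1-analysis pursuit sketch, not the authors' solver."""
    n = M.shape[1]
    p = Omega.shape[0]
    c = np.concatenate([np.zeros(n), np.ones(p)])
    # Encode  Omega x - t <= 0  and  -Omega x - t <= 0.
    A_ub = np.block([[Omega, -np.eye(p)], [-Omega, -np.eye(p)]])
    b_ub = np.zeros(2 * p)
    A_eq = np.hstack([M, np.zeros((M.shape[0], p))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * p)
    return res.x[:n]
```

Any LP solver applies; the point is that the analysis-ℓ1 objective, unlike the greedy alternative, is convex and globally solvable.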
Analysis K-SVD: A Dictionary-Learning Algorithm for the Analysis Sparse Model
, 2012
"... The synthesisbased sparse representation model for signals has drawn considerable interest in the past decade. Such a model assumes that the signal of interest can be decomposed as a linear combination of a few atoms from a given dictionary. In this paper we concentrate on an alternative, analysis ..."
Abstract

Cited by 21 (6 self)
 Add to MetaCart
The synthesis-based sparse representation model for signals has drawn considerable interest in the past decade. Such a model assumes that the signal of interest can be decomposed as a linear combination of a few atoms from a given dictionary. In this paper we concentrate on an alternative, analysis-based model, where an analysis operator – hereafter referred to as the analysis dictionary – multiplies the signal, leading to a sparse outcome. Our goal is to learn the analysis dictionary from a set of examples. The approach taken is parallel and similar to the one adopted by the K-SVD algorithm that serves the corresponding problem in the synthesis model. We present the development of the algorithm steps, including tailored pursuit algorithms (the Backward Greedy and the Optimized Backward Greedy algorithms) and a penalty function that defines the objective for the dictionary-update stage. We demonstrate the effectiveness of the proposed dictionary learning in several experiments, treating synthetic data and real images, and showing a successful and meaningful recovery of the analysis dictionary.
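The Backward Greedy pursuit named above grows the cosupport one row at a time: pick the analysis row with the smallest response, then re-project the signal onto the null space of all rows picked so far. The sketch below is a simplified, noiseless reading of that idea; the function name and projection formulation are illustrative, not the authors' code.

```python
import numpy as np

def backward_greedy(y, Omega, cosparsity):
    """Backward Greedy pursuit sketch for the analysis model.
    At each step, select the row of Omega with the smallest response on
    the current estimate and project y onto the null space of all
    selected rows. Noiseless, simplified illustration only."""
    x = y.copy()
    cosupport = []
    for _ in range(cosparsity):
        resp = np.abs(Omega @ x)
        resp[cosupport] = np.inf            # never pick a row twice
        cosupport.append(int(np.argmin(resp)))
        sub = Omega[cosupport]
        # Orthogonal projection of y onto null(sub):
        # subtract the component of y in the row space of sub.
        x = y - sub.T @ np.linalg.lstsq(sub.T, y, rcond=None)[0]
    return x, sorted(cosupport)
```

For an exactly cosparse input the selected rows stay inside the true cosupport, so the projection leaves the signal untouched.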
Online Object Tracking With Sparse Prototypes
"... Abstract — Online object tracking is a challenging problem as it entails learning an effective model to account for appearance change caused by intrinsic and extrinsic factors. In this paper, we propose a novel online object tracking algorithm with sparse prototypes, which exploits both classic prin ..."
Abstract

Cited by 18 (5 self)
 Add to MetaCart
(Show Context)
Online object tracking is a challenging problem, as it entails learning an effective model to account for appearance change caused by intrinsic and extrinsic factors. In this paper, we propose a novel online object tracking algorithm with sparse prototypes, which combines classic principal component analysis (PCA) with recent sparse representation schemes to learn effective appearance models. We introduce ℓ1 regularization into the PCA reconstruction, and develop a novel algorithm to represent an object by sparse prototypes that account explicitly for data and noise. For tracking, objects are represented by sparse prototypes that are learned and updated online. In order to reduce tracking drift, we present a model-update method that takes occlusion and motion blur into account rather than simply incorporating new image observations. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. Index Terms — Appearance model, ℓ1 minimization, object tracking, principal component analysis (PCA), sparse prototypes.
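The core modeling idea, ℓ1 regularization inside a PCA reconstruction, amounts to fitting an observation as a subspace component plus a sparse error term: min over (z, e) of ½‖y − Uz − e‖² + λ‖e‖₁. A simple alternating-minimization sketch of that objective follows; it assumes U has orthonormal columns and is an illustration of the idea, not the authors' tracker.

```python
import numpy as np

def soft(v, t):
    # Element-wise soft thresholding (prox of t * ||.||_1).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_prototype_fit(y, U, lam, n_iter=200):
    """Represent an observation y with a PCA basis U plus a sparse error:
        min_{z,e}  0.5 * ||y - U z - e||^2 + lam * ||e||_1
    solved by alternating exact minimization. The sparse term e absorbs
    occlusion-like corruption instead of distorting the subspace fit.
    Illustrative sketch; assumes U has orthonormal columns."""
    e = np.zeros_like(y)
    for _ in range(n_iter):
        z = U.T @ (y - e)           # closed form since U is orthonormal
        e = soft(y - U @ z, lam)    # shrink the residual
    return z, e
```

Large-magnitude corruptions (occluded pixels) land in `e`, so the subspace coefficients `z` stay close to those of the clean signal.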
AN ALPS VIEW OF SPARSE RECOVERY
"... We provide two compressive sensing (CS) recovery algorithms based on iterative hardthresholding. The algorithms, collectively dubbed as algebraic pursuits (ALPS), exploit the restricted isometry properties of the CS measurement matrix within the algebra of Nesterov’s optimal gradient methods. We th ..."
Abstract

Cited by 15 (7 self)
 Add to MetaCart
(Show Context)
We provide two compressive sensing (CS) recovery algorithms based on iterative hard thresholding. The algorithms, collectively dubbed algebraic pursuits (ALPS), exploit the restricted isometry properties of the CS measurement matrix within the algebra of Nesterov’s optimal gradient methods. We theoretically characterize the approximation guarantees of ALPS for signals that are sparse on orthobases as well as on tight frames. Simulation results demonstrate a great potential for ALPS in terms of phase transition, noise robustness, and CS reconstruction.
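The baseline that ALPS accelerates is plain iterative hard thresholding: a gradient step on ‖Ax − b‖² followed by keeping the s largest entries. A minimal sketch of that baseline (ALPS itself adds Nesterov-style momentum, which is not shown here); the unit step assumes an RIP-friendly matrix such as a 1/√m-scaled Gaussian.

```python
import numpy as np

def hard_threshold(x, s):
    # Keep the s largest-magnitude entries, zero out the rest.
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def iht(A, b, s, n_iter=500, step=1.0):
    """Plain iterative hard thresholding for  b = A x  with x s-sparse.
    Baseline sketch of the family ALPS accelerates; step=1.0 assumes A
    satisfies the RIP (e.g. Gaussian entries scaled by 1/sqrt(m))."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + step * A.T @ (b - A @ x), s)
    return x
```

Under a sufficiently small restricted isometry constant the iteration contracts linearly toward the true sparse vector.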
Performance guarantees of the thresholding algorithm for the cosparse analysis model
"... The cosparse analysis model for signals assumes that the signal of interest can be multiplied by an analysis dictionary Ω, leading to a sparse outcome. This model stands as an interesting alternative to the more classical synthesis based sparse representation model. In this work we propose a theore ..."
Abstract

Cited by 13 (4 self)
 Add to MetaCart
The cosparse analysis model for signals assumes that the signal of interest can be multiplied by an analysis dictionary Ω, leading to a sparse outcome. This model stands as an interesting alternative to the more classical synthesis-based sparse representation model. In this work we propose a theoretical study of the performance guarantees of the thresholding algorithm for the pursuit problem in the presence of noise. Our analysis reveals two significant properties of Ω which govern the pursuit performance: the first is the degree of linear dependency between sets of rows in Ω, captured by the cosparsity level. The second property, termed the Restricted Orthogonal Projection Property (ROPP), is the level of independence between such dependent sets and the other rows in Ω. We show how these dictionary properties are meaningful and useful, both in the theoretical bounds derived and in a series of experiments that are shown to align well with the theoretical predictions.
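The thresholding algorithm analyzed here is a one-shot pursuit: estimate the cosupport as the rows of Ω with the smallest responses on the measurement, then project onto their joint null space. A noiseless sketch of that procedure (the noise model and the ROPP-based guarantees are what the paper adds):

```python
import numpy as np

def analysis_thresholding(y, Omega, cosparsity):
    """One-shot thresholding pursuit for the cosparse analysis model:
    take the `cosparsity` rows of Omega with the smallest responses
    |Omega y| as the cosupport estimate, then project y onto their
    null space. Noiseless illustrative sketch."""
    resp = np.abs(Omega @ y)
    cosupport = np.sort(np.argsort(resp)[:cosparsity])
    sub = Omega[cosupport]
    # Orthogonal projection onto null(sub).
    x = y - sub.T @ np.linalg.lstsq(sub.T, y, rcond=None)[0]
    return x, cosupport
```

Unlike greedy pursuits, the cosupport is chosen in a single pass, which is exactly why its success hinges on the two properties of Ω the abstract identifies.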
Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling
"... We consider the problem of learning a lowdimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary to provide good approximations of thetraining samples using sparsesynthesis coefficients. This famous sparse model has a less ..."
Abstract

Cited by 12 (1 self)
 Add to MetaCart
(Show Context)
We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary to provide good approximations of the training samples using sparse synthesis coefficients. This famous sparse model has a less well-known counterpart, in analysis form, called the cosparse analysis model. In this new model, signals are characterised by their parsimony in a transformed domain, using an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimisation framework based on L1 optimisation. The constraint is introduced to exclude trivial solutions. Although there is no final answer here for which constraint is the most relevant, we investigate some conventional constraints from the model adaptation field and use the uniformly normalised tight frame (UNTF) for this purpose. We then derive a practical learning algorithm, based on projected subgradients and the Douglas-Rachford splitting technique, and demonstrate its ability to robustly recover a ground-truth analysis operator when provided with a clean training set of sufficient size. We also find an analysis operator for images using some noisy cosparse signals, which is indeed a more realistic experiment. As the derived optimisation problem is not a convex program, such variational methods often find only a local minimum. For two different settings, we provide preliminary theoretical support for the well-posedness of the learning problem, which can be practically used to test the local identifiability conditions of learnt operators.
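The projected-subgradient scheme described above can be sketched in a simplified form: take a subgradient step on Σᵢ‖Ωxᵢ‖₁ and project back onto the constraint set. For brevity the sketch projects onto plain tight frames (ΩᵀΩ = I) via an SVD, whereas the paper uses uniformly normalised tight frames and Douglas-Rachford splitting; the function name, step size, and initialisation are illustrative assumptions.

```python
import numpy as np

def learn_analysis_operator(X, p, n_iter=200, step=0.1, seed=0):
    """Projected-subgradient sketch for analysis operator learning:
        minimize  sum_i ||Omega x_i||_1   over p x n operators Omega,
    projecting each iterate onto the tight frames (Omega^T Omega = I).
    Simplified version of the constrained scheme in the abstract."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((p, n))
    for _ in range(n_iter):
        # Subgradient of sum_i ||Omega x_i||_1 with respect to Omega.
        G = np.sign(Omega @ X) @ X.T
        Omega = Omega - (step / X.shape[1]) * G
        # Project onto tight frames: keep the SVD factors, drop the
        # singular values (nearest matrix with Omega^T Omega = I).
        U, _, Vt = np.linalg.svd(Omega, full_matrices=False)
        Omega = U @ Vt
    return Omega
```

Without the constraint the objective is driven to the trivial solution Ω = 0, which is precisely why the optimisation is posed over (normalised) tight frames.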