From sparse solutions of systems of equations to sparse modeling of signals and images (2009)

by A M Bruckstein, D L Donoho, M Elad
Venue:SIAM Review
Results 1 - 10 of 427

Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers

by Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein , 2010
Abstract - Cited by 1001 (20 self)
Abstract not found

Citation Context

...-inverse is a solution with minimum ℓ2 norm to an underdetermined system. This problem plays a central role in modern statistical signal processing, particularly the theory of compressed sensing; see [BDE09] for a recent introductory survey. Basis pursuit can be written in a form suitable for ADMM as minimize f(x) + ‖z‖1 subject to x − z = 0, where f is the indicator function of {x ∈ Rn | Ax = b}. The ADM...
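The ADMM iteration for this splitting alternates a projection onto the affine set {x : Ax = b}, a soft-thresholding step, and a dual update. A minimal NumPy sketch under those standard updates (the function name, step size, and toy problem are illustrative assumptions, not code from the cited paper):

```python
import numpy as np

def basis_pursuit_admm(A, b, rho=1.0, iters=500):
    """ADMM for min ||x||_1 s.t. Ax = b, via the splitting
    minimize f(x) + ||z||_1 s.t. x - z = 0, f = indicator of {x : Ax = b}."""
    m, n = A.shape
    AAt_inv = np.linalg.inv(A @ A.T)                     # assumes full row rank
    proj = lambda v: v - A.T @ (AAt_inv @ (A @ v - b))   # project onto {x : Ax = b}
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = proj(z - u)                                  # x-update: affine projection
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - 1.0 / rho, 0.0)  # soft threshold
        u = u + x - z                                    # scaled dual update
    return z

# Toy demo: recover a 1-sparse vector from 5 random measurements in R^8
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))
x_true = np.zeros(8); x_true[2] = 1.5
x_hat = basis_pursuit_admm(A, A @ x_true)
```

The x-update exploits the closed-form projection onto an affine set; for large problems one would replace the explicit inverse with a cached factorization.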

Sparse Representation For Computer Vision and Pattern Recognition

by John Wright, Yi Ma, Julien Mairal, Guillermo Sapiro, Thomas Huang, Shuicheng Yan , 2009
Abstract - Cited by 146 (9 self)
Techniques from sparse signal representation are beginning to see significant impact in computer vision, often on non-traditional applications where the goal is not just to obtain a compact high-fidelity representation of the observed signal, but also to extract semantic information. The choice of dictionary plays a key role in bridging this gap: unconventional dictionaries consisting of, or learned from, the training samples themselves provide the key to obtaining state-of-the-art results and to attaching semantic meaning to sparse signal representations. Understanding the good performance of such unconventional dictionaries in turn demands new algorithmic and analytical techniques. This review paper highlights a few representative examples of how the interaction between sparse signal representation and computer vision can enrich both fields, and raises a number of open questions for further study.

Citation Context

... concatenations of such bases. Moreover, efficient and provably effective algorithms based on convex optimization or greedy pursuit are available for computing such representations with high fidelity [10]. While these successes in classical signal processing applications are inspiring, in computer vision we are often more interested in the content or semantics of an image rather than a compact, high-f...

Compressed Sensing: Theory and Applications

by Gitta Kutyniok , 2012
Abstract - Cited by 120 (30 self)
Compressed sensing is a novel research area, which was introduced in 2006, and since then has already become a key concept in various areas of applied mathematics, computer science, and electrical engineering. It surprisingly predicts that high-dimensional signals, which allow a sparse representation by a suitable basis or, more generally, a frame, can be recovered from what was previously considered highly incomplete linear measurements by using efficient algorithms. This article shall serve as an introduction to and a survey of compressed sensing. Key Words. Dimension reduction. Frames. Greedy algorithms. Ill-posed inverse problems. ℓ1 minimization. Random matrices. Sparse approximation. Sparse recovery.

Citation Context

... published papers in the area of compressed sensing subdivided into different topics. We would also like to draw the reader’s attention to the recent books [29] and [32] as well as the survey article [7]. 1.6 Outline In Section 2, we start by discussing different sparsity models including structured sparsity and sparsifying dictionaries. The next section, Section 3, is concerned with presenting both ...

Dictionaries for Sparse Representation Modeling

by Ron Rubinstein, Alfred M. Bruckstein, Michael Elad
Abstract - Cited by 109 (4 self)
Sparse and redundant representation modeling of data assumes an ability to describe signals as linear combinations of a few atoms from a pre-specified dictionary. As such, the choice of the dictionary that sparsifies the signals is crucial for the success of this model. In general, the choice of a proper dictionary can be done in one of two ways: (i) building a sparsifying dictionary based on a mathematical model of the data, or (ii) learning a dictionary to perform best on a training set. In this paper we describe the evolution of these two paradigms. As manifestations of the first approach, we cover topics such as wavelets, wavelet packets, contourlets, and curvelets, all aiming to exploit 1-D and 2-D mathematical models for constructing effective dictionaries for signals and images. Dictionary learning takes a different route, attaching the dictionary to a set of examples it is supposed to serve. From the seminal work of Field and Olshausen, through the MOD, the K-SVD, the Generalized PCA and others, this paper surveys the various options such training has to offer, up to the most recent contributions and structures.
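As a concrete illustration of the learning route, the MOD (Method of Optimal Directions) update fits the dictionary to fixed sparse codes by least squares. A minimal NumPy sketch, not the authors' code (shapes, names, and the noiseless sanity check are assumptions):

```python
import numpy as np

def mod_update(Y, X):
    """One MOD dictionary update: D = Y X^T (X X^T)^{-1}, columns renormalized.
    Y: (d, N) training signals; X: (K, N) current sparse coefficients."""
    D = Y @ X.T @ np.linalg.pinv(X @ X.T)   # least-squares fit of D to Y ≈ D X
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12  # unit-norm atoms
    return D

# Noiseless sanity check: with the true codes, MOD recovers the generating dictionary
rng = np.random.default_rng(1)
D_true = rng.standard_normal((8, 4))
D_true /= np.linalg.norm(D_true, axis=0, keepdims=True)
X = rng.standard_normal((4, 30)) * (rng.random((4, 30)) < 0.5)  # sparse codes
D_hat = mod_update(D_true @ X, X)
```

In a full learning loop this update would alternate with a sparse-coding step (e.g. a greedy pursuit) that re-estimates X for the current D.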

Structured compressed sensing: From theory to applications

by Marco F. Duarte, Yonina C. Eldar - IEEE TRANS. SIGNAL PROCESS , 2011
Abstract - Cited by 104 (16 self)
Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look on many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review to practitioners wanting to join this emerging field, and as a reference for researchers that attempts to put some of the existing ideas in perspective of practical applications.

Citation Context

...roblems in the last century (particularly in imaging), including denoising, deconvolution, restoration, and inpainting [21]–[23]. To introduce the notion of sparsity, we rely on a signal representation in a given basis for . Every signal is representable in terms of coefficients as ; arranging the as columns into the matrix an...

Fast Linearized Bregman Iteration for Compressed Sensing

by Jian-feng Cai, Stanley Osher, Zuowei Shen - and Sparse Denoising, 2008. UCLA CAM Reports, 2008
Abstract - Cited by 96 (20 self)
Abstract. Finding a solution of a linear equation Au = f with various minimization properties arises in many applications. One such application is compressed sensing, where an efficient and robust-to-noise algorithm to find a minimal ℓ1 norm solution is needed. This means that the algorithm should be tailored for large-scale and completely dense matrices A, while Au and A^T u can be computed by fast transforms and the solution sought is sparse. Recently, a simple and fast algorithm based on linearized Bregman iteration was proposed in [28, 32] for this purpose. This paper analyzes the convergence of linearized Bregman iterations and the minimization properties of their limit. Based on our analysis, we also derive a new algorithm that is proven to be convergent with a rate. Furthermore, the new algorithm is as simple and fast as the algorithm given in [28, 32] in approximating a minimal ℓ1 norm solution of Au = f, as shown by numerical simulations. Hence, it can be used as another efficient tool in compressed sensing. 1. Introduction. Let A ∈ R^{m×n} with n > m and f ∈ R^m be given. The aim of a basis pursuit problem is to find u ∈ R^n by solving the following constrained minimization problem min

Citation Context

...in a recent burst of research in compressed sensing, it amounts to solving (1.1) with J being the ℓ1 norm to obtain a sparse solution of the equation. The interested reader should consult, for example, [2, 8, 9, 10, 11, 12, 19, 21, 22, 23, 24, 29, 30, 31, 33, 34] for details. The problem (1.1) can be transformed into a linear programming one, and then solved by a conventional linear programming solver in many cases. However, such solvers are not tailored for ...
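The linearized Bregman iteration the abstract describes is only two lines per step: accumulate the back-projected residual, then shrink. A minimal NumPy sketch under the standard form of the iteration (step size, shrinkage parameter, and the toy problem are illustrative assumptions, not from the paper):

```python
import numpy as np

def linearized_bregman(A, f, mu=3.0, iters=8000):
    """Linearized Bregman for approximating a minimal l1-norm solution of A u = f.
    v accumulates A^T (f - A u); u = delta * shrink(v, mu)."""
    n = A.shape[1]
    delta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size so that delta * ||A A^T|| <= 1
    v = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        v += A.T @ (f - A @ u)                             # back-projected residual
        u = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # soft shrinkage
    return u

# Toy demo: sparse solution of an underdetermined 5x10 system
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 10))
u_true = np.zeros(10); u_true[3] = 2.0
f = A @ u_true
u_hat = linearized_bregman(A, f)
```

Each step costs one multiplication by A and one by A^T, which is why the method suits large dense A whose action is available through fast transforms.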

Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations

by Mingyuan Zhou, Haojun Chen, John Paisley, Lu Ren, Guillermo Sapiro, Lawrence Carin
Abstract - Cited by 92 (34 self)
Non-parametric Bayesian techniques are considered for learning dictionaries for sparse image representations, with applications in denoising, inpainting and compressive sensing (CS). The beta process is employed as a prior for learning the dictionary, and this non-parametric method naturally infers an appropriate dictionary size. The Dirichlet process and a probit stick-breaking process are also considered to exploit structure within an image. The proposed method can learn a sparse dictionary in situ; training images may be exploited if available, but they are not required. Further, the noise variance need not be known, and can be nonstationary. Another virtue of the proposed method is that sequential inference can be readily employed, thereby allowing scaling to large images. Several example results are presented, using both Gibbs and variational Bayesian inference, with comparisons to other state-of-the-art approaches.

Citation Context

...d ARO.2 DCT bases/dictionaries [20], but recent research has demonstrated the significant utility of learning an often over-complete dictionary matched to the signals of interest (e.g., images) [1], [3], [12], [13], [24]–[26], [28], [29], [31], [33], [41]. Many of the existing methods for learning dictionaries are based on solving an optimization problem [1], [13], [24]–[26], [28], [29], in which o...

Compressed Channel Sensing: A New Approach to Estimating Sparse Multipath Channels

by Waheed U. Bajwa, Jarvis Haupt, Akbar M. Sayeed, Robert Nowak
Abstract - Cited by 87 (9 self)
High-rate data communication over a multipath wireless channel often requires that the channel response be known at the receiver. Training-based methods, which probe the channel in time, frequency, and space with known signals and reconstruct the channel response from the output signals, are most commonly used to accomplish this task. Traditional training-based channel estimation methods, typically comprising linear reconstruction techniques, are known to be optimal for rich multipath channels. However, physical arguments and growing experimental evidence suggest that many wireless channels encountered in practice tend to exhibit a sparse multipath structure that gets pronounced as the signal space dimension gets large (e.g., due to large bandwidth or a large number of antennas). In this paper, we formalize the notion of multipath sparsity and present a new approach to estimating sparse (or effectively sparse) multipath channels that is based on some of the recent advances in the theory of compressed sensing. In particular, it is shown in the paper that the proposed approach, which is termed compressed channel sensing, can potentially achieve a target reconstruction error using far less energy and, in many instances, latency and bandwidth than those dictated by the traditional least-squares-based training methods.

On the Role of Sparse and Redundant Representations in Image Processing

by Michael Elad, Mário A. T. Figueiredo, Yi Ma - PROCEEDINGS OF THE IEEE – SPECIAL ISSUE ON APPLICATIONS OF SPARSE REPRESENTATION AND COMPRESSIVE SENSING , 2009
Abstract - Cited by 78 (1 self)
Much of the progress made in image processing in the past decades can be attributed to better modeling of image content, and a wise deployment of these models in relevant applications. This path of models spans from the simple ℓ2-norm smoothness, through robust, thus edge-preserving, measures of smoothness (e.g. total variation), to the very recent models that employ sparse and redundant representations. In this paper, we review the role of this recent model in image processing, its rationale, and models related to it. As it turns out, the field of image processing is one of the main beneficiaries of the recent progress made in the theory and practice of sparse and redundant representations. We discuss ways to employ these tools for various image processing tasks, and present several applications in which state-of-the-art results are obtained.

The Cosparse Analysis Model and Algorithms

by S. Nam , M. E. Davies , M. Elad , R. Gribonval , 2011
Abstract - Cited by 66 (14 self)
After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations and appealing applications. Alongside this approach, there is an analysis counterpart model, which, despite its similarity to the synthesis alternative, is markedly different. Surprisingly, the analysis model has not received similar attention, and its understanding today is shallow and partial. In this paper we take a closer look at the analysis approach, better define it as a generative model for signals, and contrast it with the synthesis one. This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance. We demonstrate the effectiveness of the analysis model in several experiments.

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University