Results 1–10 of 24
Sparse Representation For Computer Vision and Pattern Recognition
, 2009
Cited by 142 (9 self)
Techniques from sparse signal representation are beginning to see significant impact in computer vision, often on nontraditional applications where the goal is not just to obtain a compact high-fidelity representation of the observed signal, but also to extract semantic information. The choice of dictionary plays a key role in bridging this gap: unconventional dictionaries consisting of, or learned from, the training samples themselves provide the key to obtaining state-of-the-art results and to attaching semantic meaning to sparse signal representations. Understanding the good performance of such unconventional dictionaries in turn demands new algorithmic and analytical techniques. This review paper highlights a few representative examples of how the interaction between sparse signal representation and computer vision can enrich both fields, and raises a number of open questions for further study.
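The abstract's central idea, coding a signal as a sparse combination of atoms from a dictionary built from training samples, can be sketched with Orthogonal Matching Pursuit, a standard greedy solver. This is an illustrative toy, not the paper's algorithm; the dictionary and signal below are invented:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select up to k atoms of D to approximate y.

    D : (m, n) dictionary with unit-norm columns
    y : (m,) signal
    k : sparsity level
    Returns a dense coefficient vector x with at most k nonzeros.
    """
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the selected support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# Toy example: the "dictionary" is just four training samples as columns.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 4))
D /= np.linalg.norm(D, axis=0)      # normalize atoms
y = 2.0 * D[:, 1] - 0.5 * D[:, 3]   # signal built from two atoms
x = omp(D, y, k=2)                  # recovers a 2-sparse code for y
```

With the dictionary well separated from the signal's true support, the greedy selection identifies the two generating atoms and the least-squares refit drives the residual to zero.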
Task-Driven Dictionary Learning
Cited by 86 (3 self)
Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations. Index Terms—Basis pursuit, Lasso, dictionary learning, matrix factorization, semi-supervised learning, compressed sensing.
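The "large-scale matrix factorization" view of dictionary learning can be sketched as alternating minimization: ISTA steps for the sparse codes, least squares for the dictionary. This is a toy sketch of the classical unsupervised problem the paper starts from, not its task-driven supervised formulation; all names and sizes are invented:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dictionary_learning(Y, n_atoms, lam=0.1, outer=15, inner=10, seed=0):
    """Alternate ISTA sparse coding with a least-squares dictionary update.

    Minimizes 0.5*||Y - D X||_F^2 + lam*||X||_1 over D and X.
    (Real implementations also normalize the atoms; omitted here so the
    objective is monotonically non-increasing in this sketch.)
    """
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    X = np.zeros((n_atoms, Y.shape[1]))
    for _ in range(outer):
        L = np.linalg.norm(D, 2) ** 2 + 1e-12          # Lipschitz constant of the gradient
        for _ in range(inner):                          # sparse coding (ISTA steps)
            X = soft(X - D.T @ (D @ X - Y) / L, lam / L)
        D = Y @ X.T @ np.linalg.pinv(X @ X.T)           # dictionary update (least squares)
    return D, X

rng = np.random.default_rng(1)
Y = rng.standard_normal((30, 200))                      # toy data matrix (columns = signals)
D, X = dictionary_learning(Y, n_atoms=50)
loss = 0.5 * np.linalg.norm(Y - D @ X) ** 2 + 0.1 * np.abs(X).sum()
```

Both subproblems decrease the same objective, so the final loss cannot exceed the starting value 0.5·||Y||_F² (at X = 0); the supervised variant in the paper instead back-propagates a task loss through the sparse coding step.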
Learning to Sense Sparse Signals: Simultaneous Sensing Matrix and Sparsifying Dictionary Optimization
, 2008
Cited by 68 (5 self)
Sparse signal representation, analysis, and sensing have received a lot of attention in recent years from the signal processing, optimization, and learning communities. On one hand, the learning of overcomplete dictionaries that facilitate a sparse representation of the image as a linear combination of a few atoms from such a dictionary leads to state-of-the-art results in image and video restoration and image classification. On the other hand, the framework of compressed sensing (CS) has shown that sparse signals can be recovered from far fewer samples than those required by the classical Shannon-Nyquist theorem. The goal of this paper is to present a framework that unifies the learning of overcomplete dictionaries for sparse image representation with the concepts of signal recovery from very few samples put forward by CS theory. The samples used in CS correspond to linear projections defined by a sampling projection matrix. It has been shown that, for example, a non-adaptive random sampling matrix satisfies the fundamental theoretical requirements of CS, enjoying the additional benefit of universality. On the other hand, a projection sensing matrix that is optimally designed for a certain signal class can further improve the reconstruction accuracy or further reduce the necessary number of samples. In this work we introduce a framework for the joint design and optimization, from a set of training images, of the sensing matrix and the sparsifying dictionary.
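The design target in this line of work is typically the mutual coherence of the equivalent dictionary Φ·Ψ (sensing matrix times sparsifying dictionary): lower coherence generally means better sparse recovery. A minimal sketch of measuring it, with random matrices standing in for the learned or optimized ones:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns of A."""
    A = A / np.linalg.norm(A, axis=0)
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(1)
Psi = rng.standard_normal((64, 128))   # overcomplete sparsifying dictionary (e.g. learned)
Phi = rng.standard_normal((16, 64))    # sensing / projection matrix
mu = mutual_coherence(Phi @ Psi)       # coherence of the equivalent dictionary
```

A jointly designed Phi would aim to push `mu` below what a random projection achieves for the same dictionary.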
Understanding camera trade-offs through a Bayesian analysis of light field projections
 MIT CSAIL TR
, 2008
Cited by 36 (7 self)
Computer vision has traditionally focused on extracting structure, such as depth, from images acquired using thin-lens or pinhole optics. The development of computational imaging is broadening this scope; a variety of unconventional cameras no longer directly capture a traditional image, but instead require the joint reconstruction of structure and image information. For example, recent coded aperture designs have been optimized to facilitate the joint reconstruction of depth and intensity. The breadth of imaging designs requires new tools to understand the trade-offs implied by different strategies. This paper introduces a unified framework for analyzing computational imaging approaches. Each sensor element is modeled as an inner product over the 4D light field. The imaging task is then posed as Bayesian inference: given the observed noisy light field projections and a prior on light field signals, estimate the original light field. Under common imaging conditions, we compare the performance of various camera designs using 2D light field simulations. This framework allows us to better understand the trade-offs of each camera type and analyze their limitations.
Compressive light transport sensing
 ACM Trans. Graph
Cited by 27 (2 self)
Figure 1: An example of relit images of a scene generated from a reflectance field captured using just 1000 non-adaptive illumination patterns (emitted from the right onto the scene). The incident lighting resolution, and the resolution of each reflectance function, is 128×128. Even though we only performed a small number of measurements, we are still able to capture and represent complex light transport paths. Left: the scene relit with a high-frequency illumination condition (inset). Middle: the scene relit under a natural illumination condition (inset). Right: a ground-truth reference photograph of the scene.

In this paper we propose a new framework for capturing light transport data of a real scene, based on the recently developed theory of compressive sensing. Compressive sensing offers a solid mathematical framework to infer a sparse signal from a limited number of non-adaptive measurements. Besides introducing compressive sensing for fast acquisition of light transport to computer graphics, we develop several innovations that address specific challenges for image-based relighting, and which may have broader implications. We develop a novel hierarchical decoding algorithm that improves reconstruction quality by exploiting inter-pixel coherency relations. Additionally, we design new non-adaptive illumination patterns that minimize measurement noise and further improve reconstruction quality. We illustrate our framework by capturing detailed high-resolution reflectance fields for image-based relighting.
Sequential Compressed Sensing
Cited by 15 (0 self)
Compressed sensing allows perfect recovery of sparse signals (or signals sparse in some basis) using only a small number of random measurements. Existing results in the compressed sensing literature have focused on characterizing the achievable performance by bounding the number of samples required for a given level of signal sparsity. However, using these bounds to minimize the number of samples requires a priori knowledge of the sparsity of the unknown signal, or the decay structure for near-sparse signals. Furthermore, there are some popular recovery methods for which no such bounds are known. In this paper, we investigate an alternative scenario where observations are available in sequence. For any recovery method, this means that there is now a sequence of candidate reconstructions. We propose a method to estimate the reconstruction error directly from the samples themselves, for every candidate in this sequence. This estimate is universal in the sense that it is based only on the measurement ensemble, and not on the recovery method or any assumed level of sparsity of the unknown signal. With these estimates, one can now stop observations as soon as there is reasonable certainty of either exact or sufficiently accurate reconstruction. They also provide a way to obtain "run-time" guarantees for recovery methods that otherwise lack a priori performance bounds. We investigate both continuous (e.g., Gaussian) and discrete (e.g., Bernoulli) random measurement ensembles, both for exactly sparse and general near-sparse signals, and with both noisy and noiseless measurements. Index Terms—Compressed sensing (CS), sequential measurements, stopping rule.
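The identity that makes such an estimate possible: for a fresh measurement row a with i.i.d. unit-variance Gaussian entries, E[(a·(x − x̂))²] = ‖x − x̂‖², so averaging squared residuals on newly arriving (noiseless) samples estimates the reconstruction error of any candidate without knowing the sparsity of x. A numerical sketch in which the signal and the candidate reconstruction are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
x = np.zeros(n)
x[[3, 17, 42]] = [1.0, -2.0, 0.5]          # unknown sparse signal
x_hat = x + 0.3 * rng.standard_normal(n)   # some candidate reconstruction

# Fresh Gaussian measurements: for each row a with i.i.d. N(0, 1) entries,
# E[(a @ (x - x_hat))**2] = ||x - x_hat||^2, so the mean squared residual
# on new samples estimates the reconstruction error directly.
T = 20000
A_new = rng.standard_normal((T, n))
y_new = A_new @ x                          # noiseless new observations
err_est = np.mean((y_new - A_new @ x_hat) ** 2)
err_true = np.sum((x - x_hat) ** 2)
```

With T new samples the estimate's relative standard deviation is roughly √(2/T), so one can stop observing as soon as the estimated error for the current candidate falls below a target threshold.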
Sensing Matrix Optimization for Block-Sparse Decoding
, 2011
Cited by 14 (2 self)
Recent work has demonstrated that using a carefully designed sensing matrix rather than a random one can improve the performance of compressed sensing. In particular, a well-designed sensing matrix can reduce the coherence between the atoms of the equivalent dictionary and, as a consequence, reduce the reconstruction error. In some applications, the signals of interest can be well approximated by a union of a small number of subspaces (e.g., face recognition and motion segmentation). This implies the existence of a dictionary which leads to block-sparse representations. In this work, we propose a framework for sensing matrix design that improves the ability of block-sparse approximation techniques to reconstruct and classify signals. This method is based on minimizing a weighted sum of the inter-block coherence and the sub-block coherence of the equivalent dictionary. Our experiments show that the proposed algorithm significantly improves the signal recovery and classification ability of the Block-OMP algorithm compared to sensing matrix optimization methods that do not employ block structure.
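The two quantities in the weighted objective can be computed directly from the equivalent dictionary. The sketch below uses one common set of definitions from the block-sparsity literature (the dictionary, block size, and weight are invented; the paper's exact normalization may differ):

```python
import numpy as np

def block_coherences(D, d):
    """Inter-block and sub-block coherence of a dictionary with blocks of size d.

    Inter-block: max spectral norm of a cross-Gram D_i^T D_j (i != j), divided by d.
    Sub-block:  max |off-diagonal| entry of any within-block Gram D_i^T D_i.
    Columns are normalized to unit norm first.
    """
    D = D / np.linalg.norm(D, axis=0)
    blocks = [D[:, k:k + d] for k in range(0, D.shape[1], d)]
    mu_b = max(
        np.linalg.norm(Bi.T @ Bj, 2) / d
        for i, Bi in enumerate(blocks)
        for j, Bj in enumerate(blocks) if i != j
    )
    nu = 0.0
    for B in blocks:
        G = np.abs(B.T @ B)
        np.fill_diagonal(G, 0.0)
        nu = max(nu, G.max())
    return mu_b, nu

rng = np.random.default_rng(4)
D = rng.standard_normal((20, 12))        # equivalent dictionary: 4 blocks of 3 atoms
mu_b, nu = block_coherences(D, d=3)
w = 0.5
objective = w * mu_b + (1 - w) * nu      # what the design method would minimize over Phi
```

A sensing-matrix optimizer would search over the projection so that the resulting equivalent dictionary drives this weighted objective down.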
Opportunistic sampling by level-crossing
, 2008
Cited by 6 (1 self)
Level-crossing A/D converters (LC A/D) have been considered in the literature and have been shown to efficiently sample certain classes of signals. In this paper we provide a stable algorithm to perfectly reconstruct signals of finite rate of innovation using level-crossing samples. Furthermore, we also apply level-crossing sampling to the detection of event-arrival signals. Index Terms—non-uniform sampling, level-crossing, FRI, point processes.
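Level-crossing sampling records (time, level) events whenever the signal crosses one of a fixed set of amplitude levels, rather than sampling on a uniform clock. A minimal simulation of that front end (the signal and levels are invented; a real LC A/D stamps crossing times asynchronously in hardware, approximated here by linear interpolation):

```python
import numpy as np

def level_crossings(t, s, levels):
    """Return sorted (time, level) events where signal s crosses any given level.

    Crossing times are refined by linear interpolation between the two
    samples that bracket each strict sign change of (s - level).
    """
    events = []
    for L in levels:
        d = s - L
        idx = np.flatnonzero(d[:-1] * d[1:] < 0)   # strict sign changes only
        for i in idx:
            frac = d[i] / (d[i] - d[i + 1])        # in (0, 1) for a sign change
            events.append((t[i] + frac * (t[i + 1] - t[i]), L))
    events.sort()
    return events

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
s = np.sin(2 * np.pi * t)                          # one period of a sine
events = level_crossings(t, s, levels=[-0.5, 0.0, 0.5])
```

One sine period crosses level 0 once in the interior and each of ±0.5 twice, so five events are recorded; the reconstruction algorithms the paper develops would work from exactly this kind of event stream.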
Deciphering subsampled data: adaptive compressive sampling as a principle of brain communication
 Advances in Neural Information Processing Systems
, 2010
Cited by 5 (0 self)
A new algorithm is proposed for (a) unsupervised learning of sparse representations from subsampled measurements and (b) estimating the parameters required for linearly reconstructing signals from the sparse codes. We verify that the new algorithm performs efficient data compression on par with the recent method of compressive sampling. Further, we demonstrate that the algorithm performs robustly when stacked in several stages or when applied in undercomplete or overcomplete situations. The new algorithm can explain how neural populations in the brain that receive subsampled input through fiber bottlenecks are able to form coherent response properties.
Katsaggelos, “Use of tight frames for optimized compressed sensing”
 in Proc. EUSIPCO
, 2012
Cited by 4 (1 self)
Compressed sensing (CS) theory relies on sparse representations in order to recover signals from an undersampled set of measurements. The sensing mechanism is described by the projection matrix, which should possess certain properties to guarantee high-quality signal recovery using efficient algorithms. Although the major breakthrough results in compressed sensing were obtained for random matrices, recent efforts have shown that CS performance can be improved with optimized non-random projections. Designing matrices that satisfy CS theoretical requirements is closely related to the construction of equiangular tight frames, a problem that has applications in various scientific fields such as sparse approximation, coding, and communications. In this paper, we employ frame theory and propose an algorithm for the optimization of the projection matrix that improves sparse signal recovery. Index Terms—Compressed sensing, tight frames, Grassmannian frames.
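The link to equiangular tight frames comes through the Welch bound: no m×n unit-norm frame can have mutual coherence below √((n−m)/(m(n−1))), and equiangular tight frames meet that bound with equality. A quick sketch comparing a random projection against that floor (the sizes here are arbitrary):

```python
import numpy as np

def coherence(A):
    """Largest absolute inner product between distinct normalized columns."""
    A = A / np.linalg.norm(A, axis=0)
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

m, n = 16, 64
# Welch bound: lower limit on the coherence of any m x n unit-norm frame.
welch = np.sqrt((n - m) / (m * (n - 1)))

rng = np.random.default_rng(5)
mu_random = coherence(rng.standard_normal((m, n)))
# Any projection-matrix optimization can at best push the worst-case inner
# product down from mu_random toward welch, attained by equiangular tight frames.
```

This gap between a random matrix's coherence and the Welch bound is the headroom that frame-based projection optimization, like the algorithm in this paper, tries to close.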