Results 1–10 of 4,944
Block-Sparse Recovery via Convex Optimization
, 2012
"... Given a dictionary that consists of multiple blocks and a signal that lives in the range space of only a few blocks, we study the problem of finding a block-sparse representation of the signal, i.e., a representation that uses the minimum number of blocks. Motivated by signal/image processing and computer vision applications, such as face recognition, we consider the block-sparse recovery problem in the case where the number of atoms in each block is arbitrary, possibly much larger than the dimension of the underlying subspace. To find a block-sparse representation of a signal, we propose two ..."
Cited by 19 (1 self)
Average Case Analysis of High-Dimensional Block-Sparse Recovery and Regression for Arbitrary Designs
, 2015
"... This paper studies conditions for high-dimensional inference when the set of observations is given by a linear combination of a small number of groups of columns of a design matrix, termed the “block-sparse” case. In this regard, it first specifies conditions on the design matrix under which most of its block submatrices are well conditioned. It then leverages this result for average-case analysis of high-dimensional block-sparse recovery and regression. In contrast to earlier works: (i) this paper provides conditions on arbitrary designs that can be explicitly computed in polynomial time, (ii) ..."
Stable recovery of sparse overcomplete representations in the presence of noise
IEEE TRANS. INFORM. THEORY
, 2006
"... Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further ..."
Cited by 460 (22 self)
Sparse MRI: The Application of Compressed Sensing for Rapid MR Imaging
MAGNETIC RESONANCE IN MEDICINE 58:1182–1195
, 2007
"... The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain, for example, in terms of spatial finite differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed sensing, images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts ..."
Cited by 538 (11 self)
Stable signal recovery from incomplete and inaccurate measurements
Comm. Pure Appl. Math.
, 2006
"... Suppose we wish to recover a vector x0 ∈ R^m (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax0 + e; A is an n × m matrix with far fewer rows than columns (n ≪ m) and e is an error term. Is it possible to recover x0 accurately based on the data y? To recover x0, we consider the solution x to the ℓ1-regularization problem min ‖x‖ℓ1 subject to ‖Ax − y‖ℓ2 ≤ ε, where ε is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x0 is sufficiently sparse, then the solution is within the noise level. As a first example ..."
Cited by 1397 (38 self)
Just Relax: Convex Programming Methods for Identifying Sparse Signals in Noise
, 2006
"... This paper studies a difficult and fundamental problem that arises throughout electrical engineering, applied mathematics, and statistics. Suppose that one forms a short linear combination of elementary signals drawn from a large, fixed collection. Given an observation of the linear combination that ... This paper studies a method called convex relaxation, which attempts to recover the ideal sparse signal by solving a convex program. This approach is powerful because the optimization can be completed in polynomial time with standard scientific software. The paper provides general conditions which ensure ..."
Cited by 483 (2 self)
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
"... Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects, discrete digital signals, images, etc.; how many linear measurements do we need to recover objects from this class to within accuracy ε? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a power-law (or if the coefficient sequence of f in a fixed basis decays like a power-law) ..."
Cited by 1513 (20 self)
Decoding by Linear Programming
, 2004
"... This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector f ∈ R^n from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f ... for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant ..."
Cited by 1399 (16 self)
Recognition-by-components: A theory of human image understanding
Psychological Review
, 1987
"... The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components, such as blocks, cylinders, wedges, and cones. The fundamental assumption of the proposed theory, recognition-by-components ..."
Cited by 1272 (23 self)