Results 1–10 of 430
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
2004
Cited by 1513 (20 self)
Abstract:
Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects: discrete digital signals, images, etc.; how many linear measurements do we need to recover objects from this class to within accuracy ε? This paper shows that if the objects of interest are sparse or compressible, in the sense that the reordered entries of a signal f ∈ F decay like a power law (or the coefficient sequence of f in a fixed basis decays like a power law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude, |f|(1) ≥ |f|(2) ≥ … ≥ |f|(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|(n) ≤ C · n^(−1/p). We take measurements ⟨f, X_k⟩, k = 1, …, K, where the X_k are N-dimensional Gaussian …
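The recovery setting in this abstract can be exercised numerically. The following toy sketch (assumptions: numpy is available, and greedy orthogonal matching pursuit is used as a stand-in solver rather than the ℓ1-minimization program the paper actually analyzes) draws K random Gaussian measurements ⟨f, X_k⟩ of a 5-sparse vector and recovers it:

```python
import numpy as np

def omp(A, y, s):
    """Greedy orthogonal matching pursuit: recover an s-sparse x from y = A x."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(s):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Re-fit on the selected columns by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
N, K, s = 128, 60, 5
x = np.zeros(N)
x[rng.choice(N, size=s, replace=False)] = [1.5, -2.0, 1.0, -1.2, 0.8]
A = rng.standard_normal((K, N)) / np.sqrt(K)  # rows play the role of the Gaussian X_k
y = A @ x                                     # the K linear measurements
x_hat = omp(A, y, s)
```

With K = 60 measurements of a 5-sparse vector of length N = 128, the greedy solver typically recovers x exactly; the paper's point is that convex ℓ1 minimization achieves comparable accuracy for the much larger class of compressible signals.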
Compressive sampling
2006
Cited by 1427 (15 self)
Abstract:
Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired resolution of the image, i.e. the number of pixels in the image. This paper surveys an emerging theory which goes by the name of “compressive sampling” or “compressed sensing,” and which says that this conventional wisdom is inaccurate. Perhaps surprisingly, it is possible to reconstruct images or signals of scientific interest accurately and sometimes even exactly from a number of samples which is far smaller than the desired resolution of the image/signal, e.g. the number of pixels in the image. It is believed that compressive sampling has far-reaching implications. For example, it suggests the possibility of new data acquisition protocols that translate analog information into digital form with fewer sensors than was previously considered necessary. This new sampling theory may come to underlie procedures for sampling and compressing data simultaneously. In this short survey, we provide some of the key mathematical insights underlying this new theory, and explain some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science.
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
2007
Cited by 423 (37 self)
Abstract:
A full-rank matrix A ∈ ℝ^(n×m) with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena; in particular, the existence of easily verifiable conditions under which optimally sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energize research on such signal and image processing problems, to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical …
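The two phenomena named in this abstract, non-uniqueness of solutions and an easily verifiable uniqueness condition, can be illustrated in a few lines. This sketch (numpy assumed; the random matrix and the 1-sparse solution are made up for illustration) contrasts the dense minimum-ℓ2-norm solution with a sparse one, and evaluates the mutual-coherence bound ‖x‖₀ < (1 + 1/μ)/2 under which a sparse solution is provably the unique sparsest one:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 8
A = rng.standard_normal((n, m))      # full-rank, underdetermined: n < m
x_sparse = np.zeros(m)
x_sparse[2] = 1.0                    # a 1-sparse solution of Ax = b
b = A @ x_sparse

# The minimum-l2-norm solution (via the pseudoinverse) is generically dense.
x_min = np.linalg.pinv(A) @ b
dense_count = int(np.sum(np.abs(x_min) > 1e-8))

# Mutual coherence mu of the column-normalized matrix: any solution with
# fewer than (1 + 1/mu)/2 nonzeros is the unique sparsest solution.
An = A / np.linalg.norm(A, axis=0)
gram = np.abs(An.T @ An)
np.fill_diagonal(gram, 0.0)
mu = float(gram.max())
bound = 0.5 * (1.0 + 1.0 / mu)
```

Here the ℓ2-minimal solution spreads energy over essentially every entry, while the coherence bound certifies that sufficiently sparse solutions are unique; the paper reviews when efficient methods provably find them.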
Fast Discrete Curvelet Transforms
2005
Cited by 170 (9 self)
Abstract:
This paper describes two digital implementations of a new mathematical transform, namely, the second-generation curvelet transform [12, 10] in two and three dimensions. The first digital transformation is based on unequally spaced fast Fourier transforms (USFFT), while the second is based on the wrapping of specially selected Fourier samples. The two implementations essentially differ by the choice of spatial grid used to translate curvelets at each scale and angle. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. And both implementations are fast in the sense that they run in O(n² log n) flops for n-by-n Cartesian arrays; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity. Our digital transformations improve upon earlier implementations (based upon the first generation of curvelets) in the sense that they are conceptually simpler, faster, and far less redundant. The software CurveLab, which implements both transforms presented in this paper, is available at …
The easy path wavelet transform: A new adaptive wavelet transform for sparse representation of two-dimensional data
Multiscale Model. Simul.
Cited by 137 (9 self)
Abstract:
Dedicated to Manfred Tasche on the occasion of his 65th birthday. We introduce a new locally adaptive wavelet transform, called the Easy Path Wavelet Transform (EPWT), that works along pathways through the array of function values and exploits the local correlations of the data in a simple, appropriate manner. The usual discrete orthogonal and biorthogonal wavelet transforms can be formulated within this approach. The EPWT can be incorporated into a multiresolution analysis structure and generates data-dependent scaling spaces and wavelet spaces. Numerical results show the high efficiency of the EPWT for the representation of two-dimensional data.
Key words: wavelet transform along pathways, data compression, adaptive wavelet bases, directed wavelets
AMS subject classifications: 65T60, 42C40, 68U10, 94A08
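As a rough illustration of the path idea (not the actual EPWT, which restricts paths to spatial neighbors and must also store the chosen path), the sketch below reorders pixel values along a value-sorted path before a one-level Haar step, so that the detail coefficients vanish while they do not in raster order; numpy and the tiny test image are assumptions:

```python
import numpy as np

def haar_step(v):
    """One level of the orthogonal Haar transform of a 1D sequence."""
    v = np.asarray(v, dtype=float)
    approx = (v[0::2] + v[1::2]) / np.sqrt(2.0)
    detail = (v[0::2] - v[1::2]) / np.sqrt(2.0)
    return approx, detail

img = np.array([[4.0, 9.0, 4.0, 9.0],
                [9.0, 4.0, 9.0, 4.0]])
flat = img.ravel()

# A drastically simplified "path": order the samples by value so that
# neighbors along the path are as similar as possible.
path = np.argsort(flat)

row_detail = haar_step(flat)[1]         # transform in raster order
path_detail = haar_step(flat[path])[1]  # transform along the path
```

Along the path the Haar details are all zero for this image, which is the mechanism behind the sparse representations the abstract reports; the real EPWT trades this gain against the cost of storing the path.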
A review of curvelets and recent applications
IEEE Signal Processing Magazine, 2009
Cited by 127 (10 self)
Abstract:
Multiresolution methods are deeply related to image processing, biological and computer vision, scientific computing, etc. The curvelet transform is a multiscale directional transform that allows an almost optimal nonadaptive sparse representation of objects with edges. It has generated increasing interest in the applied mathematics and signal processing communities in recent years. In this paper, we present a review of the curvelet transform, including its history beginning from wavelets, its logical relationship to other multiresolution multidirectional methods like contourlets and shearlets, its basic theory, and its discrete algorithm. Further, we consider recent applications in image/video processing, seismic exploration, fluid mechanics, simulation of partial differential equations, and compressed sensing.
Curvelet-Wavelet Regularized Split Bregman Iteration for Compressed Sensing
"... Compressed sensing is a new concept in signal processing. Assuming that a signal can be represented or approximated by only a few suitably chosen terms in a frame expansion, compressed sensing allows to recover this signal from much fewer samples than the ShannonNyquist theory requires. Many images ..."
Abstract

Cited by 118 (6 self)
 Add to MetaCart
(Show Context)
Compressed sensing is a new concept in signal processing. Assuming that a signal can be represented or approximated by only a few suitably chosen terms in a frame expansion, compressed sensing allows this signal to be recovered from far fewer samples than the Shannon-Nyquist theory requires. Many images can be sparsely approximated in expansions of suitable frames such as wavelets, curvelets, wave atoms, and others. Generally, wavelets represent point-like features well, while curvelets represent line-like features well. For a suitable recovery of images, we propose models that contain weighted sparsity constraints in two different frames. Given the incomplete measurements f = Φu + ε with the measurement matrix Φ ∈ ℝ^(K×N), K ≪ N, we consider a jointly sparsity-constrained optimization problem of the form argmin_u { ‖Λ_c Ψ_c u‖_1 + ‖Λ_w Ψ_w u‖_1 + (1/2)‖f − Φu‖_2^2 }. Here Ψ_c and Ψ_w are the transform matrices corresponding to the two frames, and the diagonal matrices Λ_c, Λ_w contain the weights for the frame coefficients. We present efficient iteration methods to solve the optimization problem, based on alternating split Bregman algorithms. The convergence of the proposed iteration schemes is proved by showing that they can be understood as special cases of the Douglas-Rachford splitting algorithm. Numerical experiments for compressed-sensing-based Fourier-domain random imaging show good performance of the proposed curvelet-wavelet regularized split Bregman (CWSpB) methods, where we particularly use a combination of wavelet and curvelet coefficients as sparsity constraints.
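Split Bregman iterations handle weighted ℓ1 terms like those in the functional above through componentwise shrinkage, the proximal map of a weighted ℓ1 norm. A minimal sketch of that single building block (numpy assumed; the coefficient and weight values are illustrative, and this is not the paper's full two-frame algorithm):

```python
import numpy as np

def soft_threshold(c, t):
    """Proximal map of t*||.||_1: shrink each coefficient toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

coeffs = np.array([3.0, -0.5, -2.0, 0.2])
weights = np.array([1.0, 1.0, 0.5, 0.5])  # per-coefficient weights, as in Lambda_c / Lambda_w
shrunk = soft_threshold(coeffs, weights)
# -> [2.0, 0.0, -1.5, 0.0]: small coefficients are set to zero, large ones shrunk
```

Each split Bregman subproblem applies this map to the frame coefficients Ψ_c u and Ψ_w u, which is what makes the per-iteration cost low despite the nonsmooth objective.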
Optimally Sparse Image Representation by the Easy Path Wavelet Transform
"... The Easy Path Wavelet Transform (EPWT) [19] has recently been proposed by one of the authors as a tool for sparse representations of bivariate functions from discrete data, in particular from image data. The EPWT is a locally adaptive wavelet transform. It works along pathways through the array of f ..."
Abstract

Cited by 115 (8 self)
 Add to MetaCart
(Show Context)
The Easy Path Wavelet Transform (EPWT) [19] has recently been proposed by one of the authors as a tool for sparse representations of bivariate functions from discrete data, in particular from image data. The EPWT is a locally adaptive wavelet transform. It works along pathways through the array of function values, and it exploits the local correlations of the given data in a simple, appropriate manner. In this paper, we show that the EPWT leads, for a suitable choice of the pathways, to optimal N-term approximations for piecewise Hölder-continuous functions with singularities along curves.
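Optimal N-term approximation, the benchmark used in this abstract, simply keeps the N largest-magnitude coefficients of an expansion and measures the resulting error. A small sketch (numpy assumed; the power-law coefficient sequence is a made-up stand-in for the transform coefficients of a piecewise smooth function):

```python
import numpy as np

def best_n_term(coeffs, n):
    """Keep the n largest-magnitude coefficients, zero out the rest."""
    out = np.zeros_like(coeffs)
    keep = np.argsort(np.abs(coeffs))[-n:]
    out[keep] = coeffs[keep]
    return out

# Coefficients with power-law decay, typical of sparse representations.
c = np.array([1.0 / k for k in range(1, 101)])
err_5 = float(np.linalg.norm(c - best_n_term(c, 5)))
err_20 = float(np.linalg.norm(c - best_n_term(c, 20)))
```

The faster the coefficients decay, the faster this N-term error falls as N grows; the paper proves that EPWT coefficients of piecewise Hölder-continuous functions decay at the optimal rate.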
Recovery algorithms for vector valued data with joint sparsity constraints
2006
Cited by 112 (23 self)
Abstract:
Vector-valued data appearing in concrete applications often possess sparse expansions with respect to a preassigned frame for each vector component individually. Additionally, different components may also exhibit common sparsity patterns. Recently, sparsity measures have been introduced that take such joint sparsity patterns into account, promoting coupling of nonvanishing components. These measures are typically constructed as weighted ℓ1 norms of componentwise ℓq norms of frame coefficients. We show how to compute solutions of linear inverse problems with such joint sparsity regularization constraints by fast thresholded Landweber algorithms. Next, we discuss the adaptive choice of suitable weights appearing in the definition of the sparsity measures. The weights are interpreted as indicators of the sparsity pattern and are iteratively updated after each new application of the thresholded Landweber algorithm. The resulting two-step algorithm is interpreted as a double-minimization scheme for a suitable target functional. We show its ℓ2-norm convergence. An implementable version of the algorithm is also formulated, and its norm convergence is proven. Numerical experiments in color image restoration are presented.
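The thresholded Landweber iteration at the core of this abstract alternates a gradient (Landweber) step with soft thresholding. The sketch below shows the scalar-valued building block on a synthetic problem (numpy assumed; the paper's actual algorithm couples vector components through joint ℓ1-of-ℓq measures and adaptively updated weights, which are omitted here):

```python
import numpy as np

def soft(c, t):
    """Componentwise soft thresholding."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(1)
K, N = 40, 80
A = rng.standard_normal((K, N)) / np.sqrt(K)
x_true = np.zeros(N)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = A @ x_true

tau = 0.005                               # small l1 weight (fixed, not adaptive)
step = 0.9 / np.linalg.norm(A, 2) ** 2    # step below 1/||A||^2 for convergence
x = np.zeros(N)
for _ in range(2000):
    # Landweber step toward the data, then shrinkage toward sparsity.
    x = soft(x + step * A.T @ (y - A @ x), tau * step)
```

The iterate converges to a sparse minimizer close to x_true; the paper's two-step scheme wraps such iterations in an outer loop that re-estimates the weights from the current sparsity pattern.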
A New Hybrid Method for Image Approximation using the Easy Path Wavelet Transform
"... The Easy Path Wavelet Transform (EPWT) has recently been proposed by one of the authors as a tool for sparse representations of bivariate functions from discrete data, in particular from image data. The EPWT is a locally adaptive wavelet transform. It works along pathways through the array of functi ..."
Abstract

Cited by 110 (4 self)
 Add to MetaCart
(Show Context)
The Easy Path Wavelet Transform (EPWT) has recently been proposed by one of the authors as a tool for sparse representations of bivariate functions from discrete data, in particular from image data. The EPWT is a locally adaptive wavelet transform. It works along pathways through the array of function values and exploits the local correlations of the given data in a simple, appropriate manner. However, the EPWT suffers from adaptivity costs that arise from the storage of path vectors. In this paper, we propose a new hybrid method for image compression that exploits the advantages of the usual tensor-product wavelet transform for the representation of smooth images and uses the EPWT for an efficient representation of edges and texture. Numerical results show the efficiency of this procedure.
Key words: sparse data representation, tensor product wavelet transform, easy path wavelet transform, linear diffusion, smoothing filters, adaptive wavelet bases, N-term approximation
AMS subject classifications: 41A25, 42C40, 68U10, 94A08