Results 1-10 of 661
ATOMIC DECOMPOSITION BY BASIS PURSUIT
1995
Abstract

Cited by 2731 (61 self)
The Time-Frequency and Time-Scale communities have recently developed a large number of overcomplete waveform dictionaries: stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the Method of Frames (MOF), Matching Pursuit (MP), and, for special dictionaries, the Best Orthogonal Basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. Basis Pursuit in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
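As a concrete illustration of the principle (not the paper's primal-dual log-barrier implementation), the l1 problem min ||c||_1 subject to Phi c = s can be recast as a linear program by splitting c into nonnegative parts; the sketch below, with illustrative names `basis_pursuit` and `Phi`, hands that LP to SciPy's generic solver.

```python
# Illustrative Basis Pursuit sketch: min ||c||_1 s.t. Phi @ c = s, recast as an
# LP via c = u - v with u, v >= 0. Solver is SciPy's generic LP interface, not
# the paper's primal-dual log-barrier method; all names are hypothetical.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, s):
    """Return the minimum-l1-norm coefficient vector c satisfying Phi @ c = s."""
    n, p = Phi.shape
    cost = np.ones(2 * p)                 # objective: sum(u) + sum(v) = ||c||_1
    A_eq = np.hstack([Phi, -Phi])         # equality constraint: Phi u - Phi v = s
    res = linprog(cost, A_eq=A_eq, b_eq=s, bounds=(0, None), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v

# Tiny overcomplete example: 4 samples, 8 atoms, an exactly 2-sparse signal.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((4, 8))
c_true = np.zeros(8)
c_true[[1, 5]] = [2.0, -1.0]
print(np.round(basis_pursuit(Phi, Phi @ c_true), 3))
```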
Matching pursuits with time-frequency dictionaries
IEEE Transactions on Signal Processing, 1993
Abstract

Cited by 1654 (13 self)
We introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions a matching pursuit defines an adaptive time-frequency transform. We derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. We compare a matching pursuit decomposition with a signal expansion over an optimized wave-packet orthonormal basis, selected with the algorithm of Coifman and Wickerhauser.
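A minimal sketch of the greedy selection loop the abstract describes, assuming a generic matrix dictionary with unit-norm columns rather than the paper's Gabor dictionary; the helper name `matching_pursuit` is illustrative.

```python
# Illustrative matching pursuit: at each step pick the atom with the largest
# inner product with the residual, record its coefficient, and subtract its
# contribution. Assumes unit-norm dictionary columns; names are hypothetical.
import numpy as np

def matching_pursuit(D, x, n_iter=10):
    """Greedy decomposition of x over dictionary D (columns = unit-norm atoms)."""
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        correlations = D.T @ residual            # inner products with every atom
        k = int(np.argmax(np.abs(correlations))) # best-matching atom
        coeffs[k] += correlations[k]
        residual = residual - correlations[k] * D[:, k]
    return coeffs, residual
```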
Greed is Good: Algorithmic Results for Sparse Approximation
2004
Abstract

Cited by 916 (8 self)
This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho’s basis pursuit (BP) paradigm can recover the optimal representation of an exactly sparse signal. It leverages this theory to show that both OMP and BP succeed for every sparse input signal from a wide class of dictionaries. These quasi-incoherent dictionaries offer a natural generalization of incoherent dictionaries, and the cumulative coherence function is introduced to quantify the level of incoherence. This analysis unifies all the recent results on BP and extends them to OMP. Furthermore, the paper develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a non-sparse signal. From there, it argues that OMP is an approximation algorithm for the sparse problem over a quasi-incoherent dictionary. That is, for every input signal, OMP calculates a sparse approximant whose error is only a small factor worse than the minimal error that can be attained with the same number of terms.
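For contrast with plain matching pursuit, a compact OMP sketch follows: after each atom is selected, the coefficients of all selected atoms are refit by least squares, so the residual stays orthogonal to the chosen set. The dictionary format and the name `omp` are assumptions for illustration, not the paper's code.

```python
# Illustrative orthogonal matching pursuit (OMP); assumes unit-norm dictionary
# columns and a 1-D target signal x. Names are hypothetical.
import numpy as np

def omp(D, x, n_atoms):
    residual = x.copy()
    support, coeffs = [], np.zeros(0)
    for _ in range(n_atoms):
        k = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with residual
        support.append(k)
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)  # refit on support
        residual = x - D[:, support] @ coeffs
    return support, coeffs
```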
Quantization
IEEE TRANS. INFORM. THEORY, 1998
Abstract

Cited by 877 (12 self)
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
A tutorial on support vector regression
2004
Abstract

Cited by 828 (3 self)
In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.
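A small usage sketch of epsilon-insensitive SV regression, here via scikit-learn's off-the-shelf `SVR` trainer rather than any of the large-scale methods the tutorial surveys; the toy data and hyperparameters are purely illustrative.

```python
# Illustrative epsilon-SVR fit on a noisy sine; hyperparameters are arbitrary.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 2 * np.pi, 80)).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(80)

model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
print("support vectors:", len(model.support_), "of", len(X))  # sparsity of the fit
```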
Curvelets: a surprisingly effective nonadaptive representation of objects with edges
IN CURVE AND SURFACE FITTING: SAINT-MALO, 2000
Abstract

Cited by 390 (23 self)
It is widely believed that to efficiently represent an otherwise smooth object with discontinuities along edges, one must use an adaptive representation that in some sense ‘tracks’ the shape of the discontinuity set. This folk belief — some would say folk theorem — is incorrect. At the very least, the possible quantitative advantage of such adaptation is vastly smaller than commonly believed. We have recently constructed a tight frame of curvelets which provides stable, efficient, and near-optimal representation of otherwise smooth objects having discontinuities along smooth curves. By applying naive thresholding to the curvelet transform of such an object, one can form m-term approximations with a rate of L2 approximation rivaling the rate obtainable by complex adaptive schemes which attempt to ‘track’ the discontinuity set. In this article we explain the basic issues of efficient m-term approximation, the construction of efficient adaptive representations, the construction of the curvelet frame, and a crude analysis of the performance of curvelet schemes.
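The "naive thresholding" step generalizes to any orthonormal analysis operator: keep the m largest-magnitude coefficients and reconstruct. The sketch below illustrates that m-term scheme with an assumed orthonormal matrix `W`; it is not a curvelet (tight-frame) implementation.

```python
# Illustrative m-term approximation by keeping the m largest coefficients in an
# assumed orthonormal basis (rows of W); names are hypothetical.
import numpy as np

def m_term_approx(W, x, m):
    coeffs = W @ x
    keep = np.argsort(np.abs(coeffs))[-m:]   # indices of the m largest coefficients
    kept = np.zeros_like(coeffs)
    kept[keep] = coeffs[keep]
    return W.T @ kept                        # synthesis: transpose inverts orthonormal W
```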
Regularization networks and support vector machines
Advances in Computational Mathematics, 2000
Abstract

Cited by 366 (38 self)
Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples – in particular the regression problem of approximating a multivariate function from sparse data. Radial Basis Functions, for example, are a special case of both regularization and Support Vector Machines. We review both formulations in the context of Vapnik’s theory of statistical learning which provides a general foundation for the learning problem, combining functional analysis and statistics. The emphasis is on regression: classification is treated as a special case.
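A minimal regularization-network sketch under a Gaussian (RBF) kernel assumption: the regularized least-squares solution takes the form f(x) = sum_i c_i K(x, x_i) with c = (K + lambda I)^{-1} y. Function names and the regularization value are illustrative, not from the paper.

```python
# Illustrative regularization network / kernel ridge fit with an RBF kernel.
# X is a 2-D array of inputs (n_samples, n_features); constants are arbitrary.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_regularization_network(X, y, lam=1e-2, sigma=1.0):
    K = rbf_kernel(X, X, sigma)
    c = np.linalg.solve(K + lam * np.eye(len(X)), y)       # regularized coefficients
    return lambda Xnew: rbf_kernel(Xnew, X, sigma) @ c     # f(x) = sum_i c_i K(x, x_i)
```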
Learning Overcomplete Representations
2000
Abstract

Cited by 356 (11 self)
In an overcomplete basis, the number of basis vectors is greater than the dimensionality of the input, and the representation of an input is not a unique combination of basis vectors. Overcomplete representations have been advocated because they have greater robustness in the presence of noise, can be sparser, and can have greater flexibility in matching structure in the data. Overcomplete codes have also been proposed as a model of some of the response properties of neurons in primary visual cortex. Previous work has focused on finding the best representation of a signal using a fixed overcomplete basis (or dictionary). We present an algorithm for learning an overcomplete basis by viewing it as a probabilistic model of the observed data. We show that overcomplete bases can yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency. This can be viewed as a generalization of the technique of independent component analysis and provides a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures.
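A schematic alternating loop in the spirit of dictionary learning, assuming a crude ridge-plus-soft-threshold coding step and a gradient update of the basis; it is not the paper's probabilistic/Bayesian algorithm, and all names and constants are illustrative.

```python
# Illustrative overcomplete-basis learning: alternate a rough sparse coding step
# with a gradient step on the reconstruction error, renormalizing atoms.
import numpy as np

def learn_dictionary(X, n_atoms, n_iter=50, lam=0.1, lr=0.1):
    """X: (n_features, n_samples); returns basis D with n_atoms unit-norm columns."""
    rng = np.random.default_rng(0)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # Crude sparse coding: ridge solve followed by soft thresholding.
        C = np.linalg.solve(D.T @ D + lam * np.eye(n_atoms), D.T @ X)
        C = np.sign(C) * np.maximum(np.abs(C) - lam, 0.0)
        # Gradient step on the reconstruction error, then renormalize atoms.
        D += lr * (X - D @ C) @ C.T
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D
```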
Texture analysis and classification with tree-structured wavelet transform
IEEE Trans. Image Processing, 1993
Abstract

Cited by 314 (1 self)
One difficulty of texture analysis in the past was the lack of adequate tools to characterize different scales of textures effectively. Recent developments in multiresolution analysis such as the Gabor and wavelet transforms help to overcome this difficulty. In this research, we propose a multiresolution approach based on a modified wavelet transform called the tree-structured wavelet transform or wavelet packets for texture analysis and classification. The development of this new transform is motivated by the observation that a large class of natural textures can be modeled as quasi-periodic signals whose dominant frequencies are located in the middle frequency channels. With the transform, we are able to zoom into any desired frequency channels for further decomposition. In contrast, the conventional pyramid-structured wavelet transform performs further decomposition only in low frequency channels. We develop a progressive texture classification algorithm which is not only computationally attractive but also has excellent performance. The performance of our new method is compared with that of several other methods using the DCT, DST, DHT, pyramid-structured wavelet transforms, Gabor filters, and Laws filters.
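A sketch of the tree-structured idea with Haar filters: any subband carrying enough energy is split again, so mid-frequency channels get refined too, unlike the pyramid transform. The energy threshold, filters, and function names are assumptions sketching the idea, not the paper's classification algorithm.

```python
# Illustrative tree-structured (wavelet-packet style) decomposition with Haar
# filters; assumes power-of-two signal lengths. Names and threshold are hypothetical.
import numpy as np

def haar_split(x):
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation) channel
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail) channel
    return lo, hi

def tree_decompose(x, max_level=3, energy_frac=0.1, _total=None):
    """Return leaf subbands; a band is split further only if it carries enough energy."""
    _total = np.sum(x ** 2) if _total is None else _total
    if max_level == 0 or len(x) % 2 or np.sum(x ** 2) < energy_frac * _total:
        return [x]
    lo, hi = haar_split(x)
    return (tree_decompose(lo, max_level - 1, energy_frac, _total) +
            tree_decompose(hi, max_level - 1, energy_frac, _total))
```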
Translation-invariant denoising
1995
Abstract

Cited by 307 (8 self)
DeNoising with the traditional (orthogonal, maximally-decimated) wavelet transform sometimes exhibits visual artifacts; we attribute some of these – for example, Gibbs phenomena in the neighborhood of discontinuities – to the lack of translation invariance of the wavelet basis. One method to suppress such artifacts, termed “cycle spinning” by Coifman, is to “average out” the translation dependence. For a range of shifts, one shifts the data (right or left as the case may be), DeNoises the shifted data, and then unshifts the denoised data. Doing this for each of a range of shifts, and averaging the several results so obtained, produces a reconstruction subject to far weaker Gibbs phenomena than thresholding-based DeNoising using the traditional orthogonal wavelet transform. Cycle-spinning over the range of all circulant shifts can be accomplished in order n log2(n) time; it is equivalent to denoising using the undecimated or stationary wavelet transform. Cycle-spinning exhibits benefits outside of wavelet denoising, for example in cosine packet denoising, where it helps suppress ‘clicks’. It also has a counterpart in frequency domain denoising, where the goal of translation invariance is replaced by modulation invariance, and the central shift-DeNoise-unshift operation is replaced by modulate-DeNoise-demodulate. We illustrate these concepts with extensive computational examples; all figures presented here are reproducible using the WaveLab software package.
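A minimal cycle-spinning sketch: shift, denoise, unshift, and average over all circulant shifts. The soft-thresholded one-level Haar "denoiser" is a stand-in for whatever wavelet denoiser one prefers (the paper's experiments use WaveLab routines), and this direct loop costs O(n^2) rather than the n log2(n) achievable via the stationary wavelet transform.

```python
# Illustrative cycle spinning over all circulant shifts; assumes even-length
# signals. The Haar soft-threshold step and names are hypothetical placeholders.
import numpy as np

def haar_denoise(x, thresh):
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)
    hi = np.sign(hi) * np.maximum(np.abs(hi) - thresh, 0.0)   # soft-threshold details
    out = np.empty_like(x, dtype=float)
    out[0::2] = (lo + hi) / np.sqrt(2)
    out[1::2] = (lo - hi) / np.sqrt(2)
    return out

def cycle_spin_denoise(x, thresh=0.5):
    n = len(x)
    acc = np.zeros(n)
    for shift in range(n):                                    # all circulant shifts
        shifted = np.roll(x, shift)                           # shift
        acc += np.roll(haar_denoise(shifted, thresh), -shift) # denoise, then unshift
    return acc / n                                            # average the results
```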