Results 1–10 of 10
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
Abstract

Cited by 1513 (20 self)
Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy ɛ? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a power law (or if the coefficient sequence of f in a fixed basis decays like a power law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|(1) ≥ |f|(2) ≥ ... ≥ |f|(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|(n) ≤ C · n^(−1/p). We take measurements ⟨f, Xk⟩, k = 1, ..., K, where the Xk are N-dimensional Gaussian
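The measurement scheme described in this abstract is easy to simulate. The following is a minimal numerical sketch (not the authors' code; it assumes NumPy and SciPy, and all dimensions and the sparsity level are illustrative choices): it draws Gaussian measurement vectors Xk and recovers a sparse f by ℓ1 minimization cast as a linear program, the reconstruction strategy this line of work analyzes.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

N, K, S = 64, 32, 4          # ambient dimension, number of measurements, sparsity
f = np.zeros(N)
f[rng.choice(N, S, replace=False)] = rng.normal(size=S)   # an S-sparse signal

# Measurements y_k = <f, X_k>, with the X_k the rows of a Gaussian matrix.
X = rng.normal(size=(K, N))
y = X @ f

# l1 recovery as a linear program:  min sum(t)  s.t.  -t <= x <= t,  X x = y.
# Stack the variables as z = [x; t].
c = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[np.eye(N), -np.eye(N)],      #  x - t <= 0
                 [-np.eye(N), -np.eye(N)]])    # -x - t <= 0
b_ub = np.zeros(2 * N)
A_eq = np.hstack([X, np.zeros((K, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
f_hat = res.x[:N]
print(np.linalg.norm(f_hat - f))   # small when K is large enough relative to S
```

With K well above the sparsity level, the ℓ1 program typically recovers the sparse vector essentially exactly, which is the phenomenon the abstract quantifies.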
Bandelet Image Approximation and Compression
 SIAM Journal on Multiscale Modeling and Simulation
, 2005
Abstract

Cited by 36 (4 self)
Finding efficient geometric representations of images is a central issue in improving image compression and noise removal algorithms. We introduce bandelet orthogonal bases and frames that are adapted to the geometric regularity of an image. Images are approximated by finding a best bandelet basis or frame that produces a sparse representation. For functions that are uniformly regular outside a set of edge curves that are geometrically regular, the main theorem proves that bandelet approximations satisfy an optimal asymptotic error decay rate. A bandelet image compression scheme is derived. For computational applications, a fast discrete bandelet transform algorithm is introduced, with a fast best basis search which preserves asymptotic approximation and coding error decay rates.
Wavelet Frame Accelerated Reduced Support Vector Machines
 IEEE Trans. Image Processing
, 2008
Abstract

Cited by 6 (2 self)
Abstract—In this paper, a novel method for reducing the runtime complexity of a support vector machine classifier is presented. The new training algorithm is fast and simple. This is achieved by an overcomplete wavelet transform that finds the optimal approximation of the support vectors. The presented derivation shows that the wavelet theory provides an upper bound on the distance between the decision function of the support vector machine and our classifier. The obtained classifier is fast, since a Haar wavelet approximation of the support vectors is used, enabling efficient integral image-based kernel evaluations. This provides a set of cascaded classifiers of increasing complexity for an early rejection of vectors that are easy to discriminate. This excellent runtime performance is achieved by using a hierarchical evaluation over the number of incorporated reduced set vectors and additionally over the approximation accuracy of the reduced set vectors. Here, this algorithm is applied to the problem of face detection, but it can also be used for other image-based classifications. The algorithm presented provides a 530-fold speedup over the support vector machine, enabling face detection at more than 25 fps on a standard PC. Index Terms—Cascaded evaluation, coarse-to-fine classifier, face detection, machine learning, overcomplete wavelet transform (OCWT), reduced support vector machine (RVM).
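The cascaded early-rejection idea in this abstract can be sketched generically. In the toy Python fragment below, every quantity (the reduced-set vectors, weights, kernel width, and stage thresholds) is invented for illustration, not taken from the paper's trained detector: a kernel expansion is evaluated over a growing number of reduced-set vectors, and a sample is rejected as soon as a partial score falls below that stage's threshold.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reduced-set expansion: vectors z_i (rows of Z), weights beta_i,
# and per-stage rejection thresholds -- all illustrative placeholders.
D, M = 16, 32                     # input dimension, number of reduced-set vectors
Z = rng.normal(size=(M, D))
beta = rng.normal(size=M)
BIAS = 0.0
GAMMA = 0.1
THRESHOLDS = {4: -1.0, 8: -0.5, 16: -0.2}   # stage size -> rejection threshold

def rbf(x, z):
    """Gaussian RBF kernel k(x, z)."""
    return np.exp(-GAMMA * np.sum((x - z) ** 2))

def cascade_decision(x):
    """Evaluate sum_i beta_i * k(x, z_i) + b incrementally, rejecting early
    when a partial score falls below the current stage's threshold."""
    score = BIAS
    for i in range(M):
        score += beta[i] * rbf(x, Z[i])
        stage = i + 1
        if stage in THRESHOLDS and score < THRESHOLDS[stage]:
            return -1.0           # early rejection after only `stage` kernel evaluations
    return 1.0 if score >= 0 else -1.0   # full evaluation for the hard cases
```

Most easy negatives exit at an early stage after only a handful of kernel evaluations, which is the source of the runtime savings the abstract describes.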
The Size of Objects in Natural Images
, 1999
Abstract

Cited by 5 (0 self)
This paper introduces a new method for analyzing scaling phenomena in natural images, and draws some consequences as to whether natural images belong to the space of functions with bounded variation. In some sense, our analysis computes the size distribution of objects in an image. By using the dead leaves model, we study the influence of occlusion on size distribution, and prove compatibility with our experimental results.
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
 IEEE Trans. Information Theory
, 2006
Abstract

Cited by 1 (0 self)
Suppose we are given a vector f in a class F, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|(n) ≤ R · n^(−1/p), where R > 0 and p > 0. Suppose that we take measurements yk = ⟨f, Xk⟩, k = 1, ..., K, where the Xk are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0 < p < 1 and with overwhelming probability, our reconstruction f♯, defined as the solution to the constraints
Greedy Wavelet Projections are Bounded on BV
, 2006
Abstract
Let BV = BV(R^d) be the space of functions of bounded variation on R^d with d ≥ 2. Let ψλ, λ ∈ ∆, be a wavelet system of compactly supported functions normalized in BV, i.e., ‖ψλ‖BV(R^d) = 1, λ ∈ ∆. Each f ∈ BV has a unique wavelet expansion ∑λ∈∆ cλ(f)ψλ with convergence in L1(R^d). If ΛN(f) is the set of N indices λ ∈ ∆ for which the |cλ(f)| are largest (with ties handled in an arbitrary way), then GN(f) := ∑λ∈ΛN(f) cλ(f)ψλ is called a greedy approximation to f. It is shown that ‖GN(f)‖BV(R^d) ≤ C‖f‖BV(R^d) with C a constant independent of f. This answers in the affirmative a conjecture of Meyer (2001).
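The greedy approximation GN(f) is simple to state computationally: keep the N wavelet coefficients of largest magnitude and zero the rest. The sketch below is an illustration only, using an L2-orthonormal 1-D Haar system rather than the BV-normalized wavelet system of the paper.

```python
import numpy as np

def haar_1d(x):
    """Orthonormal 1-D Haar wavelet transform (length must be a power of two).
    Returns a list of detail arrays per level plus the final scaling coefficient."""
    coeffs, a = [], x.astype(float)
    while len(a) > 1:
        s = (a[0::2] + a[1::2]) / np.sqrt(2)   # scaling (average) part
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # wavelet (detail) part
        coeffs.append(d)
        a = s
    coeffs.append(a)
    return coeffs

def inverse_haar_1d(coeffs):
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

def greedy_approx(x, N):
    """G_N: keep the N wavelet coefficients of largest magnitude, zero the rest."""
    coeffs = haar_1d(x)
    flat = np.concatenate(coeffs)
    keep = np.argsort(np.abs(flat))[-N:]       # indices of the N largest |c|
    mask = np.zeros_like(flat)
    mask[keep] = 1.0
    flat *= mask
    out, pos = [], 0                           # unflatten back into per-level arrays
    for c in coeffs:
        out.append(flat[pos:pos + len(c)])
        pos += len(c)
    return inverse_haar_1d(out)

x = np.sin(np.linspace(0, 2 * np.pi, 64))
for N in (4, 16, 64):
    err = np.linalg.norm(greedy_approx(x, N) - x)
    print(N, err)   # error shrinks as N grows; N = 64 keeps every coefficient
```

Because the transform here is orthonormal, the ℓ2 error is exactly the norm of the discarded coefficients; the paper's result concerns the much subtler question of whether the same greedy truncation stays bounded in the BV norm.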
based on Variational Models
Abstract
(Under the direction of Professor Mingjun Lai) In this work, we use bivariate splines to find approximations of the solutions to three variational models: the ROF model, the TV-Lp model, and the Chan-Vese Active Contour model. We start by showing that these variational models have solutions in the spline space, and that the solutions are unique and stable. We then prove that the solutions in the spline space approximate the solution in the Sobolev space or the BV space, according to the shape of the domain. Finally, we study the discretization of the Chan-Vese Active Contour model in the level set setting. Numerical examples in image processing based on finite differences are included.
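As a point of comparison for the finite-difference examples the abstract mentions, here is a minimal gradient-descent sketch of the ROF model with a smoothed TV term. It is a generic illustration, not the bivariate-spline method of the thesis; the parameter values and the periodic boundary handling are arbitrary choices.

```python
import numpy as np

def rof_denoise(f, lam=1.0, eps=0.1, tau=0.02, iters=500):
    """Explicit gradient descent on the smoothed ROF energy
       E(u) = sum sqrt(|grad u|^2 + eps^2) + (lam/2) * sum (u - f)^2,
    using forward differences and periodic boundaries."""
    u = f.copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u               # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)   # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        # backward-difference divergence of (px, py): the TV descent direction
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + tau * (div - lam * (u - f))
    return u

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                               # a bright square
noisy = clean + 0.2 * rng.normal(size=clean.shape)
denoised = rof_denoise(noisy)
```

Each step moves u in the negative gradient direction of the energy, so the ROF energy of the iterate decreases; the thesis studies how well spline-space minimizers of the same kind of energy approximate the true Sobolev or BV solution.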
unknown title
, 2008
Abstract
Estimating the probability law of the codelength as a function of the approximation error in image compression
Proceedings of the Estonian Academy of Sciences,
, 2010
Abstract
Available online at www.eap.ee/proceedings. The Besicovitch covering theorem and near-minimizers for the couple (L2, BV)