Results 1–10 of 23
Sickel: Optimal approximation of elliptic problems by linear and nonlinear mappings III
 Triebel, Function Spaces, Entropy Numbers, Differential Operators
, 1996
Cited by 135 (28 self)
We study the optimal approximation of the solution of an operator equation A(u) = f by four types of mappings: a) linear mappings of rank n; b) n-term approximation with respect to a Riesz basis; c) approximation based on linear information about the right-hand side f; d) continuous mappings. We consider worst-case errors, where f is an element of the unit ball of a Sobolev or Besov space B^r_q(L_p(Ω)) and Ω ⊂ R^d is a bounded Lipschitz domain; the error is always measured in the H^s-norm. The respective widths are the linear widths (or approximation numbers), the nonlinear widths, the Gelfand widths, and the manifold widths. As a technical tool, we also study the Bernstein numbers. Our main results are the following. If p ≥ 2, then the order of convergence is the same for all four classes of approximations. In particular, the best linear approximations are of the same order as the best nonlinear ones. The best linear approximation can be quite difficult to realize as a numerical algorithm, since the optimal Galerkin space usually depends on the operator and on the shape of the domain Ω. For p < 2 there is a difference: nonlinear approximations are better than linear ones. However, in this case it turns out that linear information about the right-hand side f is again optimal. Our main theoretical tool is the best n-term approximation with respect to an optimal Riesz basis and related nonlinear widths. These general results are used to study the Poisson equation in a polygonal domain. It turns out that best n-term wavelet approximation is (almost) optimal.
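The contrast drawn in this abstract between linear and nonlinear (n-term) approximation can be sketched in a few lines. This is a minimal illustration of our own, assuming an orthonormal basis, in which case best n-term approximation reduces to keeping the n coefficients largest in magnitude:

```python
import numpy as np

def linear_n_approx(coeffs, n):
    """Linear approximation: keep the first n coefficients (a fixed subspace,
    chosen independently of the target)."""
    out = np.zeros_like(coeffs)
    out[:n] = coeffs[:n]
    return out

def best_n_term(coeffs, n):
    """Nonlinear n-term approximation: keep the n coefficients largest in
    magnitude (the retained index set adapts to the target)."""
    out = np.zeros_like(coeffs)
    idx = np.argsort(np.abs(coeffs))[-n:]
    out[idx] = coeffs[idx]
    return out

# For an orthonormal basis the error is the l2 norm of the discarded part.
c = np.array([1.0, 0.0, 0.0, 0.9, 0.0, 0.8])
err_linear = np.linalg.norm(c - linear_n_approx(c, 3))
err_nterm = np.linalg.norm(c - best_n_term(c, 3))
```

When the large coefficients are scattered, as above, the adaptive scheme wins; the abstract's result for p ≥ 2 says that over whole Sobolev/Besov balls this gain disappears in the rate.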
Vector Greedy Algorithms
Cited by 64 (11 self)
Our objective is to study nonlinear approximation with regard to redundant systems. Redundancy on the one hand offers much promise for greater efficiency in terms of approximation rate, but on the other hand gives rise to highly nontrivial theoretical and practical problems. Greedy-type approximations have proved to be convenient and efficient ways of constructing m-term approximants. We introduce and study vector greedy algorithms that are designed with the aim of constructing m-th greedy approximants simultaneously for a given finite number of elements. We prove convergence theorems and obtain some estimates for the rate of convergence of vector greedy algorithms when the elements come from certain classes.
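As a rough illustration of the idea (one plausible selection rule of our own choosing, not necessarily the paper's exact variant), a vector greedy step can pick the single dictionary element that best serves all targets at once and update every residual with it:

```python
import numpy as np

def vector_greedy(fs, dictionary, m):
    """Sketch of a vector greedy algorithm. dictionary: rows are unit-norm
    elements; fs: list of target vectors. Each step chooses the element with
    the largest total squared inner product across all residuals, so all
    expansions share the same dictionary elements."""
    residuals = [np.asarray(f, dtype=float).copy() for f in fs]
    picks = []
    for _ in range(m):
        scores = sum((dictionary @ r) ** 2 for r in residuals)
        j = int(np.argmax(scores))
        picks.append(j)
        # Peel off the chosen element's component from every residual.
        residuals = [r - (dictionary[j] @ r) * dictionary[j] for r in residuals]
    return picks, residuals

# Toy orthonormal dictionary: the standard basis of R^3.
D = np.eye(3)
picks, res = vector_greedy([np.array([3.0, 1.0, 0.0]),
                            np.array([0.0, 2.0, 1.0])], D, 2)
```

Here the first step picks the element most useful to both targets jointly, which is the essential difference from running a separate greedy algorithm per target.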
Simultaneous approximation by greedy algorithms
, 2003
Cited by 27 (1 self)
Abstract. We study nonlinear m-term approximation with regard to a redundant dictionary D in a Hilbert space H. It is known that the Pure Greedy Algorithm (or, more generally, the Weak Greedy Algorithm) provides for each f ∈ H and any dictionary D an expansion into a series f = ∑_{j=1}^∞ c_j(f) ϕ_j(f), ϕ_j(f) ∈ D, with the Parseval property ‖f‖² = ∑_j |c_j(f)|². Following the paper of A. Lutoborski and the second author [21], we study analogs of the above expansions for a given finite number of functions f_1, ..., f_N with the requirement that the dictionary elements ϕ_j of these expansions are the same for all f_i, i = 1, ..., N. We study convergence and rate of convergence of such expansions, which we call simultaneous expansions.
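A minimal sketch of the Pure Greedy Algorithm named above, on a toy redundant dictionary of our own choosing. After m steps the Parseval property appears as the finite energy identity ‖f‖² = ∑_j c_j² + ‖residual‖²:

```python
import numpy as np

def pure_greedy(f, dictionary, m):
    """Pure Greedy Algorithm sketch. dictionary: rows are unit-norm elements
    of a (possibly redundant) dictionary. Each step picks the element with
    the largest inner product with the current residual and peels off that
    component."""
    residual = np.asarray(f, dtype=float).copy()
    coeffs = []
    for _ in range(m):
        inner = dictionary @ residual
        j = int(np.argmax(np.abs(inner)))
        coeffs.append(inner[j])
        residual = residual - inner[j] * dictionary[j]
    return coeffs, residual

s = 1.0 / np.sqrt(2.0)
D = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [s,   s,   0.0]])   # redundant: four unit vectors in R^3
f = np.array([2.0, 1.0, 0.0])
coeffs, residual = pure_greedy(f, D, 2)
# Finite-step Parseval identity: ||f||^2 = sum_j c_j^2 + ||residual||^2.
energy = sum(c * c for c in coeffs) + residual @ residual
```

The identity holds because each step removes an orthogonal component of size c_j along a unit vector, so ‖f_{j+1}‖² = ‖f_j‖² − c_j².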
Comparison of worst case errors in linear and neural network approximation
 IEEE Trans. Inform. Theory
, 2002
Cited by 16 (8 self)
Abstract—Sets of multivariable functions are described for which worst-case errors in linear approximation are larger than those in approximation by neural networks. A theoretical framework for such a description is developed in the context of nonlinear approximation by fixed versus variable basis functions. Comparisons of approximation rates are formulated in terms of certain norms tailored to sets of basis functions. The results are applied to perceptron networks. Index Terms—Complexity of neural networks, curse of dimensionality, high-dimensional optimization, linear and nonlinear approximation, rates of approximation.
The polyharmonic local sine transform: A new tool for local image analysis and synthesis without edge effect
 Applied and Computational Harmonic Analysis
, 2006
Cited by 12 (9 self)
We introduce a new local sine transform that can completely localize image information both in the space domain and in the spatial frequency domain. This transform, which we shall call the polyharmonic local sine transform (PHLST), first segments an image into local pieces using the characteristic functions, then decomposes each piece into two components: the polyharmonic component and the residual. The polyharmonic component is obtained by solving the elliptic boundary value problem associated with the so-called polyharmonic equation (e.g., Laplace's equation, biharmonic equation, etc.) given the boundary values (the pixel values along the boundary created by the characteristic function). Subsequently this component is subtracted from the original local piece to obtain the residual. Since the boundary values of the residual vanish, its Fourier sine series expansion has quickly decaying coefficients. Consequently, PHLST can distinguish intrinsic singularities in the data from the artificial discontinuities created by the local windowing. Combining this ability with the quickly decaying coefficients of the residuals, PHLST is also effective for image approximation, which we demonstrate using both synthetic and real images. In addition, we introduce the polyharmonic local Fourier transform (PHLFT) by replacing the Fourier sine series above by the complex Fourier series. With a slight sacrifice of the decay rate of the expansion coefficients, PHLFT allows one to compute local Fourier magnitudes and phases without revealing the edge effect (or Gibbs phenomenon), yet is invertible and useful for various filtering, analysis, and approximation purposes.
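In one dimension the decomposition becomes very concrete (a sketch under our own simplifications: on an interval, the "harmonic" component is just the linear interpolant of the endpoint values, since Laplace's equation reduces to u'' = 0):

```python
import numpy as np

def phlst_1d(u):
    """1-D sketch of the PHLST split: harmonic component = linear interpolant
    of the endpoint values (the 1-D solution of Laplace's equation); the
    residual then vanishes at both endpoints."""
    x = np.linspace(0.0, 1.0, len(u))
    harmonic = u[0] + (u[-1] - u[0]) * x
    return harmonic, u - harmonic

def sine_coeffs(v, kmax):
    """Fourier sine coefficients b_k of v on [0, 1], trapezoid quadrature."""
    n = len(v)
    x = np.linspace(0.0, 1.0, n)
    w = np.ones(n)
    w[0] = w[-1] = 0.5
    dx = x[1] - x[0]
    return np.array([2.0 * dx * np.sum(w * v * np.sin(k * np.pi * x))
                     for k in range(1, kmax + 1)])

x = np.linspace(0.0, 1.0, 2001)
u = np.exp(x)                        # smooth, but nonzero at the boundary
harmonic, residual = phlst_1d(u)
b_raw = sine_coeffs(u, 20)           # decay like 1/k: boundary mismatch
b_res = sine_coeffs(residual, 20)    # decay like 1/k^3: boundary zeroed
```

Subtracting the harmonic part removes the boundary mismatch that forces the slow 1/k decay of the raw sine coefficients; this is the mechanism behind the "quickly decaying coefficients" claimed in the abstract.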
Greedy Algorithms With Regard To Multivariate Systems With Special Structure
 CONSTR. APPROX
, 2000
Cited by 11 (4 self)
The question of finding an optimal dictionary for nonlinear m-term approximation is studied in the paper. We consider this problem in the periodic multivariate (d variables) case for classes of functions with mixed smoothness. We prove that the well-known dictionary U^d, which consists of trigonometric polynomials (shifts of the Dirichlet kernels), is nearly optimal among orthonormal dictionaries. Next, it is established that for these classes near-best m-term approximation with regard to U^d can be achieved by a simple greedy-type (thresholding-type) algorithm. The univariate dictionary U is used to construct a dictionary which is optimal among dictionaries with the tensor product structure.
Convergence of some greedy algorithms in Banach spaces
 J. Fourier Anal. Appl
Cited by 8 (6 self)
Abstract. We consider some theoretical greedy algorithms for approximation in Banach spaces with respect to a general dictionary. We prove convergence of the algorithms for Banach spaces which satisfy certain smoothness assumptions. We compare the algorithms and their rates of convergence when the Banach space is L_p(T^d) (1 < p < ∞) and the dictionary is the trigonometric system.
Complexity of Neural Network Approximation with Limited Information: a Worst Case Approach
, 2000
Cited by 2 (1 self)
In neural network theory the complexity of constructing networks to approximate input-output (i/o) functions has been of recent interest. We study such complexity in a somewhat more general context of approximation of elements f of a normed space F. We assume, as is standard for radial basis function (RBF) networks, that available approximations of f, as well as information about f, are limited. That is, the approximation of f is constructed as a linear combination of a limited collection of basis elements (neuron activation functions), and the construction uses only values of some functionals at f (e.g., examples or point values of f). Such situations are typical in RBF network models, where one wants to build a network that approximates a multivariate i/o function f. We show that the complexity can be essentially split into two independent parts related to "information" complexity and "neural" complexity. Our analysis is done in the worst-case setting, and integrates elements of i...
Best basis selection for approximation in Lp
, 2002
Cited by 1 (0 self)
We study the approximation of a function class F in L_p by choosing first a basis B and then using n-term approximation with the elements of B. Into the competition for best bases we enter all greedy (i.e. democratic and unconditional [20]) bases for L_p. We show that if the function class F is well oriented with respect to a particular basis B then, in a certain sense, this basis is the best choice for this type of approximation. Our results extend the recent results of Donoho [9] from L_2 to