Results 11–20 of 305
The Curse of Dimensionality for Monotone and Convex Functions of Many Variables
, 2010
Dimension-Adaptive Tensor-Product Quadrature
 Computing
, 2003
Abstract

Cited by 74 (12 self)
We consider the numerical integration of multivariate functions defined over the unit hypercube. Here, we especially address the high-dimensional case, where in general the curse of dimension is encountered. Due to the concentration of measure phenomenon, such functions can often be well approximated by sums of lower-dimensional terms. The problem, however, is to find a good expansion given little knowledge of the integrand itself.
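The appeal of such expansions can be illustrated with a minimal sketch (not the paper's dimension-adaptive algorithm): if the integrand happens to be an exact sum of one-dimensional terms, each term can be integrated separately, so the cost grows linearly rather than exponentially in the dimension. The integrand and the midpoint rule below are illustrative assumptions.

```python
import math

def integrate_1d(g, n=200):
    """Midpoint rule on [0, 1] with n subintervals."""
    h = 1.0 / n
    return h * sum(g((i + 0.5) * h) for i in range(n))

def integrate_additive(terms):
    """Integrate f(x) = sum_j g_j(x_j) over the unit cube.

    Each one-dimensional term is integrated separately, so the cost
    is linear in the number of terms instead of exponential in d.
    """
    return sum(integrate_1d(g) for g in terms)

# Hypothetical additive integrand: f(x) = sum_j exp(x_j) over [0,1]^10.
d = 10
approx = integrate_additive([math.exp] * d)
exact = d * (math.e - 1)  # each term integrates to e - 1
```

The hard part the abstract points to is, of course, discovering such a decomposition when the integrand is only available as a black box.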
Estimating the Largest Eigenvalue by the Power and Lanczos Algorithms with a Random Start
, 1992
Abstract

Cited by 71 (3 self)
Our problem is to compute an approximation to the largest eigenvalue of an n × n large symmetric positive definite matrix with relative error at most ε. We consider only algorithms that use Krylov information [b, Ab, ..., A^k b] consisting of k matrix-vector multiplications for some unit vector b. If the vector b is chosen deterministically then the problem cannot be solved no matter how many matrix-vector multiplications are performed and what algorithm is used. If, however, the vector b is chosen randomly with respect to the uniform distribution over the unit sphere, then the problem can be solved on the average and probabilistically. More precisely, for a randomly chosen vector b we study the power and Lanczos algorithms. For the power algorithm (method) we prove sharp bounds on the average relative error and on the probabilistic relative failure. For the Lanczos algorithm we present only upper bounds. In particular, ln(n)/k characterizes the average relative error of ...
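As a rough illustration of the setting (a sketch of the classical power method with a random unit start, not the paper's analysis), the following uses only matrix-vector products and a Rayleigh quotient; the diagonal test matrix is a made-up example.

```python
import random

def power_method_random_start(matvec, n, k, seed=0):
    """Estimate the largest eigenvalue of an SPD matrix from Krylov
    information b, Ab, ..., A^k b with a random unit start vector b."""
    rng = random.Random(seed)
    # Random start: a Gaussian vector normalized to the unit sphere.
    b = [rng.gauss(0.0, 1.0) for _ in range(n)]
    norm = sum(x * x for x in b) ** 0.5
    b = [x / norm for x in b]
    for _ in range(k):
        v = matvec(b)                        # one matrix-vector product
        norm = sum(x * x for x in v) ** 0.5
        b = [x / norm for x in v]
    v = matvec(b)
    # Rayleigh quotient b^T A b (b has unit norm) estimates lambda_max.
    return sum(bi * vi for bi, vi in zip(b, v))

# Illustrative SPD matrix: diagonal with eigenvalues 1, 2, ..., 20.
n = 20
def diag_matvec(x):
    return [(i + 1) * xi for i, xi in enumerate(x)]

estimate = power_method_random_start(diag_matvec, n, k=300)
```

With a deterministic start orthogonal to the top eigenvector the iteration would never see it, which is the intuition behind the paper's negative result for deterministic b.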
Explicit Cost Bounds of Algorithms for Multivariate Tensor Product Problems
 J. Complexity
, 1994
Abstract

Cited by 70 (10 self)
We study multivariate tensor product problems in the worst case and average case settings. They are defined on functions of d variables. For arbitrary d, we provide explicit upper bounds on the costs of algorithms which compute an ε-approximation to the solution. The cost bounds are of the form

(c(d) + 2) β₁ (β₂ + β₃ ln(1/ε)/(d − 1))^{β₄(d−1)} (1/ε)^{β₅}.

Here c(d) is the cost of one function evaluation (or one linear functional evaluation), and the β_i's do not depend on d; they are determined by the properties of the problem for d = 1. For certain tensor product problems, these cost bounds do not exceed c(d) K ε^{−p} for some numbers K and p, both independent of d. We apply these general estimates to certain integration and approximation problems in the worst and average case settings. We also obtain an upper bound, which is independent of d, for the number, n(ε, d), of points for which discrepancy (with unequal weights) is at most ε, n(ε, d) ≤ 7.26 ...
Generalized binary search
 In Proceedings of the 46th Allerton Conference on Communications, Control, and Computing
, 2008
Abstract

Cited by 59 (0 self)
This paper addresses the problem of noisy Generalized Binary Search (GBS). GBS is a well-known greedy algorithm for determining a binary-valued hypothesis through a sequence of strategically selected queries. At each step, a query is selected that most evenly splits the hypotheses under consideration into two disjoint subsets, a natural generalization of the idea underlying classic binary search. GBS is used in many applications, including fault testing, machine diagnostics, disease diagnosis, job scheduling, image processing, computer vision, and active learning. In most of these cases, the responses to queries can be noisy. Past work has provided a partial characterization of GBS, but existing noise-tolerant versions of GBS are suboptimal in terms of query complexity. This paper presents an optimal algorithm for noisy GBS and demonstrates its application to learning multidimensional threshold functions.
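In the noiseless case, the greedy splitting rule is easy to sketch; the snippet below is an illustrative reconstruction (not the paper's noise-tolerant algorithm) running GBS over a hypothetical class of one-dimensional threshold functions.

```python
def generalized_binary_search(hypotheses, queries, oracle):
    """Noiseless GBS: repeatedly pick the query that most evenly splits
    the remaining hypotheses by their predicted response."""
    viable = list(hypotheses)
    while len(viable) > 1:
        def imbalance(q):
            # |#(respond 1) - #(respond 0)| among the viable hypotheses.
            ones = sum(h(q) for h in viable)
            return abs(2 * ones - len(viable))
        q = min(queries, key=imbalance)      # most balanced split
        answer = oracle(q)                   # query the (noiseless) oracle
        viable = [h for h in viable if h(q) == answer]
    return viable[0]

# Hypothetical class: 1-D threshold functions on query points 0..15.
def make_threshold(t):
    return lambda x: 1 if x >= t else 0

points = range(16)
hyps = [make_threshold(t) for t in range(17)]
true_h = make_threshold(11)                  # unknown target hypothesis
found = generalized_binary_search(hyps, points, true_h)
```

For this threshold class the balanced split degenerates to classic bisection, which is exactly the "natural generalization" the abstract describes.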
Quasi-random methods for estimating integrals using relatively small samples
 SIAM Review
, 1994
Abstract

Cited by 53 (1 self)
Abstract. Much of the recent work dealing with quasi-random methods has been aimed at establishing the best possible asymptotic rates of convergence to zero of the error resulting when a finite-dimensional integral is replaced by a finite sum of integrand values. In contrast with this perspective to concentrate on asymptotic convergence rates, this paper emphasizes quasi-random methods that are effective for all sample sizes. Throughout the paper, the problem of estimating finite-dimensional integrals is used to illustrate the major ideas, although much of what is done applies equally to the problem of solving certain Fredholm integral equations. Some new techniques, based on error-reducing transformations of the integrand, are described that have been shown to be useful both in estimating high-dimensional integrals and in solving integral equations. These techniques illustrate the utility of carrying over to the quasi-Monte Carlo method certain devices that have proven to be very valuable in statistical (pseudorandom) Monte Carlo applications. Key words: quasi-Monte Carlo, asymptotic rate of convergence, numerical integration
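For concreteness, a minimal quasi-Monte Carlo sketch using a Halton low-discrepancy sequence (the smooth test integrand is a made-up example, not one from the paper):

```python
def van_der_corput(i, base=2):
    """i-th term of the van der Corput sequence in the given base."""
    x, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        x += digit / denom
    return x

def halton_point(i, bases):
    """i-th point of the Halton sequence, one prime base per coordinate."""
    return [van_der_corput(i, b) for b in bases]

def qmc_estimate(f, n, bases):
    """Quasi-Monte Carlo average of f over the first n Halton points."""
    return sum(f(halton_point(i, bases)) for i in range(1, n + 1)) / n

# Made-up smooth test integrand on [0,1]^3 with exact integral 1.
def f(x):
    p = 1.0
    for xj in x:
        p *= 2.0 * xj
    return p

est = qmc_estimate(f, 4000, bases=[2, 3, 5])
```

Unlike pseudo-random sampling, the deterministic point set fills the cube evenly from the start, which is what makes such methods usable at the small sample sizes the paper emphasizes.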
Quantum summation with an application to integration
, 2001
Abstract

Cited by 43 (11 self)
We study summation of sequences and integration in the quantum model of computation. We develop quantum algorithms for computing the mean of sequences which satisfy a p-summability condition and for integration of functions from Lebesgue spaces L_p([0, 1]^d), and analyze their convergence rates. We also prove lower bounds which show that the proposed algorithms are, in many cases, optimal within the setting of quantum computing. This extends recent results of Brassard, Høyer, Mosca, and Tapp (2000) on computing the mean for bounded sequences and complements results of Novak (2001) on integration of functions from Hölder classes.
Discontinuous information in the worst case and randomized settings
 Math. Nachr
, 2013
Abstract

Cited by 43 (0 self)
Dedicated to Hans Triebel on the occasion of his 75th birthday
We believe that discontinuous linear information is never more powerful than continuous linear information for approximating continuous operators. We prove such a result in the worst case setting. In the randomized setting we consider compact linear operators defined between Hilbert spaces. In this case, the use of discontinuous linear information in the randomized setting cannot be much more powerful than continuous linear information in the worst case setting. These results can be applied when function evaluations are used even if function values are defined only almost everywhere.
Optimized Tensor-Product Approximation Spaces
Abstract

Cited by 42 (15 self)
This paper is concerned with the construction of optimized grids and approximation spaces for elliptic differential and integral equations. The main result is the analysis of the approximation of the embedding of the intersection of classes of functions with bounded mixed derivatives in standard Sobolev spaces. Based on the framework of tensor-product biorthogonal wavelet bases and stable subspace splittings, the problem is reduced to diagonal mappings between Hilbert sequence spaces. We construct operator-adapted finite-element subspaces with a lower dimension than the standard full-grid spaces. These new approximation spaces preserve the approximation order of the standard full-grid spaces, provided that certain additional regularity assumptions are fulfilled. The form of the approximation spaces is governed by the ratios of the smoothness exponents of the considered classes of functions. We show in which cases the so-called curse of dimensionality can be broken. The theory covers e...
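The dimension reduction behind such optimized spaces can be illustrated by comparing index sets (a schematic sketch only: hyperbolic-cross sets of this kind underlie sparse, mixed-regularity tensor-product spaces, but the cutoff used here is an arbitrary choice, not the paper's construction):

```python
from itertools import product

def full_grid_indices(d, n):
    """All multi-indices k with max_j k_j <= n (full tensor grid)."""
    return [k for k in product(range(n + 1), repeat=d)]

def hyperbolic_cross_indices(d, n):
    """Multi-indices with prod_j (k_j + 1) <= n + 1 (hyperbolic cross),
    a sparse index set exploiting bounded mixed derivatives."""
    out = []
    for k in product(range(n + 1), repeat=d):
        p = 1
        for kj in k:
            p *= kj + 1
        if p <= n + 1:
            out.append(k)
    return out

d, n = 3, 7
full = len(full_grid_indices(d, n))        # (n+1)^d indices
sparse = len(hyperbolic_cross_indices(d, n))
```

The full grid needs 512 indices here, the hyperbolic cross only 38, while for functions with bounded mixed derivatives the sparse set retains (up to logarithmic factors) the full-grid approximation order.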