Results 1–10 of 13
Deterministic Multilevel Algorithms for Infinite-Dimensional Integration on R^N
Preprint 40, DFG-SPP 1324, 2010
Quasi-Monte Carlo finite element methods for a class of elliptic partial differential equations with random coefficients (2012)
2011
On Weighted Hilbert Spaces and Integration of Functions of Infinitely Many Variables
2012
Abstract
Cited by 26 (6 self)
The consecutive numbering of the publications is determined by their chronological order. The aim of this preprint series is to make new research rapidly available for scientific discussion. Therefore, the responsibility for the contents is solely due to the authors. The publications will be distributed by the authors.
Optimal randomized multilevel algorithms for infinite-dimensional integration on function spaces with ANOVA-type decomposition
Erich Novak
Lower Error Bounds for Randomized Multilevel and Changing Dimension Algorithms
Abstract
Cited by 2 (2 self)
We provide lower error bounds for randomized algorithms that approximate integrals of functions depending on an unrestricted or even infinite number of variables. More precisely, we consider the infinite-dimensional integration problem on weighted Hilbert spaces with an underlying anchored decomposition and arbitrary weights. We focus on randomized algorithms and the randomized worst-case error. We study two cost models for function evaluation which depend on the number of active variables of the chosen sample points. Multilevel algorithms behave very well with respect to the first cost model, while changing-dimension algorithms and also dimension-wise quadrature methods, which are based on a similar idea, can take advantage of the more generous second cost model. We prove the first nontrivial lower error bounds for randomized algorithms in these cost models and demonstrate their quality in the case of product weights. In particular, we show that the randomized changing-dimension algorithms provided in [L. Plaskota, G. W. Wasilkowski, J. Complexity 27 (2011), 505–518] achieve convergence rates arbitrarily close to the optimal convergence rate.
Existence and Construction of Shifted Lattice Rules with an Arbitrary Number of Points and Bounded Weighted Star Discrepancy for General Decreasing Weights
Abstract
Cited by 1 (1 self)
We study the problem of constructing shifted rank-1 lattice rules for the approximation of high-dimensional integrals with a low weighted star discrepancy, for classes of functions having bounded weighted variation, where the weighted variation is defined as the weighted sum of Hardy-Krause variations over all lower-dimensional projections of the integrand. Under general conditions on the weights, we prove the existence of rank-1 lattice rules such that for any …
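The rule form discussed in this abstract is easy to state concretely. Below is a minimal Python sketch of a shifted rank-1 lattice rule; the Fibonacci generating vector and the fixed half-shift are illustrative choices for a 2-D example, not the constructions whose existence the paper proves.

```python
import numpy as np

def shifted_lattice_rule(f, z, n, shift):
    """Equal-weight shifted rank-1 lattice rule on [0,1]^d:
    points x_i = frac(i * z / n + shift), i = 0, ..., n-1."""
    i = np.arange(n).reshape(-1, 1)
    points = np.mod(i * np.asarray(z, dtype=float) / n + shift, 1.0)
    return float(np.mean([f(x) for x in points]))
```

For instance, with the 2-D Fibonacci lattice (n = 610, z = (1, 377)) and shift 0.5, the rule integrates the smooth test function f(x) = x_1 x_2 over [0, 1]^2 to within a fraction of a percent of the exact value 1/4.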
High dimensional integration – the Quasi-Monte Carlo way
Acta Numerica, doi:10.1017/S09624929XXXXXX
Abstract
This paper is a contemporary review of QMC (“Quasi-Monte Carlo”) methods, i.e., equal-weight rules for the approximate evaluation of high-dimensional integrals over the unit cube [0, 1]^s, where s may be large, or even infinite. After a general introduction, the paper surveys recent developments in lattice methods, digital nets, and related themes. Among those recent developments are methods of construction of both lattices and digital nets, to yield QMC rules that have a prescribed rate of convergence for sufficiently smooth functions, and ideally also guaranteed slow growth (or no growth) of the worst-case error as s increases. A crucial role is played by parameters called “weights”, since a careful use of the weight parameters is needed to ensure that the worst-case errors in an appropriately weighted function space are bounded, or grow only slowly, as the dimension s increases. Important tools for the analysis are weighted function spaces, reproducing kernel Hilbert spaces, and discrepancy …
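As a concrete illustration of the equal-weight rule Q(f) = (1/N) Σ f(x_i) that this abstract describes, the sketch below evaluates it with points from a hand-rolled Halton sequence. The review surveys lattices and digital nets; Halton is used here only as a stand-in low-discrepancy point set, and the function names are mine.

```python
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse of the integer i in the given base."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def halton(n, dim, primes=(2, 3, 5, 7, 11, 13)):
    """First n Halton points in [0,1)^dim, one distinct prime base per coordinate."""
    return np.array([[radical_inverse(i, primes[d]) for d in range(dim)]
                     for i in range(1, n + 1)])

def qmc_rule(f, points):
    """Equal-weight QMC rule: Q(f) = (1/N) * sum_i f(x_i)."""
    return float(np.mean([f(x) for x in points]))
```

With 2048 Halton points in dimension s = 3, the rule reproduces the exact integral 1/8 of f(x) = x_1 x_2 x_3 over [0, 1]^3 to well under one percent, illustrating the faster-than-Monte-Carlo convergence for smooth integrands.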
Evaluating Expectations of Functionals of Brownian Motions: a Multilevel Idea
Abstract
Abstract. Pricing a path-dependent financial derivative, such as an Asian option, requires the computation of E[g(B(·))], the expectation of a payoff functional, g, that depends on a Brownian motion, (B(t))_{t=0}^T. The expectation corresponds to an infinite-dimensional integral, which is approximated by the sample average of a d-dimensional approximation to the integrand. In this article, a multilevel algorithm with low-discrepancy designs is used to improve the convergence rate of the worst-case error with respect to a single-level algorithm. The worst-case error is derived as a function of each level l's sample size, n_l, and truncated dimension, d_l, for payoff functionals that arise from certain Hilbert spaces with moderate smoothness. If the error in approximating an infinite-dimensional expectation by a d-dimensional integral is O(d^{-q}), and the error for approximating a d-dimensional integral by an n-term sample average is O(n^{-p}), independent of d, then it is shown that the error in computing the infinite-dimensional expectation may be as small as N^{-min(p, q/s)} for a well-chosen multilevel algorithm, where N, the cost of the algorithm, is defined as N = n_1 d_1^s + · · · + n_L d_L^s for some s ≥ 0. This optimal convergence rate is achieved for either small or large q for rank-1 lattice rule designs, or alternatively for Niederreiter net designs for large q.
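The telescoping idea behind such a multilevel algorithm can be sketched as follows. The toy payoff g(B) = (1/d) Σ B(t_i)^2, the geometric level dimensions d_l = 2^l, and the per-level sample sizes are my illustrative assumptions, with plain Monte Carlo standing in for the low-discrepancy designs the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(0)

def payoff(path):
    # Toy stand-in for an Asian-style payoff functional g:
    # the time-average of the squared Brownian path.
    return np.mean(path ** 2)

def brownian_path(d, T=1.0):
    # d-dimensional discretization of (B(t)) on a uniform grid of [0, T].
    dW = rng.normal(0.0, np.sqrt(T / d), size=d)
    return np.cumsum(dW)

def multilevel_estimate(L, n):
    """Telescoping estimator E[g_L] ~= E[g_0] + sum_l E[g_l - g_{l-1}],
    with d_l = 2^l time steps and n[l] samples on level l."""
    est = np.mean([payoff(brownian_path(1)) for _ in range(n[0])])
    for l in range(1, L + 1):
        d = 2 ** l
        diffs = []
        for _ in range(n[l]):
            fine = brownian_path(d)
            coarse = fine[1::2]  # coarse path built from the same increments
            diffs.append(payoff(fine) - payoff(coarse))
        est += np.mean(diffs)
    return float(est)
```

Because the coarse path reuses the fine path's increments, the level differences g_l − g_{l−1} have small variance and need far fewer samples than level 0, which is exactly how the cost N = Σ n_l d_l^s stays small while the overall error decays. For this toy payoff the fine-grid expectation tends to ∫_0^1 t dt = 1/2.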