Results 1 - 10 of 95
When are Quasi-Monte Carlo Algorithms Efficient for High Dimensional Integrals?
- J. Complexity, 1997
Cited by 188 (23 self)
Recently quasi-Monte Carlo algorithms have been successfully used for multivariate integration of high dimension d, and were significantly more efficient than Monte Carlo algorithms. The existing theory of the worst case error bounds of quasi-Monte Carlo algorithms does not explain this phenomenon. This paper presents a partial answer to why quasi-Monte Carlo algorithms can work well for arbitrarily large d. It is done by identifying classes of functions for which the effect of the dimension d is negligible. These are weighted classes in which the behavior in the successive dimensions is moderated by a sequence of weights. We prove that the minimal worst case error of quasi-Monte Carlo algorithms does not depend on the dimension d iff the sum of the weights is finite. We also prove that under this assumption the minimal number of function values in the worst case setting needed to reduce the initial error by ε is bounded by C ε^{-p}, where the exponent p ∈ [1, 2], and C depends ...
Pricing American options: a duality approach.
- Operations Research, 2004
Cited by 147 (6 self)
Abstract We develop a new method for pricing American options. The main practical contribution of this paper is a general algorithm for constructing upper and lower bounds on the true price of the option using any approximation to the option price. We show that our bounds are tight, so that if the initial approximation is close to the true price of the option, the bounds are also guaranteed to be close. We also explicitly characterize the worst-case performance of the pricing bounds. The computation of the lower bound is straightforward and relies on simulating the suboptimal exercise strategy implied by the approximate option price. The upper bound is also computed using Monte Carlo simulation. This is made feasible by the representation of the American option price as a solution of a properly defined dual minimization problem, which is the main theoretical result of this paper. Our algorithm proves to be accurate on a set of sample problems where we price call options on the maximum and the geometric mean of a collection of stocks. These numerical results suggest that our pricing method can be successfully applied to problems of practical interest.
Valuation of Mortgage Backed Securities Using Brownian Bridges to Reduce Effective Dimension
, 1997
Cited by 100 (15 self)
The quasi-Monte Carlo method for financial valuation and other integration problems has error bounds of size O((log N)^k N^{-1}), or even O((log N)^k N^{-3/2}), which suggests significantly better performance than the error size O(N^{-1/2}) for standard Monte Carlo. But in high dimensional problems this benefit might not appear at feasible sample sizes. Substantial improvements from quasi-Monte Carlo integration have, however, been reported for problems such as the valuation of mortgage-backed securities, in dimensions as high as 360. We believe that this is due to a lower effective dimension of the integrand in those cases. This paper defines the effective dimension and shows in examples how the effective dimension may be reduced by using a Brownian bridge representation.
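As a generic illustration of the Brownian bridge construction mentioned above (a sketch, not code from the paper), the following builds a discrete Brownian path endpoint-first and then fills conditional midpoints, so the leading normal inputs carry most of the path's variance:

```python
import numpy as np

def brownian_bridge_path(z, T=1.0):
    """Turn m standard normals (m a power of two) into a Brownian path on
    [0, T] with m equal steps, filling the terminal value first and then
    conditional midpoints.  The leading inputs then explain most of the
    path's variance, which is what lowers the effective dimension."""
    m = len(z)
    t = np.linspace(0.0, T, m + 1)
    w = np.zeros(m + 1)
    w[m] = np.sqrt(T) * z[0]           # terminal value first
    used, h = 1, m
    while h > 1:
        for j in range(h // 2, m, h):  # midpoints of the current intervals
            l, r = j - h // 2, j + h // 2
            mean = ((t[r] - t[j]) * w[l] + (t[j] - t[l]) * w[r]) / (t[r] - t[l])
            std = np.sqrt((t[j] - t[l]) * (t[r] - t[j]) / (t[r] - t[l]))
            w[j] = mean + std * z[used]
            used += 1
        h //= 2
    return w

rng = np.random.default_rng(1)
path = brownian_bridge_path(rng.standard_normal(16))  # 16-step path on [0, 1]
```

A plain random walk consumes the inputs in time order; the bridge ordering concentrates variance in the first coordinates, which is exactly where low-discrepancy point sets are most uniform.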
Numerical Integration using Sparse Grids
- Numer. Algorithms, 1998
Cited by 91 (16 self)
We present and review algorithms for the numerical integration of multivariate functions defined over d-dimensional cubes using several variants of the sparse grid method first introduced by Smolyak [51]. In this approach, multivariate quadrature formulas are constructed from combinations of tensor products of suitable one-dimensional formulas. The computational cost is almost independent of the dimension of the problem if the function under consideration has bounded mixed derivatives. We suggest the use of extended Gauss (Patterson) quadrature formulas as the one-dimensional basis of the construction and show their superiority over previously used sparse grid approaches based on the trapezoidal, Clenshaw-Curtis and Gauss rules in several numerical experiments and applications.
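A minimal sketch of the Smolyak idea in two dimensions, written as the combination technique over nested trapezoidal rules; the trapezoidal choice and the test function are illustrative assumptions, not the Patterson formulas the paper advocates:

```python
import numpy as np

def trap_rule(level):
    # nested trapezoidal rule on [0, 1] with 2^level + 1 points
    n = 2 ** level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] = w[-1] = 0.5 / (n - 1)
    return x, w

def smolyak_2d(f, q):
    """Combination technique for d = 2:
    A(q, 2) = sum_{i1+i2=q} Q_{i1} x Q_{i2}  -  sum_{i1+i2=q-1} Q_{i1} x Q_{i2},
    i.e. only tensor products whose level sum is q or q-1 are used, which is
    far cheaper than the full tensor grid of level q in both directions."""
    total = 0.0
    for level_sum, coeff in ((q, 1.0), (q - 1, -1.0)):
        if level_sum < 0:
            continue
        for i1 in range(level_sum + 1):
            i2 = level_sum - i1
            x1, w1 = trap_rule(i1)
            x2, w2 = trap_rule(i2)
            X1, X2 = np.meshgrid(x1, x2, indexing="ij")
            total += coeff * np.sum(np.outer(w1, w2) * f(X1, X2))
    return total

approx = smolyak_2d(lambda x, y: np.exp(x + y), 6)
```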
Latin Supercube Sampling for Very High Dimensional Simulations
, 1997
Cited by 81 (8 self)
This paper introduces Latin supercube sampling (LSS) for very high dimensional simulations, such as arise in particle transport, finance and queuing. LSS is developed as a combination of two widely used methods: Latin hypercube sampling (LHS), and Quasi-Monte Carlo (QMC). In LSS, the input variables are grouped into subsets, and a lower dimensional QMC method is used within each subset. The QMC points are presented in random order within subsets. QMC methods have been observed to lose effectiveness in high dimensional problems. This paper shows that LSS can extend the benefits of QMC to much higher dimensions, when one can make a good grouping of input variables. Some suggestions for grouping variables are given for the motivating examples. Even a poor grouping can still be expected to do as well as LHS. The paper also extends LHS and LSS to infinite dimensional problems. The paper includes a survey of QMC methods, randomized versions of them (RQMC) and previous methods for extending Q...
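The grouping idea can be sketched as follows; the group sizes, the use of an unscrambled Halton set as the within-group QMC method, and the function names are illustrative assumptions, not details from the paper:

```python
import numpy as np

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]

def radical_inverse(i, base):
    # base-b van der Corput radical inverse of index i
    f, x = 1.0, 0.0
    while i > 0:
        f /= base
        x += f * (i % base)
        i //= base
    return x

def halton(n, dim):
    # first n points of the dim-dimensional Halton sequence
    return np.array([[radical_inverse(i, PRIMES[j]) for j in range(dim)]
                     for i in range(n)])

def latin_supercube(n, group_dims, rng):
    """n sample points whose input variables are split into groups; each
    group receives a low-dimensional QMC point set whose run order is
    shuffled independently, so groups pair up randomly, as strata do in
    Latin hypercube sampling."""
    blocks = []
    for d in group_dims:
        pts = halton(n, d)
        blocks.append(pts[rng.permutation(n)])  # randomize order within group
    return np.hstack(blocks)

rng = np.random.default_rng(0)
pts = latin_supercube(256, [3, 3, 2], rng)  # an 8-dim problem in three groups
```

Each group retains the accuracy of a low-dimensional QMC rule for integrands that depend mainly on that group, while the independent shufflings keep the groups from aligning systematically.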
Dimension-Adaptive Tensor-Product Quadrature
- Computing, 2003
Cited by 74 (12 self)
We consider the numerical integration of multivariate functions defined over the unit hypercube. Here, we especially address the high-dimensional case, where in general the curse of dimension is encountered. Due to the concentration of measure phenomenon, such functions can often be well approximated by sums of lower-dimensional terms. The problem, however, is to find a good expansion given little knowledge of the integrand itself.
Sparse grids and related approximation schemes for higher dimensional problems
Cited by 46 (12 self)
The efficient numerical treatment of high-dimensional problems is hampered by the curse of dimensionality. We review approximation techniques which overcome this problem to some extent. Here, we focus on methods stemming from Kolmogorov’s theorem, the ANOVA decomposition and the sparse grid approach and discuss their prerequisites and properties. Moreover, we present energy-norm based sparse grids and demonstrate that, for functions with bounded mixed derivatives on the unit hypercube, the associated approximation rate in terms of the involved degrees of freedom shows no dependence on the dimension at all, neither in the approximation order nor in the order constant.
Extensible Lattice Sequences For Quasi-Monte Carlo Quadrature
- SIAM Journal on Scientific Computing, 1999
Cited by 35 (11 self)
Integration lattices are one of the main types of low discrepancy sets used in quasi-Monte Carlo methods. However, they have the disadvantage of being of fixed size. This article describes the construction of an infinite sequence of points, the first b^m of which form a lattice for any non-negative integer m. Thus, if the quadrature error using an initial lattice is too large, the lattice can be extended without discarding the original points. Generating vectors for extensible lattices are found by minimizing a loss function based on some measure of discrepancy or nonuniformity of the lattice. The spectral test used for finding pseudo-random number generators is one important example of such a discrepancy. The performance of the extensible lattices proposed here is compared to that of other methods for some practical quadrature problems.
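A hedged sketch of such a sequence, using the base-b radical-inverse ordering commonly associated with extensible lattices; the generating vector z below is an arbitrary illustration, not an optimized vector from the article:

```python
import numpy as np

def radical_inverse(i, b):
    # base-b van der Corput radical inverse of index i
    f, x = 1.0, 0.0
    while i > 0:
        f /= b
        x += f * (i % b)
        i //= b
    return x

def extensible_lattice(n, z, b=2):
    """First n points x_i = frac(phi_b(i) * z) of an extensible lattice
    sequence: whenever n = b^m, the points generated so far coincide with
    the rank-1 lattice {frac(k * z / b^m)}, so doubling n never discards
    earlier points."""
    phi = np.array([radical_inverse(i, b) for i in range(n)])
    return np.outer(phi, np.asarray(z, dtype=float)) % 1.0

pts = extensible_lattice(16, z=(1, 7))  # first 16 points of a 2-d sequence
```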
Weighted Tensor Product Algorithms for Linear Multivariate Problems
, 1998
Cited by 30 (8 self)
We study the ε-approximation of linear multivariate problems defined over weighted tensor product Hilbert spaces of functions f of d variables. A class of weighted tensor product (WTP) algorithms is defined which depends on a number of parameters. Two classes of permissible information are studied: Λ^all consists of all linear functionals, while Λ^std consists of evaluations of f or its derivatives. We show that these multivariate problems are sometimes tractable even with a worst-case assurance. We study problem tractability by investigating when a WTP algorithm is a polynomial-time algorithm, that is, when the minimal number of information evaluations is a polynomial in 1/ε and d. For Λ^all we construct an optimal WTP algorithm and provide a necessary and sufficient condition for tractability in terms of the sequence of weights and the sequence of singular values for d = 1. For Λ^std we obtain a weaker result by constructing a WTP algorithm which is optimal only for some weight se...