Results 1–10 of 1,081
Centroidal Voronoi tessellations: Applications and algorithms
 SIAM REV
, 1999
Cited by 389 (37 self)
A centroidal Voronoi tessellation is a Voronoi tessellation whose generating points are the centroids (centers of mass) of the corresponding Voronoi regions. We give some applications of such tessellations to problems in image compression, quadrature, finite difference methods, distribution of resources, cellular biology, statistics, and the territorial behavior of animals. We discuss methods for computing these tessellations, provide some analyses concerning both the tessellations and the methods for their determination, and, finally, present the results of some numerical experiments.
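The best-known method for computing such tessellations is Lloyd's algorithm, which alternates between computing the Voronoi regions of the current generators and moving each generator to its region's centroid. A minimal one-dimensional sketch under a uniform density (illustrative only, not code from the paper; the function name is hypothetical):

```python
# Minimal 1-D sketch of Lloyd's algorithm for a centroidal Voronoi
# tessellation of [0, 1] under a uniform density. In 1-D the Voronoi
# region of generator g_i is the interval between the midpoints to its
# neighbors, and its centroid under a uniform density is its midpoint.

def lloyd_1d(generators, iterations=100):
    """Repeatedly replace each generator by the centroid of its Voronoi region."""
    g = sorted(generators)
    n = len(g)
    for _ in range(iterations):
        # Cell boundaries: midpoints between adjacent generators plus the
        # domain endpoints 0 and 1.
        bounds = [0.0] + [(g[i] + g[i + 1]) / 2 for i in range(n - 1)] + [1.0]
        # Centroid of cell i under the uniform density = midpoint of the cell.
        g = [(bounds[i] + bounds[i + 1]) / 2 for i in range(n)]
    return g

# For the uniform density on [0, 1], the generators converge to the
# regular lattice (2i + 1) / (2n).
print(lloyd_1d([0.05, 0.1, 0.6, 0.9]))
```

The iteration is a linear contraction in this setting, so convergence to the lattice is geometric; for general densities and dimensions each step requires a Voronoi computation and a numerical centroid integral.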
Instant Radiosity
, 1997
Cited by 234 (4 self)
We present a fundamental procedure for instant rendering from the radiance equation. Operating directly on the textured scene description, the very efficient and simple algorithm produces photorealistic images without any finite element kernel or solution discretization of the underlying integral equation. Rendering rates of a few seconds are obtained by exploiting graphics hardware, the deterministic technique of the quasi-random walk for the solution of the global illumination problem, and the new method of jittered low discrepancy sampling.
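Jittered sampling, one ingredient named in the abstract, places exactly one random sample inside each cell of a regular grid, combining stratification with randomness. A generic sketch of the idea (hypothetical helper, not the paper's implementation):

```python
import random

# "Jittered sampling": one uniformly random sample per stratum of a
# regular grid. Generic illustration of the technique, not code from
# the Instant Radiosity paper.

def jittered_samples_2d(n_x, n_y, seed=0):
    """One sample inside each cell of an n_x-by-n_y grid over [0, 1)^2."""
    rng = random.Random(seed)
    return [((i + rng.random()) / n_x, (j + rng.random()) / n_y)
            for i in range(n_x) for j in range(n_y)]

samples = jittered_samples_2d(4, 4)
# Every cell contains exactly one sample, so the point set is stratified:
# no cell of the 4x4 grid is empty, unlike with plain random sampling.
```

Compared with purely random points, jittered points cannot cluster more than once per cell, which reduces the variance of pixel estimates for smooth integrands.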
When Are Quasi-Monte Carlo Algorithms Efficient for High Dimensional Integrals?
 J. Complexity
, 1997
Cited by 183 (23 self)
Recently quasi-Monte Carlo algorithms have been successfully used for multivariate integration of high dimension d, and were significantly more efficient than Monte Carlo algorithms. The existing theory of the worst case error bounds of quasi-Monte Carlo algorithms does not explain this phenomenon. This paper presents a partial answer to why quasi-Monte Carlo algorithms can work well for arbitrarily large d. It is done by identifying classes of functions for which the effect of the dimension d is negligible. These are weighted classes in which the behavior in the successive dimensions is moderated by a sequence of weights. We prove that the minimal worst case error of quasi-Monte Carlo algorithms does not depend on the dimension d iff the sum of the weights is finite. We also prove that under this assumption the minimal number of function values in the worst case setting needed to reduce the initial error by ε is bounded by C ε^{-p}, where the exponent p ∈ [1, 2], and C depends ...
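The weighted-class idea can be made concrete with a hypothetical product integrand whose j-th coordinate enters with weight γ_j = 1/j². Its exact integral over [0, 1]^d is 1, and a standard Halton-sequence QMC rule (not a construction from this paper) recovers it accurately even in d = 8 dimensions, because the summable weights damp the influence of higher coordinates:

```python
# Illustrative sketch: QMC integration of the weighted product function
# f(x) = prod_j (1 + gamma_j * (x_j - 1/2)) over [0, 1]^8, exact value 1,
# with gamma_j = 1/j^2 (a summable weight sequence). The integrand and
# weights are hypothetical, chosen to mirror the paper's weighted classes.

def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    inv, dv = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * dv
        i //= base
        dv /= base
    return inv

PRIMES = (2, 3, 5, 7, 11, 13, 17, 19)  # one Halton base per dimension, d = 8
gammas = [1.0 / (j + 1) ** 2 for j in range(len(PRIMES))]

def f(x):
    prod = 1.0
    for xj, gj in zip(x, gammas):
        prod *= 1.0 + gj * (xj - 0.5)
    return prod

n = 1024
est = sum(f([radical_inverse(i, b) for b in PRIMES]) for i in range(1, n + 1)) / n
print(abs(est - 1.0))  # small QMC error despite d = 8
```

With non-summable weights (e.g. γ_j = 1 for all j), the same experiment degrades as d grows, which is the dichotomy the paper proves.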
High-Order Collocation Methods for Differential Equations with Random Inputs
 SIAM Journal on Scientific Computing
Cited by 180 (9 self)
Abstract. Recently there has been a growing interest in designing efficient methods for the solution of ordinary/partial differential equations with random inputs. To this end, stochastic Galerkin methods appear to be superior to other nonsampling methods and, in many cases, to several sampling methods. However, when the governing equations take complicated forms, numerical implementations of stochastic Galerkin methods can become nontrivial and care is needed to design robust and efficient solvers for the resulting equations. On the other hand, the traditional sampling methods, e.g., Monte Carlo methods, are straightforward to implement, but they do not offer convergence as fast as stochastic Galerkin methods. In this paper, a high-order stochastic collocation approach is proposed. Similar to stochastic Galerkin methods, the collocation methods take advantage of an assumption of smoothness of the solution in random space to achieve fast convergence. However, the numerical implementation of stochastic collocation is trivial, as it requires only repetitive runs of an existing deterministic solver, similar to Monte Carlo methods. The computational cost of the collocation methods depends on the choice of the collocation points, and we present several feasible constructions. One particular choice, based on sparse grids, depends weakly on the dimensionality of the random space and is more suitable for highly accurate computations of practical applications with large dimensional random inputs. Numerical examples are presented to demonstrate the accuracy and efficiency of the stochastic collocation methods.
Key words: collocation methods, stochastic inputs, differential equations, uncertainty quantification
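The "repetitive runs of an existing deterministic solver" recipe can be sketched on a hypothetical toy problem (not from the paper): u'(t) = -k u, u(0) = 1, with random decay rate k = 1 + 0.5 Z, Z ~ Uniform[-1, 1]. The mean E[u(1)] is estimated by running a deterministic ODE solver once per collocation point and combining the runs with quadrature weights:

```python
import math

# Stochastic collocation sketch: a 5-point Gauss-Legendre rule in the
# random variable Z, each node requiring one run of an unmodified
# deterministic solver. Problem and parameter values are illustrative.

# 5-point Gauss-Legendre nodes and weights on [-1, 1].
GL_NODES = [-0.9061798459386640, -0.5384693101056831, 0.0,
            0.5384693101056831, 0.9061798459386640]
GL_WEIGHTS = [0.2369268850561891, 0.4786286704993665, 0.5688888888888889,
              0.4786286704993665, 0.2369268850561891]

def deterministic_solver(k, dt=1e-4):
    """Forward Euler for u' = -k u, u(0) = 1, integrated to t = 1."""
    u = 1.0
    for _ in range(round(1.0 / dt)):
        u -= dt * k * u
    return u

# E[u(1)] ~ (1/2) * sum_m w_m * u(1; k = 1 + 0.5 z_m);
# the factor 1/2 is the uniform density on [-1, 1].
estimate = 0.5 * sum(w * deterministic_solver(1.0 + 0.5 * z)
                     for z, w in zip(GL_NODES, GL_WEIGHTS))
exact = math.exp(-1.0) * (math.exp(0.5) - math.exp(-0.5))
print(estimate, exact)
```

Five solver runs already match the exact mean to the accuracy of the Euler discretization, whereas a Monte Carlo estimate of comparable accuracy would need many thousands of runs; for many random dimensions, the abstract's sparse-grid construction keeps the number of nodes manageable.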
Pricing American Options: A Duality Approach
 Operations Research
, 2001
Cited by 152 (5 self)
We develop a new method for pricing American options. The main practical contribution of this paper is a general algorithm for constructing upper and lower bounds on the true price of the option using any approximation to the option price. We show that our bounds are tight, so that if the initial approximation is close to the true price of the option, the bounds are also guaranteed to be close. We also explicitly characterize the worst-case performance of the pricing bounds. The computation of the lower bound is straightforward and relies on simulating the suboptimal exercise strategy implied by the approximate option price. The upper bound is also computed using Monte Carlo simulation. This is made feasible by the representation of the American option price as a solution of a properly defined dual minimization problem, which is the main theoretical result of this paper. Our algorithm proves to be accurate on a set of sample problems where we price call options on the maximum and the geometric mean of a collection of stocks. These numerical results suggest that our pricing method can be successfully applied to problems of practical interest.
∗ An earlier draft of this paper was titled Pricing High-Dimensional American Options: A Duality
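The lower-bound half of the recipe is easy to sketch: simulate any exercise policy, however suboptimal, and average the discounted payoffs; by definition of the American price as a supremum over stopping times, the result can only undershoot it. A hypothetical illustration for a Bermudan put on a single geometric Brownian motion asset, with the naive policy "exercise the first time the option is in the money" (all names and parameter values are illustrative, not the paper's examples):

```python
import math
import random

# Monte Carlo lower bound on an American/Bermudan put price obtained by
# simulating a deliberately suboptimal exercise policy. Any policy gives
# a valid lower bound; a better policy gives a tighter one.

def lower_bound_put(s0=100.0, strike=100.0, r=0.05, sigma=0.2,
                    n_steps=50, maturity=1.0, n_paths=20000, seed=0):
    rng = random.Random(seed)
    dt = maturity / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        s, payoff = s0, 0.0
        for step in range(1, n_steps + 1):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if strike - s > 0.0:  # naive policy: stop at first in-the-money date
                payoff = math.exp(-r * step * dt) * (strike - s)
                break
        total += payoff
    return total / n_paths  # a valid (here quite loose) lower bound

print(lower_bound_put())
```

The paper's method instead derives the policy from an approximate value function, and pairs this lower bound with an upper bound from the dual minimization problem, so the true price is bracketed.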
A generalized discrepancy and quadrature error bound
 Math. Comp
, 1998
Cited by 140 (13 self)
Abstract. An error bound for multidimensional quadrature is derived that includes the Koksma-Hlawka inequality as a special case. This error bound takes the form of a product of two terms. One term, which depends only on the integrand, is defined as a generalized variation. The other term, which depends only on the quadrature rule, is defined as a generalized discrepancy. The generalized discrepancy is a figure of merit for quadrature rules and includes as special cases the L^p-star discrepancy and P_α that arises in the study of lattice rules.
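For intuition about the star-discrepancy special case: in one dimension it has a simple closed form over the sorted points, D*_n = max_i max(x_(i) - (i-1)/n, i/n - x_(i)). A generic sketch (standard formula, not code from the paper):

```python
# 1-D star discrepancy via the closed form over sorted points
# x_(1) <= ... <= x_(n) in [0, 1]:
#   D*_n = max_i max( x_(i) - (i-1)/n , i/n - x_(i) ).

def star_discrepancy_1d(points):
    xs = sorted(points)
    n = len(xs)
    return max(max(x - i / n, (i + 1) / n - x) for i, x in enumerate(xs))

# The centered lattice (2i + 1) / (2n) attains the minimal 1-D star
# discrepancy 1 / (2n).
n = 8
print(star_discrepancy_1d([(2 * i + 1) / (2 * n) for i in range(n)]))  # 0.0625
```

The Koksma-Hlawka inequality then bounds the quadrature error by this discrepancy times the integrand's variation, which is exactly the product structure the abstract generalizes.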
Interactive Global Illumination using Fast Ray Tracing
, 2002
Cited by 138 (22 self)
Rasterization hardware provides interactive frame rates for rendering dynamic scenes, but lacks the ray tracing capability required for efficient global illumination simulation. Existing ray tracing based methods yield high quality renderings but are far too slow for interactive use. We present a new parallel global illumination algorithm that scales perfectly, has minimal preprocessing and communication overhead, applies highly efficient sampling techniques based on randomized quasi-Monte Carlo integration, and benefits from a fast parallel ray tracing implementation by shooting coherent groups of rays. The resulting performance allows arbitrary changes to the scene, while simulating global illumination including shadows from area light sources, indirect illumination, specular effects, and caustics, at interactive frame rates. Ceasing interaction rapidly yields high quality renderings.
On the Relationship Between Classical Grid Search and Probabilistic Roadmaps
Cited by 136 (10 self)
We present, implement, and analyze a spectrum of closely related planners, designed to gain insight into the relationship between classical grid search and probabilistic roadmaps (PRMs). Building on quasi-Monte Carlo sampling literature, we have developed deterministic variants of the PRM that use low-discrepancy and low-dispersion samples, including lattices. Classical grid search is extended using subsampling for collision detection and also the optimal-dispersion Sukharev grid, which can be considered as a kind of lattice-based roadmap to complete the spectrum. Our experimental results show that the deterministic variants of the PRM offer performance advantages in comparison to the original PRM and the recent Lazy PRM. This even includes searching using a grid with subsampled collision checking. Our theoretical analysis shows that all of our deterministic PRM variants are resolution complete and achieve the best possible asymptotic convergence rate, which is shown superior to that obtained by random sampling. Thus, in surprising contrast to recent trends, there is both experimental and theoretical evidence that some forms of grid search are superior to the original PRM.
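The Sukharev grid mentioned above places one sample at the center of each cell of a k^d grid, which achieves the optimal l-infinity dispersion 1/(2k) in the unit cube. A generic sketch of the construction and a brute-force dispersion check (hypothetical helpers, not the paper's implementation):

```python
from itertools import product

# Sukharev grid: centers of the k^d cells of [0, 1]^d. Its l-infinity
# dispersion (radius of the largest sample-free ball) is 1 / (2k), the
# optimum for k^d points.

def sukharev_grid(k, d):
    """Centers of the k^d cells of the unit cube [0, 1]^d."""
    axis = [(2 * i + 1) / (2 * k) for i in range(k)]
    return list(product(axis, repeat=d))

def dispersion_linf(samples, probes):
    """Largest distance from any probe point to its nearest sample (l-infinity)."""
    return max(min(max(abs(p - s) for p, s in zip(probe, sample))
                   for sample in samples)
               for probe in probes)

grid = sukharev_grid(4, 2)  # 16 points in the unit square
corners = list(product([0.0, 1.0], repeat=2))
print(dispersion_linf(grid, corners))  # 0.125 = 1/(2k), attained at the corners
```

Low dispersion is the property that matters for resolution completeness: a planner whose samples leave no large empty ball cannot miss a corridor wider than twice the dispersion.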
Using Randomization to Break the Curse of Dimensionality
 Econometrica
, 1997
Cited by 124 (0 self)
Abstract: This paper introduces random versions of successive approximations and multigrid algorithms for computing approximate solutions to a class of finite and infinite horizon Markovian decision problems (MDPs). We prove that these algorithms succeed in breaking the "curse of dimensionality" for a subclass of MDPs known as discrete decision processes (DDPs).
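The core trick in random successive approximations is to replace the expectation over next states in the Bellman operator with an average over N states drawn once at random, so each iteration costs O(N) per state rather than scaling with the full (possibly high-dimensional) state space. A hypothetical toy sketch on the interval [0, 1] with state-independent uniform transitions (illustrative only, far simpler than the paper's DDP class; all names are made up):

```python
import random

# Randomized value iteration: the Bellman expectation is approximated by
# an average over n_samples next states drawn uniformly once up front.

def random_successive_approx(utility, actions=(0, 1), beta=0.9,
                             n_grid=51, n_samples=200, n_iters=100, seed=0):
    rng = random.Random(seed)
    grid = [i / (n_grid - 1) for i in range(n_grid)]
    # Draw the random next states once; map each to its nearest grid index.
    sample_idx = [round(rng.random() * (n_grid - 1)) for _ in range(n_samples)]
    v = [0.0] * n_grid
    for _ in range(n_iters):
        # In this toy model transitions ignore (s, a), so the sampled
        # continuation value is one shared average per iteration.
        continuation = beta * sum(v[j] for j in sample_idx) / n_samples
        v = [max(utility(s, a) for a in actions) + continuation for s in grid]
    return grid, v

# Example: u(s, 0) = s (risky action), u(s, 1) = 0.8 (safe action);
# the value function is max(s, 0.8) plus a constant continuation term.
grid, v = random_successive_approx(lambda s, a: s if a == 0 else 0.8)
```

In the paper's setting the sampled states are reused across all states and actions, which is what yields the complexity bounds independent of the dimension for DDPs.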