Results 1–10 of 65
Methods for Approximating Integrals in Statistics with Special Emphasis on Bayesian Integration Problems
 Statistical Science
Abstract

Cited by 50 (5 self)
This paper is a survey of the major techniques and approaches available for the numerical approximation of integrals in statistics. We classify these into five broad categories, namely: asymptotic methods, importance sampling, adaptive importance sampling, multiple quadrature and Markov chain methods. Each method is discussed, giving an outline of the basic supporting theory and particular features of the technique. Conclusions are drawn concerning the relative merits of the methods based on the discussion and their application to three examples. The following broad recommendations are made. Asymptotic methods should only be considered in contexts where the integrand has a dominant peak with approximate ellipsoidal symmetry. Importance sampling, and preferably adaptive importance sampling, based on a multivariate Student-t should be used instead of asymptotic methods in such a context. Multiple quadrature, and in particular subregion-adaptive integration, are the algorithms of choice for...
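The survey's central recommendation can be made concrete. The sketch below is a minimal illustration (not the survey's own code) of importance sampling with a multivariate Student-t proposal for a peaked integrand; the target function, degrees of freedom, and sample size are all illustrative assumptions.

```python
# Hedged sketch: importance sampling with a multivariate Student-t proposal,
# the approach recommended for peaked integrands. All particulars (target,
# df, sample size) are illustrative assumptions.
import math, random

random.seed(0)
DIM, DF, N = 2, 5, 100_000

# Log normalizing constant of the multivariate t proposal (identity scale).
LOG_C = (math.lgamma((DF + DIM) / 2) - math.lgamma(DF / 2)
         - 0.5 * DIM * math.log(DF * math.pi))

def t_draw():
    # Multivariate t: a standard normal vector scaled by sqrt(DF / chi2_DF).
    z = [random.gauss(0.0, 1.0) for _ in range(DIM)]
    s = math.sqrt(DF / sum(random.gauss(0.0, 1.0) ** 2 for _ in range(DF)))
    return [zi * s for zi in z]

def t_logpdf(x):
    # Log density of the multivariate t with identity scale matrix.
    q = sum(v * v for v in x)
    return LOG_C - 0.5 * (DF + DIM) * math.log(1.0 + q / DF)

# Importance-sampling estimate of the integral of exp(-||x||^2 / 2) over R^2,
# whose true value is 2*pi.
acc = 0.0
for _ in range(N):
    x = t_draw()
    q = sum(v * v for v in x)
    acc += math.exp(-0.5 * q - t_logpdf(x))
estimate = acc / N
```

The heavier tails of the t proposal keep the importance weights bounded for this light-tailed target, which is one reason a Student-t proposal is preferred over a normal one in such contexts.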
Multidimensional Adaptive Sampling and Reconstruction for Ray Tracing
Abstract

Cited by 44 (1 self)
We present a new adaptive sampling strategy for ray tracing. Our technique is specifically designed to handle multidimensional sample domains, and it is well suited for efficiently generating images with effects such as soft shadows, motion blur, and depth of field. These effects are problematic for existing image-based adaptive sampling techniques, as they operate on pixels, which are possibly noisy results of a Monte Carlo ray tracing process. Our sampling technique operates on samples in the multidimensional space given by the rendering equation, and as a consequence the value of each sample is noise-free. Our algorithm consists of two passes. In the first pass we adaptively generate samples in the multidimensional space, focusing on regions where the local contrast between samples is high. In the second pass we reconstruct the image by integrating the multidimensional function along all but the image dimensions. We perform a high-quality anisotropic reconstruction by determining the extent of each sample in the multidimensional space using a structure tensor. We demonstrate our method on scenes with a 3- to 5-dimensional space, including soft shadows, motion blur, and depth of field. The results show that our method uses fewer samples than Mitchell's adaptive sampling technique while producing images with less noise.
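The first pass can be sketched in one dimension: keep a priority queue of intervals keyed by the local contrast between their endpoint samples and always split the highest-contrast interval. This is a minimal 1-D stand-in with a fixed sample budget, not the paper's multidimensional algorithm; the test function and budget are assumptions.

```python
# Hedged 1-D sketch of contrast-driven adaptive sampling: refine wherever
# the local contrast between neighbouring samples is high.
import heapq, math

def adaptive_samples(f, a, b, budget):
    # Priority queue keyed by negative contrast |f(hi) - f(lo)| per interval.
    pts = {a: f(a), b: f(b)}
    heap = [(-abs(pts[b] - pts[a]), a, b)]
    while len(pts) < budget:
        _, x0, x1 = heapq.heappop(heap)
        xm = 0.5 * (x0 + x1)          # bisect the highest-contrast interval
        pts[xm] = f(xm)
        for lo, hi in ((x0, xm), (xm, x1)):
            heapq.heappush(heap, (-abs(pts[hi] - pts[lo]), lo, hi))
    return sorted(pts)

# Samples should cluster around the sharp transition at x = 0.5.
xs = adaptive_samples(lambda x: math.tanh(20 * (x - 0.5)), 0.0, 1.0, 64)
```

The flat regions on either side receive almost no samples because their interval contrasts never reach the top of the queue.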
Extensible Lattice Sequences for Quasi-Monte Carlo Quadrature
 SIAM Journal on Scientific Computing
, 1999
Abstract

Cited by 35 (11 self)
Integration lattices are one of the main types of low-discrepancy sets used in quasi-Monte Carlo methods. However, they have the disadvantage of being of fixed size. This article describes the construction of an infinite sequence of points, the first b^m of which form a lattice for any nonnegative integer m. Thus, if the quadrature error using an initial lattice is too large, the lattice can be extended without discarding the original points. Generating vectors for extensible lattices are found by minimizing a loss function based on some measure of discrepancy or nonuniformity of the lattice. The spectral test used for finding pseudorandom number generators is one important example of such a discrepancy. The performance of the extensible lattices proposed here is compared to that of other methods for some practical quadrature problems.
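A fixed-size rank-1 lattice rule shows the basic object being extended: n points x_i = frac(i·z/n) for a generating vector z. The sketch below uses the classic two-dimensional Fibonacci generating vector and a smooth periodic test integrand, both illustrative choices; the paper's extensible sequences generalize this so that the first b^m points always form such a lattice.

```python
# Hedged sketch of a fixed-size rank-1 lattice rule (not the paper's
# extensible construction). Generating vector is the 2-D Fibonacci choice.
import math

def lattice_rule(f, n, z):
    # Average f over the n lattice points x_i = frac(i * z / n).
    total = 0.0
    for i in range(n):
        x = [((i * zj) % n) / n for zj in z]
        total += f(x)
    return total / n

# Smooth periodic test integrand with exact integral 1 over [0,1]^2.
f = lambda x: (1 + math.cos(2 * math.pi * x[0])) * (1 + math.cos(2 * math.pi * x[1]))
est = lattice_rule(f, 1597, [1, 987])   # n and z from consecutive Fibonacci numbers
```

For this trigonometric polynomial the rule is exact up to rounding, since none of the integrand's frequency vectors k satisfies k·z ≡ 0 (mod n).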
Subregion-Adaptive Integration of Functions Having a Dominant Peak
 Journal of Computational and Graphical Statistics
, 1993
Abstract

Cited by 26 (5 self)
Many statistical multiple integration problems involve integrands that have a dominant peak. In applying numerical methods to solve these problems, statisticians have paid relatively little attention to existing quadrature methods and available software developed in the numerical analysis literature. One reason these methods have been largely overlooked, even though they are known to be more efficient than Monte Carlo for well-behaved problems of low dimensionality, may be that when applied naively they are poorly suited for peaked-integrand problems. In this paper we use transformations based on "split-t" distributions to allow the integrals to be efficiently computed using a subregion-adaptive numerical integration algorithm. Our split-t distributions are modifications of those suggested by Geweke (1989) and may also be used to define Monte Carlo importance functions. We then compare our approach to Monte Carlo. In the several examples we examine here, we find subregion-adaptive inte...
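The idea of transforming a peaked integrand before quadrature can be illustrated with a simpler relative of the split-t transformation: recenter on the peak and substitute a heavy-tailed (Cauchy, i.e. Student-t with one degree of freedom) variable, then integrate the transformed function on (0, 1). The peak location, scale, and panel count below are assumptions; the paper's split-t transformations and subregion-adaptive rule are considerably more refined.

```python
# Hedged sketch: recenter/rescale a peaked integrand via a heavy-tailed
# change of variables, then apply ordinary composite Simpson quadrature.
import math

def simpson(h, a, b, panels):
    # Composite Simpson's rule with an even number of panels.
    step = (b - a) / panels
    s = h(a) + h(b)
    for i in range(1, panels):
        s += h(a + i * step) * (4 if i % 2 else 2)
    return s * step / 3

mu, scale = 3.0, 0.3                                # assumed peak location and scale
g = lambda x: math.exp(-(x - mu) ** 2 / 0.02)       # sharply peaked integrand

def transformed(u):
    # x = mu + scale * tan(pi*(u - 1/2)) maps (0, 1) onto the real line.
    if u <= 0.0 or u >= 1.0:
        return 0.0          # integrand decays faster than the Jacobian grows
    t = math.tan(math.pi * (u - 0.5))
    jac = scale * math.pi * (1.0 + t * t)
    return g(mu + scale * t) * jac

est = simpson(transformed, 0.0, 1.0, 200)           # true value is sqrt(0.02*pi)
```

Without the recentering, a fixed 200-panel rule on a wide interval would place almost no points under the peak; after the transformation the peak occupies a fixed fraction of (0, 1).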
An Adaptive Numerical Cubature Algorithm for Simplices
, 1997
Abstract

Cited by 17 (2 self)
this paper is the numerical evaluation of integrals of the form ∫_T f(x) dT, where x is an n-vector, f is an l-vector, and T is a collection of m n-simplices. There has been only limited work on practical algorithms for this general problem. Most research has considered the case n = 2, where T is a triangle. For this case there has been work on the development of integration rules (reviewed in the paper by Lyness and Cools [25]) and algorithms ([3; 12; 20; 21; 24]). For the case n = 3, where The first author was supported by NATO Collaborative Research Grant CRG 940139, and by the Fund for Scientific Research – Flanders and US NSF grants
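For intuition about cubature over simplices, the sketch below integrates over a triangle (a 2-simplex) using the degree-1 centroid rule with uniform four-way subdivision. The paper's algorithm is adaptive and handles general n-simplices, so the fixed refinement depth here is purely illustrative.

```python
# Hedged sketch: uniform-refinement triangle cubature. Each triangle is
# estimated by area * f(centroid) and split into four congruent children.
def centroid_rule(f, tri):
    (x0, y0), (x1, y1), (x2, y2) = tri
    area = abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2
    cx, cy = (x0 + x1 + x2) / 3, (y0 + y1 + y2) / 3
    return area * f(cx, cy)

def subdivide(tri):
    a, b, c = tri
    mab = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    mbc = ((b[0] + c[0]) / 2, (b[1] + c[1]) / 2)
    mca = ((c[0] + a[0]) / 2, (c[1] + a[1]) / 2)
    return [(a, mab, mca), (mab, b, mbc), (mca, mbc, c), (mab, mbc, mca)]

def integrate(f, tri, depth):
    if depth == 0:
        return centroid_rule(f, tri)
    return sum(integrate(f, t, depth - 1) for t in subdivide(tri))

unit = ((0, 0), (1, 0), (0, 1))
est = integrate(lambda x, y: x * x, unit, 4)   # exact value is 1/12
```

The centroid rule is exact for linear integrands; for this quadratic the error shrinks by roughly a factor of four per refinement level.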
The Informational Complexity of Learning from Examples
, 1996
Abstract

Cited by 15 (4 self)
This thesis attempts to quantify the amount of information needed to learn certain tasks. The tasks chosen vary from learning functions in a Sobolev space using radial basis function networks to learning grammars in the principles-and-parameters framework of modern linguistic theory. These problems are analyzed from the perspective of computational learning theory, and certain unifying perspectives emerge. Copyright © Massachusetts Institute of Technology, 1996. This report describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences and at the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. This research is sponsored by a grant from the National Science Foundation under contract ASC-9217041 (this award includes funds from ARPA provided under the HPCC program) and by a grant from ARPA/ONR under contract N00014-92-J-1879. Additional support has been provided by Siemens Co...
Locally-Corrected Multidimensional Quadrature Rules for Singular Functions
 SIAM Journal on Scientific Computing
, 1993
Abstract

Cited by 15 (3 self)
Accurate numerical integration of singular functions usually requires either adaptivity or product integration. Both interfere with fast summation techniques and thus hamper large-scale computations. This paper presents a method for computing highly accurate quadrature formulas for singular functions which combine well with fast summation methods. Given the singularity and the N nodes, we first construct weights which integrate smooth functions with order-k accuracy. Then we locally correct a small number of weights near the singularity, to achieve order-k accuracy on singular functions as well. The method is highly efficient and runs in O(Nk^(2d) + N log^2 N) time and O(k^(2d) + N) space. We derive precise error bounds and time estimates and confirm them with numerical results which demonstrate the accuracy and efficiency of the method in large-scale computations. As part of our implementation, we also construct a new adaptive multidimensional product Gauss quadrature routine with an effecti...
Doubly Adaptive Quadrature Routines based on Newton–Cotes rules
 Reports in Informatics 229, Dept. of Informatics, Univ. of
, 2002
Abstract

Cited by 10 (2 self)
In this paper we test two recently published Matlab codes, adaptsim and adaptlob, using both a Lyness–Kaganove test and a battery-type test. Furthermore, we modify these two codes using sequences of null rules in the error estimator, with the intention of increasing the reliability of both codes. In addition, two new Matlab codes, applying a locally and a globally adaptive strategy respectively, are developed. These two new codes turn out to have very good properties with respect to both reliability and efficiency. Both algorithms use sequences of null rules in their local error estimators. These error estimators allow us both to test whether we are in the region of asymptotic behavior, thus increasing reliability, and to take advantage of the degree of precision of the basic quadrature rule. The new codes compare favorably to the two recently published adaptive codes both when we use a Lyness–Kaganove testing technique and in a battery test.
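A compact locally adaptive Simpson scheme in the spirit of adaptsim illustrates the strategy being tested: each interval is accepted when the discrepancy between one and two Simpson panels (a crude error estimate standing in for the paper's null-rule sequences) falls below its share of the tolerance. The test integrand and tolerance are illustrative.

```python
# Hedged sketch of locally adaptive Simpson quadrature with a
# two-panel-vs-one-panel error estimate (not the paper's null rules).
import math

def adaptive_simpson(f, a, b, tol):
    def panel(a, fa, m, fm, b, fb):
        # One-panel Simpson estimate from cached function values.
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, fa, b, fb, m, fm, whole, tol):
        lm, rm = (a + m) / 2.0, (m + b) / 2.0
        flm, frm = f(lm), f(rm)
        left = panel(a, fa, lm, flm, m, fm)
        right = panel(m, fm, rm, frm, b, fb)
        # Accept the interval when the local error estimate is small enough.
        if abs(left + right - whole) <= 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, fa, m, fm, lm, flm, left, tol / 2.0)
                + recurse(m, fm, b, fb, rm, frm, right, tol / 2.0))

    m = (a + b) / 2.0
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, fa, b, fb, m, fm, panel(a, fa, m, fm, b, fb), tol)

est = adaptive_simpson(math.sin, 0.0, math.pi, 1e-8)   # exact integral is 2
```

The 15x factor comes from the Richardson-extrapolation relationship between the one- and two-panel Simpson errors; the null-rule estimators studied in the paper aim to make this acceptance test more reliable.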
Computing Multivariate Normal Probabilities Using Rank-1 Lattice Sequences
 in Proceedings of the Workshop on Scientific Computing
, 1997
Abstract

Cited by 8 (2 self)
Multivariate normal probabilities, which are used for statistical inference, must be computed numerically. This article describes a new rank-1 lattice quadrature rule and its application to computing multivariate normal probabilities. In contrast to existing lattice rules, the number of integrand evaluations need not be specified in advance. When compared to existing algorithms for computing multivariate normal probabilities, the new algorithm is more efficient when high accuracy is required and/or the number of variables is large. 1 Introduction The most important probability distribution is the Gaussian or normal probability distribution. Normal probabilities are used to perform statistical inference and construct confidence intervals. The definition of the normal probability distribution involves an integral which cannot be evaluated in terms of elementary functions. Therefore, numerical methods are needed. Many software packages contain routines for evaluating univariate normal pr...
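A hedged sketch of the separation-of-variables construction behind such algorithms (in the style of Genz), evaluated here on a shifted rank-1 lattice point set: the covariance, the limits, the generating vector, and the sample size are all illustrative assumptions, not the paper's choices.

```python
# Hedged sketch: P(X <= b) for X ~ N(0, Sigma) via sequential conditioning
# through the Cholesky factor, averaged over a shifted rank-1 lattice.
import math
from statistics import NormalDist

phi = NormalDist()

def mvn_cdf(b, chol, z, n):
    d = len(b)
    total = 0.0
    for i in range(n):
        # Shifted rank-1 lattice point in (d-1) dimensions.
        u = [((i * zk) % n + 0.5) / n for zk in z]
        y, prod = [], 1.0
        for k in range(d):
            t = (b[k] - sum(chol[k][j] * y[j] for j in range(k))) / chol[k][k]
            e = phi.cdf(t)
            prod *= e
            if k < d - 1:
                # Conditional coordinate pushed through the inverse normal CDF.
                y.append(phi.inv_cdf(min(max(e * u[k], 1e-12), 1.0 - 1e-12)))
        total += prod
    return total / n

# Bivariate standard normal with correlation 0.5 (Cholesky factor below);
# the exact orthant probability P(X1 <= 0, X2 <= 0) is 1/3.
L = [[1.0, 0.0], [0.5, math.sqrt(0.75)]]
est = mvn_cdf([0.0, 0.0], L, z=[1], n=20000)
```

The transformation reduces a d-dimensional probability to a smooth integral over the (d-1)-dimensional unit cube, which is exactly the setting where lattice rules excel.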