Results 1–2 of 2
The Brownian fan
Commun. Pure Appl. Math.
Abstract

Cited by 1 (1 self)
We provide a mathematical study of the modified Diffusion Monte Carlo (DMC) algorithm introduced in the companion article [HW14]. DMC is a simulation technique that uses branching particle systems to represent expectations associated with Feynman–Kac formulae. We provide a detailed heuristic explanation of why, in cases in which a stochastic integral appears in the Feynman–Kac formula (e.g. in rare event simulation, continuous time filtering, and other settings), the new algorithm is expected to converge in a suitable sense to a limiting process as the time interval between branching steps goes to 0. The situation studied here stands in stark contrast to the “naïve” generalisation of the DMC algorithm, which would lead to an exponential explosion of the number of particles, thus precluding the existence of any finite limiting object. Convergence is shown rigorously in the simplest possible situation of a random walk, biased by a linear potential. The resulting limiting object, which we call the “Brownian fan”, is a very natural new mathematical object of independent interest.
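The modified algorithm of the paper is not reproduced here, but as a point of reference, a minimal sketch of the standard (naïve) DMC branching step it builds on might look like the following. Particles perform a random walk, pick up a Feynman–Kac weight from a potential `V` (here the linear potential of the paper's rigorous case), and branch into a random number of copies whose mean equals the weight. All function names and parameter values below are illustrative assumptions, not the paper's notation.

```python
import math
import random

def dmc_step(particles, V, dt=1.0):
    """One naive DMC step: each particle takes a random-walk move, then
    branches into a random integer number of copies with mean exp(-dV),
    where dV is the change in potential along the move."""
    new_particles = []
    for x in particles:
        y = x + random.choice([-1.0, 1.0]) * math.sqrt(dt)  # random-walk move
        w = math.exp(-(V(y) - V(x)))                        # Feynman-Kac weight
        # Branch: floor(w) copies plus one more with probability frac(w),
        # so the expected number of offspring is exactly w.
        n = int(w) + (1 if random.random() < w - int(w) else 0)
        new_particles.extend([y] * n)
    return new_particles

# Toy run: 1000 walkers biased by a linear potential V(x) = 0.1 x.
random.seed(0)
particles = [0.0] * 1000
for _ in range(20):
    particles = dmc_step(particles, lambda x: 0.1 * x)
print(len(particles))  # surviving population size
```

The branching rule keeps the population unweighted: instead of carrying weights forward, each particle is replaced by a number of copies whose expectation equals its weight, which is what makes the population size itself a random object of study.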
Fast randomized iteration: diffusion Monte Carlo through the lens of numerical linear algebra
Abstract
We review the basic outline of the highly successful diffusion Monte Carlo technique commonly used in contexts ranging from electronic structure calculations to rare event simulation and data assimilation, and show that aspects of the scheme can be extended to address a variety of common tasks in numerical linear algebra. From the point of view of numerical linear algebra, the main novelty of the new algorithms is that they run at either linear or constant cost per iteration (and in total, under appropriate conditions) and are rather versatile: in this article, we will show how they apply to the solution of linear systems, eigenvalue problems, and matrix exponentiation, in dimensions far beyond the present limits of numerical linear algebra. In fact, the schemes that we propose are inspired by recent DMC-based quantum Monte Carlo schemes that have been applied to matrices as large as 10^108 × 10^108. We will also provide convergence results and discuss the dependence of these results on the dimension of the system. For many problems one can expect the total cost of the schemes to be sublinear in the dimension of the problem. So while traditional iterative methods in numerical linear algebra were created in part to deal with instances where a matrix (of size O(n^2)) is too big to store, the adaptations of DMC that we propose are intended for instances in which even the solution vector itself (of size O(n)) may be too big to store or manipulate.
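The core idea described above, namely a power-type iteration whose iterate is randomly sparsified at each step so that the per-iteration cost depends on the number of retained entries rather than the full dimension, can be sketched as follows. This is a minimal illustration under assumed choices (the compression rule, function names, and toy matrix are not from the paper): entries are sampled with probability proportional to their magnitude and reweighted so the compression is unbiased.

```python
import numpy as np

def compress(v, m, rng):
    """Unbiased sparsification: sample m entries of v with probability
    proportional to |v_i|, reweighted so that E[compress(v)] = v."""
    total = np.abs(v).sum()
    p = np.abs(v) / total
    idx = rng.choice(len(v), size=m, p=p)
    out = np.zeros_like(v)
    np.add.at(out, idx, np.sign(v[idx]) * total / m)  # accumulate repeated draws
    return out

def fri_power_iteration(A, m, iters, rng):
    """Power iteration with the iterate compressed before each multiply.
    In a sparse implementation only the m sampled columns of A are ever
    touched, which is what makes sublinear cost possible."""
    v = np.zeros(A.shape[0])
    v[0] = 1.0
    for _ in range(iters):
        v = A @ compress(v, m, rng)
        v /= np.linalg.norm(v)
    return v

# Toy check on a small symmetric matrix with dominant eigenvector [1, 1]/sqrt(2).
rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [1.0, 2.0]])
v = fri_power_iteration(A, m=2, iters=50, rng=rng)
print(v)
```

In this tiny example the compression adds noise rather than saving work; the payoff appears when n is huge and m ≪ n, since only the sampled entries of the iterate (and the corresponding columns of A) ever need to be stored or touched.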