Results 11–20 of 615
Multilevel Monte Carlo Methods and Applications to Elliptic PDEs with Random Coefficients
Abstract

Cited by 46 (15 self)
We consider the numerical solution of elliptic partial differential equations with random coefficients. Such problems arise, for example, in uncertainty quantification for groundwater flow. We describe a novel variance reduction technique for the standard Monte Carlo method, called the multilevel Monte Carlo method. The main result is that in certain circumstances the asymptotic cost of solving the stochastic problem is a constant (but moderately large) multiple of the cost of solving the deterministic problem. Numerical calculations demonstrating the effectiveness of the method for one- and two-dimensional model problems arising in groundwater flow are presented.
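A minimal sketch of the multilevel idea described above, applied to a one-dimensional SDE rather than a random-coefficient PDE (the Euler scheme and all parameters are illustrative assumptions, not taken from the paper): fine and coarse levels are coupled through shared Brownian increments, and a telescoping sum of level corrections estimates the fine-level expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_terminal(n, steps, dW):
    # Euler-Maruyama for GBM dS = r*S*dt + sigma*S*dW on [0, 1]; returns S_T
    S0, r, sigma = 1.0, 0.05, 0.2
    dt = 1.0 / steps
    S = np.full(n, S0)
    for k in range(steps):
        S = S * (1.0 + r * dt + sigma * dW[:, k])
    return S

def mlmc_estimate(L=5, n=20000):
    # telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], where level l
    # uses 2^l time steps and both terms of a correction share Brownian increments
    total = 0.0
    for l in range(L + 1):
        steps = 2 ** l
        dt = 1.0 / steps
        dW = rng.normal(0.0, np.sqrt(dt), size=(n, steps))
        Pf = euler_terminal(n, steps, dW)
        if l == 0:
            total += Pf.mean()
        else:
            # coarse path driven by the same noise, increments summed in pairs
            dWc = dW.reshape(n, steps // 2, 2).sum(axis=2)
            total += (Pf - euler_terminal(n, steps // 2, dWc)).mean()
    return total
```

With these toy parameters the estimate should be close to E[S_T] = e^0.05 ≈ 1.051; the point is that the corrections P_l - P_{l-1} have small variance, so most of the sampling budget can be spent on the cheap coarse level.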
Monte Carlo algorithms for optimal stopping and statistical learning
, 2003
Abstract

Cited by 38 (2 self)
We extend the Longstaff-Schwartz algorithm for approximately solving optimal stopping problems on high-dimensional state spaces. We reformulate the optimal stopping problem for Markov processes in discrete time as a generalized statistical learning problem. Within this setup we apply deviation inequalities for suprema of empirical processes to derive consistency criteria, and to estimate the convergence rate and sample complexity. Our results strengthen and extend earlier results obtained by Clément, Lamberton and Protter (2002).
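The regression-based backward induction that the Longstaff-Schwartz algorithm builds on can be sketched as follows, for a Bermudan put under geometric Brownian motion (the parameters, polynomial basis, and payoff are illustrative choices, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(1)

def longstaff_schwartz(n_paths=50000, n_steps=10):
    # Bermudan put on GBM; illustrative parameters
    S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
    dt = T / n_steps
    disc = np.exp(-r * dt)
    # simulate GBM paths at the n_steps exercise dates
    Z = rng.normal(size=(n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * Z, axis=1))
    payoff = np.maximum(K - S[:, -1], 0.0)   # value if held to maturity
    for t in range(n_steps - 2, -1, -1):
        payoff *= disc                        # discount one period
        itm = K - S[:, t] > 0                 # regress on in-the-money paths only
        if itm.sum() < 10:
            continue
        x = S[itm, t]
        A = np.vander(x, 3)                   # basis functions (x^2, x, 1)
        beta, *_ = np.linalg.lstsq(A, payoff[itm], rcond=None)
        cont = A @ beta                       # estimated continuation value
        exercise = np.maximum(K - x, 0.0)
        payoff[itm] = np.where(exercise > cont, exercise, payoff[itm])
    return disc * payoff.mean()
```

The regression replaces the unknown continuation value with a projection onto a small basis; the paper's deviation-inequality machinery is what controls how the error of this step behaves as paths and basis functions grow.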
Information relaxation and duality in stochastic dynamic programs
 Working Paper, Fuqua School of Business
, 2008
Abstract

Cited by 37 (3 self)
We describe a general technique for determining upper bounds on maximal values (or lower bounds on minimal costs) in stochastic dynamic programs. In this approach, we relax the nonanticipativity constraints that require decisions to depend only on the information available at the time a decision is made and impose a “penalty” that punishes violations of nonanticipativity. In applications, the hope is that this relaxed version of the problem will be simpler to solve than the original dynamic program. The upper bounds provided by this dual approach complement lower bounds on values that may be found by simulating with heuristic policies. We describe the theory underlying this dual approach and establish weak duality, strong duality and complementary slackness results that are analogous to the duality results of linear programming. We also study properties of good penalties. Finally, we demonstrate the use of this dual approach in an adaptive inventory control problem with an unknown and changing demand distribution and in valuing options with stochastic volatilities and interest rates. These are complex problems of significant practical interest that are quite difficult to solve to optimality. In these examples, our dual approach requires relatively little additional computation and leads to tight bounds on the optimal values.
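The simplest instance of such a relaxation drops the nonanticipativity constraint entirely and uses a zero penalty: letting the decision maker see the whole future path yields a valid upper bound on an optimal stopping value. A sketch for a Bermudan put (all parameters are illustrative; a good penalty, as studied in the paper, would tighten this loose bound):

```python
import numpy as np

rng = np.random.default_rng(2)

def perfect_info_bound(n_paths=100000, n_steps=10):
    # zero-penalty information relaxation for optimal stopping of a Bermudan put:
    # with full knowledge of the path, exercise at the best discounted payoff,
    # so E[max_t e^{-rt}(K - S_t)^+] upper-bounds the true option value
    S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
    dt = T / n_steps
    Z = rng.normal(size=(n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * Z, axis=1))
    t = np.arange(1, n_steps + 1) * dt
    disc_payoff = np.exp(-r * t) * np.maximum(K - S, 0.0)
    return disc_payoff.max(axis=1).mean()
```

The true Bermudan value for these parameters is near 6, while this penalty-free bound is noticeably larger; the dual approach described in the abstract is about constructing penalties that shrink that gap toward zero.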
"An Iteration Procedure for Solving Integral Equations Related to Optimal Stopping Problems" by Denis Belomestny and Pavel
, 2006
Abstract

Cited by 35 (1 self)
We present an iterative algorithm for computing values of optimal stopping problems for one-dimensional diffusions on finite time intervals. The method is based on a time discretisation of the initial model and a construction of discretised analogues of the associated integral equation for the value function. The proposed iterative procedure converges in a finite number of steps and delivers in each step a lower or an upper bound for the discretised value function on the whole time interval. We also give remarks on applications of the method for solving the integral equations related to several optimal stopping problems.
Improved algorithms for rare event simulation with heavy tails
 The Danish National Research Foundation: Network in Mathematical Physics and Stochastics
, 2004
Number of paths versus number of basis functions in American option pricing
 Ann. Appl. Probab.
, 2004
Abstract

Cited by 32 (0 self)
An American option grants the holder the right to select the time at which to exercise the option, so pricing an American option entails solving an optimal stopping problem. Difficulties in applying standard numerical methods to complex pricing problems have motivated the development of techniques that combine Monte Carlo simulation with dynamic programming. One class of methods approximates the option value at each time using a linear combination of basis functions, and combines Monte Carlo with backward induction to estimate optimal coefficients in each approximation. We analyze the convergence of such a method as both the number of basis functions and the number of simulated paths increase. We get explicit results when the basis functions are polynomials and the underlying process is either Brownian motion or geometric Brownian motion. We show that the number of paths required for worst-case convergence grows exponentially in the degree of the approximating polynomials in the case of Brownian motion and faster in the case of geometric Brownian motion.
Minimum variance importance sampling via population Monte Carlo. ESAIM: Probability and Statistics
, 2007
Abstract

Cited by 31 (11 self)
Variance reduction has always been a central issue in Monte Carlo experiments. Population Monte Carlo can be used to this effect, in that a mixture of importance functions, called a D-kernel, can be iteratively optimised to achieve the minimum asymptotic variance for a function of interest among all possible mixtures. The implementation of this iterative scheme is illustrated for the computation of the price of a European option in the Cox-Ingersoll-Ross model. A central limit theorem as well as moderate deviations are established for the D-kernel population Monte Carlo methodology.
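An illustrative toy version of adapting mixture weights across iterations, for a Gaussian rare-event probability rather than the paper's Cox-Ingersoll-Ross option example (the update rule below is a heuristic, variance-oriented stand-in for the D-kernel optimisation, and all parameters are my own):

```python
import numpy as np

rng = np.random.default_rng(3)

def phi(x, m=0.0):
    # unit-variance normal density centred at m
    return np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2.0 * np.pi)

def pmc_rare_event(n=20000, iters=4):
    # estimate p = P(X > 3) for X ~ N(0,1) with a two-component mixture proposal
    # whose weights are adapted between iterations (population Monte Carlo style)
    means = np.array([0.0, 3.0])
    alpha = np.array([0.5, 0.5])
    est = 0.0
    for _ in range(iters):
        comp = rng.choice(2, size=n, p=alpha)
        x = rng.normal(means[comp], 1.0)
        q = alpha[0] * phi(x, means[0]) + alpha[1] * phi(x, means[1])
        w = phi(x) / q                        # importance weight w.r.t. N(0,1)
        est = np.mean(w * (x > 3.0))          # unbiased at every iteration
        # heuristic update: shift mixture mass toward the component responsible
        # for the samples that dominate the (w^2-weighted) event contribution
        v = (w ** 2) * (x > 3.0)
        if v.sum() > 0.0:
            resp = np.vstack([alpha[d] * phi(x, means[d]) for d in range(2)]) / q
            alpha = resp @ v / v.sum()
            alpha = np.clip(alpha, 0.05, None)  # keep both components alive
            alpha /= alpha.sum()
    return est
```

After a few iterations most of the mixture mass sits on the component centred near the rare-event region, which is exactly the variance-reducing behaviour the D-kernel scheme formalises.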
Asymptotic Robustness of Estimators in Rare-Event Simulation
Abstract

Cited by 30 (14 self)
The asymptotic robustness of estimators as a function of a rarity parameter, in the context of rare-event simulation, is often qualified by properties such as bounded relative error (BRE) and logarithmic efficiency (LE), also called asymptotic optimality. However, these properties do not suffice to ensure that moments of order higher than one are well estimated. For example, they do not guarantee that the variance of the empirical variance remains under control as a function of the rarity parameter. We study generalizations of the BRE and LE properties that take care of this limitation. They are named bounded relative moment of order k (BRM-k) and logarithmic efficiency of order k (LE-k), where k ≥ 1 is an arbitrary real number. We also introduce and examine a stronger notion called vanishing relative centered moment of order k, and exhibit examples where it holds. These properties are of interest for various estimators, including the empirical mean and the empirical variance. We develop (sufficient) Lyapunov-type conditions for these properties in a setting where state-dependent importance sampling (IS) is used to estimate first-passage time probabilities. We show how these conditions can guide us in the design of good IS schemes that enjoy convenient asymptotic robustness properties, in the context of random walks with light-tailed and heavy-tailed increments. As another illustration, we study the hierarchy ...
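The relative-error notion underlying these properties is easy to observe numerically. A sketch comparing the per-sample relative error of crude Monte Carlo against exponentially tilted importance sampling for a Gaussian tail probability (an illustrative setting, not the paper's first-passage model):

```python
import numpy as np

rng = np.random.default_rng(6)

def relative_errors(b=3.0, n=100000):
    # per-sample relative error std(Z)/E[Z] for two unbiased estimators Z
    # of p = P(X > b), X ~ N(0,1)
    x = rng.normal(size=n)
    naive = (x > b).astype(float)                 # crude Monte Carlo
    y = rng.normal(b, 1.0, size=n)                # tilted proposal N(b, 1)
    w = np.exp(-b * y + 0.5 * b * b) * (y > b)    # likelihood ratio * indicator
    return naive.std() / naive.mean(), w.std() / w.mean()
```

For crude Monte Carlo the relative error blows up as b grows (roughly like p^{-1/2}), while the tilted estimator keeps it moderate; the BRM-k and LE-k properties in the abstract extend this comparison to higher-order moments such as the variance of the empirical variance.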
Applied Stochastic Processes and Control for Jump-Diffusions: Modeling, Analysis and Computation
 Analysis and Computation, SIAM Books
, 2007
Abstract

Cited by 30 (7 self)
An applied compact introductory survey of Markov stochastic processes and control in continuous time is presented. The presentation is in tutorial stages, beginning with deterministic dynamical systems for contrast and continuing on to perturbing the deterministic model with diffusions using Wiener processes. Then jump perturbations are added using simple Poisson processes, constructing the theory of simple jump-diffusions. Next, marked jump-diffusions are treated using compound Poisson processes to include random marked jump-amplitudes, in parallel with the equivalent Poisson random measure formulation. Otherwise, the approach is quite applied, using basic principles with no abstractions beyond Poisson random measure. This treatment is suitable for those in classical applied mathematics, physical sciences, quantitative finance and engineering who have trouble getting started with the abstract measure-theoretic literature. The approach here builds upon the treatment of continuous functions in the regular calculus and associated ordinary differential equations by adding nonsmooth and jump discontinuities to the model. Finally, the stochastic optimal control of marked jump-diffusions is developed, emphasizing the underlying assumptions. The survey concludes with applications in biology and finance, some of which are canonical, dimension-reducible problems and others genuine nonlinear problems. Key words. Jump-diffusions, Wiener processes, Poisson processes, random jump amplitudes, stochastic differential equations, stochastic chain rules, stochastic optimal control. AMS subject classifications. 60G20, 93E20, 93E03.
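The building blocks surveyed here can be illustrated by simulating one marked-jump-diffusion path with a simple Euler scheme (the drift, volatility, jump intensity and the log-normal mark distribution below are arbitrary illustrative choices, not from the book):

```python
import numpy as np

rng = np.random.default_rng(4)

def jump_diffusion_path(T=1.0, n_steps=1000):
    # Euler scheme for a marked jump-diffusion dS = mu*S*dt + sigma*S*dW + S*dJ,
    # where J is a compound Poisson process with multiplicative marks
    mu, sigma, lam = 0.05, 0.2, 2.0          # drift, volatility, jump intensity
    dt = T / n_steps
    S = np.empty(n_steps + 1)
    S[0] = 1.0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))    # Wiener increment
        n_jumps = rng.poisson(lam * dt)      # number of jumps in this step
        # marks: log-normal jump amplitudes (an illustrative choice)
        if n_jumps:
            J = np.prod(np.exp(rng.normal(-0.05, 0.1, size=n_jumps))) - 1.0
        else:
            J = 0.0
        S[k + 1] = S[k] * (1.0 + mu * dt + sigma * dW) * (1.0 + J)
    return S
```

The diffusion part moves the path continuously while the Poisson part injects occasional discontinuities with random amplitudes, which is the structure the survey builds its stochastic chain rules and control results on.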
Dynamic importance sampling for uniformly recurrent Markov chains
 Annals of Applied Probability
, 2005
Abstract

Cited by 28 (7 self)
Importance sampling is a variance reduction technique for efficient estimation of rare-event probabilities by Monte Carlo. In standard importance sampling schemes, the system is simulated using an a priori fixed change of measure suggested by a large deviation lower bound analysis. Recent work, however, has suggested that such schemes do not work well in many situations. In this paper we consider dynamic importance sampling in the setting of uniformly recurrent Markov chains. By “dynamic” we mean that in the course of a single simulation, the change of measure can depend on the outcome of the simulation up till that time. Based on a control-theoretic approach to large deviations, the existence of asymptotically optimal dynamic schemes is demonstrated in great generality. The implementation of the dynamic schemes is carried out with the help of a limiting Bellman equation. Numerical examples are presented to contrast the dynamic and standard schemes.
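For contrast with the dynamic schemes studied in the paper, the standard a priori fixed change of measure can be sketched as follows for a level-crossing probability of a light-tailed random walk (the parameters and truncation horizon are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

def is_level_crossing(b=5.0, n_paths=5000, n_max=200):
    # P(random walk with N(-1,1) increments ever exceeds b), via a fixed
    # exponential tilt: theta* = 2 solves psi(theta) = -theta + theta^2/2 = 0,
    # so we simulate with drift +1 and weight each crossing path by the
    # likelihood ratio exp(-theta* * S_tau) at the crossing time
    theta = 2.0
    est = 0.0
    for _ in range(n_paths):
        S = 0.0
        for _ in range(n_max):       # truncation horizon (crossing is near-certain
            S += rng.normal(1.0, 1.0)  # under the tilted drift, so bias is tiny)
            if S > b:
                est += np.exp(-theta * S)
                break
    return est / n_paths
```

Here the change of measure is the same on every step; a dynamic scheme in the sense of the abstract would instead choose the tilting as a function of the current state S, guided by the limiting Bellman equation.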