Results 1–10 of 343
Asymptotically Optimal Importance Sampling and Stratification for Pricing Path-Dependent Options
 Mathematical Finance
, 1999
Abstract

Cited by 91 (13 self)
This paper develops a variance reduction technique for Monte Carlo simulations of path-dependent options driven by high-dimensional Gaussian vectors. The method combines importance sampling based on a change of drift with stratified sampling along a small number of key dimensions. The change of drift is selected through a large deviations analysis and is shown to be optimal in an asymptotic sense. The drift selected has an interpretation as the path of the underlying state variables which maximizes the product of probability and payoff: the most important path. The directions used for stratified sampling are optimal for a quadratic approximation to the integrand or payoff function. Indeed, under differentiability assumptions our importance sampling method eliminates variability due to the linear part of the payoff function, and stratification eliminates much of the variability due to the quadratic part of the payoff. The two parts of the method are linked because the asymptotically optimal drift vector frequently provides a particularly effective direction for stratification. We illustrate the use of the method with path-dependent options, a stochastic volatility model, and interest rate derivatives. The method reveals novel features of the structure of their payoffs. KEY WORDS: Monte Carlo methods, variance reduction, large deviations, Laplace principle. 1. INTRODUCTION This paper develops a variance reduction technique for Monte Carlo simulations driven by high-dimensional Gaussian vectors, with particular emphasis on the pricing of path-dependent options. The method combines importance sampling based on a change of drift with stratified sampling along a small number of key dimensions. The change of drift is selected through a large deviations analysis and is shown to...
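The drift-shift half of the method can be sketched in a few lines: sample the driving Gaussian under a shifted mean and reweight by the likelihood ratio, which leaves the estimator unbiased. The one-dimensional payoff, the strike, and the crude grid search for the drift below are illustrative assumptions of ours, not the paper's construction (which derives the drift from a large deviations analysis and adds stratification on top):

```python
import numpy as np

rng = np.random.default_rng(0)

def payoff(z):
    # Illustrative payoff: a call on exp(z) with strike 1.5 (a hypothetical
    # one-dimensional example, not the paper's path-dependent payoffs).
    return np.maximum(np.exp(z) - 1.5, 0.0)

def plain_mc(n):
    z = rng.standard_normal(n)
    return payoff(z)

def drift_shifted_mc(n, mu):
    # Sample under the shifted measure N(mu, 1) and reweight by the
    # likelihood ratio dN(0,1)/dN(mu,1) = exp(-mu*z + mu^2/2).
    z = rng.standard_normal(n) + mu
    lr = np.exp(-mu * z + 0.5 * mu**2)
    return payoff(z) * lr

n = 200_000
# A near-optimal drift maximizes log-payoff minus z^2/2 (the "most
# important path" idea); here we just pick mu by a crude grid search.
grid = np.linspace(0.0, 3.0, 31)
mu = grid[np.argmax([np.log(payoff(m) + 1e-300) - 0.5 * m**2 for m in grid])]

plain = plain_mc(n)
shifted = drift_shifted_mc(n, mu)
print("plain MC   :", plain.mean(), "+/-", plain.std() / np.sqrt(n))
print("shifted MC :", shifted.mean(), "+/-", shifted.std() / np.sqrt(n))
```

Both estimators target the same expectation; the drift-shifted one flattens the weighted integrand, so its standard error is smaller for the same sample size.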
A Quartet of Semigroups for Model Specification, Detection, Robustness, and the Price of Risk
University of Chicago manuscript
Abstract

Cited by 73 (25 self)
A representative agent fears that his model, a continuous-time Markov process with jump and diffusion components, is misspecified and therefore uses robust control theory to make decisions. Under the decision maker's approximating model, that cautious behavior puts adjustments for model misspecification into factor prices for risk. We use a statistical theory of detection to quantify the appropriate amount of model misspecification that the decision maker should fear. Related semigroups describe (1) an approximating model; (2) the behavior of model detection statistics; (3) a model misspecification adjustment to the continuation value in the decision maker's Bellman equation; and (4) asset prices.
Largest Weighted Delay First Scheduling: Large Deviations and Optimality
 Annals of Appl. Prob., to appear
Abstract

Cited by 71 (5 self)
We consider a single-server system with N input flows. We assume that each flow has stationary increments and satisfies a sample-path large deviation principle, and that the system is stable. We introduce the largest weighted delay first (LWDF) queueing discipline associated with any given weight vector $\alpha = (\alpha_1, \ldots, \alpha_N)$. We show that under the LWDF discipline the sequence of scaled stationary distributions of the delay $\hat w_i$ of each flow satisfies a large deviation principle with the rate function given by a finite-dimensional optimization problem. We also prove that the LWDF discipline is optimal in the sense that it maximizes the quantity $\min_{i=1,\ldots,N} \alpha_i \lim_{n\to\infty} -\frac{1}{n} \log P(\hat w_i > n)$ within a large class of work-conserving disciplines. 1. Introduction.
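The discipline itself is simple to state in code: at each service opportunity, serve the flow whose oldest waiting packet has the largest weighted delay. The following is a hypothetical single-server sketch (the queue layout and names are ours, not the paper's):

```python
from collections import deque

def lwdf_pick(queues, alpha, now):
    """Pick the flow with the largest weighted head-of-line delay.

    queues: list of deques of packet arrival times (head = oldest packet)
    alpha:  per-flow weights alpha_1, ..., alpha_N
    now:    current time
    Returns the index of the flow to serve, or None if all queues are empty.
    """
    best, best_w = None, -1.0
    for i, q in enumerate(queues):
        if q:
            w = alpha[i] * (now - q[0])  # weighted delay of the oldest packet
            if w > best_w:
                best, best_w = i, w
    return best

# Tiny example: two flows, flow 0 weighted twice as heavily as flow 1.
queues = [deque([0.0, 1.0]), deque([0.5])]
alpha = [2.0, 1.0]
# Weighted delays at t=3: flow 0 -> 2*(3-0)=6, flow 1 -> 1*(3-0.5)=2.5
print(lwdf_pick(queues, alpha, now=3.0))
```

The discipline is work-conserving by construction: it idles only when every queue is empty.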
Importance sampling, large deviations, and differential games
 Stoch. and Stoch. Reports
Abstract

Cited by 69 (18 self)
A heuristic that has emerged in the area of importance sampling is that the changes of measure used to prove large deviation lower bounds give good performance when used for importance sampling. Recent work, however, has suggested that the heuristic is incorrect in many situations. The perspective put forth in the present paper is that large deviation theory suggests many changes of measure, and that not all are suitable for importance sampling. In the setting of Cramér's theorem, the traditional interpretation of the heuristic suggests a fixed change of distribution on the underlying independent and identically distributed summands. In contrast, we consider importance sampling schemes where the exponential change of measure is adaptive, in the sense that it depends on the historical empirical mean. The existence of asymptotically optimal schemes within this class is demonstrated. The result indicates that an adaptive change of measure, rather than a static change of measure, is what the large deviations analysis truly suggests. The proofs utilize a control-theoretic approach to large deviations, which naturally leads to the construction of asymptotically optimal adaptive schemes in terms of a limit Bellman equation. Numerical examples contrasting the adaptive and standard schemes are presented, as well as an interpretation of their different performances in terms of differential games.
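The adaptive idea can be illustrated in the simplest Gaussian setting: to estimate $P(S_n/n \ge a)$ for i.i.d. N(0,1) summands, tilt each successive summand's mean by the drift still needed to reach the level a, so the change of measure depends on the history rather than being fixed in advance. This is a simplified sketch of ours, not the paper's Bellman-equation construction:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def adaptive_is(n, a, reps):
    """Estimate P(S_n / n >= a) for X_i iid N(0,1) with a state-dependent
    exponential twist: at each step, shift the next summand's mean by the
    drift still needed to reach level a by time n."""
    est = np.empty(reps)
    for r in range(reps):
        s, log_lr = 0.0, 0.0
        for k in range(n):
            theta = max((a * n - s) / (n - k), 0.0)  # remaining mean drift
            x = rng.standard_normal() + theta        # sample under N(theta, 1)
            log_lr += -theta * x + 0.5 * theta**2    # log dN(0,1)/dN(theta,1)
            s += x
        est[r] = np.exp(log_lr) if s / n >= a else 0.0
    return est.mean()

n, a = 20, 1.0
p_hat = adaptive_is(n, a, 5000)
# Exact value: P(N(0, 1/n) >= a) = Phi(-a * sqrt(n)).
p_exact = 0.5 * (1.0 - erf(a * sqrt(n) / sqrt(2.0)))
print(p_hat, p_exact)
```

Because the twist parameter depends only on the past, the likelihood-ratio product keeps the estimator unbiased; the event here has probability of order 1e-6, far beyond what plain Monte Carlo could resolve with 5000 samples.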
A microscopic interpretation for adaptive dynamics trait substitution sequence models
 Stoch. Proc. Appl
Abstract

Cited by 63 (12 self)
We consider an interacting particle Markov process for Darwinian evolution in an asexual population with non-constant population size, involving a linear birth rate, a density-dependent logistic death rate, and a probability µ of mutation at each birth event. We introduce a renormalization parameter K scaling the size of the population, which leads, when K → +∞, to a deterministic dynamics for the density of individuals holding a given trait. By combining in a non-standard way the limits of large population (K → +∞) and of small mutations (µ → 0), we prove that a time-scale separation occurs between the birth and death events and the mutation events, and that the interacting particle microscopic process converges, in the sense of finite-dimensional distributions, to the biological model of evolution known as the "monomorphic trait substitution sequence" model of adaptive dynamics, which describes Darwinian evolution in an asexual population as a Markov jump process in the trait space.
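Mutation aside, the microscopic model for a single trait is a standard birth-death chain and is easy to simulate with Gillespie's algorithm; the parameter names below are illustrative choices of ours. For large K, the density N/K should hover near the logistic equilibrium (b − d)/c of the limiting deterministic dynamics:

```python
import random

def simulate_logistic(K, b=2.0, d=1.0, c=1.0, t_end=10.0, seed=0):
    """Gillespie simulation of the density-dependent birth-death process:
    each of the N individuals gives birth at rate b and dies at rate
    d + c * N / K.  As K grows, the density N/K tracks the logistic ODE
    n' = (b - d - c*n) * n.  (Single-trait sketch; mutations omitted.)"""
    rng = random.Random(seed)
    n, t = K, 0.0          # start at density N/K = 1
    while t < t_end and n > 0:
        birth = b * n
        death = (d + c * n / K) * n
        total = birth + death
        t += rng.expovariate(total)          # time to the next event
        if rng.random() < birth / total:
            n += 1                           # birth event
        else:
            n -= 1                           # death event
    return n / K

# The limiting ODE's equilibrium density is (b - d) / c = 1.0 here, and
# fluctuations around it are of order 1/sqrt(K).
print(simulate_logistic(K=2000))
```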
EL Inference for Partially Identified Models: Large Deviations Optimality and Bootstrap Validity
, 2008
Abstract

Cited by 57 (5 self)
This paper addresses the issue of optimal inference for parameters that are partially identified in models with moment inequalities. There currently exists a variety of inferential methods for use in this setting. However, the question of choosing optimally among contending procedures is unresolved. In this paper, I first consider a canonical large deviations criterion for optimality and show that inference based on the empirical likelihood ratio statistic is optimal. This finding is a direct analog to that in Kitamura (2001) for moment equality models. Second, I introduce a new empirical likelihood bootstrap that provides a valid resampling method for moment inequality models and overcomes the implementation challenges that arise as a result of nonpivotal limit distributions. Lastly, I analyze the finite sample properties of the proposed framework using Monte Carlo simulations. The simulation results are encouraging.
Markov Chain Approximations for Deterministic Control Problems with Affine Dynamics and Quadratic Cost in the Control
 SIAM J. Numer. Anal
, 1998
Abstract

Cited by 53 (0 self)
We consider the construction of Markov chain approximations for an important class of deterministic control problems. The emphasis is on the construction of schemes that can be easily implemented and which possess a number of highly desirable qualitative properties. The class of problems covered is that for which the control is affine in the dynamics and with quadratic running cost. This class covers a number of interesting application areas, including problems that arise in large deviations, risk-sensitive and robust control, robust filtering, and certain problems from computer vision. Examples are given, as well as a proof of convergence. 1 Introduction There are a number of deterministic optimal control problems for which a global approximation to the value function is needed. For example, in small-noise risk-sensitive and robust nonlinear filtering [10, 15], the optimal (robust) filter is defined in terms of the value function for a calculus of variations problem in which the v...
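As a toy instance of the class described (dynamics affine in the control, quadratic running cost), a grid-based value iteration in the same spirit recovers the known value function of a 1-D linear-quadratic problem. The test case and the simple semi-Lagrangian discretization below are our illustration, not the paper's scheme:

```python
import numpy as np

# Approximate  min_u  integral of (x^2 + u^2)/2 dt  subject to dx/dt = u,
# whose exact value function is V(x) = x^2 / 2 (a standard LQR test case).
h = 0.05                       # time step
xs = np.linspace(-2, 2, 161)   # state grid
us = np.linspace(-3, 3, 121)   # control grid
V = np.zeros_like(xs)

# Precompute successor states (clipped to the grid) and one-step costs.
nxt = np.clip(xs[:, None] + h * us[None, :], xs[0], xs[-1])
cost = 0.5 * (xs[:, None] ** 2 + us[None, :] ** 2) * h

for _ in range(2000):
    # One value-iteration sweep with linear interpolation in the state.
    V = np.min(cost + np.interp(nxt, xs, V), axis=1)

v_at_1 = V[np.searchsorted(xs, 1.0)]
print(v_at_1)  # exact value at x = 1 is 0.5, up to discretization error
```

Because x = 0 with u = 0 is a zero-cost fixed point, undiscounted value iteration converges monotonically from below to the discrete value function.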
Subsolutions of an Isaacs equation and efficient schemes for importance sampling: Convergence analysis
, 2005
Abstract

Cited by 51 (18 self)
It was established in [6, 7] that importance sampling algorithms for estimating rare-event probabilities are intimately connected with two-person zero-sum differential games and the associated Isaacs equation. This game interpretation shows that dynamic or state-dependent schemes are needed in order to attain asymptotic optimality in a general setting. The purpose of the present paper is to show that classical subsolutions of the Isaacs equation can be used as a basic and flexible tool for the construction and analysis of efficient dynamic importance sampling schemes. There are two main contributions. The first is a basic theoretical result characterizing the asymptotic performance of importance sampling estimators based on subsolutions. The second is an explicit method for constructing classical subsolutions as a mollification of piecewise affine functions. Numerical examples are included for illustration and to demonstrate that simple, nearly asymptotically optimal importance sampling schemes can be obtained for a variety of problems via the subsolution approach.
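The mollification step mentioned in the abstract, smoothing a minimum of affine functions while staying below it, can be sketched with an exponential (log-sum-exp) mollifier; the one-dimensional setup and names below are illustrative assumptions of ours:

```python
import numpy as np

def mollify(affine_pieces, delta):
    """Smooth minimum of affine functions via exponential mollification:
        W_delta(x) = -delta * log( sum_i exp(-W_i(x) / delta) ).
    Since the sum dominates its largest term, W_delta <= min_i W_i, and
    W_delta -> min_i W_i as delta -> 0, so the mollified function stays a
    smooth (classical) subsolution candidate.
    `affine_pieces` is a list of (slope, intercept) pairs in one dimension."""
    def W(x):
        vals = np.array([a * x + b for a, b in affine_pieces])
        return -delta * np.logaddexp.reduce(-vals / delta)
    return W

pieces = [(1.0, 0.0), (-1.0, 2.0)]    # W_1(x) = x,  W_2(x) = 2 - x
W = mollify(pieces, delta=0.1)
print(W(0.5), min(0.5, 1.5))          # W sits just below the piecewise min
```

Away from the kink the mollified value is exponentially close to the piecewise minimum; at the kink (x = 1 here) it dips below by about delta * log(number of pieces).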
A Variational Representation for Certain Functionals of Brownian Motion
, 1997
Abstract

Cited by 47 (12 self)
In this paper we show that the variational representation
$$-\log E\,e^{-f(W)} = \inf_v E\left[\frac{1}{2}\int_0^1 \|v_s\|^2\,ds + f\Big(W + \int_0^{\cdot} v_s\,ds\Big)\right]$$
holds, where W is a standard d-dimensional Brownian motion, f is any bounded measurable function that maps $C([0,1]; \mathbb{R}^d)$ into $\mathbb{R}$, and the infimum is over all processes v that are progressively measurable with respect to the augmentation of the filtration generated by W. An application is made to a problem concerned with large deviations, and extensions to unbounded functions are given. 1 Introduction In this paper we prove the following variational representation formula. Let W be a standard d-dimensional Brownian motion. Then for functions $f : C([0,1]; \mathbb{R}^d) \to \mathbb{R}$ that are bounded and measurable,
$$-\log E\,e^{-f(W)} = \inf_v E\left[\frac{1}{2}\int_0^1 \|v_s\|^2\,ds + f\Big(W + \int_0^{\cdot} v_s\,ds\Big)\right].$$
In this equation E denotes expectation with respect to the probability space on which t...
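For a linear functional both sides of the representation are available in closed form, which gives a quick numerical sanity check; the choice $f(w) = \theta\,w(1)$ below is our illustration, not an example from the paper. Both sides equal $-\theta^2/2$: the left side because $W_1$ is standard normal, the right side because restricting to constant controls v gives $\inf_v [v^2/2 + \theta v]$, minimized at $v = -\theta$.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 1.3

# Left side: -log E[exp(-theta * W_1)] by Monte Carlo; exactly -theta^2/2.
w1 = rng.standard_normal(1_000_000)
lhs = -np.log(np.mean(np.exp(-theta * w1)))

# Right side restricted to constant controls v:
#   E[ v^2/2 + f(W + v*t) ] = v^2/2 + theta * (E[W_1] + v) = v^2/2 + theta*v,
# minimized over a grid of v values.
vs = np.linspace(-3, 3, 601)
rhs = np.min(0.5 * vs**2 + theta * vs)

print(lhs, rhs, -theta**2 / 2)
```

That the unrestricted infimum over adapted controls is attained here by a constant control is special to linear f; in general the optimal v is genuinely path-dependent.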