Results 1–10 of 36
Optimal acceptance rates for Metropolis algorithms: Moving beyond 0.234
, 2006
Cited by 29 (6 self)
Abstract
Recent optimal scaling theory has produced a condition for the asymptotically optimal acceptance rate of Metropolis algorithms to be the well-known 0.234 when applied to certain multidimensional target distributions. These d-dimensional target distributions are formed of independent components, each of which is scaled according to its own function of d. We show that when the condition is not met the limiting process of the algorithm is altered, yielding an asymptotically optimal acceptance rate which might drastically differ from the usual 0.234. Specifically, we prove that as d → ∞ the sequence of stochastic processes formed by, say, the i∗th component of each Markov chain usually converges to a Langevin diffusion process with a new speed measure υ, except in particular cases where it converges to a one-dimensional Metropolis algorithm with acceptance rule α∗. We also discuss the use of inhomogeneous proposals, which might prove essential in specific cases.
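The product-target setting this abstract describes can be illustrated with a minimal random-walk Metropolis sketch: an i.i.d. standard-normal target in d dimensions with the proposal standard deviation scaled as ℓ/√d. The target, dimension, and constant ℓ = 2.38 are illustrative choices for the sketch, not taken from the paper.

```python
import numpy as np

def rwm_product_target(d=50, n_iter=20000, ell=2.38, seed=0):
    """Random-walk Metropolis on a d-dimensional standard normal (an i.i.d.
    product target), with proposal std scaled as ell / sqrt(d)."""
    rng = np.random.default_rng(seed)
    sigma = ell / np.sqrt(d)
    x = np.zeros(d)
    logp = -0.5 * x @ x               # log-density of N(0, I_d), up to a constant
    accepts = 0
    for _ in range(n_iter):
        y = x + sigma * rng.standard_normal(d)
        logp_y = -0.5 * y @ y
        if np.log(rng.uniform()) < logp_y - logp:  # Metropolis accept/reject
            x, logp = y, logp_y
            accepts += 1
    return accepts / n_iter

rate = rwm_product_target()   # tends toward 0.234 as d grows
```

With this scaling the empirical acceptance rate approaches 0.234 as d increases; the abstract's point is that this limit changes when the components are scaled inhomogeneously.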
Optimal scalings for local Metropolis–Hastings chains on nonproduct targets in high dimensions
, 2009
Cited by 25 (13 self)
Abstract
We investigate local MCMC algorithms, namely the random-walk Metropolis and the Langevin algorithms, and identify the optimal choice of the local step size as a function of the dimension n of the state space, asymptotically as n → ∞. We consider target distributions defined as a change of measure from a product law. Such structures arise, for instance, in inverse problems or Bayesian contexts when a product prior is combined with the likelihood. We state analytical results on the asymptotic behavior of the algorithms under general conditions on the change of measure. Our theory is motivated by applications on conditioned diffusion processes and inverse problems related to the 2D Navier–Stokes equation.
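The Langevin algorithm referenced here uses gradient information in its proposal. Below is a minimal Metropolis-adjusted Langevin (MALA) sketch on a standard Gaussian target, with the step size decaying like d^(-1/3) as in the product-target scaling literature; the constant 1.65 and the target are illustrative assumptions, not results from this paper.

```python
import numpy as np

def mala_gaussian(d=20, n_iter=10000, h=None, seed=1):
    """Metropolis-adjusted Langevin (MALA) on N(0, I_d). For product
    targets the optimal step size decays like d**(-1/3)."""
    rng = np.random.default_rng(seed)
    if h is None:
        h = 1.65 * d ** (-1.0 / 3.0)  # illustrative constant, not from the paper
    logp = lambda x: -0.5 * x @ x     # log-density of N(0, I_d), up to a constant
    grad = lambda x: -x               # its gradient
    x = np.zeros(d)
    accepts = 0
    for _ in range(n_iter):
        mean_fwd = x + 0.5 * h * grad(x)
        y = mean_fwd + np.sqrt(h) * rng.standard_normal(d)
        mean_bwd = y + 0.5 * h * grad(y)
        # Gaussian proposal log-densities q(y|x), q(x|y), covariance h*I
        log_q_fwd = -np.sum((y - mean_fwd) ** 2) / (2.0 * h)
        log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (2.0 * h)
        if np.log(rng.uniform()) < logp(y) - logp(x) + log_q_bwd - log_q_fwd:
            x = y
            accepts += 1
    return accepts / n_iter

rate = mala_gaussian()
```

The asymmetry correction (the q terms) is what distinguishes MALA from the symmetric random-walk proposal; the paper's contribution is the analogous step-size analysis for nonproduct, change-of-measure targets.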
MCMC Methods for Sampling Function Space
Cited by 21 (9 self)
Abstract
Applied mathematics is concerned with developing models with predictive capability, and with probing those models to obtain qualitative and quantitative insight into the phenomena being modelled. Statistics is data-driven and is aimed at the development of methodologies to optimize the information derived from data. The increasing complexity of phenomena that scientists and engineers wish to model, together with our increased ability to gather, store and interrogate data, mean that the subjects of applied mathematics and statistics are increasingly required to work in conjunction in order to significantly progress understanding. This article is concerned with a research program at the interface between these two disciplines, aimed at problems in differential equations where the profusion of data and the sophistication of the models combine to produce the mathematical problem of obtaining information from a probability measure on function space. In this context there is an array of problems with a common mathematical structure, namely that the probability measure in question is a change of measure from a Gaussian. We illustrate the wide-ranging applicability of this structure. For problems whose solution is determined by a probability measure on function space, information about the solution can be obtained by sampling from this probability measure. One way to do this is through the use of Markov chain Monte Carlo (MCMC) methods. We show how the common mathematical structure of the aforementioned problems can be exploited in the design of effective MCMC methods.
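The change-of-measure-from-a-Gaussian structure this abstract describes can be exploited by proposals that leave the Gaussian reference measure invariant, so that only the change-of-measure potential enters the accept ratio. Here is a minimal sketch of one such sampler, preconditioned Crank–Nicolson (pCN), on a discretized prior with identity covariance; the potential phi and all parameter values are illustrative assumptions.

```python
import numpy as np

def pcn_sample(phi, d, n_iter=5000, beta=0.25, seed=2):
    """pCN sampler for a target pi(dx) proportional to exp(-phi(x)) N(0, I)(dx).
    The proposal preserves the Gaussian prior, so the accept ratio
    involves only the potential phi."""
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    phi_x = phi(x)
    accepts = 0
    for _ in range(n_iter):
        xi = rng.standard_normal(d)                    # draw from the prior N(0, I)
        y = np.sqrt(1.0 - beta ** 2) * x + beta * xi   # pCN proposal
        phi_y = phi(y)
        if np.log(rng.uniform()) < phi_x - phi_y:      # prior terms cancel exactly
            x, phi_x = y, phi_y
            accepts += 1
    return x, accepts / n_iter

# Toy potential: phi(x) = 0.5 * ||x||^2 tilts the prior toward the origin.
x, rate = pcn_sample(lambda u: 0.5 * u @ u, d=100)
```

Because the proposal is prior-reversible, the acceptance rate does not collapse as the discretization dimension d grows, which is the practical payoff of the common structure the article identifies.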
Optimal scaling of random walk Metropolis algorithms with non-Gaussian proposals
 Methodol. Comput. Appl. Probab
, 2011
Diffusion limits of the random walk Metropolis algorithm in high dimensions
 Annals of Applied Probability, 22(3), pp. 881–930
, 2012
Optimal scaling of the random walk Metropolis on elliptically symmetric unimodal targets
 Bernoulli
Cited by 17 (8 self)
Abstract
Scaling of proposals for Metropolis algorithms is an important practical problem in MCMC implementation. Criteria for scaling based on empirical acceptance rates of algorithms have been found to work consistently well across a broad range of problems. Essentially, proposal jump sizes are increased when acceptance rates are high and decreased when rates are low. In recent years, considerable theoretical support has been given for rules of this type which work on the basis that acceptance rates around 0.234 should be preferred. This has been based on asymptotic results which approximate high dimensional algorithm trajectories by diffusions. In this paper, we develop a novel approach to understanding 0.234 which avoids the need for diffusion limits. We derive explicit formulae for algorithm efficiency and acceptance rates as functions of the scaling parameter. We apply these to the family of elliptically symmetric target densities, where further illuminating explicit results are possible. Under suitable conditions, we verify the 0.234 rule for a new class of target densities. Moreover, we can characterise cases where 0.234 fails to hold, either because the target density is too diffuse in a sense we make precise, or because the eccentricity of the target density is too severe, again in a sense we make precise. We provide numerical verifications of our results.
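The empirical tuning rule this abstract describes, increasing the jump size when acceptance rates are high and decreasing it when they are low, can be sketched as a Robbins–Monro adaptation of the proposal scale toward 0.234. This is a sketch under assumed parameters (the decay exponent 0.6, the Gaussian test target), not the paper's construction.

```python
import numpy as np

def adaptive_rwm(log_target, x0, n_iter=20000, target_rate=0.234,
                 gamma0=1.0, seed=3):
    """Random-walk Metropolis with Robbins-Monro adaptation of the proposal
    scale: the log step size is nudged up after each acceptance and down
    after each rejection, with diminishing adaptation steps so the chain
    remains asymptotically valid."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_target(x)
    log_sigma = 0.0
    accepts = 0
    for t in range(1, n_iter + 1):
        y = x + np.exp(log_sigma) * rng.standard_normal(x.shape)
        lp_y = log_target(y)
        accepted = np.log(rng.uniform()) < lp_y - lp
        if accepted:
            x, lp = y, lp_y
            accepts += 1
        # nudge toward the target acceptance rate (0.234 here)
        log_sigma += gamma0 / t ** 0.6 * ((1.0 if accepted else 0.0) - target_rate)
    return x, np.exp(log_sigma), accepts / n_iter

# Sketch run on a 10-dimensional standard normal target.
x_final, sigma, rate = adaptive_rwm(lambda u: -0.5 * u @ u, np.zeros(10))
```

The equilibrium of the adaptation is the scale at which the expected acceptance rate equals 0.234; this paper's contribution is a diffusion-free justification of that target and a characterisation of when it fails.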
Optimal scaling and diffusion limits for the Langevin algorithm in high dimensions
 Annals of Applied Probability
Cited by 16 (4 self)
Optimal Proposal Distributions and Adaptive MCMC
, 2008
Cited by 11 (1 self)
Abstract
We review recent work concerning optimal proposal scalings for Metropolis–Hastings MCMC algorithms, and adaptive MCMC algorithms for trying to improve the algorithm on the fly.