Results 1–10 of 69
Markov Chain Algorithms for Planar Lattice Structures, 1995
Cited by 107 (11 self)
Consider the following Markov chain, whose states are all domino tilings of a 2n × 2n chessboard: starting from some arbitrary tiling, pick a 2 × 2 window uniformly at random. If the four squares appearing in this window are covered by two parallel dominoes, rotate the dominoes 90° in place. Repeat many times. This process is used in practice to generate a random tiling, and is a widely used tool in the study of the combinatorics of tilings and the behavior of dimer systems in statistical physics. Analogous Markov chains are used to randomly generate other structures on various two-dimensional lattices. This paper presents techniques which prove for the first time that, in many interesting cases, a small number of random moves suffice to obtain a uniform distribution.
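The rotation chain described in this abstract is easy to state in code. The sketch below is our illustration, not code from the paper: it represents a tiling as a map from each cell to its domino partner, starts from the all-horizontal tiling, and applies the 2 × 2 rotation move; the board size and move count are arbitrary choices.

```python
import random

def initial_tiling(n):
    """All-horizontal tiling of a 2n x 2n board: partner[(r, c)] is the
    other cell covered by the same domino."""
    partner = {}
    for r in range(2 * n):
        for c in range(0, 2 * n, 2):
            partner[(r, c)] = (r, c + 1)
            partner[(r, c + 1)] = (r, c)
    return partner

def rotation_step(partner, n, rng=random):
    """One move of the chain: pick a random 2x2 window; if it holds two
    parallel dominoes, rotate the pair 90 degrees in place."""
    r = rng.randrange(2 * n - 1)
    c = rng.randrange(2 * n - 1)
    a, b, x, y = (r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)
    if partner[a] == b and partner[x] == y:      # two horizontal dominoes
        partner[a], partner[x] = x, a            # make them vertical
        partner[b], partner[y] = y, b
    elif partner[a] == x and partner[b] == y:    # two vertical dominoes
        partner[a], partner[b] = b, a            # make them horizontal
        partner[x], partner[y] = y, x
    # otherwise the window is blocked; do nothing (a "lazy" move)

n = 2
tiling = initial_tiling(n)
for _ in range(1000):
    rotation_step(tiling, n)
# the chain preserves tilings: partnering is still a fixed-point-free involution
assert all(tiling[tiling[cell]] == cell != tiling[cell] for cell in tiling)
```

Each move is its own inverse and the uniform proposal is symmetric, so the uniform distribution over tilings is stationary; the paper's contribution is bounding how fast the chain reaches it.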
Mixing times of lozenge tiling and card shuffling Markov chains, 2002
Cited by 103 (1 self)
We show how to combine Fourier analysis with coupling arguments to bound the mixing times of a variety of Markov chains. The mixing time is the number of steps a Markov chain takes to approach its equilibrium distribution. One application is to a class of Markov chains introduced by Luby, Randall, and Sinclair to generate random tilings of regions by lozenges. For an ℓ × ℓ region we bound the mixing time by O(ℓ⁴ log ℓ), which improves on the previous bound of O(ℓ⁷), and we show the new bound to be essentially tight. In another application we resolve a few questions raised by Diaconis and Saloff-Coste by lower bounding the mixing time of various card-shuffling Markov chains. Our lower bounds are within a constant factor of their upper bounds. When we use our methods to modify a path-coupling analysis of Bubley and Dyer, we obtain an O(n³ log n) upper bound on the mixing time of the Karzanov–Khachiyan Markov chain for linear extensions.
On Markov chains for independent sets. Journal of Algorithms, 1997
Cited by 82 (17 self)
Random independent sets in graphs arise, for example, in statistical physics, in the hard-core model of a gas. A new rapidly mixing Markov chain for independent sets is defined in this paper. We show that it is rapidly mixing for a wider range of values of the parameter than the Luby–Vigoda chain, the best previously known. Moreover, the new chain is apparently more rapidly mixing than the Luby–Vigoda chain for larger values of the parameter (unless the maximum degree of the graph is 4). An extension of the chain to independent sets in hypergraphs is described. This chain gives an efficient method for approximately counting the number of independent sets of hypergraphs with maximum degree two, or with maximum degree three and maximum edge size three. Finally, we describe a method which allows one, under certain circumstances, to deduce the rapid mixing of one Markov chain from the rapid mixing of another with the same state space and stationary distribution. This method is applied to two Markov ch...
Markov Chain Decomposition for Convergence Rate Analysis
Cited by 48 (9 self)
In this paper we develop tools for analyzing the rate at which a reversible Markov chain converges to stationarity. Our techniques are useful when the Markov chain can be decomposed into pieces which are themselves easier to analyze. The main theorems relate the spectral gap of the original Markov chain to the spectral gaps of the pieces. In the first case the pieces are restrictions of the Markov chain to subsets of the state space; the second case treats a Metropolis–Hastings chain whose equilibrium distribution is a weighted average of the equilibrium distributions of other Metropolis–Hastings chains on the same state space.
Fast Convergence of the Glauber Dynamics for Sampling Independent Sets: Part II, 1999
Cited by 47 (3 self)
This work is a continuation of [4]. The focus is on the problem of sampling independent sets of a graph with maximum degree Δ. The weight of each independent set is expressed in terms of a fixed positive parameter λ ≤ 2/(Δ − 2), where the weight of an independent set σ is λ^|σ|. The Glauber dynamics is a simple Markov chain Monte Carlo method for sampling from this distribution. In [4], we showed fast convergence of this dynamics for triangle-free graphs. This paper proves fast convergence for arbitrary graphs.
Rapidly Mixing Markov Chains with Applications in Computer Science and Physics, 2006
Cited by 33 (0 self)
Monte Carlo algorithms often depend on Markov chains to sample from very large data sets. A key ingredient in the design of an efficient Markov chain is determining rigorous bounds on how quickly the chain “mixes,” or converges, to its stationary distribution. This survey provides an overview of several useful techniques.
Sampling Adsorbing Staircase Walks Using a New Markov Chain Decomposition Method. In Proceedings of the 41st Annual Symposium on Foundations of Computer Science
Cited by 32 (5 self)
Staircase walks are lattice paths from (0, 0) to (2n, 0) which take diagonal steps and which never fall below the x-axis. A path hitting the x-axis k times is assigned a weight of λ^k, where λ > 0. A simple local Markov chain which connects the state space and converges to the Gibbs measure (which normalizes these weights) is known to be rapidly mixing when λ = 1, and can easily be shown to be rapidly mixing when λ < 1. We give the first proof that this Markov chain is also rapidly mixing in the more interesting case of λ > 1, known in the statistical physics community as adsorbing staircase walks. The main new ingredient is a decomposition technique which allows us to analyze the Markov chain in pieces, applying different arguments to analyze each piece.
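As a hedged illustration of the chain just described (ours, not code from the paper): represent a walk by its heights h₀, …, h_{2n} and use a Metropolis version of the local mountain/valley flip, so that the stationary distribution is proportional to λ^k. The function names and small parameters are invented for the sketch.

```python
import random

def touches(h):
    """Number of times the walk hits the x-axis after time 0."""
    return sum(1 for x in h[1:] if x == 0)

def local_step(h, lam, rng=random):
    """One Metropolis move: try to flip a peak into a valley (or back) at
    a random interior position, accepting with the ratio of Gibbs weights
    lam**(number of axis hits)."""
    n2 = len(h) - 1                      # the path has n2 = 2n steps
    i = rng.randrange(1, n2)
    if h[i - 1] != h[i + 1]:
        return                           # not a local extremum: blocked move
    new = 2 * h[i - 1] - h[i]            # reflect h[i] about its neighbors
    if new < 0:
        return                           # would fall below the x-axis
    # change in the number of axis hits caused by the flip
    dk = (1 if new == 0 else 0) - (1 if h[i] == 0 else 0)
    if rng.random() < min(1.0, lam ** dk):
        h[i] = new

# run the chain on walks of length 2n = 8 with adsorbing weight lam > 1
lam, n = 2.0, 4
h = [i if i <= n else 2 * n - i for i in range(2 * n + 1)]  # one big mountain
for _ in range(5000):
    local_step(h, lam)
assert h[0] == 0 and h[-1] == 0 and min(h) >= 0
```

The proposal is symmetric and the flip changes the touch count by at most one, so the Metropolis filter makes the weighted Gibbs measure stationary for every λ > 0; the paper's decomposition method is what bounds the mixing time when λ > 1.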
A Note on the Glauber Dynamics for Sampling Independent Sets. Electronic Journal of Combinatorics, 2001
Cited by 31 (1 self)
This note considers the problem of sampling from the set of weighted independent sets of a graph with maximum degree Δ. For a positive fugacity λ, the weight of an independent set σ is λ^|σ|. Luby and Vigoda proved that the Glauber dynamics, which only changes the configuration at a randomly chosen vertex in each step, has mixing time O(n log n) when λ < 2/(Δ − 2) for triangle-free graphs. We extend their approach to general graphs.
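The single-site Glauber dynamics for this hard-core model admits a compact heat-bath implementation. The sketch below is our illustration, not the note's code: pick a uniformly random vertex; if no neighbor is occupied, resample it occupied with probability λ/(1 + λ); the graph and fugacity are arbitrary choices.

```python
import random

def glauber_step(occupied, adj, lam, rng=random):
    """Heat-bath Glauber update for the hard-core model with fugacity lam:
    choose a vertex uniformly; if it has no occupied neighbor, occupy it
    with probability lam/(1+lam), otherwise it must stay unoccupied."""
    v = rng.randrange(len(adj))
    if any(u in occupied for u in adj[v]):
        occupied.discard(v)              # blocked vertex: forced empty
    elif rng.random() < lam / (1 + lam):
        occupied.add(v)
    else:
        occupied.discard(v)

# 4-cycle: its independent sets are {}, the singletons, {0,2}, and {1,3}
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
occ = set()
for _ in range(10000):
    glauber_step(occ, adj, lam=1.5)
    # invariant: occ is always an independent set
    assert all(u not in occ for v in occ for u in adj[v])
```

The heat-bath filter makes the distribution proportional to λ^|σ| stationary; the note's result is that this chain mixes in O(n log n) steps on general graphs when λ < 2/(Δ − 2).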
Coupling from the Past: a User's Guide, 1997
Cited by 27 (2 self)
The Markov chain Monte Carlo method is a general technique for obtaining samples from a probability distribution. In earlier work, we showed that for many applications one can modify the Markov chain Monte Carlo method so as to remove all bias in the output resulting from the biased choice of an initial state for the chain; we have called this method Coupling From The Past (CFTP). Here we describe this method in a fashion that should make our ideas accessible to researchers from diverse areas. Our expository strategy is to avoid proofs and focus on sample applications.
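As a small worked example of the CFTP idea under simple assumptions of our own choosing (a lazy reflecting random walk on {0, …, k}, whose stationary distribution is uniform): couple copies of the chain started from the top and bottom states using the same randomness over [−T, 0), and restart further in the past, reusing that randomness, until they coalesce. This monotone-CFTP recipe is our sketch, not text from the guide.

```python
import random

def cftp_lazy_walk(k, seed=0):
    """Coupling from the past for the lazy reflecting walk on {0,...,k}.
    Returns an exact sample from the stationary (uniform) distribution."""
    rng = random.Random(seed)
    moves = []                           # moves[j] drives time -(j+1)
    T = 1
    while True:
        while len(moves) < T:            # extend the randomness further back;
            moves.append(rng.choice((-1, 0, 1)))  # never regenerate old moves
        top, bottom = k, 0
        for t in range(T - 1, -1, -1):   # run from time -T up to time 0
            m = moves[t]
            top = min(k, max(0, top + m))      # same increment for both
            bottom = min(k, max(0, bottom + m))
        if top == bottom:
            return top                   # coalesced: the value is exact
        T *= 2                           # not yet: start twice as far back

samples = [cftp_lazy_walk(4, seed=s) for s in range(500)]
```

The update x ↦ clamp(x + m) is monotone in x, so the top and bottom chains sandwich every other start state; once they agree, every trajectory from time −T agrees, which is what removes the initialization bias.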
The Asymmetric One-Dimensional Constrained Ising Model: Rigorous Results, 2008
Cited by 26 (1 self)
We study a one-dimensional spin (interacting particle) system, with product Bernoulli(p) stationary distribution, in which a site can flip only when its left neighbor is in state +1. Such models have been studied in physics as simple exemplars of systems exhibiting slow relaxation. In our "East" model the natural conjecture is that the relaxation time τ(p), that is, 1/(spectral gap), satisfies log τ(p) ∼ log²(1/p)/log 2 as p ↓ 0. We prove this up to a factor of 2. The upper bound uses the Poincaré comparison argument applied to a "wave" (long-range) comparison process, which we analyze by probabilistic techniques. Such comparison arguments go back to Holley (1984, 1985). The lower bound, which atypically is not easy, involves construction and analysis of a certain "coalescing random jumps" process.
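The East dynamics itself is one line of logic: a site may be resampled only when its left neighbor is up. The sketch below is our illustration, not the paper's code; fixing a permanent +1 at the left boundary is a convention we chose so the finite chain is irreducible, and the chain length and density p are arbitrary.

```python
import random

def east_step(spins, p, rng=random):
    """One heat-bath move of the East model on a finite line: pick a site;
    if its LEFT neighbor is +1 (the kinetic constraint), resample the site
    from Bernoulli(p); otherwise the move is rejected."""
    n = len(spins)
    i = rng.randrange(1, n)              # site 0 is the frozen boundary
    if spins[i - 1] == 1:                # constraint: left neighbor is up
        spins[i] = 1 if rng.random() < p else 0

spins = [1] + [0] * 9                    # a frozen "+1" seed at the left end
for _ in range(20000):
    east_step(spins, p=0.3)
```

Because accepted moves resample a site from its unconditioned Bernoulli(p) law, the product measure is reversible for the dynamics; the slow relaxation the abstract quantifies comes entirely from the constraint, which forces up-spins to propagate rightward one facilitation at a time.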