Results 1–10 of 76
General state space Markov chains and MCMC algorithms
Probability Surveys, 2004
Cited by 190 (38 self)
Abstract: This paper surveys various results about Markov chains on general (non-countable) state spaces. It begins with an introduction to Markov chain Monte Carlo (MCMC) algorithms, which provide the motivation and context for the theory which follows. Then, sufficient conditions for geometric and uniform ergodicity are presented, along with quantitative bounds on the rate of convergence to stationarity. Many of these results are proved using direct coupling constructions based on minorisation and drift conditions. Necessary and sufficient conditions for Central Limit Theorems (CLTs) are also presented, in some cases proved via the Poisson Equation or direct regeneration constructions. Finally, optimal scaling and weak convergence results for Metropolis-Hastings algorithms are discussed. None of the results presented is new, though many of the proofs are. We also describe some Open Problems.
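As a concrete illustration of the MCMC algorithms the survey introduces, here is a minimal random-walk Metropolis-Hastings sampler in Python (an illustrative sketch, not taken from the paper; the function names and the standard-normal target are our own choices):

```python
import math
import random

def metropolis_hastings(log_target, proposal_step, x0, n_steps, seed=0):
    """Random-walk Metropolis-Hastings: a Markov chain whose stationary
    distribution is proportional to exp(log_target)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, proposal_step)  # symmetric proposal
        # Accept with probability min(1, target(y) / target(x)).
        if math.log(rng.random()) < log_target(y) - log_target(x):
            x = y
        samples.append(x)
    return samples

# Target: the standard normal density, up to an additive log-constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, 1.0, 0.0, 20000)
mean = sum(samples) / len(samples)
```

The empirical mean and variance of the chain should approach those of the standard normal target as the number of steps grows.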
Fastest mixing Markov chain on a graph
SIAM Review
Cited by 157 (16 self)
Abstract: Author names in alphabetical order. Submitted to SIAM Review, problems and techniques section. We consider a symmetric random walk on a connected graph, where each edge is labeled with the probability of transition between the two adjacent vertices. The associated Markov chain has a uniform equilibrium distribution; the rate of convergence to this distribution, i.e., the mixing rate of the Markov chain, is determined by the second largest (in magnitude) eigenvalue of the transition matrix. In this paper we address the problem of assigning probabilities to the edges of the graph in such a way as to minimize the second largest magnitude eigenvalue, i.e., the problem of finding the fastest mixing Markov chain on the graph. We show that this problem can be formulated as a convex optimization problem, which can in turn be expressed as a semidefinite program (SDP). This allows us to easily compute the (globally) fastest mixing Markov chain for any graph with a modest number of edges (say, 1000) using standard numerical methods for SDPs. Larger problems can be solved by ...
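Setting up the SDP is beyond a short snippet, but the effect of the edge-probability assignment on the mixing rate is easy to demonstrate numerically. The sketch below (our own toy example, not the paper's method) compares two symmetric weightings of the 3-vertex path graph, where leftover probability mass becomes a self-loop so the uniform distribution stays stationary:

```python
def step(dist, P):
    """One step of the chain: push the distribution through P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def tv_to_uniform(dist):
    n = len(dist)
    return 0.5 * sum(abs(p - 1.0 / n) for p in dist)

def mix(P, k):
    dist = [1.0, 0.0, 0.0]  # start concentrated at vertex 0
    for _ in range(k):
        dist = step(dist, P)
    return tv_to_uniform(dist)

# Path graph 0-1-2; each edge carries the stated transition probability.
P_half = [[0.5, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.5]]        # edges 1/2
P_third = [[2/3, 1/3, 0.0], [1/3, 1/3, 1/3], [0.0, 1/3, 2/3]]       # edges 1/3

fast = mix(P_half, 20)   # second eigenvalue magnitude 1/2
slow = mix(P_third, 20)  # second eigenvalue magnitude 2/3
```

After 20 steps the edge-weight-1/2 chain is much closer to uniform, reflecting its smaller second eigenvalue magnitude.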
On choosing and bounding probability metrics
International Statistical Review, 2002
Cited by 151 (2 self)
Abstract: When studying convergence of measures, an important issue is the choice of probability metric. We provide a summary and some new results concerning bounds among some important probability metrics/distances that are used by statisticians and probabilists. Knowledge of one metric can provide a means of deriving bounds for another in an applied problem. Considering several metrics can also provide alternative insights. We also give examples that show that rates of convergence can strongly depend on the metric chosen. Careful consideration is necessary when choosing a metric.
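One well-known bound of the kind the survey catalogues is Pinsker's inequality, TV(p, q) ≤ sqrt(KL(p‖q)/2). A quick numerical check on a pair of discrete distributions (our own example values):

```python
import math

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q), with 0 log 0 = 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

tv = total_variation(p, q)           # 0.1
pinsker = math.sqrt(kl_divergence(p, q) / 2.0)
```

Here `tv` is 0.1 and the Pinsker bound evaluates to roughly 0.112, so the inequality holds with little slack for these nearby distributions.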
Some Applications of Laplace Eigenvalues of Graphs
Graph Symmetry: Algebraic Methods and Applications, Volume 497 of NATO ASI Series C, 1997
Cited by 129 (0 self)
Abstract: In the last decade important relations between Laplace eigenvalues and eigenvectors of graphs and several other graph parameters were discovered. In these notes we present some of these results and discuss their consequences. Attention is given to the partition and isoperimetric properties of graphs, the max-cut problem and its relation to semidefinite programming, rapid mixing of Markov chains, and to extensions of the results to infinite graphs.
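The object underlying all of these results is the graph Laplacian L = D − A, whose quadratic form x^T L x equals the sum of squared differences across edges; this identity is what connects the eigenvalues to cuts and isoperimetry. A small sketch (our own example graph) verifying the identity:

```python
def laplacian(n, edges):
    """Build L = D - A for an undirected graph on n vertices."""
    L = [[0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1
        L[j][j] += 1
        L[i][j] -= 1
        L[j][i] -= 1
    return L

def quad_form(L, x):
    n = len(x)
    return sum(x[i] * L[i][j] * x[j] for i in range(n) for j in range(n))

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # small example graph
L = laplacian(4, edges)
x = [1.0, -2.0, 0.5, 3.0]
# The Laplacian quadratic form equals the sum of squared edge differences.
direct = sum((x[i] - x[j]) ** 2 for i, j in edges)
```

Every row of L sums to zero (the all-ones vector is always an eigenvector with eigenvalue 0), and `quad_form(L, x)` agrees with the direct edge-wise sum.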
Evolutionary Search for Minimal Elements in Partially Ordered Finite Sets
Evolutionary Programming VII, Proceedings of the 7th Annual Conference on Evolutionary Programming, 1998
Cited by 40 (9 self)
Abstract: The task of finding minimal elements of a partially ordered set is a generalization of the task of finding the global minimum of a real-valued function or of finding Pareto-optimal points of a multi-criteria optimization problem. It is shown that evolutionary algorithms are able to converge to the set of minimal elements in finite time with probability one, provided that the search space is finite, the time-invariant variation operator is associated with a positive transition probability function, and that the selection operator obeys the so-called 'elite preservation strategy'.
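The two sufficient conditions, positive transition probability for variation and elite preservation for selection, can be sketched on a toy Pareto-ordered finite set (our own construction, not the paper's algorithm):

```python
import random

def dominates(a, b):
    """a strictly precedes b in the componentwise (Pareto) partial order."""
    return a != b and all(x <= y for x, y in zip(a, b))

def minimal_elements(points):
    return {p for p in points if not any(dominates(q, p) for q in points)}

def evolve(points, generations=200, pop_size=4, seed=1):
    """Toy elite-preserving evolutionary search: variation draws uniformly
    from the finite search space (so every transition has positive
    probability), and selection never discards a non-dominated element."""
    rng = random.Random(seed)
    archive = set()
    for _ in range(generations):
        pop = [rng.choice(points) for _ in range(pop_size)]  # blind variation
        archive |= set(pop)
        # Elite preservation: keep exactly the currently non-dominated elements.
        archive = {p for p in archive if not any(dominates(q, p) for q in archive)}
    return archive

points = [(1, 4), (2, 2), (4, 1), (3, 3), (2, 5), (5, 5)]
found = evolve(points)
```

Because every point is eventually sampled and minimal elements are never discarded, the archive converges to the full set of minimal elements, here {(1, 4), (2, 2), (4, 1)}.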
Language Evolution by Iterated Learning With Bayesian Agents
2007
Cited by 39 (8 self)
Abstract: Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute a posterior distribution over languages by combining a prior (representing their inductive biases) with the evidence provided by linguistic data. We show that when learners sample languages from this posterior distribution, iterated learning converges to a distribution over languages that is determined entirely by the prior. Under these conditions, iterated learning is a form of Gibbs sampling, a widely used Markov chain Monte Carlo algorithm. The consequences of iterated learning are more complicated when learners choose the language with maximum posterior probability, being affected by both the prior of the learners and the amount of information transmitted between generations. We show that in this case, iterated learning corresponds to another statistical inference algorithm, a variant of the expectation-maximization (EM) algorithm. These results clarify the role of iterated learning in explanations of linguistic universals and provide a formal connection between constraints on language acquisition and the languages that come to be spoken, suggesting that information transmitted via iterated learning will ultimately come to mirror the minds of the learners.
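The convergence-to-the-prior result for sampling learners is easy to reproduce in a two-language toy model (our own parameterization: languages are biased-coin "utterance" distributions, and `thetas` and the prior are hypothetical values, not the paper's):

```python
import random

def iterate_learning(prior, thetas, m, generations, seed=0):
    """Iterated learning with Bayesian 'sampler' agents: each learner sees m
    binary utterances from the previous speaker and samples a new language
    from its posterior. The chain over languages is a Gibbs sampler whose
    stationary distribution over languages is exactly the prior."""
    rng = random.Random(seed)
    h = 0
    counts = [0, 0]
    for _ in range(generations):
        k = sum(rng.random() < thetas[h] for _ in range(m))  # observed data
        w = [prior[i] * thetas[i] ** k * (1 - thetas[i]) ** (m - k)
             for i in range(2)]                              # posterior weights
        h = 0 if rng.random() < w[0] / (w[0] + w[1]) else 1  # sample posterior
        counts[h] += 1
    return [c / generations for c in counts]

freq = iterate_learning(prior=[0.7, 0.3], thetas=[0.2, 0.8], m=5,
                        generations=50000)
```

The long-run fraction of generations speaking each language matches the prior (0.7, 0.3) regardless of the likelihood parameters, which is the paper's central claim for sampling learners.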
On the cover time and mixing time of random geometric graphs
Theoretical Computer Science, 2007
Cited by 39 (2 self)
Abstract: The cover time and mixing time of graphs have much relevance to algorithmic applications and have been extensively investigated. Recently, with the advent of ad-hoc and sensor networks, an interesting class of random graphs, namely random geometric graphs, has gained new relevance and its properties have been the subject of much study. A random geometric graph G(n, r) is obtained by placing n points uniformly at random on the unit square and connecting two points iff their Euclidean distance is at most r. The phase transition behavior with respect to the radius r of such graphs has been of special interest. We show that there exists a critical radius r_opt such that for any r ≥ r_opt, G(n, r) has optimal cover time of Θ(n log n) with high probability, and, importantly, r_opt = Θ(r_con), where r_con denotes the critical radius guaranteeing asymptotic connectivity. Moreover, since a disconnected graph has infinite cover time, there is a phase transition and the corresponding threshold width is O(r_con). On the other hand, the radius required for rapid mixing is r_rapid = ω(r_con), and, in particular, r_rapid = Θ(1/poly(log n)). We obtain our results by giving tight bounds on the electrical resistance and conductance of G(n, r) via certain constructed flows.
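The G(n, r) model itself is simple to construct. A sketch (our own code; the constant multiplying the connectivity radius is an illustrative choice, well above the threshold, so the sampled graph is connected with overwhelming probability):

```python
import math
import random

def random_geometric_graph(n, r, seed=0):
    """G(n, r): n uniform points in the unit square, with an edge between
    any two points at Euclidean distance at most r."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= r:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def is_connected(adj):
    """Depth-first search from vertex 0."""
    seen, stack = {0}, [0]
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == len(adj)

# Around r_con ~ sqrt(log n / (pi n)) the graph becomes connected w.h.p.;
# sampling well above that threshold gives a connected instance.
n = 200
r_con = math.sqrt(math.log(n) / (math.pi * n))
dense = random_geometric_graph(n, 4 * r_con)
```

Shrinking the radius toward and below `r_con` makes `is_connected` start failing, which is the phase transition the paper studies.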
Stochastic Sampling Algorithms for State Estimation of Jump Markov Linear Systems
IEEE Transactions on Automatic Control, 2000
Cited by 33 (2 self)
Abstract: Jump Markov linear systems are linear systems whose parameters evolve with time according to a finite-state Markov chain. Given a set of observations, our aim is to estimate the states of the finite-state Markov chain and the continuous (in space) states of the linear system. The computational cost of computing conditional mean or maximum a posteriori (MAP) state estimates of the Markov chain or the state of the jump Markov linear system grows exponentially in the number of observations.
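A scalar forward simulation makes the model concrete (our own illustrative parameters; the estimation algorithms themselves are the paper's subject and are not sketched here):

```python
import random

def simulate_jmls(A_modes, q_stds, trans, T, seed=0):
    """Simulate a scalar jump Markov linear system:
    x_{t+1} = A[m_t] * x_t + w_t, where the mode m_t follows a two-state
    Markov chain with transition matrix `trans` and w_t is Gaussian noise
    whose scale depends on the current mode."""
    rng = random.Random(seed)
    m, x = 0, 0.0
    modes, states = [], []
    for _ in range(T):
        x = A_modes[m] * x + rng.gauss(0.0, q_stds[m])
        modes.append(m)
        states.append(x)
        m = 0 if rng.random() < trans[m][0] else 1  # mode transition
    return modes, states

# Two modes: slow decay with small noise vs. fast decay with larger noise,
# and sticky mode switching.
modes, states = simulate_jmls(A_modes=[0.95, 0.5], q_stds=[0.1, 0.3],
                              trans=[[0.95, 0.05], [0.10, 0.90]], T=1000)
```

Recovering `modes` from noisy observations of `states` is exactly the estimation problem whose exact solution costs exponential time, motivating the paper's stochastic sampling algorithms.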
Possible biases induced by MCMC convergence diagnostics
1997
Cited by 25 (2 self)
Abstract: This paper is organised as follows. In Section 2, we present an oversimplified version of a convergence diagnostic, and study analytically its performance on certain simple Markov chains. We restrict ourselves primarily to chains which in fact produce i.i.d. samples from ...
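For flavor, here is one oversimplified diagnostic of the general kind such papers analyze: a Geweke-style comparison of the means of two halves of a chain, scaled by a naive i.i.d. standard error (our own sketch, not the diagnostic studied in the paper):

```python
import math
import random
from statistics import fmean, variance

def naive_diagnostic(chain):
    """Oversimplified convergence check: z-score for the difference between
    the first- and second-half means, using naive i.i.d. standard errors.
    A small |z| is (naively) read as evidence of convergence."""
    h = len(chain) // 2
    a, b = chain[:h], chain[h:]
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (fmean(a) - fmean(b)) / se

rng = random.Random(0)
iid_chain = [rng.gauss(0.0, 1.0) for _ in range(4000)]
z = naive_diagnostic(iid_chain)  # approximately standard normal for i.i.d. input
```

On a genuinely i.i.d. chain the statistic behaves like a standard normal; on an autocorrelated chain the naive standard error is too small, which is one source of the biases such diagnostics can induce.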