Results 1 - 4 of 4

### On the relationships between lumpability and filtering of finite stochastic systems

, 2008

The aim of this paper is to provide the conditions necessary to reduce the complexity of state filtering for finite stochastic systems (FSSs). A concept of lumpability for FSSs is introduced. This paper asserts that the unnormalised filter for a lumped FSS has linear dynamics. Two sufficient conditions for such a lumpability property to hold are discussed. It is shown that the first condition is also necessary for the lumped FSS to have a linear dynamics. Next, it is proven that the second condition allows the filter of the original FSS to be directly obtained from the filter for the lumped FSS. Finally, the paper generalises an earlier published result for the approximation of a general FSS by a lumpable one.
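The paper's lumpability notion is defined for FSSs, but the classical (strong) lumpability condition for Markov chains conveys the flavour: a partition of the state space is lumpable when, for every pair of blocks, the probability of jumping into one block is the same from every state of the other block. A minimal sketch assuming that condition (the matrix `P`, the partition, and the helper name `lump` are illustrative, not from the paper):

```python
import numpy as np

# Transition matrix of a 4-state chain; {0, 1} and {2, 3} are candidate blocks.
P = np.array([
    [0.5, 0.2, 0.2, 0.1],
    [0.3, 0.4, 0.1, 0.2],
    [0.1, 0.2, 0.3, 0.4],
    [0.2, 0.1, 0.5, 0.2],
])
partition = [[0, 1], [2, 3]]

def lump(P, partition):
    """Return the lumped transition matrix if the chain is strongly
    lumpable with respect to the partition, else None."""
    k = len(partition)
    Q = np.zeros((k, k))
    for b, B in enumerate(partition):
        for c, C in enumerate(partition):
            sums = P[np.ix_(B, C)].sum(axis=1)  # mass sent from each i in B into C
            if not np.allclose(sums, sums[0]):  # must agree for all states of B
                return None
            Q[b, c] = sums[0]
    return Q

Q = lump(P, partition)  # here the condition holds, so Q is a 2x2 stochastic matrix
```

When the condition fails for some pair of blocks, `lump` returns `None`; when it holds, the aggregated process on the blocks is itself Markov with transition matrix `Q`.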

### Stationary Analysis of Markov Chains

, 2004

Markov Chains (Under the direction of William J. Stewart). With existing numerical methods, the computation of stationary distributions for large Markov chains is still time-consuming, a direct result of the state explosion problem. In this thesis, we introduce a rank reduction method for computing stationary distributions of Markov chains for which low-rank iteration matrices can be formed. We first prove that, for an irreducible Markov chain, a necessary and sufficient condition for convergence in a single iteration is that the iteration matrix have rank one. Since most iteration matrices have rank larger than one, we also consider the Wedderburn rank-1 reduction formula and develop a rank reduction procedure that takes an initial iteration matrix with rank greater than one and modifies it in successive steps, under the constraint that the exact solution be preserved at each step, until a rank-1 iteration matrix is obtained. When the iteration matrix has rank r, the proposed algorithm has time complexity O(r²n). Secondly, we investigate the relationships among lumpability, weak lumpability, quasi-lumpability and near complete decomposability. These concepts are important in aggregating and disaggregating Markov chains. White's algorithm for identifying all possible lumpable partitions for Markov chains
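The Wedderburn rank-1 reduction formula referred to above states that, for vectors x, y with yᵀAx ≠ 0, the matrix A − (Ax)(yᵀA)/(yᵀAx) has rank exactly rank(A) − 1. A small numerical sketch of one such step (the random test matrices are my own illustration; the thesis's full procedure, which also preserves the exact solution at each step, is not reproduced here):

```python
import numpy as np

def wedderburn_step(A, x, y):
    """One Wedderburn rank-1 reduction: subtract a rank-1 term so that
    the result has rank exactly rank(A) - 1 (requires y @ A @ x != 0)."""
    s = y @ A @ x
    if abs(s) < 1e-12:
        raise ValueError("y^T A x must be nonzero")
    return A - np.outer(A @ x, y @ A) / s

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 5))  # rank-3 5x5 matrix
x = rng.standard_normal(5)
y = rng.standard_normal(5)
B = wedderburn_step(A, x, y)  # rank drops from 3 to 2
```

Applying such steps repeatedly, with x and y chosen so that yᵀAx ≠ 0 and the constraint on the solution is maintained, drives the iteration matrix down to rank one.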

### Spectral gap and cut-off in Markov chains

, 1999

In this paper we consider an example of a family of Markov chains with a spectral gap and show that it exhibits O(n) cut-off in the sense of Diaconis. We argue that this, and bandedness, is what lies at the heart of the O(n) phase transition in the k-SAT problem.

1 Introduction. Consider a collection {Aₙ} of n × n transition matrices corresponding to a family of Markov chains with n states, n = 1, 2, .... Assume that each of these has a stationary distribution πₙ (the eigenvector of Aₙ corresponding to the eigenvalue 1). Let f(n) be an increasing function of n and, for an arbitrary vector νₙ, assume that the limit

g(c) = lim_{n→∞} ‖νₙ Aₙ^{c·f(n)} − πₙ‖ (1.1)

exists for all c > 0. If g(c) = ρ ≠ 0 for c < c₀ and g(c) = 0 for c > c₀, we say that the family of Markov chains has a cut-off of order f(n) with critical value c₀. For example, in riffle shuffling of cards [2, 3], in which case πₙ[k] = 1/n, k = 1, ..., n, the cut-off is O(log n) with c₀ = 3/2. F...
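The distance appearing in (1.1) is easy to illustrate on a toy chain. The following sketch (my own example, not taken from the paper) tracks ‖νAᵗ − π‖₁ for a lazy random walk on a cycle and computes the spectral gap that controls its rate of decay:

```python
import numpy as np

# Lazy random walk on a cycle of n states: stay with prob. 1/2,
# step to either neighbour with prob. 1/4 each.
n = 8
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25

pi = np.full(n, 1.0 / n)        # uniform stationary distribution
nu = np.zeros(n); nu[0] = 1.0   # start concentrated at state 0

# d(t) = || nu P^t - pi ||_1, non-increasing in t for a stochastic P.
dist = []
v = nu.copy()
for t in range(60):
    dist.append(np.abs(v - pi).sum())
    v = v @ P

# The decay rate is governed by the spectral gap 1 - |lambda_2|.
gap = 1 - np.sort(np.abs(np.linalg.eigvals(P)))[-2]
```

For this chain the gap is 1 − (1/2 + cos(π/4)/2) ≈ 0.146, and d(t) shrinks geometrically at roughly that rate; a cut-off family is one where, after rescaling time by f(n), this decay collapses to a step function as n grows.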