Results 11–20 of 20
The Key Renewal Theorem for a Transient Markov Chain
Abstract

Cited by 2 (1 self)
We consider a time-homogeneous Markov chain Xn, n ≥ 0, valued in R. Suppose that this chain is transient, that is, Xn generates a σ-finite renewal measure. We prove the key renewal theorem under the condition that this chain has jumps that are asymptotically homogeneous at infinity and an asymptotically positive drift.
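The statement can be probed numerically: for a transient chain the renewal measure U(B) = ∑_{n≥0} P(Xn ∈ B) is σ-finite, and the key renewal theorem predicts U([t, t+h)) → h/μ as t → ∞, where μ is the asymptotic drift. A minimal Monte Carlo sketch in Python, using a plain random walk with Gaussian increments as the simplest such chain (all parameters are illustrative, not taken from the paper):

```python
import random

def renewal_mass(t, h, n_paths=5000, n_steps=200, mu=0.5):
    """Monte Carlo estimate of the renewal measure U([t, t+h)) = E sum_n 1{X_n in [t, t+h)}
    for the random walk X_n = X_{n-1} + xi_n, xi_n ~ N(mu, 1), a simple transient chain."""
    total = 0
    for _ in range(n_paths):
        x = 0.0
        for _ in range(n_steps):
            x += random.gauss(mu, 1.0)   # increment with positive drift mu
            if t <= x < t + h:
                total += 1                # count visits to [t, t+h)
    return total / n_paths

# Key renewal theorem prediction for large t: U([t, t+h)) ~ h / mu
# (here h = 1, mu = 0.5, so the estimate should approach 2).
```

With t = 40 and n_steps = 200 the walk comfortably crosses the window, and the estimate settles near h/μ = 2, matching the Blackwell-type limit the abstract refers to.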
Heavy tailed solutions of multivariate smoothing transforms. arXiv:1206.1709
, 2013
A Reversal Argument for Storage Models Defined
Markov Additive Processes and Reflecting Brownian Motion in a Cone
Abstract
After applying a certain space and time transformation, a (semimartingale) reflecting Brownian motion without drift in a cone, whose reflection directions are radially homogeneous, becomes a Markov additive process. This observation is a simple manifestation of the invariance of such processes under a scaling. Markov additive processes are familiar in queueing theory, especially in Matrix Analytic Methods. The answers to some important questions about reflecting Brownian motion may be guessed by analogy with well-known results in Matrix Analytic Methods.
Münster: On the Markov Renewal Theorem (Corrected version)
Abstract
Let (S, S) be a measurable space with countably generated σ-field S and (Mn, Xn)n≥0 a Markov chain with state space S × IR and transition kernel IP: S × (S ⊗ B) → [0, 1]. Then (Mn, Sn)n≥0, where Sn = X0 + ... + Xn for n ≥ 0, is called the associated Markov random walk. Markov renewal theory deals with the asymptotic behavior of suitable functionals of (Mn, Sn)n≥0, like the Markov renewal measure ∑_{n≥0} P((Mn, Sn) ∈ A × (t + B)) as t → ∞, where A ∈ S and B denotes a Borel subset of IR. It is shown that the Markov renewal theorem as well as a related ergodic theorem for semi-Markov processes hold true if only Harris recurrence of (Mn)n≥0 is assumed. This was proved by purely analytical methods by Shurenkov [16] in the one-sided case where IP(x, [0, ∞)) = 1 for all x ∈ S. Our proof uses probabilistic arguments, notably the construction of regeneration epochs for (Mn)n≥0 such that (Mn, Xn)n≥0 is at least nearly regenerative, and an extension of Blackwell’s renewal theorem to certain random walks with stationary, 1-dependent increments.
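The Markov renewal measure above can also be illustrated by simulation. Under Harris recurrence with stationary law π and mean drift μ = ∑_j π(j) · (mean increment in state j), the Markov renewal theorem gives ∑_{n≥0} P(Mn = j, Sn ∈ [t, t+h)) → π(j) h / μ. A hedged Python sketch with a hypothetical two-state driving chain (the kernel P and the state-dependent drifts below are invented for illustration, not from the paper):

```python
import random

# Hypothetical two-state driving chain and state-dependent mean increments.
P = [[0.7, 0.3], [0.4, 0.6]]   # transition matrix; stationary law pi = (4/7, 3/7)
drift = [0.3, 0.8]             # mean increment while in state 0 resp. state 1
# Stationary mean drift: mu = (4/7)*0.3 + (3/7)*0.8 = 3.6/7 ~ 0.514.

def markov_renewal_mass(t, h, target_state, n_paths=5000, n_steps=300):
    """Estimate the Markov renewal measure sum_n P(M_n = j, S_n in [t, t+h))
    for the Markov random walk (M_n, S_n) driven by the chain above."""
    hits = 0
    for _ in range(n_paths):
        m, s = 0, 0.0
        for _ in range(n_steps):
            m = 0 if random.random() < P[m][0] else 1   # step the driving chain
            s += random.gauss(drift[m], 1.0)            # Markov-modulated increment
            if m == target_state and t <= s < t + h:
                hits += 1
    return hits / n_paths

# Markov renewal theorem prediction for large t: pi(j) * h / mu
# (for j = 0, h = 1: (4/7) / (3.6/7) = 4/3.6 ~ 1.11).
```

The estimate for target_state = 0 at, say, t = 60 settles near π(0)/μ ≈ 1.11, the limit predicted by the theorem in the nonlattice case.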
The Markov Renewal Theorem and Related Results
Abstract
We give a new probabilistic proof of the Markov renewal theorem for Markov random walks with positive drift and Harris recurrent driving chain. It forms an alternative to the one recently given in [1] and follows more closely the probabilistic proofs provided for Blackwell’s theorem in the literature by making use of ladder variables, the stationary Markov delay distribution and a coupling argument. A major advantage is that the arguments can be refined to yield convergence rate results.
Power Laws on Weighted Branching Trees
Abstract
Consider distributional fixed-point equations of the form R =D f(Q, Ci, Ri, 1 ≤ i ≤ N), where f(·) is a possibly random real-valued function, N ∈ {0, 1, 2, 3, ...} ∪ {∞}, {Ci}i∈N are real-valued random weights and {Ri}i∈N are iid copies of R, independent of (Q, N, C1, C2, ...); =D represents equality in distribution. Fixed-point equations of this type are important for solving many applied probability problems, ranging from the average-case analysis of algorithms to statistical physics. In this paper we present some of our recent work from [26, 27, 28, 36] that studies the power tail asymptotics of such solutions. We exemplify our techniques primarily on the non-homogeneous equation R =D ∑_{i=1}^N Ci Ri + Q, for which the power tail of the solution, P(R > t), can be determined by three different factors: the multiplicative effect of the weights Ci; the sum of the weights ∑Ci; and the innovation variable Q.
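A quick way to see the fixed point of the non-homogeneous equation is population iteration: keep a pool of samples of R, apply the map ∑ Ci Ri + Q with fresh weights and innovations, and repeat. The parameters below are hypothetical and deliberately chosen in the contractive regime E[∑Ci] < 1, so the mean converges to E[Q] / (1 − E[∑Ci]); they are for illustration of the recursion only and do not produce the heavy-tailed regime analyzed in the paper.

```python
import random

def iterate_smoothing(n_samples=8000, n_iter=12):
    """Population iteration R_{k+1} =D sum_{i<=N} C_i R_{k,i} + Q for the
    non-homogeneous smoothing transform. Illustrative parameters:
    N uniform on {1, 2}, C_i ~ U(0, 0.9), Q ~ Exp(1).
    Then E[sum C_i] = E[N] E[C] = 1.5 * 0.45 = 0.675 < 1 (contractive),
    so E[R] = E[Q] / (1 - 0.675) = 1 / 0.325 ~ 3.08."""
    pool = [0.0] * n_samples                 # start the iteration from R_0 = 0
    for _ in range(n_iter):
        new_pool = []
        for _ in range(n_samples):
            n_children = random.randint(1, 2)
            s = sum(random.uniform(0.0, 0.9) * random.choice(pool)
                    for _ in range(n_children))      # sum of C_i * R_i
            new_pool.append(s + random.expovariate(1.0))  # add innovation Q
        pool = new_pool
    return pool
```

Running the iteration and averaging the final pool recovers the fixed-point mean E[Q]/(1 − E[∑Ci]) ≈ 3.08; in the paper's regime of interest the weights instead satisfy a Cramér-type condition and P(R > t) decays as a power law.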