Results 1–10 of 51,936
CONSTRUCTING STRONG MARKOV PROCESSES
Abstract: "Dedicated to the memory of Lynda Singshinsuk. The construction presented in this paper can be briefly described as follows: starting from any “finite-dimensional” Markov transition function p_t on a measurable state space (E, B), we construct a strong Markov process on a certain “intrinsic” ..."
P-VARIATION OF STRONG MARKOV PROCESSES
2004
Cited by 5 (0 self)
Abstract: "Let ξ_t, t ∈ [0, T], be a strong Markov process with values in a complete separable metric space (X, ρ) and with transition probability function P_{s,t}(x, dy), 0 ≤ s ≤ t ≤ T, x ∈ X. For any h ∈ [0, T] and a > 0, consider the function α(h, a) = sup{P_{s,t}(x, {y : ρ(x, y) ≥ a}) : x ∈ X, 0 ≤ s ≤ t ≤ (s + h) ∧ T}. ..."
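The function α(h, a) above is the worst-case probability that the process moves at least distance a within a time window of length h. A minimal discrete-time sketch of the same quantity for a finite chain (the three-state chain, the `alpha` helper, and the metric are illustrative, not from the paper):

```python
def alpha(P, states, dist, a, h):
    """Discrete analogue of alpha(h, a) for a time-homogeneous finite chain:
    the largest probability, over start states x and horizons t <= h, of
    being at distance >= a from x after t steps."""
    worst = 0.0
    for x in states:
        # p holds the t-step transition distribution P^t(x, .)
        p = {s: (1.0 if s == x else 0.0) for s in states}
        for _ in range(h):
            p = {s: sum(p[r] * P[r][s] for r in states) for s in states}
            far = sum(q for s, q in p.items() if dist(x, s) >= a)
            worst = max(worst, far)
    return worst

# Hypothetical three-state chain on {0, 1, 2} with nearest-neighbour moves.
states = [0, 1, 2]
P = {0: {0: 0.5, 1: 0.5, 2: 0.0},
     1: {0: 0.25, 1: 0.5, 2: 0.25},
     2: {0: 0.0, 1: 0.5, 2: 0.5}}
print(alpha(P, states, lambda x, y: abs(x - y), a=2, h=1))  # prints 0.0
```

With h = 1 no single step can jump distance 2, so α is 0; with h = 2 the two-step path 0 → 1 → 2 makes it positive, matching the intuition that α(h, a) → 0 as h → 0 for well-behaved processes.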
Subgeometric ergodicity of strong Markov processes
ANN. APPL. PROBAB., 2005
Cited by 15 (1 self)
Abstract: "We derive sufficient conditions for subgeometric f-ergodicity of strongly Markovian processes. We first propose a criterion based on a modulated moment of some delayed return time to a petite set. We then formulate a criterion for polynomial f-ergodicity in terms of a drift condition on the generator."
Excision of a Strong Markov Process
© Springer-Verlag 1972
Abstract: "Let (Ω, ℱ, ℱ_t, X_t, θ_t, P^x) be a strong Markov process on a locally compact space (E_Δ, ℰ_Δ) with countable base, where Δ denotes the usual adjoined absorbing point and ℰ_Δ the Borel sets of E_Δ. The definitions and notation follow those of [1]. In particular, ℱ_t is complete in ℱ relative to the family ..."
Coupled hidden Markov models for complex action recognition
1996
Cited by 501 (22 self)
Abstract: "We present algorithms for coupling and training hidden Markov models (HMMs) to model interacting processes, and demonstrate their superiority to conventional HMMs in a vision task classifying two-handed actions. HMMs are perhaps the most successful framework in perceptual computing for modeling and ..."
Max-margin Markov networks
2003
Cited by 604 (15 self)
Abstract: "In typical classification tasks, we seek a function which assigns a label to a single object. Kernel-based approaches, such as support vector machines (SVMs), which maximize the margin of confidence of the classifier, are the method of choice for many such tasks. Their popularity stems both from the ... independently to each object, losing much useful information. Conversely, probabilistic graphical models, such as Markov networks, can represent correlations between labels by exploiting problem structure, but cannot handle high-dimensional feature spaces, and lack strong theoretical generalization guarantees."
The Infinite Hidden Markov Model
Machine Learning, 2002
Cited by 637 (41 self)
Abstract: "We show that it is possible to extend hidden Markov models to have a countably infinite number of hidden states. By using the theory of Dirichlet processes we can implicitly integrate out the infinitely many transition parameters, leaving only three hyperparameters which can be learned from data. ..."
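The Dirichlet-process machinery this abstract relies on is commonly made concrete through the stick-breaking construction, in which an infinite set of mixing weights is generated by repeatedly breaking off Beta-distributed fractions of the remaining probability mass. A minimal truncated sketch under that assumption (the function and its parameters are illustrative, not the paper's algorithm):

```python
import random

def stick_breaking(alpha, n_sticks, rng=random.Random(0)):
    """Truncated stick-breaking sample of Dirichlet-process weights:
    beta_k ~ Beta(1, alpha), w_k = beta_k * prod_{j<k} (1 - beta_j)."""
    weights, remaining = [], 1.0
    for _ in range(n_sticks):
        beta = rng.betavariate(1.0, alpha)
        weights.append(remaining * beta)
        remaining *= 1.0 - beta
    return weights

w = stick_breaking(alpha=2.0, n_sticks=50)
print(sum(w))  # close to 1; the truncation leaves a small remainder
```

Larger concentration `alpha` spreads mass over more sticks, which is how a single hyperparameter can govern an effectively unbounded number of hidden states.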
Markov chain sampling methods for Dirichlet process mixture models
JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 2000
A tutorial on hidden Markov models and selected applications in speech recognition
PROCEEDINGS OF THE IEEE, 1989
Cited by 5892 (1 self)
Abstract: "Although initially introduced and studied in the late 1960s and early 1970s, statistical methods of Markov source or hidden Markov modeling have become increasingly popular in the last several years. There are two strong reasons why this has occurred. First, the models are very rich in mathematical structure ..."
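One of the core computations in HMM modeling of this kind is evaluating the likelihood of an observation sequence, standardly done with the forward algorithm. A minimal sketch (the two-state model and all its numbers are made up for illustration, not taken from the tutorial):

```python
def forward(pi, A, B, obs):
    """Forward algorithm: probability of an observation sequence under an HMM.
    pi[i] initial probs, A[i][j] transition probs, B[i][o] emission probs."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# Hypothetical two-state model with binary observations.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]   # rows: hidden states, cols: observations 0/1
print(forward(pi, A, B, [0, 1, 0]))
```

The recursion runs in O(T·N²) time, avoiding the exponential blow-up of summing over all hidden-state paths explicitly.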
Markov games as a framework for multi-agent reinforcement learning
IN PROCEEDINGS OF THE ELEVENTH INTERNATIONAL CONFERENCE ON MACHINE LEARNING, 1994
Cited by 601 (13 self)
Abstract: "In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsistic view, secondary agents can only be part of the environment and are therefore fixed in their behavior ..."
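The single-agent MDP formalization described in this abstract is typically solved by dynamic programming. A minimal value-iteration sketch over a hypothetical two-state MDP (the transition and reward numbers are illustrative, and this is the baseline formalism, not the paper's Markov-game algorithm):

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a finite MDP.
    P[s][a] is a list of (prob, next_state) pairs; R[s][a] is expected reward."""
    n = len(P)
    V = [0.0] * n
    while True:
        V_new = [max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                     for a in range(len(P[s])))
                 for s in range(n)]
        if max(abs(x - y) for x, y in zip(V, V_new)) < tol:
            return V_new
        V = V_new

# Toy MDP: from state 0, action 0 stays put (reward 0) and action 1 moves to
# state 1; state 1 is absorbing and pays reward 1 per step.
P = [[[(1.0, 0)], [(1.0, 1)]],
     [[(1.0, 1)], [(1.0, 1)]]]
R = [[0.0, 0.0], [1.0, 1.0]]
V = value_iteration(P, R)
print(V)  # state 1 is worth about 1/(1 - 0.9) = 10
```

The paper's point of departure is that with multiple adaptive agents this fixed-environment assumption breaks down, motivating the Markov-game generalization.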