Results 11–20 of 263
Structured threshold policies for dynamic sensor scheduling–a partially observed Markov decision process approach
 IEEE Transactions on Signal Processing
"... Abstract—We consider the optimal sensor scheduling problem formulated as a partially observed Markov decision process (POMDP). Due to operational constraints, at each time instant, the scheduler can dynamically select one out of a finite number of sensors and record a noisy measurement of an underly ..."
Abstract

Cited by 21 (5 self)
Abstract—We consider the optimal sensor scheduling problem formulated as a partially observed Markov decision process (POMDP). Due to operational constraints, at each time instant the scheduler can dynamically select one out of a finite number of sensors and record a noisy measurement of an underlying Markov chain. The aim is to compute the optimal measurement scheduling policy so as to minimize a cost function comprising estimation errors and measurement costs. The formulation results in a nonstandard POMDP that is nonlinear in the information state. We give sufficient conditions on the cost function, the dynamics of the Markov chain, and the observation probabilities so that the optimal scheduling policy has a threshold structure with respect to a monotone likelihood ratio (MLR) ordering. As a result, the optimal scheduling policy is inexpensive to implement. We then present stochastic approximation algorithms for estimating the best linear MLR-order threshold policy. Index Terms—Bayesian filtering, monotone likelihood ratio (MLR) ordering, partially observed Markov decision processes (POMDPs), sensor scheduling, stochastic approximation algorithms, stochastic dynamic programming, threshold policies.
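To illustrate the kind of structure the abstract describes, the sketch below runs one step of an HMM filter and applies a linear threshold policy on the belief (information state). For a two-state chain, MLR ordering of beliefs reduces to ordering the scalar pi[1], so a linear threshold in pi acts as a threshold in the likelihood ratio. All parameters (`P`, `B`, `w`) are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of a linear threshold sensor-scheduling policy on the
# belief simplex. Toy parameters; not the paper's algorithm or data.
import numpy as np

def belief_update(pi, P, B_sensor, obs):
    """One HMM filter step: predict with transition matrix P, then correct
    with the chosen sensor's observation likelihoods B_sensor[:, obs]."""
    pred = P.T @ pi
    post = pred * B_sensor[:, obs]
    return post / post.sum()

def threshold_policy(pi, w):
    """Select sensor 1 (accurate, costly) iff w . pi >= 0, else sensor 0.
    For a 2-state chain this linear threshold in pi is equivalent to a
    threshold in the likelihood ratio pi[1]/pi[0]."""
    return 1 if float(w @ pi) >= 0.0 else 0

P = np.array([[0.9, 0.1], [0.2, 0.8]])        # Markov chain transitions
B = [np.array([[0.6, 0.4], [0.4, 0.6]]),      # sensor 0: noisy, cheap
     np.array([[0.9, 0.1], [0.1, 0.9]])]      # sensor 1: accurate, costly
w = np.array([-0.5, 0.5])                     # threshold at pi[1] >= 0.5

pi = np.array([0.5, 0.5])
a = threshold_policy(pi, w)                   # schedule a sensor
pi = belief_update(pi, P, B[a], obs=0)        # filter with its measurement
```

The point of the threshold structure is that, instead of solving a dynamic program over the whole simplex, the scheduler only evaluates one inner product per time step.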
hybrid Markov/semi-Markov chains
 Computational Statistics and Data Analysis
, 2005
"... Models that combine Markovian states with implicit geometric state occupancy distributions and semiMarkovian states with explicit state occupancy distributions, are investigated. This type of model retains the flexibility of hidden semiMarkov chains for the modeling of short or medium size homogen ..."
Abstract

Cited by 20 (4 self)
Models that combine Markovian states, with implicit geometric state occupancy distributions, and semi-Markovian states, with explicit state occupancy distributions, are investigated. This type of model retains the flexibility of hidden semi-Markov chains for modeling short or medium-size homogeneous zones along sequences, but also enables the modeling of long zones with Markovian states. The forward-backward algorithm, which in particular enables efficient implementation of the E-step of the EM algorithm, and the Viterbi algorithm for restoring the most likely state sequence are derived. It is also shown that macro-states, i.e. series-parallel networks of states with a common observation distribution, are not a valid alternative to semi-Markovian states, but may be useful at a more macroscopic level to combine Markovian states with semi-Markovian states. This statistical modeling approach is illustrated by the analysis of branching and flowering patterns in plants.
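The distinction between the two state types can be made concrete by sampling from such a hybrid chain: a Markovian state gets a geometric occupancy implicitly through its self-transition probability, while a semi-Markovian state draws an explicit occupancy before leaving. The two-state setup and all parameters below are invented for illustration.

```python
# Sketch: sampling a state path from a hybrid Markov/semi-Markov chain.
# Toy two-state example; parameters are illustrative, not from the paper.
import random

random.seed(0)

# state 0: Markovian, self-transition 0.7 => implicit geometric occupancy
# state 1: semi-Markovian, explicit occupancy uniform on {3, 4, 5}
SELF = {0: 0.7}
OCCUPANCY = {1: lambda: random.choice([3, 4, 5])}

def sample_path(n, state=0):
    path = []
    while len(path) < n:
        if state in SELF:                       # Markovian state
            path.append(state)
            if random.random() >= SELF[state]:  # leave with prob 1 - SELF
                state = 1 - state
        else:                                   # semi-Markovian state
            d = OCCUPANCY[state]()              # draw explicit occupancy
            path.extend([state] * d)
            state = 1 - state
    return path[:n]

path = sample_path(30)
```

The hybrid design matters because long zones would force a semi-Markov model to evaluate very long explicit occupancies, whereas a Markovian state handles them at constant cost.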
Regime switching stochastic approximation algorithms with application to adaptive discrete stochastic optimization
 SIAM J. Optim
, 2004
"... Abstract. This work is devoted to a class of stochastic approximation problems with regime switching modulated by a discretetime Markov chain. Our motivation stems from using stochastic recursive algorithms for tracking Markovian parameters such as those in spreading code optimization in CDMA (code ..."
Abstract

Cited by 20 (12 self)
Abstract. This work is devoted to a class of stochastic approximation problems with regime switching modulated by a discrete-time Markov chain. Our motivation stems from using stochastic recursive algorithms for tracking Markovian parameters, such as those in spreading-code optimization in CDMA (code division multiple access) wireless communication. The algorithm uses a constant step size to update the increments of a sequence of occupation measures. It is proved that least-squares estimates of the tracking errors can be developed. The adaptation rate is assumed to be of the same order of magnitude as the rate of the time-varying parameter, a regime more difficult to handle than that of slower parameter variations. Due to the time-varying characteristics and Markovian jumps, the usual stochastic approximation (SA) techniques cannot be carried over to the analysis. By a combined use of the SA method and two-time-scale Markov chains, asymptotic properties of the algorithm are obtained that are distinct from the usual SA results. In this paper, it is shown for the first time that, under simple conditions, a continuous-time interpolation of the iterates converges weakly not to an ODE, as is widely known in the literature, but to a system of ODEs with regime switching, and that a suitably scaled sequence of the tracking errors converges not to a diffusion but to a system of switching diffusions. As an application of these results, the performance of an adaptive discrete stochastic optimization algorithm is analyzed.
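The tracking setting can be sketched in a few lines: a constant-step-size SA recursion chases a parameter that jumps with a hidden two-state Markov chain. This toy recursion is only meant to show why a constant (rather than decreasing) step size is needed for tracking; it is not the occupation-measure algorithm analyzed in the paper, and all parameters are invented.

```python
# Sketch: constant-step-size stochastic approximation tracking a parameter
# that switches with a two-state Markov chain (regime switching).
# Toy example; not the paper's algorithm.
import random

random.seed(1)
theta_true = {0: -1.0, 1: 2.0}   # parameter value in each regime
regime, est, mu = 0, 0.0, 0.05   # mu: constant step size

for t in range(5000):
    if random.random() < 0.001:                        # rare regime switches
        regime = 1 - regime
    y = theta_true[regime] + random.gauss(0.0, 0.5)    # noisy observation
    est += mu * (y - est)                              # SA update

# est hovers near the currently active regime's parameter; a decreasing
# step size would freeze and fail to follow the jumps.
```

The weak-convergence result in the abstract formalizes exactly this picture: the interpolated iterates follow an ODE whose right-hand side itself switches with the modulating chain.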
On the Optimality of Symbol by Symbol Filtering and Denoising
, 2003
"... We consider the problem of optimally recovering a finitealphabet discretetime stochastic process {X t } from its noisecorrupted observation process {Z t }. In general, the optimal estimate of X t will depend on all the components of {Z t } on which it can be based. We characterize nontrivial s ..."
Abstract

Cited by 19 (3 self)
We consider the problem of optimally recovering a finite-alphabet discrete-time stochastic process {X_t} from its noise-corrupted observation process {Z_t}. In general, the optimal estimate of X_t will depend on all the components of {Z_t} on which it can be based. We characterize nontrivial situations (i.e., beyond the case where (X_t, Z_t) are independent) for which optimum performance is attained using "symbol by symbol" operations (a.k.a. …
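A symbol-by-symbol operation in this sense estimates each X_t from Z_t alone. A minimal sketch, assuming a binary source observed through a binary symmetric channel (an assumed setup, not one taken from the paper), is the single-symbol MAP rule:

```python
# Sketch: a "symbol by symbol" MAP denoiser for a binary source seen
# through a binary symmetric channel. Illustrative setup and parameters.
def map_symbol_denoiser(z_t, p1, delta):
    """MAP estimate of X_t from the single symbol Z_t.
    p1 = P(X_t = 1); the channel flips each bit with probability delta."""
    # posterior odds of X_t = 1 given Z_t
    if z_t == 1:
        odds = (p1 * (1 - delta)) / ((1 - p1) * delta)
    else:
        odds = (p1 * delta) / ((1 - p1) * (1 - delta))
    return 1 if odds >= 1.0 else 0

# With low noise the observation is trusted; with a biased source and noise
# near 1/2, the prior dominates and the observation is ignored.
assert map_symbol_denoiser(1, p1=0.5, delta=0.1) == 1
assert map_symbol_denoiser(1, p1=0.05, delta=0.45) == 0
```

The question the paper asks is when such a memoryless rule is optimal even though the source has memory and the full sequence {Z_t} is available.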
Selecting Hidden Markov Model State Number with Cross-Validated Likelihood
 Computational Statistics
"... Abstract: The problem of estimating the number of hidden states in a hidden Markov model is considered. Emphasis is placed on crossvalidated likelihood criteria. Using crossvalidation to assess the number of hidden states allows to circumvent the well documented ..."
Abstract

Cited by 18 (1 self)
Abstract: The problem of estimating the number of hidden states in a hidden Markov model is considered. Emphasis is placed on cross-validated likelihood criteria. Using cross-validation to assess the number of hidden states makes it possible to circumvent the well-documented …
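The skeleton of the cross-validated likelihood criterion is: score each candidate number of states by the log-likelihood of held-out data, computed with the forward algorithm, and keep the best. The sketch below uses fixed (not EM-fitted) candidate models and invented data purely to show that skeleton; in practice each candidate would first be fitted on the training fold.

```python
# Sketch: held-out log-likelihood for choosing the number of HMM states.
# Candidate parameters are fixed and illustrative; real use fits each
# candidate by EM on the training fold first.
import math

def forward_loglik(obs, pi0, A, B):
    """log P(obs) under the HMM (pi0, A, B), via the scaled forward pass."""
    n = len(pi0)
    alpha = [pi0[i] * B[i][obs[0]] for i in range(n)]
    c = sum(alpha)
    ll = math.log(c)
    alpha = [a / c for a in alpha]
    for t in range(1, len(obs)):
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][obs[t]]
                 for i in range(n)]
        c = sum(alpha)
        ll += math.log(c)
        alpha = [a / c for a in alpha]
    return ll

held_out = [0] * 5 + [1] * 5 + [0] * 5 + [1] * 5   # long homogeneous runs

# candidate 1: one state (i.i.d. fair coin); candidate 2: two sticky states
m1 = ([1.0], [[1.0]], [[0.5, 0.5]])
m2 = ([0.5, 0.5], [[0.8, 0.2], [0.2, 0.8]],
      [[0.9, 0.1], [0.1, 0.9]])

scores = {1: forward_loglik(held_out, *m1), 2: forward_loglik(held_out, *m2)}
best = max(scores, key=scores.get)   # the 2-state model wins on this data
```

Because the score is computed on data not used for fitting, adding states stops paying off once the extra states only fit training noise, which is the rationale for the criterion.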
Schemes for Bi-Directional Modeling of Discrete Stationary Sources
, 2005
"... Adaptive models are developed to deal with bidirectional modeling of unknown discrete stationary sources, which can be generally applied to statistical inference problems such as noncausal universal discrete denoising that exploits bidirectional dependencies. Efficient algorithms for constructing ..."
Abstract

Cited by 16 (9 self)
Adaptive models are developed for bidirectional modeling of unknown discrete stationary sources; they apply generally to statistical inference problems, such as noncausal universal discrete denoising, that exploit bidirectional dependencies. Efficient algorithms for constructing those models are developed and implemented. Denoising is a primary focus of the application of these models, and we compare their performance to that of the DUDE algorithm [1] for universal discrete denoising.
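The idea of exploiting two-sided dependencies can be shown with a deliberately crude, counting-only sketch: tally which middle symbols occur between each (left, right) neighbor pair, then replace every symbol with the most frequent one seen in its two-sided context. This is only a toy cousin of context-based schemes like DUDE, not the paper's models, and the input sequence is invented.

```python
# Sketch: counting-based noncausal denoising with two-sided contexts.
# Crude illustration of bidirectional dependence; not the paper's scheme.
from collections import Counter, defaultdict

def bidirectional_denoise(z):
    ctx = defaultdict(Counter)
    for i in range(1, len(z) - 1):                 # tally middles per context
        ctx[(z[i - 1], z[i + 1])][z[i]] += 1
    out = list(z)
    for i in range(1, len(z) - 1):                 # replace with the majority
        out[i] = ctx[(z[i - 1], z[i + 1])].most_common(1)[0][0]
    return out

noisy = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0]
clean = bidirectional_denoise(noisy)               # isolated 1s get smoothed
```

A causal model would see only the left context; the right neighbor is what lets the rule recognize the isolated 1s as likely channel flips.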
The entropy of a binary hidden Markov process
 J. Stat. Phys
, 2005
"... The entropy of a binary symmetric Hidden Markov Process is calculated as an expansion in the noise parameter ɛ. We map the problem onto a onedimensional Ising model in a large field of random signs and calculate the expansion coefficients up to second order in ɛ. Using a conjecture we extend the ca ..."
Abstract

Cited by 15 (1 self)
The entropy of a binary symmetric hidden Markov process is calculated as an expansion in the noise parameter ɛ. We map the problem onto a one-dimensional Ising model in a large field of random signs and calculate the expansion coefficients up to second order in ɛ. Using a conjecture, we extend the calculation to 11th order and discuss the convergence of the resulting series.
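Such series results can be sanity-checked numerically: by the Shannon–McMillan–Breiman theorem, -(1/n) log2 P(z_1..z_n) converges to the entropy rate, and P(z^n) is computable with the forward recursion. The sketch below does this for a binary symmetric HMP with invented parameters; it is a generic numerical check, not the paper's expansion.

```python
# Sketch: Monte Carlo estimate of the entropy rate of a binary symmetric
# hidden Markov process via -(1/n) log2 P(z^n). Illustrative parameters.
import math
import random

random.seed(2)
p, eps, n = 0.3, 0.1, 20000   # chain flip prob, observation noise, length

# simulate the hidden chain and its noisy observation
x, zs = 0, []
for _ in range(n):
    if random.random() < p:
        x = 1 - x
    zs.append(x if random.random() >= eps else 1 - x)

# forward recursion accumulating log2 P(z^n)
alpha = [0.5, 0.5]            # stationary initial distribution
log2p = 0.0
for z in zs:
    pred = [alpha[0] * (1 - p) + alpha[1] * p,
            alpha[0] * p + alpha[1] * (1 - p)]
    alpha = [pred[i] * ((1 - eps) if z == i else eps) for i in range(2)]
    c = alpha[0] + alpha[1]
    log2p += math.log2(c)
    alpha = [a / c for a in alpha]

entropy_rate = -log2p / n     # bits per symbol, at most 1 for a binary process
```

Estimates like this are how closed-form expansions in ɛ are typically validated over a range of noise levels.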
The empirical distribution of rate-constrained source codes
 IEEE Trans. Inform. Theory
"... Let X =(X1,...) be a stationary ergodic finitealphabet source, X n denote its first n symbols, and Y n be the codeword assigned to X n by a lossy source code. The empirical kthorder joint distribution ˆ Q k [X n,Y n](x k,y k)is defined as the frequency of appearances of pairs of kstrings (x k,y k ..."
Abstract

Cited by 15 (3 self)
Let X = (X_1, ...) be a stationary ergodic finite-alphabet source, X^n denote its first n symbols, and Y^n be the codeword assigned to X^n by a lossy source code. The empirical kth-order joint distribution Q̂^k[X^n, Y^n](x^k, y^k) is defined as the frequency of appearance of pairs of k-strings (x^k, y^k) along the pair (X^n, Y^n). Our main interest is in the sample behavior of this (random) distribution. Letting I(Q^k) denote the mutual information I(X^k; Y^k) when (X^k, Y^k) ∼ Q^k, we show that for any (sequence of) lossy source code(s) of rate ≤ R, lim sup_{n→∞} (1/k) I(Q̂^k[X^n, Y^n]) …
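The two objects in the abstract — the empirical kth-order joint distribution and its mutual information — are directly computable. The sketch below evaluates both on two invented toy "codes": one that reproduces the source exactly (maximal mutual information) and a rate-zero constant code (zero mutual information).

```python
# Sketch: empirical kth-order joint distribution of a (source, codeword)
# pair and its mutual information I(Q̂^k). Toy sequences for illustration.
import math
from collections import Counter

def empirical_joint(xs, ys, k):
    """Q̂^k[x^n, y^n]: frequency of aligned k-string pairs along (xs, ys)."""
    pairs = Counter()
    for i in range(len(xs) - k + 1):
        pairs[(tuple(xs[i:i + k]), tuple(ys[i:i + k]))] += 1
    total = sum(pairs.values())
    return {key: v / total for key, v in pairs.items()}

def mutual_information(q):
    """I(X^k; Y^k) in bits for a joint pmf q over (x-block, y-block) pairs."""
    px, py = Counter(), Counter()
    for (xb, yb), pr in q.items():
        px[xb] += pr
        py[yb] += pr
    return sum(pr * math.log2(pr / (px[xb] * py[yb]))
               for (xb, yb), pr in q.items() if pr > 0)

x = [0, 1] * 8                          # source sequence
y_perfect = x[:]                        # "code" reproducing x exactly
i_perfect = mutual_information(empirical_joint(x, y_perfect, 2))

y_const = [0] * 16                      # rate-zero code: constant output
i_zero = mutual_information(empirical_joint(x, y_const, 2))   # 0 bits
```

The theorem quoted in the abstract bounds exactly this quantity: a code of rate at most R cannot sustain, per symbol, more empirical mutual information between source and reconstruction blocks than its rate allows.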
New bounds on the entropy rate of hidden Markov processes
 IEEE Information Theory Workshop,
, 2004
"... ..."
(Show Context)