Results 11–20 of 214
Decentralized Particle Filter with Arbitrary State Decomposition, 2010
Cited by 9 (1 self)
Abstract:
In this paper, a new particle filter (PF) which we refer to as the decentralized PF (DPF) is proposed. By first decomposing the state into two parts, the DPF splits the filtering problem into two nested subproblems and then handles the two nested subproblems using PFs. The DPF has the advantage over the regular PF that the DPF can increase the level of parallelism of the PF. In particular, part of the resampling in the DPF bears a parallel structure and can thus be implemented in parallel. The parallel structure of the DPF is created by decomposing the state space, differing from the parallel structure of the distributed PFs which is created by dividing the sample space. This difference results in a couple of unique features of the DPF in contrast with the existing distributed PFs. Simulation results of two examples indicate that the DPF has a potential to achieve in a shorter execution time the same level of performance as the regular PF.
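The resampling step that the DPF parallelizes is easiest to see against a standard bootstrap particle filter. Below is a minimal sketch of such a baseline filter; the toy linear-Gaussian model, its parameters, and the name `bootstrap_pf` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(ys, n_particles=500, a=0.9, q=1.0, r=1.0):
    """Minimal bootstrap particle filter for the toy model
    x_t = a * x_{t-1} + v_t,  y_t = x_t + e_t,  v ~ N(0, q), e ~ N(0, r)."""
    x = rng.normal(0.0, 1.0, n_particles)                # initial particles
    means = []
    for y in ys:
        x = a * x + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
        logw = -0.5 * (y - x) ** 2 / r                   # Gaussian log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                      # weighted state estimate
        idx = rng.choice(n_particles, n_particles, p=w)  # multinomial resampling
        x = x[idx]                                       # (the step DPF parallelizes)
    return np.array(means)
```

The final resampling line is the serial bottleneck the abstract refers to: in the regular PF it touches the whole particle population at once, whereas the DPF's state decomposition lets part of it run in parallel.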
A discrete chain graph model for 3d+t cell tracking with high misdetection robustness
In ECCV, 2012
Cited by 8 (3 self)
Abstract:
Tracking by assignment is well suited for tracking a varying number of divisible cells, but suffers from false positive detections. We reformulate tracking by assignment as a chain graph (a mixed directed-undirected probabilistic graphical model) and obtain a tracking simultaneously over all time steps from the maximum a posteriori configuration. The model is evaluated on two challenging four-dimensional data sets from developmental biology. Compared to previous work, we obtain improved tracks due to an increased robustness against false positive detections and the incorporation of temporal domain knowledge.
Covariance Modelling for Noise-Robust Speech Recognition
Cited by 8 (5 self)
Abstract:
Model compensation is a standard way of improving speech recognisers’ robustness to noise. Most model compensation techniques produce diagonal covariances. However, this fails to handle any changes in the feature correlations due to the noise. This paper presents a scheme that allows full-covariance matrices to be estimated. One problem is that full covariance matrix estimation will be more sensitive to approximations, and those for the dynamic parameters are known to be crude. In this paper a linear transformation of a window of consecutive frames is used as the basis for dynamic parameter compensation. A second problem is that the resulting full covariance matrices slow down decoding. This is addressed by using predictive linear transforms that decorrelate the feature space, so that the decoder can then use diagonal covariance matrices. On a noise-corrupted Resource Management task, the proposed scheme outperformed the standard VTS compensation scheme.
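The decorrelation idea can be illustrated with a generic eigendecomposition: given a full covariance Σ, an orthogonal transform A makes A Σ Aᵀ diagonal, so a decoder can work with diagonal covariances in the transformed space. This is a whitening-style sketch, not the paper's predictive linear transforms; `decorrelating_transform` and the example matrix are made up for illustration.

```python
import numpy as np

def decorrelating_transform(sigma):
    """Return an orthogonal A such that A @ sigma @ A.T is diagonal."""
    _, eigvecs = np.linalg.eigh(sigma)   # sigma = V diag(w) V.T
    return eigvecs.T                     # rows are eigenvectors of sigma

# Example: a full 2x2 covariance with correlated features.
sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
A = decorrelating_transform(sigma)
diag = A @ sigma @ A.T                   # off-diagonals vanish up to rounding
```

In the transformed space the likelihood factorizes per dimension, which is why diagonal-covariance decoding becomes valid after such a transform.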
Particle Gibbs with Ancestor Sampling
Cited by 8 (6 self)
Abstract:
Particle Markov chain Monte Carlo (PMCMC) is a systematic way of combining the two main tools used for Monte Carlo statistical inference: sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). We present a new PMCMC algorithm that we refer to as particle Gibbs with ancestor sampling (PGAS). PGAS provides the data analyst with an off-the-shelf class of Markov kernels that can be used to simulate, for instance, the typically high-dimensional and highly autocorrelated state trajectory in a state-space model. The ancestor sampling procedure enables fast mixing of the PGAS kernel even when using seemingly few particles in the underlying SMC sampler. This is important as it can significantly reduce the computational burden that is typically associated with using SMC. PGAS is conceptually similar to the existing PG with backward simulation (PGBS) procedure. Instead of using separate forward and backward sweeps as in PGBS, however, we achieve the same effect in a single forward sweep. This makes PGAS well suited for addressing inference problems not only in state-space models, but also in models with more complex dependencies, such as non-Markovian, Bayesian nonparametric, and general probabilistic graphical models.
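To make the single-sweep ancestor-sampling idea concrete, here is a sketch of one PGAS sweep for a toy linear-Gaussian state-space model. The model, its parameters, and `pgas_sweep` are illustrative assumptions, not the paper's general formulation: one particle is pinned to the reference trajectory and its ancestor is re-sampled at each step using the transition density back to the previous particle population.

```python
import numpy as np

rng = np.random.default_rng(1)

def pgas_sweep(ys, x_ref, n=100, a=0.9, q=1.0, r=1.0):
    """One sweep of particle Gibbs with ancestor sampling for
    x_t = a x_{t-1} + v_t, y_t = x_t + e_t (toy linear-Gaussian model)."""
    T = len(ys)
    X = np.zeros((T, n))
    A = np.zeros((T, n), dtype=int)            # ancestor indices
    X[0] = rng.normal(0.0, 1.0, n)
    X[0, -1] = x_ref[0]                        # condition on the reference path
    logw = -0.5 * (ys[0] - X[0]) ** 2 / r
    for t in range(1, T):
        w = np.exp(logw - logw.max()); w /= w.sum()
        A[t, :-1] = rng.choice(n, size=n - 1, p=w)   # ancestors of free particles
        X[t, :-1] = a * X[t - 1, A[t, :-1]] + rng.normal(0.0, np.sqrt(q), n - 1)
        X[t, -1] = x_ref[t]                    # reference particle stays fixed
        # Ancestor sampling: weight previous particles by how well they
        # transition to the reference state x_ref[t].
        logaw = logw - 0.5 * (x_ref[t] - a * X[t - 1]) ** 2 / q
        aw = np.exp(logaw - logaw.max()); aw /= aw.sum()
        A[t, -1] = rng.choice(n, p=aw)
        logw = -0.5 * (ys[t] - X[t]) ** 2 / r
    # Trace back one sampled trajectory through the ancestor indices.
    w = np.exp(logw - logw.max()); w /= w.sum()
    k = rng.choice(n, p=w)
    traj = np.zeros(T)
    for t in range(T - 1, -1, -1):
        traj[t] = X[t, k]
        k = A[t, k]
    return traj
```

The ancestor-sampling line is what replaces the separate backward sweep of PGBS: the reference trajectory's past is re-linked on the fly during the forward pass.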
OASIS: Online Active Semi-Supervised Learning
Cited by 7 (0 self)
Abstract:
We consider a learning setting of importance to large scale machine learning: potentially unlimited data arrives sequentially, but only a small fraction of it is labeled. The learner cannot store the data; it should learn from both labeled and unlabeled data, and it may also request labels for some of the unlabeled items. This setting is frequently encountered in real-world applications and has the characteristics of online, semi-supervised, and active learning. Yet previous learning models fail to consider these characteristics jointly. We present OASIS, a Bayesian model for this learning setting. The main contributions of the model include the novel integration of a semi-supervised likelihood function, a sequential Monte Carlo scheme for efficient online Bayesian updating, and a posterior-reduction criterion for active learning. Encouraging results on both synthetic and real-world optical character recognition data demonstrate the synergy of these characteristics in OASIS.
Bayesian inference and learning in Gaussian process state-space models with particle MCMC
In Advances in Neural Information Processing Systems (NIPS), 2013
Cited by 7 (3 self)
Abstract:
State-space models are successfully used in many areas of science, engineering and economics to model time series and dynamical systems. We present a fully Bayesian approach to inference and learning (i.e. state estimation and system identification) in nonlinear nonparametric state-space models. We place a Gaussian process prior over the state transition dynamics, resulting in a flexible model able to capture complex dynamical phenomena. To enable efficient inference, we marginalize over the transition dynamics function and, instead, infer directly the joint smoothing distribution using specially tailored Particle Markov Chain Monte Carlo samplers. Once a sample from the smoothing distribution is computed, the state transition predictive distribution can be formulated analytically. Our approach preserves the full nonparametric expressivity of the model and can make use of sparse Gaussian processes to greatly reduce computational complexity.
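The generative structure described, a GP prior over the transition dynamics, can be sketched by drawing a transition function from a GP on a grid and simulating states through it. This is a crude illustration only (nearest-grid lookup instead of proper GP conditioning); the grid, squared-exponential kernel, and parameter values are assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def gp_ssm_prior_draw(T=50, xs_grid=None, ell=1.0, sf=1.0, q=0.1):
    """Draw a transition function f ~ GP(0, k) on a grid, then simulate
    x_{t+1} = f(x_t) + v_t by nearest-grid lookup (illustrative only)."""
    if xs_grid is None:
        xs_grid = np.linspace(-3.0, 3.0, 61)
    m = len(xs_grid)
    # Squared-exponential kernel matrix with a small jitter for stability.
    K = sf ** 2 * np.exp(-0.5 * (xs_grid[:, None] - xs_grid[None, :]) ** 2 / ell ** 2)
    f = rng.multivariate_normal(np.zeros(m), K + 1e-8 * np.eye(m))
    x = np.zeros(T)
    for t in range(T - 1):
        # Evaluate the sampled f at the nearest grid point to the current state.
        i = np.argmin(np.abs(xs_grid - np.clip(x[t], xs_grid[0], xs_grid[-1])))
        x[t + 1] = f[i] + rng.normal(0.0, np.sqrt(q))
    return x
```

The paper's samplers marginalize this function out rather than representing it on a grid; the sketch only shows what a single draw from the prior over dynamics looks like when pushed through the state recursion.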
Asymptotically exact noise-corrupted speech likelihoods
In Proc. InterSpeech, 2010
Cited by 6 (5 self)
Abstract:
Model compensation techniques for noise-robust speech recognition approximate the corrupted speech distribution. This paper introduces a sampling method that, given speech and noise distributions and a mismatch function, in the limit calculates the corrupted speech likelihood exactly. Though it is too slow to compensate a speech recognition system, it enables a more fine-grained assessment of compensation techniques, based on the KL divergence of individual components. This makes it possible to evaluate the impact of approximations that compensation schemes make, such as the form of the mismatch function. Index Terms: speech recognition, noise robustness
On the Convergence of Adaptive Sequential Monte Carlo Methods. ArXiv e-prints, 2013
Cited by 6 (2 self)
Abstract:
In several implementations of Sequential Monte Carlo (SMC) methods it is natural, and important in terms of algorithmic efficiency, to exploit the information of the history of the samples to optimally tune their subsequent propagations. In this article we provide a carefully formulated asymptotic theory for a class of such adaptive SMC methods. The theoretical framework developed here will cover, under assumptions, several commonly used SMC algorithms [5, 17, 20]. There are only limited results about the theoretical underpinning of such adaptive methods: we will bridge this gap by providing a weak law of large numbers (WLLN) and a central limit theorem (CLT) for some of these algorithms. The latter seems to be the first result of its kind in the literature and provides a formal justification of algorithms used in many real data contexts [17, 20]. We establish that for a general class of adaptive SMC algorithms [5] the asymptotic variance of the estimators from the adaptive SMC method is identical to a so-called ‘perfect’ SMC algorithm which uses ideal proposal kernels. Our results are supported by application on a complex high-dimensional posterior distribution associated with the Navier-Stokes model, where adapting high-dimensional parameters of the proposal kernels is critical for the efficiency of the algorithm.
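One commonly used adaptive mechanism of the kind such analyses cover is triggering resampling only when the effective sample size (ESS) falls below a threshold, rather than at every step. Below is a minimal sketch under an illustrative linear-Gaussian model; the functions, model, and parameter values are assumptions, not from the article.

```python
import numpy as np

rng = np.random.default_rng(2)

def ess(logw):
    """Effective sample size from unnormalized log-weights."""
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

def adaptive_smc(ys, n=300, a=0.9, q=1.0, r=1.0, threshold=0.5):
    """SMC with ESS-triggered (adaptive) resampling for the toy model
    x_t = a x_{t-1} + v_t, y_t = x_t + e_t."""
    x = rng.normal(0.0, 1.0, n)
    logw = np.zeros(n)
    n_resamples = 0
    for y in ys:
        x = a * x + rng.normal(0.0, np.sqrt(q), n)      # propagate
        logw = logw - 0.5 * (y - x) ** 2 / r            # accumulate weights
        if ess(logw) < threshold * n:                   # adapt: resample on low ESS
            w = np.exp(logw - logw.max()); w /= w.sum()
            x = x[rng.choice(n, n, p=w)]
            logw = np.zeros(n)
            n_resamples += 1
    return x, n_resamples
```

The decision to resample depends on the realized particle history, which is exactly the kind of history-dependent adaptation that makes the convergence analysis of such algorithms nontrivial.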