Results 1-10 of 22
Smoothing algorithms for state-space models
 In submission, IEEE Transactions on Signal Processing
, 2004
Abstract

Cited by 62 (9 self)
A prevalent problem in statistical signal processing, applied statistics, and time series analysis is the calculation of the smoothed posterior distribution, which describes the uncertainty associated with a state, or a sequence of states, conditional on data from the past, the present, and the future. The aim of this paper is to provide a rigorous foundation for the calculation, or approximation, of such smoothed distributions, to facilitate a robust and efficient implementation. Through a cohesive and generic exposition of the scientific literature we offer several novel extensions so that one can perform smoothing in the most general case. Experimental results are provided for a jump Markov linear system, a comparison of particle smoothing methods, and parameter estimation using a particle implementation of the EM algorithm.
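The smoothed posterior this abstract refers to can be approximated, in the simplest case, by a bootstrap particle filter that stores the resampled genealogy of its particles: each surviving path is then a draw from a (degenerate but simple) approximation of p(x_{0:T} | y_{0:T}). The sketch below is illustrative only and not from the paper; the callback names `f`, `g_loglik`, and `x0_sampler` are assumptions:

```python
import numpy as np

def bootstrap_smoother(y, f, g_loglik, x0_sampler, N, rng):
    """Bootstrap particle filter whose resampled genealogy yields a
    (degenerate but simple) approximation of p(x_{0:T} | y_{0:T}).
    Illustrative sketch; function names are assumptions, not the paper's.
    """
    T = len(y)
    paths = np.zeros((N, T))
    x = x0_sampler(N, rng)                  # sample from the prior
    for t in range(T):
        x = f(x, rng)                       # propagate through the dynamics
        logw = g_loglik(y[t], x)            # weight by the likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)    # multinomial resampling
        paths = paths[idx]                  # carry ancestral paths along
        x = x[idx]
        paths[:, t] = x
    return paths                            # each row approximates a smoothed draw
```

The well-known drawback of this genealogy-based smoother, which motivates the dedicated algorithms surveyed in the paper, is path degeneracy: repeated resampling collapses the early part of all trajectories onto a few ancestors.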
Maximum A Posteriori Sequence Estimation Using Monte Carlo Particle Filters
, 2001
Abstract

Cited by 49 (4 self)
We develop methods for performing maximum a posteriori (MAP) sequence estimation in nonlinear, non-Gaussian dynamic models. The methods rely on a particle cloud representation of the filtering distribution which evolves through time using importance sampling and resampling ideas. MAP sequence estimation is then performed using a classical dynamic programming technique applied to the discretised version of the state space. In contrast with standard approaches to the problem, which essentially compare only the trajectories generated directly during the filtering stage, our method efficiently computes the optimal trajectory over all combinations of the filtered states. A particular strength of the method is that MAP sequence estimation is performed sequentially in a single forward pass through the data, without the requirement of an additional backward sweep. An application to estimation of a nonlinear time series model and to spectral estimation for time-varying autoregressions is described.
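The dynamic-programming step described above amounts to running a Viterbi-style recursion over the grid of filtered particles, with transition and likelihood log-densities as scores. A minimal sketch, assuming the particle clouds and a vectorised transition log-density are already available (all names here are illustrative, not the paper's):

```python
import numpy as np

def particle_map_sequence(particles, log_trans, log_lik):
    """Viterbi-style dynamic programming over a particle grid.

    particles : list of T arrays, each (N,) -- filtered particle clouds,
                assumed produced by a separate particle filter run.
    log_trans : function (x_prev, x) -> log p(x | x_prev), vectorised.
    log_lik   : list of T arrays, log p(y_t | x_t^i) for each particle.
    Returns the MAP state sequence over all particle combinations.
    """
    T = len(particles)
    N = len(particles[0])
    delta = log_lik[0].copy()           # best log-score ending in each particle
    back = np.zeros((T, N), dtype=int)  # back-pointers
    for t in range(1, T):
        # scores[i, j] = delta[i] + log p(x_t^j | x_{t-1}^i)
        scores = delta[:, None] + log_trans(particles[t - 1][:, None],
                                            particles[t][None, :])
        back[t] = np.argmax(scores, axis=0)
        delta = np.max(scores, axis=0) + log_lik[t]
    # backtrack the optimal trajectory
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return np.array([particles[t][path[t]] for t in range(T)])
```

Each step costs O(N^2) in the number of particles, but the recursion runs forward only, which is the single-pass property the abstract highlights.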
Monte Carlo Smoothing with Application to Audio Signal Enhancement
 IEEE Transactions on Signal Processing
Abstract

Cited by 29 (4 self)
We describe methods for applying Monte Carlo filtering and smoothing for estimation of unobserved states in a nonlinear state-space model. By exploiting the statistical structure of the model, we develop a Rao-Blackwellized particle smoother. Due to the lengthy nature of real signals, we suggest processing the data in blocks, and a block-based smoother algorithm is developed for this purpose. All the algorithms suggested are tested with real speech and audio data, and the results are shown and compared with those generated using the generic particle smoother and the extended Kalman filter (EKF). It is found that the proposed Rao-Blackwellized particle smoother improves on the standard particle smoother and the extended Kalman smoother. In addition, the proposed block-based smoother algorithm enhances the efficiency of the proposed Rao-Blackwellized smoother by significantly reducing the storage capacity required for the particle information.
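Rao-Blackwellization, as used above, means sampling only the intractable part of the state while marginalising the conditionally linear-Gaussian part exactly with one Kalman filter per particle, which reduces the Monte Carlo variance. A toy scalar sketch of the filtering half of this idea (the model and all parameter values are invented for illustration and are not the paper's audio model):

```python
import numpy as np

def rb_particle_filter(y, N, rng):
    """Rao-Blackwellized particle filter for a toy conditionally
    linear-Gaussian model (illustrative only):
        x_t = 0.9 x_{t-1} + u_t    (sampled with particles)
        z_t = 0.8 z_{t-1} + w_t    (linear part, marginalised exactly)
        y_t = x_t + z_t + v_t      (observation)
    Each particle carries a scalar Kalman filter (m, P) for z_t.
    """
    T = len(y)
    x = rng.normal(size=N)           # particle states
    m = np.zeros(N)                  # per-particle Kalman means for z
    P = np.ones(N)                   # per-particle Kalman variances for z
    q_x, q_z, r = 0.1, 0.1, 0.1      # noise variances (assumed known)
    means = np.zeros(T)
    for t in range(T):
        x = 0.9 * x + np.sqrt(q_x) * rng.normal(size=N)  # propagate particles
        m, P = 0.8 * m, 0.64 * P + q_z                   # Kalman predict for z
        S = P + r                                        # innovation variance
        innov = y[t] - x - m
        logw = -0.5 * (innov ** 2 / S + np.log(S))       # marginal likelihood weight
        w = np.exp(logw - logw.max())
        w /= w.sum()
        K = P / S                                        # Kalman gain
        m, P = m + K * innov, (1 - K) * P                # Kalman update per particle
        means[t] = np.sum(w * (x + m))                   # E[x_t + z_t | y_{1:t}]
        idx = rng.choice(N, size=N, p=w)                 # resample
        x, m, P = x[idx], m[idx], P[idx]
    return means
```

Because the weight uses the marginal likelihood N(y_t; x_t + m, P + r), the particles only have to cover the nonlinear dimension, which is the variance reduction the abstract exploits.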
Generalised linear Gaussian models
, 2001
Abstract

Cited by 23 (7 self)
This paper addresses the time-series modelling of high-dimensional data. Currently, the hidden Markov model (HMM) is the most popular and successful model, especially in speech recognition. However, there are well-known shortcomings in HMMs, particularly in the modelling of the correlation between successive observation vectors, that is, inter-frame correlation. Standard diagonal-covariance-matrix HMMs also lack modelling of the spatial correlation in the feature vectors, that is, intra-frame correlation. Several other time-series models have been proposed recently, especially in the segment model framework, to address the inter-frame correlation problem, such as Gauss-Markov and dynamical system segment models. The lack of intra-frame correlation modelling has been compensated for with transform schemes such as semi-tied full covariance matrices (STC). All these models can be regarded as belonging to the broad class of generalised linear Gaussian models. Linear Gaussian models (LGMs) are popular as many forms may be trained efficiently using the expectation-maximisation algorithm. In this paper, several LGMs and generalised LGMs are reviewed. The models can be roughly categorised into four combinations according to two different state evolution processes and two different observation processes. The state evolution process can be based on a discrete finite state machine, as in HMMs, or a linear first-order Gauss-Markov process, as in traditional linear dynamical systems. The observation process can be represented as a factor analysis model or a linear discriminant analysis model. General HMMs, and schemes proposed to improve their performance such as STC, can be regarded as special cases in this framework.
Architectures for Efficient Implementation of Particle Filters
, 2004
Abstract

Cited by 22 (0 self)
Particle filters are sequential Monte Carlo methods that are used in numerous problems where time-varying signals must be processed in real time and where the objective is to estimate various unknowns of the signal and/or detect events described by the signals. The standard solutions of such problems in many applications are based on Kalman filters or extended Kalman filters. In situations when the problems are nonlinear or the noise that distorts the signals is non-Gaussian, the Kalman filters provide a solution that may be far from optimal. Particle filters are an intriguing alternative to the Kalman filters due to their excellent performance in very difficult problems including communications, signal processing, navigation, and computer vision. Hence, particle filters have recently been the focus of wide research, and an immense literature can be found on their theory. Most of these works recognize the complexity and computational intensity of these filters, but there has been no effort directed toward the implementation of these filters in hardware. The objective of this dissertation is to develop, design, and build efficient hardware for particle filters, and thereby bring them closer to practical applications. The fact that particle filters outperform most of the traditional filtering methods in many complex practical scenarios, coupled with the challenges related to decreasing their computational complexity and improving real-time performance, makes this work worthwhile.
Sequential parameter estimation of time-varying non-Gaussian autoregressive processes
 EURASIP Journal on Applied Signal Processing
Abstract

Cited by 11 (0 self)
Parameter estimation of time-varying non-Gaussian autoregressive processes can be a highly nonlinear problem. The problem becomes even more difficult if the functional form of the time variation of the process parameters is unknown. In this paper, we address parameter estimation of such processes by particle filtering, where posterior densities are approximated by sets of samples (particles) and particle weights. These sets are updated as new measurements become available using the principle of sequential importance sampling. From the samples and their weights we can compute a wide variety of estimates of the unknowns. In the absence of exact modeling of the time variation of the process parameters, we exploit the concept of forgetting factors, so that recent measurements affect current estimates more than older measurements. We investigate the performance of the proposed approach on autoregressive processes whose parameters change abruptly at unknown instants and whose driving noises are Gaussian mixtures or Laplacian processes.
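The forgetting-factor idea is easiest to see outside the particle filter: in recursive least squares, a factor lambda < 1 discounts old measurements geometrically, so the estimate can track time-varying AR coefficients. A minimal linear sketch of that mechanism (illustrative only; the paper embeds forgetting inside sequential importance sampling, not RLS):

```python
import numpy as np

def rls_forgetting(y, p, lam=0.98, delta=100.0):
    """Recursive least squares with forgetting factor lam for a
    time-varying AR(p) process y. Hypothetical illustration of the
    forgetting mechanism in its simplest (linear, Gaussian) form.
    """
    n = len(y)
    theta = np.zeros(p)           # current AR coefficient estimate
    P = delta * np.eye(p)         # inverse-correlation matrix
    estimates = np.zeros((n, p))
    for t in range(p, n):
        phi = y[t - 1::-1][:p]                 # p most recent samples
        k = P @ phi / (lam + phi @ P @ phi)    # gain vector
        err = y[t] - phi @ theta               # one-step prediction error
        theta = theta + k * err
        P = (P - np.outer(k, phi @ P)) / lam   # dividing by lam discounts old data
        estimates[t] = theta
    return estimates
```

The effective memory is roughly 1/(1 - lam) samples, which is the same trade-off the abstract describes: smaller lam reacts faster to abrupt parameter changes but gives noisier estimates.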
The Gaussian Mixture MCMC Particle Algorithm for Dynamic Cluster Tracking
, 2009
Abstract

Cited by 11 (3 self)
We present a new filtering algorithm for tracking multiple clusters of coordinated targets. Based on a Markov chain Monte Carlo (MCMC) mechanism, the new algorithm propagates a discrete approximation of the underlying filtering density. A dynamic Gaussian mixture model is utilized for representing the time-varying clustering structure. This involves point process formulations of typical behavioral moves such as birth and death of clusters as well as merging and splitting. Following our previous work, we adopt here two strategies for increasing the sampling efficiency of the basic MCMC scheme: an evolutionary stage which allows improved exploration of the sample space, and an EM-based method for making optimized proposals based on the frame likelihood. The algorithm's performance is assessed and demonstrated in both synthetic and real tracking scenarios.
Bayesian time-varying autoregressions: Theory, methods and applications
 University of São Paulo
, 2000
Abstract

Cited by 6 (0 self)
We review the class of time-varying autoregressive (TVAR) models and a range of related recent developments in Bayesian time series modelling. Beginning with TVAR models in a Bayesian dynamic linear modelling framework, we review aspects of latent structure analysis, including time-domain decomposition methods that provide inferences on the structure underlying nonstationary time series, and that are now central tools in the time series analyst's toolkit. Recent model extensions that deal with model order uncertainty, and are enabled using efficient Markov chain Monte Carlo simulation methods, are discussed, as are novel approaches to sequential filtering and smoothing using particle filtering methods. We emphasize the relevance of TVAR modelling in a range of applied contexts, including biomedical signal processing and communications, and highlight some of the central developments via examples arising in studies of multiple electroencephalographic (EEG) traces in neurophysiology. We conclude with comments about current research frontiers.
Adaptive Stopping for Fast Particle Smoothing
Abstract

Cited by 3 (2 self)
Particle smoothing is useful for offline state inference and parameter learning in nonlinear/non-Gaussian state-space models. However, many particle smoothers, such as the popular forward filter/backward simulator (FFBS), are plagued by a quadratic computational complexity in the number of particles. One approach to tackle this issue is to use rejection-sampling-based FFBS (RS-FFBS), which asymptotically reaches linear complexity. In practice, however, the constants can be quite large and the actual gain in computational time limited. In this contribution, we develop a hybrid method, governed by an adaptive stopping rule, in order to exploit the benefits, but avoid the drawbacks, of RS-FFBS. The resulting particle smoother is shown in a simulation study to be considerably more computationally efficient than both FFBS and RS-FFBS. Index Terms: Sequential Monte Carlo, particle smoothing, backward simulation.
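For context, the quadratic cost mentioned above arises because baseline FFBS reweights all N filtered particles for each of the N backward draws at every time step. A minimal sketch of that baseline, which the adaptive-stopping method accelerates (illustrative names, not the paper's code):

```python
import numpy as np

def ffbs(particles, weights, log_trans, rng, M=1):
    """Forward filter / backward simulator (FFBS) baseline.

    particles : (T, N) filtered particles (forward pass assumed done)
    weights   : (T, N) normalised filter weights
    log_trans : function (x_prev, x) -> log p(x | x_prev), vectorised
    Returns M backward trajectories of length T; cost is O(T * M * N),
    i.e. quadratic in N when M = N trajectories are drawn.
    """
    T, N = particles.shape
    traj = np.zeros((M, T))
    # sample the final state from the filtering distribution at time T-1
    idx = rng.choice(N, size=M, p=weights[-1])
    traj[:, -1] = particles[-1, idx]
    for t in range(T - 2, -1, -1):
        for m in range(M):
            # smoothing weights w_t^i * p(x_{t+1} | x_t^i): the O(N) inner loop
            logw = np.log(weights[t]) + log_trans(particles[t], traj[m, t + 1])
            w = np.exp(logw - logw.max())
            w /= w.sum()
            traj[m, t] = particles[t, rng.choice(N, p=w)]
    return traj
```

RS-FFBS avoids evaluating all N smoothing weights per draw by accepting/rejecting proposals from the filter weights; the hybrid in the paper falls back to this exhaustive computation when rejection sampling stalls.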