An Analysis of Regularized Interacting Particle Methods for Nonlinear Filtering
 Proceedings of the 3rd IEEE European Workshop on Computer-Intensive Methods in Control and Signal Processing
, 1998
Abstract

Cited by 13 (1 self)
Interacting particle methods have recently been proposed for the approximation of nonlinear filters. These are efficient recursive Monte Carlo methods which in principle could be implemented in high-dimensional problems, i.e. which could beat the curse of dimensionality, and where the particles automatically concentrate in regions of interest of the state space. In this paper we show that it is sometimes necessary to add a regularization step, and we analyze the approximation error for the resulting regularized interacting particle methods.

1. Introduction. We consider the following model, where the unobserved state process $\{X_t;\ t \ge 0\}$ satisfies the stochastic differential equation (SDE) on $\mathbb{R}^m$
$$dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t, \qquad X_0 \sim \mu_0, \qquad (1)$$
with standard Wiener process $\{W_t;\ t \ge 0\}$, and where $d$-dimensional observations $\{z_n;\ n \ge 1\}$ are available at discrete time instants $0 < t_1 < \cdots < t_n < \cdots$
$$z_n = h(X_{t_n}) + v_n,$$
with additive w...
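The scheme the abstract describes can be sketched as a bootstrap particle filter with a kernel-regularization step after resampling. The following is a minimal one-dimensional sketch, not the paper's implementation: the drift, diffusion, observation function, noise level, and Silverman bandwidth rule are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D instance of model (1): dX = b(X) dt + sigma(X) dW,
# observed as z_n = h(X_{t_n}) + v_n; all parameters are illustrative.
b = lambda x: -0.5 * x            # drift
sig = lambda x: np.ones_like(x)   # diffusion coefficient
h = lambda x: x                   # observation function
obs_std, dt, n = 0.5, 0.1, 500

def regularized_step(particles, z):
    """One cycle: Euler-Maruyama prediction, weighting, resampling, regularization."""
    # Predict: one Euler-Maruyama step of the SDE
    particles = particles + b(particles) * dt \
        + sig(particles) * np.sqrt(dt) * rng.standard_normal(n)
    # Update: weight by the Gaussian observation likelihood
    w = np.exp(-0.5 * ((z - h(particles)) / obs_std) ** 2)
    w /= w.sum()
    # Multinomial resampling
    idx = rng.choice(n, size=n, p=w)
    # Regularization: jitter the resampled particles with a Gaussian kernel
    # (bandwidth by Silverman's rule) so the cloud does not degenerate
    bw = 1.06 * particles.std() * n ** (-0.2)
    return particles[idx] + bw * rng.standard_normal(n)

particles = rng.standard_normal(n)        # sample the prior
for z in [0.3, 0.1, -0.2]:                # synthetic observations
    particles = regularized_step(particles, z)
print(round(float(particles.mean()), 3))  # posterior mean estimate
```

Without the jitter, repeated resampling collapses the cloud onto a few support points; the regularization step keeps the empirical measure a genuine density approximation.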
Optimum Nonlinear Filtering
 IEEE Transactions on Signal Processing
, 1997
Abstract

Cited by 7 (0 self)
This paper is composed of two parts. The first part surveys the literature regarding optimum nonlinear filtering from the (continuous-time) stochastic analysis point of view, and the other part explores the impact of recent applications of neural networks (in a discrete-time context) to nonlinear filtering. In particular, the results obtained by using a regularized form of radial basis function (RBF) networks are presented in fair detail.

Keywords: Nonlinear filtering, optimum filtering, stochastic calculus, radial basis function neural networks, the Kalman filter

I. Introduction. Optimum filtering has been a focus of research in signal processing and control since the pioneering works of Wiener [49] and Kolmogorov [19] over half a century ago. The first landmark contribution to optimum filtering was made by Kalman [17], who formulated a recursive solution to the optimum linear filtering problem using a state-space model for a dynamical system. Our interest in this paper is optimum n...
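Kalman's recursive solution mentioned above can be sketched in a few lines for the scalar linear-Gaussian state-space model; the parameter values here are illustrative placeholders, not from the paper.

```python
def kalman_filter(zs, F=1.0, H=1.0, Q=0.1, R=0.5, x0=0.0, P0=1.0):
    """Scalar Kalman filter for x_k = F x_{k-1} + w_k, z_k = H x_k + v_k,
    with w_k ~ N(0, Q) and v_k ~ N(0, R). Parameters are illustrative."""
    x, P, estimates = x0, P0, []
    for z in zs:
        x, P = F * x, F * P * F + Q          # time update (prediction)
        K = P * H / (H * P * H + R)          # Kalman gain
        x = x + K * (z - H * x)              # measurement update
        P = (1.0 - K * H) * P
        estimates.append(x)
    return estimates

print(kalman_filter([1.0, 1.2, 0.9]))
```

The recursion needs only the previous estimate and its error variance, which is what makes it the landmark recursive solution the survey refers to.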
On Exact Filters for Continuous Signals With Discrete Observations
 University of Minnesota Institute for
, 1998
Abstract

Cited by 3 (1 self)
Many filtering applications are characterized by continuous state dynamics $X_t = \int_0^t m(X_s)\,ds + \sigma W_t + \rho$, discrete observations $Y_k = Y_{t_k}$, and observation noise that is non-additive or non-Gaussian. In most such instances, neither exact finite-dimensional filters nor known online/offline splitting methods apply. Thus, there is a pressing issue to determine how best to calculate the conditional density $\Pr\{X_t \in dz \mid Y_k,\ 1 \le k \le l\}$. Ideally, one would like an answer which avoids solving partial differential equations online. In this note, we show that a combination of convolution, scaling, and substitutions efficiently solves this problem under certain conditions. The most notable aspects of our method are that it is extremely easy to use and that it assumes nothing about the observations other than the ability to construct $p_{Y_k \mid X_{t_k}}$, the conditional density of the $k$-th observation given the current state.

1. Introduction. Traditionally, nonlinear...
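The convolution idea can be illustrated on a grid in the simplest case $m(x) = 0$, $\rho = 0$, so that prediction between observation times reduces to convolution with a Gaussian transition kernel, followed by a Bayes update with a user-supplied likelihood $p_{Y_k \mid X_{t_k}}$. This is only a sketch under those simplifying assumptions, not the paper's scaling-and-substitution construction; the grid, likelihood, and parameters are hypothetical.

```python
import numpy as np

x = np.linspace(-10, 10, 401)   # state grid
dx = x[1] - x[0]
sigma, dt_obs = 1.0, 0.5        # diffusion coefficient, inter-observation time

def transition_kernel(var):
    """Discretized N(0, var) kernel, normalized to sum to 1."""
    k = np.exp(-0.5 * x**2 / var)
    return k / k.sum()

def filter_step(density, y, likelihood):
    # Prediction: convolve the density with the N(0, sigma^2 * dt) kernel
    pred = np.convolve(density, transition_kernel(sigma**2 * dt_obs), mode="same")
    # Correction: multiply by p(y | x) and renormalize (works for any
    # likelihood shape, including non-additive or non-Gaussian noise)
    post = pred * likelihood(y, x)
    return post / (post.sum() * dx)

# Example likelihood: additive Gaussian noise with variance 0.25 (illustrative)
gauss_lik = lambda y, x: np.exp(-0.5 * (y - x)**2 / 0.25)

density = transition_kernel(1.0) / dx      # N(0, 1) prior on the grid
for y in [0.5, 0.8]:
    density = filter_step(density, y, gauss_lik)
print(float(x[np.argmax(density)]))        # MAP estimate
```

Note that the observation model enters only through the `likelihood` callable, matching the paper's point that nothing beyond the ability to evaluate $p_{Y_k \mid X_{t_k}}$ is required.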
GENERALISED PARTICLE FILTERS WITH GAUSSIAN MEASURES
Abstract

Cited by 2 (0 self)
The stochastic filtering problem deals with the estimation of the posterior distribution of the current state of a signal process X = {Xt}t≥0 given the information supplied by an associated process Y = {Yt}t≥0. The scope and range of its applications include the control of engineering systems, global data assimilation in meteorology, volatility estimation in financial markets, computer vision and vehicle tracking. A massive scientific and computational effort is dedicated to the development of viable tools for approximating the solution of the filtering problem. Classical PDE methods can be successful, particularly if the state space has low dimension. In higher dimensions, a class of numerical methods called particle filters have proved the most successful methods to date. These methods produce an approximation of the posterior distribution by using the empirical distribution of a cloud of particles that explore the signal’s state space. We discuss here a more general class of numerical methods which involve generalised particles, that is, particles that evolve through larger spaces. Such generalised particles include Gaussian measures, wavelets, and finite elements in addition to the classical particle methods. We construct the approximating particle system under the Gaussian measure framework and prove the corresponding convergence result.

1. THE FILTERING FRAMEWORK. Let (Ω, F, P) be a probability space on which we have defined a process X = {Xt}t≥0 called the signal and an associated process Y = {Yt}t≥0 called the observation. The process X is the solution of a d-dimensional stochastic differential equation driven by a p-dimensional Brownian motion V, that is: Xt = X0+ ∫ t
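The idea of particles that are Gaussian measures can be sketched as a Gaussian-sum filter: each "particle" carries a mean, a variance, and a weight, is propagated through the (here linear-Gaussian, for tractability) signal dynamics, and is re-weighted by the evidence of each observation. This is a hypothetical scalar illustration, not the paper's construction or convergence setting; all parameters are invented.

```python
import numpy as np

# Illustrative scalar linear-Gaussian model: x_k = F x_{k-1} + w_k, z_k = H x_k + v_k
F, Q, H, R = 0.9, 0.2, 1.0, 0.5
rng = np.random.default_rng(1)
n = 50
means = rng.standard_normal(n)       # each particle is N(mean_i, var_i)
variances = np.full(n, 1.0)
weights = np.full(n, 1.0 / n)

def gaussian_particle_step(means, variances, weights, z):
    # Propagate each Gaussian particle through the signal dynamics
    means, variances = F * means, F**2 * variances + Q
    # Re-weight by the evidence p(z | particle) = N(z; H*mean, H^2*var + R)
    s = H**2 * variances + R
    weights = weights * np.exp(-0.5 * (z - H * means)**2 / s) / np.sqrt(s)
    weights /= weights.sum()
    # Condition each particle on z (a Kalman update within each Gaussian)
    K = variances * H / s
    means = means + K * (z - H * means)
    variances = (1.0 - K * H) * variances
    return means, variances, weights

for z in [0.4, 0.6]:                              # synthetic observations
    means, variances, weights = gaussian_particle_step(means, variances, weights, z)
print(float(np.dot(weights, means)))              # posterior mean of the mixture
```

The posterior approximation is a weighted Gaussian mixture rather than an empirical point measure, which is the sense in which each particle "evolves through a larger space" than a classical point particle.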