A Unifying Review of Linear Gaussian Models
, 1999
Cited by 348 (18 self)
Factor analysis, principal component analysis, mixtures of Gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and by introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of Gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
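As an illustration of the single generative model underlying all of these methods, here is a minimal NumPy sketch of sampling from a linear Gaussian state-space model (the matrices, dimensions, and noise levels below are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear Gaussian state-space model:
#   x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)
#   y_t = C x_t     + v_t,  v_t ~ N(0, R)
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # state transition (illustrative)
C = np.array([[1.0, 0.0]])               # observation matrix
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.1]])                    # observation noise covariance

T = 50
x = np.zeros(2)
states, observations = [], []
for t in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x + rng.multivariate_normal(np.zeros(1), R)
    states.append(x)
    observations.append(y)

states = np.array(states)              # shape (T, 2)
observations = np.array(observations)  # shape (T, 1)
```

Factor analysis corresponds to the static case (no state dynamics) with diagonal observation noise, and Kalman filtering/smoothing is exact inference in this model; the review derives the other methods as further special cases or limits.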
Efficient data assimilation for spatiotemporal chaos: A local ensemble transform Kalman filter
 Physica D
, 2007
Cited by 147 (11 self)
Data assimilation is an iterative approach to the problem of estimating the state of a dynamical system using both current and past observations of the system together with a model for the system’s time evolution. Rather than solving the problem from scratch each time new observations become available, one uses the model to “forecast” the current state, using a prior state estimate (which incorporates information from past data) as the initial condition, then uses current data to correct the prior forecast to a current state estimate. This Bayesian approach is most effective when the uncertainty in both the observations and the state estimate, as it evolves over time, is accurately quantified. In this article, I describe a practical method for data assimilation in large, spatiotemporally chaotic systems. The method is a type of “ensemble Kalman filter”, in which the state estimate and its approximate uncertainty are represented at any given time by an ensemble of system states. I discuss both the mathematical basis of this approach and its implementation; my primary emphasis is on ease of use and computational speed rather than improving accuracy over previously published approaches to ensemble Kalman filtering.
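The forecast/correct cycle described above can be sketched with the simplest stochastic (perturbed-observation) ensemble Kalman filter analysis step. The paper's LETKF is a more efficient deterministic square-root variant; this NumPy sketch only shows the ensemble representation of the state estimate and its uncertainty, and all sizes and matrices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_analysis(ensemble, y, H, R):
    """One stochastic EnKF analysis step.

    ensemble: (N, n) forecast ensemble of states
    y: (m,) observation; H: (m, n) observation operator; R: (m, m) obs covariance
    """
    N, n = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)           # ensemble anomalies
    P = X.T @ X / (N - 1)                          # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # Perturb observations so the analysis ensemble keeps the right spread.
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    innovations = y_pert - ensemble @ H.T
    return ensemble + innovations @ K.T

# Toy example: 3-variable state, observe only the first component.
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.05]])
forecast = rng.normal(size=(20, 3))               # 20-member forecast ensemble
analysis = enkf_analysis(forecast, np.array([0.7]), H, R)
```

In a full assimilation cycle, the analysis ensemble would next be propagated through the (possibly nonlinear, chaotic) model to produce the forecast ensemble for the following observation time.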
Dynamic Model of Visual Recognition Predicts Neural Response Properties in the Visual Cortex
 Neural Computation
, 1995
Cited by 114 (21 self)
In this paper, we describe a hierarchical network model of visual recognition that explains these experimental observations by using a form of the extended Kalman filter as given by the Minimum Description Length (MDL) principle. The model dynamically combines input-driven bottom-up signals with expectation-driven top-down signals to predict the current recognition state. Synaptic weights in the model are adapted in a Hebbian manner according to a learning rule also derived from the MDL principle. The resulting prediction/learning scheme can be viewed as implementing a form of the Expectation-Maximization (EM) algorithm. The architecture of the model posits an active computational role for the reciprocal connections between adjoining visual cortical areas in determining neural response properties. In particular, the model demonstrates the possible role of feedback from higher cortical areas in mediating neurophysiological effects due to stimuli from beyond the classical receptive field.
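A single-level toy version of the bottom-up/top-down prediction-correction idea might look like the following. This is not the paper's hierarchical EKF network: the sizes, learning rates, and plain squared-error objective are all illustrative assumptions, kept only to show the structure of correcting a state estimate by the residual between input and top-down prediction, then making a Hebbian-like weight update:

```python
import numpy as np

rng = np.random.default_rng(2)

U = rng.normal(scale=0.1, size=(16, 4))  # synaptic weights (illustrative sizes)
image = rng.normal(size=16)              # bottom-up input (a "patch")

r = np.zeros(4)                          # internal recognition state
for _ in range(200):
    error = image - U @ r                # bottom-up input vs. top-down prediction
    r += 0.1 * U.T @ error               # state correction (gradient on squared error)

# One Hebbian-like weight step (in the full model this alternates with
# state estimation, EM-style).
U += 0.01 * np.outer(image - U @ r, r)

final_error = np.linalg.norm(image - U @ r)
```

The state update plays the role of the filter's measurement correction; the weight update plays the role of learning, and alternating the two is what gives the scheme its EM flavor.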
An Intelligent Predictive Control Approach to the High-Speed Cross-Country Autonomous Navigation Problem
, 1995
Cited by 79 (3 self)
CMU-RI-TR-95-33, submitted in partial fulfillment of the requirements for the degree of
Receding Horizon Control of Nonlinear Systems: A Control . . .
, 2000
Cited by 61 (5 self)
n Automatic Control, pages 898–907, 1990. J. Shamma and M. Athans. Guaranteed properties of gain scheduled control for linear parameter-varying plants. Automatica, pages 559–564, 1991. J. Shamma and M. Athans. Gain scheduling: Potential hazards and possible remedies. IEEE Control Systems Magazine, 12(3):101–107, June 1992. [Sch96] A. Schwartz. Theory and Implementation of Numerical Methods Based on Runge-Kutta Integration for Optimal Control Problems. PhD Dissertation, University of California, Berkeley, 1996. [SCH+00] M. Sznaier, J. Cloutier, R. Hull, D. Jacques, and C. Mracek. Receding horizon control Lyapunov function approach to suboptimal regulation of nonlinear systems. Journal of Guidance, Control, and Dynamics, 23(3):399–405, 2000. [SD90] M. Sznaier and M. J. Damborg. Heuristically enhanced feedback control of constrained discrete-time linear systems. Automatica, 26:521–532, 1990. [SMR99] P. Scokaert, D. Mayne, and J. Rawlings. Suboptimal model predictive cont
Measurement and Integration of 3D Structures by Tracking Edge Lines
, 1992
Cited by 60 (6 self)
This paper describes techniques for dynamically modeling the 2D appearance and 3D geometry of a scene by integrating information from a moving camera. These techniques are illustrated by the design of a system which constructs a geometric description of a scene from the motion of a camera mounted on a robot arm. A framework
Detection of Stochastic Processes
 IEEE Trans. Inform. Theory
, 1998
Cited by 59 (7 self)
This paper reviews two streams of development, from the 1940s to the present, in signal detection theory: the structure of the likelihood ratio for detecting signals in noise, and the role of dynamic optimization in detection problems involving either very large signal sets or the joint optimization of observation time and performance. This treatment deals exclusively with basic results developed for the situation in which the observations are modeled as continuous-time stochastic processes. The mathematics and intuition behind such developments as the matched filter, the RAKE receiver, the estimator-correlator, maximum-likelihood sequence detectors, multiuser detectors, sequential probability ratio tests, and cumulative-sum quickest detectors are described. Index Terms—Dynamic programming, innovations processes, likelihood ratios, martingale theory, matched filters, optimal stopping, reproducing kernel Hilbert spaces, sequence detection, sequential methods, signal detection, signal estimation.
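The matched filter mentioned in the abstract reduces, for a known signal in white Gaussian noise, to correlating the received waveform with the signal template and comparing against a threshold. A discrete-time NumPy sketch (the template, noise level, and threshold choice are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Known signal template and two received waveforms: one containing the
# signal plus noise, one containing noise only.
template = np.sin(2 * np.pi * 0.1 * np.arange(64))
received = template + rng.normal(scale=0.5, size=64)
noise_only = rng.normal(scale=0.5, size=64)

def matched_statistic(x, s):
    # Correlation with the template: the discrete analogue of the
    # likelihood-ratio statistic for a known signal in white noise.
    return float(x @ s)

# Midpoint threshold between the signal-present and signal-absent means
# (an illustrative choice; a real detector sets it from error-rate targets).
threshold = 0.5 * float(template @ template)
detect_signal = matched_statistic(received, template) > threshold
detect_noise = matched_statistic(noise_only, template) > threshold
```

Correlating with the template maximizes the output signal-to-noise ratio among linear statistics, which is why it appears as the front end of so many of the detectors surveyed here.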
An Overview of Nonlinear Model Predictive Control Applications
 Nonlinear Predictive Control
, 2000
Cited by 57 (1 self)
This paper provides an overview of nonlinear model predictive control (NMPC) applications in industry, focusing primarily on recent applications reported by NMPC vendors. A brief summary of NMPC theory is presented to highlight issues pertinent to NMPC applications. Five industrial NMPC implementations are then discussed with reference to modeling, control, optimization, and implementation issues. Results from several industrial applications are presented to illustrate the benefits possible with NMPC technology. A discussion of future needs in NMPC theory and practice concludes the paper. 1. Introduction. The term Model Predictive Control (MPC) describes a class of computer control algorithms that control the future behavior of a plant through the use of an explicit process model. At each control interval the MPC algorithm computes an open-loop sequence of manipulated-variable adjustments in order to optimize future plant behavior. The first input in the optima...
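The receding-horizon computation described here (optimize an open-loop input sequence over a horizon, apply only the first input, repeat) can be sketched for an unconstrained scalar linear plant, where each horizon problem reduces to a least-squares solve. The plant parameters, horizon, and weights below are illustrative; real NMPC handles nonlinear models and constraints:

```python
import numpy as np

# Scalar plant x_{t+1} = a x_t + b u_t; drive x toward 0 with a
# receding-horizon controller (illustrative parameters, no constraints).
a, b = 1.2, 1.0   # open-loop unstable plant
N = 5             # prediction horizon
rho = 0.1         # input penalty weight

def mpc_step(x0):
    # Predicted states: x_{k+1} = a^{k+1} x0 + sum_{j<=k} a^{k-j} b u_j,
    # i.e. x_pred = f + G u.  Minimize ||f + G u||^2 + rho ||u||^2.
    G = np.zeros((N, N))
    for k in range(N):
        for j in range(k + 1):
            G[k, j] = a ** (k - j) * b
    f = np.array([a ** (k + 1) * x0 for k in range(N)])
    u = np.linalg.solve(G.T @ G + rho * np.eye(N), -G.T @ f)
    return u[0]   # apply only the first input (receding horizon)

x = 5.0
trajectory = [x]
for _ in range(20):
    x = a * x + b * mpc_step(x)
    trajectory.append(x)
```

Re-solving at every control interval is what lets the scheme absorb disturbances and model error: each step's optimization starts from the latest measured state rather than from a stale plan.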
Planning-based Prediction for Pedestrians
Cited by 54 (15 self)
Abstract—We present a novel approach for determining robot movements that efficiently accomplish the robot’s tasks while not hindering the movements of people within the environment. Our approach models the goal-directed trajectories of pedestrians using maximum entropy inverse optimal control. The advantage of this modeling approach is that its learned cost function generalizes to changes in the environment and to entirely different environments. We employ the predictions of this model of pedestrian trajectories in a novel incremental planner and quantitatively show the improvement in hindrance-sensitive robot trajectory planning provided by our approach.
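The core of the maximum entropy model is a distribution over trajectories in which probability decays exponentially with cost, P(path) ∝ exp(−cost(path)). A tiny enumeration sketch of that softmax-over-paths structure (the two actions and their per-step costs are hypothetical, not from the paper, which learns the cost function from demonstrations):

```python
import numpy as np
from itertools import product

# Maximum entropy model over trajectories: P(path) ∝ exp(-cost(path)).
# Hypothetical per-step costs for two actions: 0 = detour, 1 = direct.
step_cost = {0: 1.0, 1: 0.2}

paths = list(product([0, 1], repeat=4))                     # all 4-step paths
costs = np.array([sum(step_cost[a] for a in p) for p in paths])
probs = np.exp(-costs)
probs /= probs.sum()                                        # normalize

best = paths[int(np.argmax(probs))]   # cheapest path is the most probable
```

Cheaper paths get exponentially more probability mass, but no path gets zero: that soft preference is what makes the model usable for predicting where imperfect, goal-directed pedestrians are likely to walk.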
Distributing the Kalman filters for largescale systems
 IEEE Trans. on Signal Processing, http://arxiv.org/pdf/0708.0242
Cited by 54 (13 self)
Abstract—This paper presents a distributed Kalman filter to estimate the state of a sparsely connected, large-scale, -dimensional dynamical system monitored by a network of sensors. Local Kalman filters are implemented on -dimensional subsystems, obtained by spatially decomposing the large-scale system. The distributed Kalman filter is optimal under an th-order Gauss–Markov approximation to the centralized filter. We quantify the information loss due to this th-order approximation by the divergence, which decreases as the order increases. The order of the approximation leads to a bound on the dimension of the subsystems, hence providing a criterion for subsystem selection. The (approximated) centralized Riccati and Lyapunov equations are computed iteratively, with only local communication and low-order computation, by a distributed iterate collapse inversion (DICI) algorithm. We fuse the observations that are common among the local Kalman filters using bipartite fusion graphs and consensus averaging algorithms. The proposed algorithm achieves full distribution of the Kalman filter: nowhere in the network is storage, communication, or computation of full -dimensional vectors and matrices required; only low-dimensional vectors and matrices are communicated or used in the local computations at the sensors. In other words, knowledge of the state is itself distributed. Index Terms—Distributed algorithms, distributed estimation, information filters, iterative methods, Kalman filtering, large-scale systems, matrix inversion, sparse matrices.
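The consensus averaging step used to fuse common observations can be sketched as repeated neighbor averaging on the sensor graph: each sensor nudges its local value toward its neighbors' values, and all values converge to the network-wide mean. The graph and step size below are illustrative, not the paper's network:

```python
import numpy as np

# Consensus averaging on a small sensor graph (a 4-node path graph).
# Each node repeatedly moves toward its neighbors' values; all values
# converge to the network-wide mean while the sum stays fixed.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
values = np.array([4.0, 0.0, 2.0, 6.0])   # local measurements
mean_before = values.mean()

eps = 0.3   # step size; eps < 1/(2 * max_degree) is not needed, eps < 0.5 suffices here
for _ in range(200):
    new = values.copy()
    for i, nbrs in neighbors.items():
        new[i] += eps * sum(values[j] - values[i] for j in nbrs)
    values = new
```

Only neighbor-to-neighbor messages are exchanged, which is exactly the property that lets the distributed filter avoid any node ever handling full-dimension quantities.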