Results 11–20 of 577
Local low dimensionality of atmospheric dynamics
Phys. Rev. Lett., 2001
Cited by 53 (18 self)
Recent studies (Patil et al. 2001, 2002) have shown that, when the Earth’s surface is divided into local regions of moderate size, vectors of the forecast uncertainties in such regions tend to lie in a subspace of much lower dimension than that of the full atmospheric state vector. In this paper we show how this finding can be exploited to formulate a potentially accurate and efficient data assimilation technique. The basic idea is that, since the expected forecast errors lie in a locally low-dimensional subspace, the analysis resulting from the data assimilation should also lie in this subspace. This implies that operations only on relatively low-dimensional matrices are required. The data assimilation analysis is done locally in a manner that allows massively parallel computation to be exploited. The local analyses are then used to construct global states for advancement to the next forecast time. Potential advantages of the method are discussed.
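The key property described here, that the analysis stays in the low-dimensional subspace spanned by the forecast ensemble, can be seen in a minimal ensemble Kalman analysis step. This is a generic sketch rather than the authors' local algorithm; the function name and toy dimensions are illustrative:

```python
import numpy as np

def enkf_analysis(X, y, H, R):
    """One ensemble Kalman analysis step (illustrative sketch).

    X: (n, m) ensemble of forecast states, y: observation vector,
    H: observation operator, R: observation error covariance.
    The gain is built from ensemble anomalies, so every analysis
    increment lies in their (low-dimensional) span.
    """
    n, m = X.shape
    xbar = X.mean(axis=1, keepdims=True)
    A = X - xbar                              # anomalies span the subspace
    Y = H @ A                                 # anomalies mapped to obs space
    Pyy = Y @ Y.T / (m - 1) + R               # innovation covariance
    K = (A @ Y.T / (m - 1)) @ np.linalg.inv(Pyy)
    return X + K @ (y[:, None] - H @ X)       # update each member
```

Because `K` has columns in the span of the anomalies `A`, state components with zero ensemble spread are left untouched by the update.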
Climate change and salmon production in the Northeast Pacific Ocean
In R.J. Beamish (Ed.), Climate Change and Northern Fish Populations, 1994
Cited by 53 (5 self)
Abstract: Alaskan salmon stocks have exhibited enormous fluctuations in production during the 20th century. In this paper, we investigate our hypothesis that large-scale salmon-production variability is driven by climatic processes in the Northeast Pacific Ocean. Using a time-series analytical technique known as intervention analysis, we demonstrate that Alaskan salmonids alternate between high and low production regimes. The transition from a high (low) regime to a low (high) regime is called an intervention. To test for interventions, we first fitted the salmon time series to univariate autoregressive integrated moving average (ARIMA) models. On the basis of tentatively identified climatic regime shifts, potential interventions were then identified and incorporated into the models, and the resulting fit was compared with the non-intervention models. A highly significant positive step intervention in the late 1970s and a significant negative step intervention in the late 1940s were identified in the four major Alaska salmon stocks analyzed. We review the evidence for synchronous climatic regime shifts in the late 1940s and late 1970s that coincide with the shifts in salmon production. Potential mechanisms linking North Pacific climatic processes to salmon production are identified.
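Step-intervention testing of the kind described can be sketched by comparing an autoregressive fit with and without a step regressor at the candidate shift time. This is a simplified stand-in for a full ARIMA intervention analysis; `step_intervention_fit` and the AR(1) structure are illustrative assumptions:

```python
import numpy as np

def step_intervention_fit(y, t0):
    """Fit y_t = c + phi*y_{t-1} + delta*step_t by least squares, with
    step_t = 1 for t >= t0, and compare against the same model without
    the step term. Returns (delta_hat, sse_with, sse_without)."""
    y = np.asarray(y, float)
    n = len(y)
    step = (np.arange(n) >= t0).astype(float)
    Y, lag = y[1:], y[:-1]
    X1 = np.column_stack([np.ones(n - 1), lag, step[1:]])  # with intervention
    X0 = X1[:, :2]                                         # without
    b1, *_ = np.linalg.lstsq(X1, Y, rcond=None)
    b0, *_ = np.linalg.lstsq(X0, Y, rcond=None)
    sse1 = float(np.sum((Y - X1 @ b1) ** 2))
    sse0 = float(np.sum((Y - X0 @ b0) ** 2))
    return b1[2], sse1, sse0
```

A large drop in residual sum of squares when the step is included is the kind of evidence the intervention test formalizes.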
An optimal estimation approach to visual perception and learning
Vision Research, 1999
Cited by 51 (8 self)
How does the visual system learn an internal model of the external environment? How is this internal model used during visual perception? How are occlusions and background clutter so effortlessly discounted when recognizing a familiar object? How is a particular object of interest attended to and recognized in the presence of other objects in the field of view? In this paper, we attempt to address these questions from the perspective of Bayesian optimal estimation theory. Using the concept of generative models and the statistical theory of Kalman filtering, we show how static and dynamic events occurring in the visual environment may be learned and recognized given only the input images. We also describe an extension of the Kalman filter model that can handle multiple objects in the field of view. The resulting robust Kalman filter model demonstrates how certain forms of attention can be viewed as an emergent property of the interaction between top-down expectations and bottom-up signals. Experimental results are provided to help demonstrate the ability of such a model to perform robust segmentation and recognition of objects and image sequences in the presence of occlusions and clutter.
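A minimal flavor of the "robust Kalman filter" idea, discounting measurements that the generative model cannot explain (such as occluded inputs), can be sketched with a scalar filter and an innovation gate. This is not the authors' model; the gating rule and parameters are illustrative:

```python
def robust_kalman_1d(zs, q=0.01, r=0.1, gate=3.0):
    """Scalar Kalman filter on a random-walk state that skips any update
    whose normalized innovation exceeds `gate` standard deviations --
    a crude stand-in for discounting occluded or cluttered measurements."""
    x, p = 0.0, 1.0
    estimates = []
    for z in zs:
        p += q                        # predict: random-walk state
        s = p + r                     # innovation variance
        if abs(z - x) / s ** 0.5 <= gate:
            k = p / s                 # Kalman gain
            x += k * (z - x)
            p *= (1 - k)
        estimates.append(x)           # gated outliers leave x unchanged
    return estimates
```

With the gate, a single wildly inconsistent measurement (an "occlusion") is ignored rather than dragging the estimate away.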
Principles and Techniques for Sensor Data Fusion
, 1993
Cited by 39 (6 self)
This paper concerns a problem which is basic to perception: the integration of perceptual information into a coherent description of the world. In this paper we present perception as a process of dynamically maintaining a model of the local external environment. Fusion of perceptual information is at the heart of this process.
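The core operation behind maintaining such a model, combining independent noisy estimates of the same quantity, can be sketched as inverse-variance weighting. This is a generic illustration of sensor fusion, not the paper's specific framework:

```python
def fuse(estimates):
    """Inverse-variance fusion of independent (mean, variance) estimates.

    More certain sensors (smaller variance) get larger weight, and the
    fused variance is never larger than any input variance."""
    weights = [1.0 / v for _, v in estimates]
    mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / sum(weights)
    return mean, 1.0 / sum(weights)
```

For example, fusing two equally reliable readings of 1.0 and 3.0 yields their average with half the variance of either reading.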
On Unscented Kalman Filtering for State Estimation of Continuous-Time Nonlinear Systems
, 2007
Cited by 37 (8 self)
This article considers the application of the unscented Kalman filter (UKF) to continuous-time filtering problems, where both the state and measurement processes are modeled as stochastic differential equations. The mean and covariance differential equations that result in the continuous-time limit of the UKF are derived. The continuous-discrete unscented Kalman filter is derived as a special case of the continuous-time filter, when the continuous-time prediction equations are combined with the update step of the discrete-time unscented Kalman filter. The filter equations are also transformed into sigma-point differential equations, which can be interpreted as matrix square-root versions of the filter equations.
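The machinery underlying the UKF can be illustrated with the basic unscented transform, which propagates a mean and covariance through a nonlinearity via 2n+1 sigma points. This sketch uses a common symmetric sigma-point scheme; the weighting parameter `kappa` is an assumption, and the paper's continuous-time limit is not reproduced here:

```python
import numpy as np

def unscented_transform(mu, P, f, kappa=1.0):
    """Propagate mean `mu` and covariance `P` through the nonlinearity `f`
    using 2n+1 symmetric sigma points (illustrative weighting scheme)."""
    n = len(mu)
    L = np.linalg.cholesky((n + kappa) * P)      # matrix square root of scaled P
    pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)                   # weights sum to 1
    ys = np.array([f(p) for p in pts])           # push sigma points through f
    m = w @ ys                                   # transformed mean
    d = ys - m
    C = (w[:, None] * d).T @ d                   # transformed covariance
    return m, C
```

For a linear map the transform is exact, which is a useful sanity check on any sigma-point implementation.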
Unscented Rauch-Tung-Striebel Smoother
Cited by 36 (3 self)
This article considers the application of the unscented transform to optimal smoothing of nonlinear state space models. A new Rauch-Tung-Striebel-type form of the fixed-interval unscented Kalman smoother is derived. The new smoother differs from the previously proposed two-filter formulation of the unscented Kalman smoother in that it is not based on running two independent filters forward and backward in time. Instead, a separate backward smoothing pass is used, which recursively computes corrections to the forward filtering result. The smoother equations are derived as approximations to the formal Bayesian optimal smoothing equations. The performance of the new smoother is demonstrated with a simulation.
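The forward-filter plus backward-correction structure of a Rauch-Tung-Striebel smoother is easiest to see in the linear scalar case. The sketch below is that linear analogue, not the unscented smoother itself; the model parameters are illustrative:

```python
def kf_rts_1d(zs, a=1.0, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter forward pass, then a Rauch-Tung-Striebel
    backward pass that recursively corrects the filtered estimates.
    Returns (filtered_means, smoothed_means)."""
    xf, pf, xp, pp = [], [], [], []
    x, p = x0, p0
    for z in zs:
        xpred, ppred = a * x, a * a * p + q        # predict
        k = ppred / (ppred + r)                    # update
        x = xpred + k * (z - xpred)
        p = (1 - k) * ppred
        xp.append(xpred); pp.append(ppred); xf.append(x); pf.append(p)
    xs = xf[:]                                     # backward smoothing pass
    for t in range(len(zs) - 2, -1, -1):
        g = pf[t] * a / pp[t + 1]                  # smoother gain
        xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
    return xf, xs
```

Early filtered estimates see only early data; the backward pass lets them benefit from later measurements, so they end up closer to the truth.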
Little Ben: The Ben Franklin Racing Team’s Entry in the 2007 DARPA Urban Challenge
, 2008
Cited by 35 (4 self)
This paper describes “Little Ben,” an autonomous ground vehicle constructed by the Ben Franklin Racing Team for the 2007 DARPA Urban Challenge in under a year and for less than $250,000. The sensing, planning, navigation, and actuation systems for Little Ben were carefully designed to meet the performance demands required of an autonomous vehicle traveling in an uncertain urban environment. We incorporated a global positioning system (GPS)/inertial navigation system, LIDARs, and stereo cameras to provide timely information about the surrounding environment at the appropriate ranges. This sensor information was integrated into a dynamic map that could robustly handle GPS dropouts and errors. Our planning algorithms consisted of a high-level mission planner that used information from the provided route network definition and mission data files to select routes, whereas the lower-level planner used the latest dynamic map information to optimize a feasible trajectory to the next waypoint. The vehicle was actuated by a cost-based controller that efficiently handled steering, throttle, and braking maneuvers in both forward and reverse directions. Our software modules were integrated within a hierarchical architecture that allowed rapid development and testing of the system performance. The resulting vehicle was one of six to successfully finish the Urban Challenge.
Tidal Flow Forecasting using Reduced Rank Square Root Filters
Hydraul, 1996
Cited by 35 (2 self)
The Kalman filter algorithm can be used for many data assimilation problems. For large systems that arise from discretizing partial differential equations, the standard algorithm has huge computational and storage requirements, which makes direct use infeasible for many applications. In addition, numerical difficulties may arise if, due to finite-precision computations or approximations of the error covariance, the requirement that the error covariance be positive semidefinite is violated. In this paper an approximation to the Kalman filter algorithm is suggested that solves these problems for many applications. The algorithm is based on a reduced-rank approximation of the error covariance using a square-root factorization. The use of the factorization ensures that the error covariance matrix remains positive semidefinite at all times, while the smaller rank reduces the number of computations and storage requirements. The number of computations and the storage required depend on the ...
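The rank-reduction step at the heart of such a filter can be sketched as an SVD truncation of the square-root factor, which keeps P = S S^T positive semidefinite by construction. This is a generic illustration; the paper's propagation and update steps are not shown:

```python
import numpy as np

def truncate_sqrt_factor(S, q):
    """Reduce an n x m covariance square-root factor S (with P = S S^T)
    to rank q by keeping the leading singular directions. The result
    Sq satisfies Sq Sq^T ~= P, and Sq Sq^T is PSD by construction."""
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :q] * s[:q]          # columns scaled by singular values
```

Because P = S S^T depends on S only through U and the singular values, keeping all directions reproduces P exactly, and truncation discards only the smallest-variance directions.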
A survey of sequential Monte Carlo methods for economics and finance
, 2009
Cited by 34 (7 self)
This paper serves as an introduction and survey for economists to the field of sequential Monte Carlo methods, which are also known as particle filters. Sequential Monte Carlo methods are simulation-based algorithms used to compute the high-dimensional and/or complex integrals that arise regularly in applied work. These methods are becoming increasingly popular in economics and finance, from dynamic stochastic general equilibrium models in macroeconomics to option pricing. The objective of this paper is to explain the basics of the methodology, provide references to the literature, and cover some of the theoretical results that justify the methods in practice.
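A bootstrap particle filter, the simplest sequential Monte Carlo method of the kind surveyed here, can be sketched for a random-walk state observed in Gaussian noise. The model and parameters are illustrative, not from the paper:

```python
import numpy as np

def bootstrap_filter(zs, n_particles=500, q=0.1, r=0.5, rng=None):
    """Bootstrap particle filter: propagate, weight by likelihood,
    resample. Returns the filtered posterior means."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, n_particles)            # initial particles
    means = []
    for z in zs:
        x = x + rng.normal(0.0, q, n_particles)      # propagate dynamics
        logw = -0.5 * ((z - x) / r) ** 2             # Gaussian log-likelihood
        w = np.exp(logw - logw.max())                # stabilized weights
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)
        x = x[idx]                                   # multinomial resampling
        means.append(float(np.mean(x)))
    return means
```

Weighting and resampling concentrate the particle cloud on states consistent with the observations, approximating the filtering distribution.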
Computational Methods for Complex Stochastic Systems: A Review of Some Alternatives to MCMC
Cited by 34 (5 self)
We consider analysis of complex stochastic models based upon partial information. MCMC and reversible jump MCMC are often the methods of choice for such problems, but in some situations they can be difficult to implement and suffer from problems such as poor mixing and the difficulty of diagnosing convergence. Here we review three alternatives to MCMC methods: importance sampling, the forward-backward algorithm, and sequential Monte Carlo (SMC). We discuss how to design good proposal densities for importance sampling, show some of the range of models for which the forward-backward algorithm can be applied, and show how resampling ideas from SMC can be used to improve the efficiency of the other two methods. We demonstrate these methods on a range of examples, including estimating the transition density of a diffusion and of a discrete-state continuous-time Markov chain; inferring structure in population genetics; and segmenting genetic divergence data.
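The importance-sampling building block reviewed here can be sketched as self-normalized importance sampling, which estimates an expectation under an unnormalized target using draws from a tractable proposal. The function name and the test densities are illustrative:

```python
import numpy as np

def snis_expectation(f, log_target, sample_proposal, log_proposal, n, rng):
    """Self-normalized importance sampling estimate of E_target[f(X)].

    Only an unnormalized log-target is needed: normalizing constants
    cancel when the weights are renormalized."""
    x = sample_proposal(n, rng)
    logw = log_target(x) - log_proposal(x)     # log importance weights
    w = np.exp(logw - logw.max())              # stabilize before exponentiating
    w /= w.sum()                               # self-normalize
    return float(np.sum(w * f(x)))
```

A good proposal covers the target's mass (e.g. heavier tails), which keeps the normalized weights from degenerating onto a few samples.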