Results 1–10 of 163
Adaptive Sampling With the Ensemble Transform . . .
, 2001
Abstract
Cited by 321 (19 self)
A suboptimal Kalman filter called the ensemble transform Kalman filter (ETKF) is introduced. Like other Kalman filters, it provides a framework for assimilating observations and also for estimating the effect of observations on forecast error covariance. It differs from other ensemble Kalman filters in that it uses ensemble transformation and a normalization to rapidly obtain the prediction error covariance matrix associated with a particular deployment of observational resources. This rapidity enables it to quickly assess the ability of a large number of future feasible sequences of observational networks to reduce forecast error variance. The ETKF was used by the National Centers for Environmental Prediction in the Winter Storm Reconnaissance missions of 1999 and 2000 to determine where aircraft should deploy dropwindsondes in order to improve 24–72-h forecasts over the continental United States. The ETKF may be applied to any well-constructed set of ensemble perturbations.
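The ensemble-space algebra behind this rapidity can be sketched in a few lines. A minimal NumPy sketch, assuming scaled forecast perturbations `Zf`, a linear observation operator `H`, and an inverse observation-error covariance `Rinv` (the function name, shapes, and linearity assumption are illustrative, not from the paper):

```python
import numpy as np

def etkf_analysis_perturbations(Zf, H, Rinv):
    """Transform forecast perturbations into analysis perturbations (sketch).

    Zf   : n x k forecast ensemble perturbations (columns scaled by 1/sqrt(k-1))
    H    : p x n linear observation operator
    Rinv : p x p inverse observation-error covariance
    """
    HZ = H @ Zf                           # perturbations mapped to observation space
    C = HZ.T @ Rinv @ HZ                  # k x k: cheap to recompute per candidate network
    gamma, V = np.linalg.eigh(C)          # eigendecomposition in ensemble space
    T = V @ np.diag(1.0 / np.sqrt(1.0 + np.clip(gamma, 0.0, None)))
    return T, Zf @ T                      # analysis perturbations Za = Zf @ T
```

Because `C` is only k x k, assessing a candidate observation network costs one small eigendecomposition rather than a full covariance update, which is the speed advantage the abstract emphasizes.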
Epipolar-plane image analysis: An approach to determining structure from motion
 INTERN. J. COMPUTER VISION
, 1987
Abstract
Cited by 255 (3 self)
We present a technique for building a three-dimensional description of a static scene from a dense sequence of images. These images are taken in such rapid succession that they form a solid block of data in which the temporal continuity from image to image is approximately equal to the spatial continuity in an individual image. The technique utilizes knowledge of the camera motion to form and analyze slices of this solid. These slices directly encode not only the three-dimensional positions of objects, but also such spatiotemporal events as the occlusion of one object by another. For straight-line camera motions, these slices have a simple linear structure that makes them easier to analyze. The analysis computes the three-dimensional positions of object features, marks occlusion boundaries on the objects, and builds a three-dimensional map of "free space." In our article, we first describe the application of this technique to a simple camera motion, and then show how projective duality is used to extend the analysis to a wider class of camera motions and object types that include curved and moving objects.
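For the straight-line camera motions mentioned above, the linear structure of a feature's track in an epipolar-plane image directly encodes depth. A toy sketch under an assumed pinhole camera translating parallel to the image x-axis at constant speed (the function and its parameters are hypothetical, not from the paper):

```python
def depth_from_epi_slope(slope, focal_length, camera_speed):
    """Depth of a static point from the slope of its EPI trace (sketch).

    Assumes a pinhole camera translating parallel to the image x-axis at
    constant speed: the point's track in the epipolar-plane image is a
    straight line with slope dx/dt = f * v / Z, hence Z = f * v / slope.
    """
    if slope == 0:
        raise ValueError("zero slope corresponds to a point at infinity")
    return focal_length * camera_speed / slope
```

Nearby points trace steep lines and distant points trace shallow ones, which is why occlusions appear as one line terminating against a steeper one.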
Distance-Dependent Filtering of Background Error Covariance Estimates in an Ensemble Kalman Filter
, 2001
Abstract
Cited by 189 (31 self)
The usefulness of a distance-dependent reduction of background error covariance estimates in an ensemble Kalman filter is demonstrated. Covariances are reduced by performing an elementwise multiplication of the background error covariance matrix with a correlation function with local support. This reduces noisiness and results in an improved background error covariance estimate, which generates a reduced-error ensemble of model initial conditions.
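The elementwise (Schur) product described here is easy to sketch. A minimal NumPy version on a 1-D grid, using a simple cosine-squared taper as a stand-in for the compactly supported correlation function (a fifth-order Gaspari-Cohn function in practice); the names and the taper choice are assumptions of this sketch:

```python
import numpy as np

def localize_covariance(P, coords, cutoff):
    """Schur product of a sample covariance with a locally supported
    correlation function, as in covariance localization (sketch).

    P      : n x n sample background error covariance
    coords : length-n positions on a 1-D grid
    cutoff : distance beyond which covariances are zeroed
    """
    d = np.abs(coords[:, None] - coords[None, :])            # pairwise distances
    rho = np.where(d < cutoff,
                   np.cos(np.pi * d / (2.0 * cutoff)) ** 2,  # 1 at d=0, 0 at d=cutoff
                   0.0)
    return P * rho                                           # elementwise product
```

Zeroing the spurious long-range covariances produced by small ensembles is what reduces the "noisiness" the abstract refers to.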
Fusion of face and speech data for person identity verification
 IEEE Trans. Neural Networks
, 1999
Abstract
Cited by 120 (0 self)
Biometric person identity authentication is gaining more and more attention. The authentication task performed by an expert is a binary classification problem: reject or accept identity claim. Combining experts, each based on a different modality (speech, face, fingerprint, etc.), increases the performance and robustness of identity authentication systems. In this context, a key issue is the fusion of the different experts for taking a final decision (i.e., accept or reject identity claim). We propose to evaluate different binary classification schemes (support vector machine, multilayer perceptron, C4.5 decision tree, Fisher's linear discriminant, Bayesian classifier) to carry out the fusion. The experimental results show that support vector machines and the Bayesian classifier achieve almost the same performance, and both outperform the other evaluated classifiers. Index Terms—Bayesian decision, binary classifiers, biometrics, data fusion, face recognition, speaker recognition, support vector machine.
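One of the evaluated fusion schemes, Fisher's linear discriminant, reduces to a closed-form weight vector over the per-expert scores. A minimal NumPy sketch (the function name and the two-expert face/speech setup are illustrative, not from the paper):

```python
import numpy as np

def fisher_fusion_weights(genuine_scores, impostor_scores):
    """Fisher linear discriminant over per-expert scores (sketch).

    genuine_scores, impostor_scores : m x 2 arrays of (face, speech)
    expert scores for genuine and impostor identity claims.
    Accept a claim when w @ scores exceeds a threshold tuned on
    validation data.
    """
    mu1 = genuine_scores.mean(axis=0)
    mu0 = impostor_scores.mean(axis=0)
    Sw = np.cov(genuine_scores.T) + np.cov(impostor_scores.T)  # within-class scatter
    return np.linalg.solve(Sw, mu1 - mu0)                      # w ~ Sw^-1 (mu1 - mu0)
```

Unlike the SVM or MLP fusers compared in the paper, this linear rule has no hyperparameters, which is partly why it serves as a common baseline.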
Ensemble Square Root Filters
, 2003
Abstract
Cited by 116 (7 self)
Ensemble data assimilation methods assimilate observations using state-space estimation methods and low-rank representations of forecast and analysis error covariances. A key element of such methods is the transformation of the forecast ensemble into an analysis ensemble with appropriate statistics. This transformation may be performed stochastically by treating observations as random variables, or deterministically by requiring that the updated analysis perturbations satisfy the Kalman filter analysis error covariance equation. Deterministic analysis ensemble updates are implementations of Kalman square root filters. The non-uniqueness of the deterministic transformation used in square root Kalman filters provides a framework to compare three recently proposed ensemble data assimilation methods.
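The non-uniqueness in the last sentence is concrete: any transform T with T T^T = (I + C)^{-1} yields the same analysis covariance, so T can be post-multiplied by any orthogonal matrix. A NumPy sketch of one choice, the symmetric square root (names and shapes are illustrative, not from the paper):

```python
import numpy as np

def symmetric_sqrt_transform(Zf, H, Rinv):
    """Symmetric square-root ensemble transform T = (I + C)^(-1/2) (sketch).

    C = (H Zf)^T Rinv (H Zf). Any T' = T @ U with U orthogonal gives the
    same analysis covariance Zf T T^T Zf^T; that freedom is the
    non-uniqueness used to compare the deterministic filters.
    """
    C = (H @ Zf).T @ Rinv @ (H @ Zf)
    g, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(1.0 + np.clip(g, 0.0, None))) @ V.T
```

The three filters compared in the paper differ precisely in which valid square root they select.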
A Local Least Squares Framework for Ensemble Filtering
, 2003
Abstract
Cited by 88 (9 self)
Many methods using ensemble integrations of prediction models as integral parts of data assimilation have appeared in the atmospheric and oceanic literature. In general, these methods have been derived from the Kalman filter and have been known as ensemble Kalman filters. A more general class of methods including these ensemble Kalman filter methods is derived starting from the nonlinear filtering problem. When working in a joint state-observation space, many features of ensemble filtering algorithms are easier to derive and compare. The ensemble filter methods derived here make a (local) least squares assumption about the relation between prior distributions of an observation variable and model state variables. In this context, the update procedure applied when a new observation becomes available can be described in two parts. First, an update increment is computed for each prior ensemble estimate of the observation variable by applying a scalar ensemble filter. Second, a linear regression of the prior ensemble sample of each state variable on the observation variable is performed to compute update increments for each state variable ensemble member from corresponding observation variable increments. The regression can be applied globally or locally using Gaussian kernel methods.
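The two-part update described above can be sketched directly: a scalar, deterministic (EAKF-style) update of the observed-variable ensemble, then regression of its increments onto each state variable. A NumPy sketch with illustrative names; the particular scalar filter is one of several choices the framework admits:

```python
import numpy as np

def two_step_update(state_ens, obs_prior_ens, y, obs_err_var):
    """Two-step local least squares ensemble update (sketch).

    state_ens     : n x m ensemble (n state variables, m members)
    obs_prior_ens : length-m prior ensemble of the observed variable
    y, obs_err_var: observation value and its error variance
    """
    m = obs_prior_ens.size
    prior_mean = obs_prior_ens.mean()
    prior_var = obs_prior_ens.var(ddof=1)
    # Step 1: scalar Gaussian update, deterministic shift-and-shrink
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_err_var)
    post_mean = post_var * (prior_mean / prior_var + y / obs_err_var)
    shrink = np.sqrt(post_var / prior_var)
    obs_post = post_mean + shrink * (obs_prior_ens - prior_mean)
    dy = obs_post - obs_prior_ens                   # observation-space increments
    # Step 2: regress each state variable on the observed variable
    anomalies = state_ens - state_ens.mean(axis=1, keepdims=True)
    b = (anomalies @ (obs_prior_ens - prior_mean)) / ((m - 1) * prior_var)
    return state_ens + b[:, None] * dy[None, :]     # state-space increments
```

Because step 1 is purely scalar, swapping in a different scalar filter (stochastic, kernel-based, etc.) changes nothing in step 2, which is the modularity the abstract highlights.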
Assessing the effects of data selection with the DAO Physical-space Statistical Analysis System
 Mon. Wea. Rev
, 1998
Abstract
Cited by 49 (9 self)
Conventional optimal interpolation (OI) analysis systems solve the standard statistical analysis equations approximately, by invoking a local approximation and a data selection procedure. Although solution of the analysis equations is essentially exact in the recent generation of global spectral variational analysis systems, these new systems also include substantial changes in error covariance modeling, making it difficult to discern whether improvements in analysis and forecast quality are due to exact, global solution of the analysis equations, or to changes in error covariance modeling. The formulation and implementation of a new type of global analysis system at the Data Assimilation Office, termed the Physical-space Statistical Analysis System (PSAS), is described in this article. Since this system operates directly in physical space, it is capable of employing error covariance models identical to those of the predecessor OI system, as well as more advanced models. To focus strictly on the effect of global versus local solution of the analysis equations, a comparison between PSAS and OI analyses is carried out with both systems using identical error covariance models and identical data. Spectral decomposition of the analysis increments reveals that, relative to the PSAS increments, the OI increments have too little power at large horizontal scales and excessive power at small horizontal scales. The OI increments also display an unrealistically large ratio of divergence to vorticity. Dynamical imbalances in the OI-analyzed state can therefore be attributed in part to the approximate local method of solution, and are not entirely due to the simple geostrophic constraint built into the forecast error covariance model. Root-mean-square observation minus 6-h forecast errors in the zonal wind component are substantially smaller for the PSAS system than for the OI system.
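The exact global solution that distinguishes PSAS from data-selecting OI is just the standard statistical analysis equation solved in observation space. A minimal dense NumPy sketch (all names are illustrative; operational systems use covariance models and iterative solvers, not explicit matrices):

```python
import numpy as np

def psas_style_analysis(xf, y, H, B, R):
    """Global analysis solved directly in physical (observation) space (sketch).

    Solves (H B H^T + R) w = y - H xf for w, then x^a = xf + B H^T w,
    i.e. the standard analysis equation with no local data selection.
    """
    innovation = y - H @ xf
    S = H @ B @ H.T + R                  # innovation covariance
    w = np.linalg.solve(S, innovation)   # exact global solve
    return xf + B @ H.T @ w
```

OI approximates `w` by solving many small local versions of this system with only nearby observations, which is exactly the approximation whose side effects the article isolates.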
A comparison between the 4DVAR and the ensemble Kalman filter techniques for radar data assimilation
 IN REVIEW
, 2005
Abstract
Cited by 48 (4 self)
A four-dimensional variational data assimilation (4DVAR) algorithm is compared to an ensemble Kalman filter (EnKF) for the assimilation of radar data at the convective scale. Using a cloud-resolving model, simulated, imperfect radar observations of a supercell storm are assimilated under the assumption of a perfect forecast model. Overall, both assimilation schemes perform well and are able to recover the supercell with comparable accuracy, given radial-velocity and reflectivity observations where rain was present. 4DVAR produces generally better analyses than the EnKF given observations limited to a period of 10 min (or three volume scans), particularly for the wind components. In contrast, the EnKF typically produces better analyses than 4DVAR after several assimilation cycles, especially for model variables not functionally related to the observations. The advantages of the EnKF in later cycles arise at least in part from the fact that the 4DVAR scheme implemented here does not use a forecast from a previous cycle as background or evolve its error covariance. Possible reasons for the initial advantage of 4DVAR are deficiencies in the initial ensemble used by the EnKF, the temporal smoothness constraint used in 4DVAR, and nonlinearities in the evolution of forecast errors over the assimilation window.
Maximum-likelihood estimation of forecast and observation error covariance parameters. Part I: Methodology
, 1998
Abstract
Cited by 46 (6 self)
The maximum-likelihood method for estimating observation and forecast error covariance parameters is described. The method is presented in general terms but with particular emphasis on practical aspects of implementation. Issues such as bias estimation and correction, parameter identifiability, estimation accuracy, and robustness of the method are discussed in detail. The relationship between the maximum-likelihood method and Generalized Cross-Validation is briefly addressed. The method can be regarded as a generalization of the traditional procedure for estimating covariance parameters from station data. It does not involve any restrictions on the covariance models and can be used with data from moving observers, provided the parameters to be estimated are identifiable. Any available a priori information about the observation and forecast error distributions can be incorporated into the estimation procedure. Estimates of parameter accuracy due to sampling error are obtained as a byp...
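In the simplest setting the method reduces to matching innovation statistics. A toy NumPy sketch assuming i.i.d. innovations d ~ N(0, sigma_b^2 + r) with the background variance sigma_b^2 known, where maximizing the Gaussian log-likelihood over r has a closed form; this scalar setup and the names are assumptions of the sketch, far simpler than the paper's general formulation:

```python
import numpy as np

def ml_obs_error_variance(innovations, background_var):
    """Maximum-likelihood estimate of a scalar observation-error variance
    from observation-minus-forecast (innovation) samples (toy sketch).

    With d_i ~ N(0, background_var + r) i.i.d., maximizing the Gaussian
    log-likelihood over r gives r_hat = max(0, mean(d^2) - background_var).
    """
    d = np.asarray(innovations, dtype=float)
    return max(0.0, float(np.mean(d ** 2)) - background_var)
```

The paper's machinery generalizes this idea to correlated, spatially varying covariance models with many parameters, where no closed form exists and the likelihood must be maximized numerically.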
Adjusting the Outputs of a Classifier to New a Priori Probabilities May Significantly Improve Classification Accuracy: Evidence from a Multi-Class Problem in Remote Sensing
 NEURAL COMPUTATION
, 2001
Abstract
Cited by 43 (2 self)
In the present study, we introduce a simple iterative procedure that allows one to correct the outputs of a classifier with respect to the new a priori probabilities of a new data set to be scored, even when these new a priori probabilities are unknown in advance. We also show that a significant increase in classification accuracy can be observed when using this procedure properly. More specifically, by applying the correcting procedure to the outputs of a simple logistic regression model, we observe an increase of 5.8% in classification rate on a difficult real-world multi-class problem: the automatic labeling of geographical maps based on remote sensing information.
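The iterative correction can be sketched as a small EM-style loop: reweight each posterior by the ratio of the current estimated priors to the training priors, renormalize per example, and re-estimate the priors as the average adjusted posterior. A NumPy sketch with illustrative names (the fixed iteration count stands in for a convergence test, an assumption of this sketch):

```python
import numpy as np

def adjust_to_new_priors(posteriors, train_priors, n_iter=100):
    """Iteratively adjust classifier outputs to unknown new class priors (sketch).

    posteriors   : N x C matrix of p(c|x) from a classifier trained
                   under class priors train_priors (length C)
    Returns (estimated new priors, corrected N x C posteriors).
    """
    new_priors = train_priors.copy()
    for _ in range(n_iter):
        w = posteriors * (new_priors / train_priors)   # reweight by prior ratio
        adjusted = w / w.sum(axis=1, keepdims=True)    # renormalize per example
        new_priors = adjusted.mean(axis=0)             # re-estimate the priors
    return new_priors, adjusted
```

Classification then uses `adjusted.argmax(axis=1)` instead of the raw posteriors, which is where the reported accuracy gain comes from when the new data's class mix differs from the training set's.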