Results 1–10 of 200
Adaptive Sampling With the Ensemble Transform ...
, 2001
Abstract

Cited by 321 (19 self)
A suboptimal Kalman filter called the ensemble transform Kalman filter (ETKF) is introduced. Like other Kalman filters, it provides a framework for assimilating observations and also for estimating the effect of observations on forecast error covariance. It differs from other ensemble Kalman filters in that it uses ensemble transformation and a normalization to rapidly obtain the prediction error covariance matrix associated with a particular deployment of observational resources. This rapidity enables it to quickly assess the ability of a large number of future feasible sequences of observational networks to reduce forecast error variance. The ETKF was used by the National Centers for Environmental Prediction in the Winter Storm Reconnaissance missions of 1999 and 2000 to determine where aircraft should deploy dropwindsondes in order to improve 24–72-h forecasts over the continental United States. The ETKF may be applied to any well-constructed set of ensemble perturbations. The ETKF ...
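The transform step this abstract describes can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's code: the function name, interface, and the symmetric square-root form of the transform are assumptions made here for the demonstration.

```python
import numpy as np

def etkf_transform(Xf, H, R):
    """Sketch of an ETKF analysis-perturbation transform (illustrative API).

    Xf: (n, k) forecast ensemble, H: (m, n) observation operator,
    R: (m, m) observation-error covariance.
    Returns normalized perturbations Xp and a (k, k) transform T such that
    Xp @ T spans the Kalman analysis covariance within the ensemble subspace.
    """
    n, k = Xf.shape
    xbar = Xf.mean(axis=1, keepdims=True)
    Xp = (Xf - xbar) / np.sqrt(k - 1)          # normalized forecast perturbations
    L = np.linalg.cholesky(R)
    S = np.linalg.solve(L, H @ Xp)             # perturbations scaled by R^{-1/2} in obs space
    gamma, C = np.linalg.eigh(S.T @ S)         # S^T S = C diag(gamma) C^T
    T = C @ np.diag(1.0 / np.sqrt(1.0 + gamma)) @ C.T   # symmetric square-root form
    return Xp, T
```

Because the eigendecomposition acts on a small k-by-k matrix, the analysis covariance for a candidate observation network can be obtained at ensemble-size cost, which is what makes the rapid network-screening use described above feasible.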
An Ensemble Adjustment Kalman Filter for Data Assimilation
, 2001
Abstract

Cited by 283 (12 self)
A theory for estimating the probability distribution of the state of a model given a set of observations exists. This nonlinear ...
Ensemble Data Assimilation without Perturbed Observations
 MON. WEA. REV
, 2002
Abstract

Cited by 278 (21 self)
The ensemble Kalman filter (EnKF) is a data assimilation scheme based on the traditional Kalman filter update equation. An ensemble of forecasts is used to estimate the background-error covariances needed to compute the Kalman gain. It is known that if the same observations and the same gain are used to update each member of the ensemble, the ensemble will systematically underestimate analysis-error covariances. This will cause a degradation of subsequent analyses and may lead to filter divergence. For large ensembles, it is known that this problem can be alleviated by treating the observations as random variables, adding random perturbations to them with the correct statistics. Two important ...
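The variance deficit described in this abstract is easy to reproduce for a scalar state. The unit background and observation variances below are arbitrary choices for the demonstration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 100_000                  # large ensemble, so sampling noise is negligible
pf, r, y = 1.0, 1.0, 0.0     # background variance, obs-error variance, observation
gain = pf / (pf + r)         # Kalman gain; correct analysis variance is (1 - gain) * pf = 0.5

xf = rng.normal(0.0, np.sqrt(pf), k)   # forecast ensemble

# Same observation and same gain for every member: spread is underestimated,
# since xa = (1 - gain) * xf has variance (1 - gain)^2 * pf = 0.25.
xa_same = xf + gain * (y - xf)

# Perturbing the observation for each member with the correct statistics
# restores the analysis spread: (1 - gain)^2 * pf + gain^2 * r = 0.5.
xa_pert = xf + gain * (y + rng.normal(0.0, np.sqrt(r), k) - xf)
```

With identical observations the analysis variance collapses to 0.25 instead of the correct 0.5, which is exactly the systematic underestimation the perturbed-observation (or, in this paper, the deterministic square-root) approach is designed to avoid.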
Distance-Dependent Filtering of Background Error Covariance Estimates in an Ensemble Kalman Filter
, 2001
Abstract

Cited by 189 (31 self)
The usefulness of a distance-dependent reduction of background error covariance estimates in an ensemble Kalman filter is demonstrated. Covariances are reduced by performing an element-wise multiplication of the background error covariance matrix with a correlation function with local support. This reduces noisiness and results in an improved background error covariance estimate, which generates a reduced-error ensemble of model initial conditions. The benefits ...
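The element-wise (Schur) product can be illustrated with the compactly supported Gaspari–Cohn correlation function, which is a standard choice for this kind of localization. The grid size, length scales, and ensemble size below are arbitrary choices for the demonstration.

```python
import numpy as np

def gaspari_cohn(z):
    """Gaspari-Cohn fifth-order piecewise-rational correlation; zero beyond z = 2."""
    z = np.abs(z)
    c = np.zeros_like(z)
    near, far = z <= 1.0, (z > 1.0) & (z < 2.0)
    x = z[near]
    c[near] = -0.25*x**5 + 0.5*x**4 + 0.625*x**3 - (5/3)*x**2 + 1.0
    x = z[far]
    c[far] = x**5/12 - 0.5*x**4 + 0.625*x**3 + (5/3)*x**2 - 5.0*x + 4.0 - 2.0/(3.0*x)
    return c

rng = np.random.default_rng(0)
n, k = 40, 20                                    # state dimension, ensemble size
d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
P_true = np.exp(-d / 5.0)                        # true covariance, decaying with distance
X = np.linalg.cholesky(P_true) @ rng.normal(size=(n, k))
P_samp = np.cov(X)                               # noisy sample covariance (rank <= k - 1)
P_loc = gaspari_cohn(d / 10.0) * P_samp          # Schur product: damp distant entries

err_raw = np.linalg.norm(P_samp - P_true)        # Frobenius error without localization
err_loc = np.linalg.norm(P_loc - P_true)         # error after localization
```

With only 20 members the sample covariance has large spurious long-range entries; tapering them with the locally supported correlation function reduces the overall estimation error, which is the effect the abstract reports.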
Efficient data assimilation for spatiotemporal chaos: A local ensemble transform Kalman filter
 Physica D
, 2007
Abstract

Cited by 147 (11 self)
Data assimilation is an iterative approach to the problem of estimating the state of a dynamical system using both current and past observations of the system together with a model for the system’s time evolution. Rather than solving the problem from scratch each time new observations become available, one uses the model to “forecast” the current state, using a prior state estimate (which incorporates information from past data) as the initial condition, then uses current data to correct the prior forecast to a current state estimate. This Bayesian approach is most effective when the uncertainty in both the observations and in the state estimate, as it evolves over time, is accurately quantified. In this article, I describe a practical method for data assimilation in large, spatiotemporally chaotic systems. The method is a type of “Ensemble Kalman Filter”, in which the state estimate and its approximate uncertainty are represented at any given time by an ensemble of system states. I discuss both the mathematical basis of this approach and its implementation; my primary emphasis is on ease of use and computational speed rather than improving accuracy over previously published approaches to ensemble Kalman filtering.
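The forecast-then-correct cycle described in this abstract can be written down explicitly for a scalar linear model. The model coefficient and error variances below are assumptions made here for illustration; the article itself treats large chaotic systems with an ensemble in place of the explicit variance.

```python
a, q, r = 0.9, 0.1, 0.5     # model dynamics, model-error and obs-error variances (assumed)

def assimilation_cycle(x, p, y):
    """One forecast/analysis cycle: forecast the prior estimate, then correct it with y."""
    xf = a * x                   # use the model to forecast the current state
    pf = a * a * p + q           # forecast uncertainty grows with the dynamics and model error
    gain = pf / (pf + r)         # weight current data against the prior forecast
    xa = xf + gain * (y - xf)    # corrected (analysis) state estimate
    pa = (1.0 - gain) * pf       # analysis uncertainty, reduced by the observation
    return xa, pa

# One cycle starting from prior estimate 0.0 with variance 1.0, observing y = 1.0.
xa, pa = assimilation_cycle(0.0, 1.0, 1.0)
```

The analysis lands between the forecast and the observation, and its variance is smaller than the forecast variance; iterating this cycle, with an ensemble standing in for the explicit variances, is the structure the local ensemble transform Kalman filter makes practical in high dimension.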
OBSTACLES TO HIGH-DIMENSIONAL PARTICLE FILTERING
Abstract

Cited by 94 (4 self)
Particle filters are ensemble-based assimilation schemes that, unlike the ensemble Kalman filter, employ a fully nonlinear and non-Gaussian analysis step to compute the probability distribution function (pdf) of a system’s state conditioned on a set of observations. Evidence is provided that the ensemble size required for a successful particle filter scales exponentially with the problem size. For the simple example in which each component of the state vector is independent, Gaussian and of unit variance, and the observations are of each state component separately with independent, Gaussian errors, simulations indicate that the required ensemble size scales exponentially with the state dimension. In this example, the particle filter requires at least 10^11 members when applied to a 200-dimensional state. Asymptotic results, following the work of Bengtsson, Bickel and collaborators, are provided for two cases: one in which each prior state component is independent and identically distributed, and one in which both the prior pdf and the observation errors are Gaussian. The asymptotic theory reveals that, in both cases, the required ensemble size scales exponentially with the variance of the observation log-likelihood, rather than with the state dimension per se.
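The weight collapse behind this exponential scaling can be reproduced directly in the abstract's own setup (independent unit-variance Gaussian state components, each observed with independent Gaussian error). The particle count, dimensions, and seeds below are arbitrary choices for the demonstration.

```python
import numpy as np

k = 1000          # particle (ensemble) size

def max_weight(d, seed=0):
    """Largest normalized particle weight for the independent-Gaussian example."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(k, d))                # unit-variance Gaussian prior particles
    y = rng.normal(size=d) * np.sqrt(2.0)      # observation = truth + unit-variance noise
    logw = -0.5 * ((y - x) ** 2).sum(axis=1)   # Gaussian observation log-likelihood
    logw -= logw.max()                         # stabilize before exponentiating
    w = np.exp(logw)
    return (w / w.sum()).max()

def avg_max_weight(d, trials=20):
    """Average the largest weight over independent trials to smooth seed-to-seed noise."""
    return float(np.mean([max_weight(d, s) for s in range(trials)]))

# In low dimension the weights are spread over many particles; by d = 200
# a single particle typically carries most of the posterior mass.
w_low, w_high = avg_max_weight(2), avg_max_weight(200)
```

The variance of the log-likelihood grows linearly with the dimension here, so the largest weight approaches one as d grows, illustrating why the required ensemble size must grow exponentially to keep the filter from degenerating.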
A Local Least Squares Framework for Ensemble Filtering
, 2003
Abstract

Cited by 88 (9 self)
Many methods using ensemble integrations of prediction models as integral parts of data assimilation have appeared in the atmospheric and oceanic literature. In general, these methods have been derived from the Kalman filter and have been known as ensemble Kalman filters. A more general class of methods including these ensemble Kalman filter methods is derived starting from the nonlinear filtering problem. When working in a joint state observation space, many features of ensemble filtering algorithms are easier to derive and compare. The ensemble filter methods derived here make a (local) least squares assumption about the relation between prior distributions of an observation variable and model state variables. In this context, the update procedure applied when a new observation becomes available can be described in two parts. First, an update increment is computed for each prior ensemble estimate of the observation variable by applying a scalar ensemble filter. Second, a linear regression of the prior ensemble sample of each state variable on the observation variable is performed to compute update increments for each state variable ensemble member from corresponding observation variable increments. The regression can be applied globally or locally using Gaussian kernel methods.
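The two-part update described here can be sketched for the simple case of a direct observation of one state component. This is an illustrative sketch under that assumption: the function name and interface are invented, and the scalar step uses a deterministic shift-and-shrink adjustment as one instance of the scalar ensemble filters the paper allows.

```python
import numpy as np

def two_step_update(X, y_obs, obs_var, obs_row=0):
    """Two-step ensemble update (sketch): scalar filter, then linear regression.

    X: (n, k) prior ensemble; y_obs directly observes state row `obs_row`
    with error variance obs_var.
    """
    n, k = X.shape
    hx = X[obs_row]                            # prior ensemble of the observed variable
    pb = hx.var(ddof=1)
    pa = 1.0 / (1.0 / pb + 1.0 / obs_var)      # scalar analysis variance
    ma = pa * (hx.mean() / pb + y_obs / obs_var)
    # Step 1: scalar ensemble filter -- shift the mean, shrink the spread,
    # and record the increment for each observation-space ensemble member.
    dy = ma + np.sqrt(pa / pb) * (hx - hx.mean()) - hx
    # Step 2: regress each state variable on the observed variable and use the
    # coefficients to map observation increments onto state increments.
    Xp = X - X.mean(axis=1, keepdims=True)
    b = (Xp @ (hx - hx.mean())) / ((k - 1) * pb)   # regression coefficients (n,)
    return X + np.outer(b, dy)
```

For the observed variable itself the regression coefficient is exactly one, so its updated ensemble has the analysis mean and variance of a scalar Kalman update; unobserved variables receive increments in proportion to their sample covariance with the observed variable, which is where localization via local or kernel regression can enter.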
Hydrologic Data Assimilation with the Ensemble Kalman Filter
, 2002
Abstract

Cited by 88 (7 self)
Soil moisture controls the partitioning of moisture and energy fluxes at the land surface and is a key variable in weather and climate prediction. The performance of the ensemble Kalman filter (EnKF) for soil moisture estimation is assessed by assimilating L-band (1.4 GHz) microwave radiobrightness observations into a land surface model. An optimal smoother (a dynamic variational method) is used as a benchmark for evaluating the filter's performance. In a series of synthetic experiments the effect of ensemble size and non-Gaussian forecast errors on the estimation accuracy of the EnKF is investigated. With a state vector dimension of 4608 and a relatively small ensemble size of 30 (or 100; or 500), the actual errors in surface soil moisture at the final update time are reduced by 55% (or 70%; or 80%) from the value obtained without assimilation (as compared to 84% for the optimal smoother). For robust error variance estimates, an ensemble of at least 500 members is needed.
Estimation of highdimensional prior and posterior covariance matrices in Kalman filter variants
 Journal of Multivariate Analysis
, 2007
Abstract

Cited by 84 (4 self)
This work studies the effect of using Monte Carlo based methods to estimate high-dimensional systems. Recent focus in the geosciences has been on representing the atmospheric state using a probability density function, and, for extremely high-dimensional systems, various sample-based Kalman filter techniques have been developed to address the problem of real-time assimilation of system information and observations. As the employed sample sizes are typically several orders of magnitude smaller than the system dimension, such sampling techniques inevitably induce considerable variability into the state estimate, primarily through prior and posterior sample covariance matrices. In this article we quantify this variability with mean squared error measures for two Monte Carlo based Kalman filter variants, the ensemble Kalman filter and the square-root filter. Under weak assumptions, we derive exact expressions of the error measures. In other cases, we rely on matrix expansions and provide approximations. We show that covariance shrinking (tapering) based on the Schur product of the prior sample covariance matrix and a positive definite function is a simple, computationally feasible, and very effective technique to reduce sample variability and to address rank-deficient sample covariances. We propose practical rules for obtaining optimally tapered sample covariance matrices. The theoretical results are verified and illustrated with extensive simulations.
A comparison of probabilistic forecasts from bred, singular-vector, and perturbed observation ensembles
 MON. WEA. REV
, 2000
Abstract

Cited by 55 (7 self)
The statistical properties of analysis and forecast errors from commonly used ensemble perturbation methodologies are explored. A quasigeostrophic channel model is used, coupled with a 3D-variational data assimilation scheme. A perfect model is assumed. Three perturbation methodologies are considered. The breeding and singular-vector (SV) methods approximate the strategies currently used at operational centers in the United States and Europe, respectively. The perturbed observation (PO) methodology approximates a random sample from the analysis probability density function (pdf) and is similar to the method performed at the Canadian Meteorological Centre. Initial conditions for the PO ensemble are analyses from independent, parallel data assimilation cycles. Each assimilation cycle utilizes observations perturbed by random noise whose statistics are consistent with observational error covariances. Each member’s assimilation/forecast cycle is also started from a distinct initial condition. Relative to breeding and SV, the PO method here produced analyses and forecasts with desirable statistical characteristics. These include consistent rank histogram uniformity for all variables at all lead times, high spread/skill correlations, and calibrated, reduced-error probabilistic forecasts. It achieved these improvements primarily because 1) the ensemble mean of the PO initial conditions was more accurate than the mean of the bred or ...