Results 1–10 of 246
Adaptive Sampling With the Ensemble Transform …, 2001
Abstract
Cited by 321 (19 self)
A suboptimal Kalman filter called the ensemble transform Kalman filter (ET KF) is introduced. Like other Kalman filters, it provides a framework for assimilating observations and also for estimating the effect of observations on forecast error covariance. It differs from other ensemble Kalman filters in that it uses ensemble transformation and a normalization to rapidly obtain the prediction error covariance matrix associated with a particular deployment of observational resources. This rapidity enables it to quickly assess the ability of a large number of future feasible sequences of observational networks to reduce forecast error variance. The ET KF was used by the National Centers for Environmental Prediction in the Winter Storm Reconnaissance missions of 1999 and 2000 to determine where aircraft should deploy dropwindsondes in order to improve 24–72-h forecasts over the continental United States. The ET KF may be applied to any well-constructed set of ensemble perturbations. The ET KF …
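The rapid covariance update described in this abstract can be sketched in a few lines of NumPy. This is a toy illustration assuming a linear observation operator and a symmetric square-root transform, not the authors' implementation; all dimensions and operators below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 8, 5, 4                              # state size, ensemble size, obs count (toy)
Xf = rng.standard_normal((n, m))
Xf -= Xf.mean(axis=1, keepdims=True)           # forecast ensemble perturbations
H = rng.standard_normal((p, n))                # hypothetical linear observation operator
R_inv = np.eye(p)                              # inverse observation-error covariance

# Transform step: eigendecompose the m x m matrix (H Xf)^T R^-1 (H Xf) / (m - 1)
# for a candidate observation network, then rescale the forecast perturbations.
S = H @ Xf
vals, vecs = np.linalg.eigh(S.T @ R_inv @ S / (m - 1))
T = vecs @ np.diag(1.0 / np.sqrt(vals + 1.0)) @ vecs.T
Xa = Xf @ T                                    # transformed perturbations
Pa = Xa @ Xa.T / (m - 1)                       # prediction error covariance after assimilation
```

Because only an m × m eigenproblem is solved per candidate network, many feasible networks can be screened quickly, which is the speed argument the abstract makes.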
An Ensemble Adjustment Kalman Filter for Data Assimilation, 2001
Abstract
Cited by 283 (12 self)
A theory for estimating the probability distribution of the state of a model given a set of observations exists. This nonlinear …
Ensemble forecasting at NCEP and the breeding method, Monthly Weather Review, 1997
Abstract
Cited by 193 (16 self)
The breeding method has been used to generate perturbations for ensemble forecasting at the National Centers for Environmental Prediction (formerly known as the National Meteorological Center) since December 1992. At that time a single breeding cycle with a pair of bred forecasts was implemented. In March 1994, the ensemble was expanded to seven independent breeding cycles on the Cray C90 supercomputer, and the forecasts were extended to 16 days. This provides 17 independent global forecasts valid for two weeks every day. For efficient ensemble forecasting, the initial perturbations to the control analysis should adequately sample the space of possible analysis errors. It is shown that the analysis cycle is like a breeding cycle: it acts as a nonlinear perturbation model upon the evolution of the real atmosphere. The perturbation (i.e., the analysis error), carried forward in the first-guess forecasts, is "scaled down" at regular intervals by the use of observations. Because of this, growing errors associated with the evolving state of the atmosphere develop within the analysis cycle and dominate subsequent forecast error growth. The breeding method simulates the development of growing errors in the analysis cycle. A difference field between two nonlinear forecasts is carried forward (and scaled down at regular intervals) upon the evolving atmospheric analysis fields. By construction, the bred vectors are superpositions of the leading local (time-dependent) …
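A minimal breeding cycle of the kind described can be mimicked with a toy chaotic model. The Lorenz-96 system and every numerical setting below (time step, rescaling interval, amplitude) are illustrative assumptions, not the NCEP configuration:

```python
import numpy as np

def tendency(x, F=8.0):
    """Lorenz-96 tendency, a standard chaotic toy model."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def step(x, dt=0.05):
    """One fourth-order Runge-Kutta step."""
    k1 = tendency(x)
    k2 = tendency(x + 0.5 * dt * k1)
    k3 = tendency(x + 0.5 * dt * k2)
    k4 = tendency(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(1)
control = 8.0 + rng.standard_normal(40)        # state evolving toward the attractor
bred = control + 1e-3 * rng.standard_normal(40)  # initially random perturbation
amp = 1e-3                                     # rescaling amplitude

for t in range(1, 501):
    control, bred = step(control), step(bred)
    if t % 10 == 0:                            # "scale down" the difference at intervals
        diff = bred - control
        bred = control + amp * diff / np.linalg.norm(diff)
```

After repeated growth-and-rescaling cycles, the difference field (the bred vector) aligns with the growing error directions of the evolving flow.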
Distance-Dependent Filtering of Background Error Covariance Estimates in an Ensemble Kalman Filter, 2001
Abstract
Cited by 189 (31 self)
The usefulness of a distance-dependent reduction of background error covariance estimates in an ensemble Kalman filter is demonstrated. Covariances are reduced by performing an element-wise multiplication of the background error covariance matrix with a correlation function with local support. This reduces noisiness and results in an improved background error covariance estimate, which generates a reduced-error ensemble of model initial conditions. The benefits …
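The element-wise (Schur-product) reduction described can be sketched as follows. The Gaussian taper here merely stands in for a compactly supported correlation function such as Gaspari-Cohn, and the 1-D grid, length scale, and random ensemble are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 10
X = rng.standard_normal((n, m))
X -= X.mean(axis=1, keepdims=True)
P_ens = X @ X.T / (m - 1)                    # noisy sample covariance from a small ensemble

# Element-wise multiplication with a correlation function of local support:
# a Gaussian taper cut off at 3L, so distant covariances go exactly to zero.
i = np.arange(n)
dist = np.abs(i[:, None] - i[None, :])       # grid-point separations on a 1-D grid
L = 10.0
rho = np.exp(-0.5 * (dist / L) ** 2) * (dist <= 3 * L)
P_loc = rho * P_ens                          # localized background error covariance
```

The taper leaves variances (the diagonal) untouched while damping the spurious long-range covariances that small ensembles produce.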
An Introduction to Estimation Theory, Office Note Series on Global Modeling and Data Assimilation, 1997
Abstract
Cited by 166 (7 self)
Despite the explosive growth of activity in the field of Earth System data assimilation over the past decade or so, there remains a substantial gap between theory and practice. The present article attempts to bridge this gap by exposing some of the central concepts of estimation theory and connecting them with current and future data assimilation approaches. Estimation theory provides a broad and natural mathematical foundation for data assimilation science. Stochastic-dynamic modeling and stochastic observation modeling are described first. Optimality criteria for linear and nonlinear state estimation problems are then explored, leading to conditional-mean estimation procedures such as the Kalman filter and some of its generalizations, and to conditional-mode estimation procedures such as variational methods. A detailed derivation of the Kalman filter is given to illustrate the role of key probabilistic concepts and assumptions. Extensions of the Kalman filter to nonlinear observat...
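For reference, the conditional-mean (Kalman filter) analysis step the article derives has the standard linear-Gaussian form below; the scalar numbers in the example are purely illustrative:

```python
import numpy as np

def kalman_update(x_f, P_f, y, H, R):
    """Standard Kalman analysis step for a linear Gaussian problem."""
    S = H @ P_f @ H.T + R                    # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_a = x_f + K @ (y - H @ x_f)            # conditional-mean (analysis) state
    P_a = (np.eye(len(x_f)) - K @ H) @ P_f   # analysis error covariance
    return x_a, P_a

# Scalar example: prior N(0, 2), direct observation y = 3 with error variance 1
x_a, P_a = kalman_update(np.array([0.0]), np.array([[2.0]]),
                         np.array([3.0]), np.array([[1.0]]), np.array([[1.0]]))
# gain K = 2/3, so x_a = [2.0] and P_a = [[2/3]]
```

Under Gaussian assumptions this conditional mean coincides with the conditional mode, which is the link to the variational methods the abstract mentions.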
Using Bayesian model averaging to calibrate forecast ensembles, Monthly Weather Review 133, 2005
Abstract
Cited by 139 (34 self)
Ensembles used for probabilistic weather forecasting often exhibit a spread-error correlation, but they tend to be underdispersive. This paper proposes a statistical method for postprocessing ensembles based on Bayesian model averaging (BMA), which is a standard method for combining predictive distributions from different sources. The BMA predictive probability density function (PDF) of any quantity of interest is a weighted average of PDFs centered on the individual bias-corrected forecasts, where the weights are equal to posterior probabilities of the models generating the forecasts and reflect the models' relative contributions to predictive skill over the training period. The BMA weights can be used to assess the usefulness of ensemble members, and this can be used as a basis for selecting ensemble members; this can be useful given the cost of running large ensembles. The BMA PDF can be represented as an unweighted ensemble of any desired size, by simulating from the BMA predictive distribution. The BMA predictive variance can be decomposed into two components, one corresponding to the between-forecast variability, and the second to the within-forecast variability. Predictive PDFs or intervals based solely on the ensemble spread incorporate the first component but not the second. Thus BMA provides a theoretical explanation of the tendency of ensembles to exhibit a spread-error correlation but yet …
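The weighted-average predictive PDF and the two-component variance decomposition described above can be written down directly. The weights, forecasts, and common Gaussian spread below are made-up numbers, not fitted BMA parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
forecasts = np.array([20.0, 22.0, 25.0])     # bias-corrected member forecasts
weights = np.array([0.5, 0.3, 0.2])          # posterior model weights (sum to 1)
sigma = 1.5                                  # assumed common predictive spread

# BMA predictive mean and its variance decomposition
mean = weights @ forecasts
between = weights @ (forecasts - mean) ** 2  # between-forecast variability
within = sigma ** 2                          # within-forecast variability
variance = between + within

# An unweighted ensemble of any desired size: sample the BMA mixture
k = rng.choice(len(weights), size=10000, p=weights)
samples = forecasts[k] + sigma * rng.standard_normal(10000)
```

An interval built from the ensemble spread alone captures only the `between` term, which is why raw ensembles come out underdispersive relative to the full BMA variance.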
A Hybrid Ensemble Kalman Filter / 3D-Variational Analysis Scheme
Abstract
Cited by 123 (18 self)
A hybrid 3-dimensional variational (3DVar) / ensemble Kalman filter analysis scheme is demonstrated using a quasigeostrophic model under perfect-model assumptions. Four networks with differing observational densities are tested, including one network with a data void. The hybrid scheme operates by computing a set of parallel data assimilation cycles, with each member of the set receiving unique perturbed observations. The perturbed observations are generated by adding random noise consistent with observation error statistics to the control set of observations. Background error statistics for the data assimilation are estimated from a linear combination of time-invariant 3DVar covariances and flow-dependent covariances developed from the ensemble of short-range forecasts. The hybrid scheme allows the user to weight the relative contributions of the 3DVar and ensemble-based background covariances. The analysis scheme was cycled for 90 days, with new observations assimilated every 12 h...
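The linear combination of static and flow-dependent covariances at the heart of the scheme is a single line. Everything else here (the exponential static correlation model, the 1-D grid, the equal weighting) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 30, 8
X = rng.standard_normal((n, m))
X -= X.mean(axis=1, keepdims=True)
P_ens = X @ X.T / (m - 1)                    # flow-dependent covariance from short-range forecasts

# Time-invariant 3DVar-style covariance: a simple exponential correlation model
i = np.arange(n)
B_3dvar = np.exp(-np.abs(i[:, None] - i[None, :]) / 5.0)

alpha = 0.5                                  # user-chosen weight on the static part
B_hybrid = alpha * B_3dvar + (1.0 - alpha) * P_ens
```

Varying `alpha` between 0 and 1 trades the full-rank but static 3DVar statistics against the flow-dependent but rank-deficient ensemble estimate.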
Interpretation of Rank Histograms for Verifying Ensemble Forecasts, 2000
Abstract
Cited by 109 (7 self)
Rank histograms are a tool for evaluating ensemble forecasts. They are useful for determining the reliability of ensemble forecasts and for diagnosing errors in the ensemble mean and spread. Rank histograms are generated by repeatedly tallying the rank of the verification (usually, an observation) relative to values from an ensemble sorted from lowest to highest. However, an uncritical use of the rank histogram can lead to misinterpretations of the qualities of that ensemble. For example, a flat rank histogram, usually taken as a sign of reliability, can still be generated from unreliable ensembles. Similarly, a U-shaped rank histogram, commonly understood as indicating a lack of variability in the ensemble, can also be a sign of conditional bias. It is also shown that flat rank histograms can be generated for some model variables if the variance of the ensemble is correctly specified, yet if covariances between model grid points are improperly specified, rank histograms for combinations of mo...
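The tallying recipe in the abstract can be sketched directly. The synthetic "reliable" ensemble below (observations and members drawn from the same distribution) is an assumption chosen so the histogram comes out roughly flat:

```python
import numpy as np

rng = np.random.default_rng(5)
m, cases = 10, 5000
ensembles = rng.standard_normal((cases, m))      # m-member ensembles, one per case
obs = rng.standard_normal(cases)                 # verifications from the same distribution

# Rank of each verification among the sorted ensemble values, from 0 to m
ranks = (ensembles < obs[:, None]).sum(axis=1)
hist = np.bincount(ranks, minlength=m + 1)       # roughly flat for this reliable ensemble
```

Per the abstract's caveat, flatness here is necessary but not sufficient: an unreliable ensemble can still produce a flat histogram, so the diagnosis should not rest on this plot alone.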
Data assimilation via error subspace statistical estimation. Part I: Theory and schemes, 1999
Abstract
Cited by 96 (15 self)
A rational approach is used to identify efficient schemes for data assimilation in nonlinear ocean–atmosphere models. The conditional mean, a minimum of several cost functionals, is chosen for an optimal estimate. After stating the present goals and describing some of the existing schemes, the constraints and issues particular to ocean–atmosphere data assimilation are emphasized. An approximation to the optimal criterion satisfying the goals and addressing the issues is obtained using heuristic characteristics of geophysical measurements and models. This leads to the notion of an evolving error subspace, of variable size, that spans and tracks the scales and processes where the dominant errors occur. The concept of error subspace statistical estimation (ESSE) is defined. In the present minimum error variance approach, the suboptimal criterion is based on a continued and energetically optimal reduction of the dimension of error covariance matrices. The evolving error subspace is characterized by error singular vectors and values, or in other words, the error principal components and coefficients. Schemes for filtering and smoothing via ESSE are derived. The data–forecast melding minimizes variance in the error subspace. Nonlinear Monte Carlo forecasts integrate the error subspace in time. The smoothing is based on a statistical approximation approach. Comparisons with existing filtering and smoothing procedures …
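The error singular vectors and values that characterize the subspace can be sketched with an SVD of Monte Carlo error samples; the sample matrix and the fixed rank k below are invented for illustration (in ESSE the subspace size is adaptive):

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, k = 100, 30, 5                          # state dim, Monte Carlo sample size, subspace rank
E = rng.standard_normal((n, m))               # error samples (columns)
E -= E.mean(axis=1, keepdims=True)

# Error singular vectors/values: dominant principal components of the error
U, s, _ = np.linalg.svd(E, full_matrices=False)
U_k, s_k = U[:, :k], s[:k]                    # rank-k error subspace

# Reduced-rank error covariance tracked in place of the full n x n matrix
P_k = (U_k * s_k**2) @ U_k.T / (m - 1)
explained = (s_k**2).sum() / (s**2).sum()     # fraction of error variance retained
```

Truncating at the dominant singular values is the "energetically optimal" dimension reduction the abstract refers to: no other rank-k subspace retains more error variance.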
Which is better, an ensemble of positive–negative pairs or a centered spherical simplex ensemble?, Monthly Weather Review, 2004
Abstract
Cited by 62 (1 self)
New methods to center the initial ensemble perturbations on the analysis are introduced and compared with the commonly used centering method of positive–negative paired perturbations. In the new method, one linearly dependent perturbation is added to a set of linearly independent initial perturbations to ensure that the sum of the new initial perturbations equals zero; the covariance calculated from the new initial perturbations is equal to the analysis error covariance estimated by the independent initial perturbations, and all of the new initial perturbations are equally likely. The new method is illustrated by applying it to the ensemble transform Kalman filter (ETKF) ensemble forecast scheme, and the resulting ensemble is called the spherical simplex ETKF ensemble. It is shown from a multidimensional Taylor expansion that the symmetric positive–negative paired centering would yield a more accurate forecast ensemble mean and covariance than the spherical simplex centering if the ensemble were large enough to span all initial uncertain directions and thus the analysis error covariance was modeled precisely. However, when the number of uncertain directions is larger than the ensemble size, the spherical simplex centering has the advantage of allowing almost twice as many uncertain directions to be spanned as the symmetric positive–negative paired centering. The performances of the spherical simplex ETKF and symmetric positive–negative paired ETKF ensembles are compared by using the Community Climate Model …
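One way to build a centered set with two of the stated properties, perturbations that sum to zero while preserving the estimated covariance, is via an orthonormal basis of the subspace orthogonal to the ones vector. This construction is a sketch of the idea, not necessarily the authors' exact recipe, and the input perturbations are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 6, 3                                   # state dim, number of independent perturbations
X = rng.standard_normal((n, m))               # m linearly independent initial perturbations

# V: (m+1) x m matrix with orthonormal columns, each orthogonal to the ones vector.
# Built from the eigenvectors of the centering matrix I - 11^T/(m+1).
C = np.eye(m + 1) - np.ones((m + 1, m + 1)) / (m + 1)
vals, vecs = np.linalg.eigh(C)
V = vecs[:, 1:]                               # drop the null (ones) direction, eigenvalue 0

X_simplex = X @ V.T                           # m+1 perturbations, one linearly dependent
```

By construction the m+1 columns of `X_simplex` sum to zero (since each column of `V` is orthogonal to the ones vector) and `X_simplex @ X_simplex.T` equals `X @ X.T` (since `V.T @ V` is the identity), matching the centering and covariance requirements in the abstract.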