Ensemble Square Root Filters (2003)
Cited by 118 (8 self)
Abstract: Ensemble data assimilation methods assimilate observations using state-space estimation methods and low-rank representations of forecast and analysis error covariances. A key element of such methods is the transformation of the forecast ensemble into an analysis ensemble with appropriate statistics. This transformation may be performed stochastically, by treating observations as random variables, or deterministically, by requiring that the updated analysis perturbations satisfy the Kalman filter analysis error covariance equation. Deterministic analysis ensemble updates are implementations of Kalman square root filters. The non-uniqueness of the deterministic transformation used in square root Kalman filters provides a framework for comparing three recently proposed ensemble data assimilation methods.
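As a rough sketch of the deterministic update the abstract describes, the following NumPy fragment applies an ensemble square-root transform of the form Aa = A T with T = (I + Sᵀ R⁻¹ S)^(-1/2), and checks that the transformed anomalies reproduce the Kalman filter analysis covariance (I - K H) Pf. All dimensions, operators, and values here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 5, 10, 3                       # state dim, ensemble size, obs dim

Xf = rng.normal(size=(n, m))             # forecast ensemble (columns = members)
H = rng.normal(size=(p, n))              # linear observation operator
R = np.eye(p)                            # observation error covariance
y = rng.normal(size=p)                   # observation vector

xf = Xf.mean(axis=1, keepdims=True)
A = (Xf - xf) / np.sqrt(m - 1)           # scaled forecast anomalies, Pf = A A^T
S = H @ A                                # anomalies mapped to observation space

K = A @ S.T @ np.linalg.inv(S @ S.T + R) # Kalman gain built from the ensemble Pf
xa = xf + K @ (y.reshape(-1, 1) - H @ xf)

# Deterministic transform: Aa = A T with T = (I + S^T R^-1 S)^(-1/2),
# chosen so that Aa Aa^T equals the Kalman analysis covariance (I - K H) Pf
C = np.eye(m) + S.T @ np.linalg.solve(R, S)
w, V = np.linalg.eigh(C)                 # C is symmetric positive definite
T = V @ np.diag(w ** -0.5) @ V.T
Aa = A @ T                               # analysis anomalies with correct statistics
```

The non-uniqueness mentioned in the abstract enters here: any orthogonal rotation of T yields anomalies with the same covariance, which is what distinguishes the various square-root filter variants.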
The Maximum Likelihood Ensemble Filter as a . . . (2008)
Cited by 68 (18 self)
Abstract: The Maximum Likelihood Ensemble Filter (MLEF) equations are derived without the differentiability requirement for the prediction model and for the observation operators. The derivation reveals that a new non-differentiable minimization method can be defined as a generalization of gradient-based unconstrained methods, such as the preconditioned conjugate-gradient and quasi-Newton methods. In the new minimization algorithm, the vector of first-order increments of the cost function is defined as a generalized gradient, while the symmetric matrix of second-order increments of the cost function is defined as a generalized Hessian matrix. In the case of differentiable observation operators, the minimization algorithm reduces to the standard gradient-based form. The non-differentiable aspect of the MLEF algorithm is illustrated in an example with a one-dimensional Burgers model and simulated observations. The MLEF algorithm has robust performance, producing satisfactory results for the tested non-differentiable observation operators.
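The key idea of replacing derivatives with finite increments of the cost function can be illustrated on a scalar toy problem. This sketch is only in the spirit of the generalized gradient/Hessian described in the abstract, not the MLEF algorithm itself: the cost function, the non-smooth observation operator |x|, and all parameter values are assumed for illustration.

```python
def cost(x, xb=-1.0, y=2.0, sig_b=1.0, sig_o=0.5):
    # Background term plus an observation term with the non-smooth operator |x|
    return 0.5 * ((x - xb) / sig_b) ** 2 + 0.5 * ((abs(x) - y) / sig_o) ** 2

x, p = -1.0, 0.5                   # initial state and finite perturbation size
for _ in range(20):
    jp, j0, jm = cost(x + p), cost(x), cost(x - p)
    g = (jp - jm) / (2 * p)        # first-order increment: generalized gradient
    h = (jp - 2 * j0 + jm) / p**2  # second-order increment: generalized Hessian
    x -= g / max(h, 1e-8)          # Newton-like step, no differentiability needed

# x settles at the minimizer of the non-differentiable cost (here x = -1.8)
```

Because only evaluations of the cost function appear, nothing in the loop requires the observation operator to be differentiable; when it is differentiable, the increments converge to the ordinary gradient and Hessian as p shrinks.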
Beyond Gaussian Statistical Modeling in Geophysical Data Assimilation (2010)
Cited by 47 (10 self)
Abstract: This review discusses recent advances in geophysical data assimilation beyond Gaussian statistical modeling, in the fields of meteorology, oceanography, and atmospheric chemistry. The non-Gaussian features are stressed rather than the nonlinearity of the dynamical models, although both aspects are entangled. Ideas recently proposed to deal with these non-Gaussian issues, in order to improve the state or parameter estimation, are emphasized. The general Bayesian solution to the estimation problem and the techniques to solve it are first presented, as well as the obstacles that hinder their use in high-dimensional and complex systems. Approximations to the Bayesian solution relying on Gaussian, or second-order moment, closure have been widely adopted in geophysical data assimilation (e.g., Kalman filters and quadratic variational solutions). Yet nonlinear and non-Gaussian effects remain. They essentially originate in the nonlinear models and in the non-Gaussian priors. How these effects are handled within algorithms based on Gaussian assumptions is then described. Statistical tools that can diagnose them and measure deviations from Gaussianity are recalled. The following advanced techniques that seek to handle the estimation problem beyond Gaussianity are ...
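The gap between the exact Bayesian solution and a Gaussian (second-order moment closure) approximation can be made concrete for a scalar state. The sketch below evaluates Bayes' rule on a grid for a bimodal prior and compares the exact posterior mean with the Kalman-style update built from the prior's first two moments; all densities and values are illustrative choices, not taken from the review.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

# Bimodal (non-Gaussian) prior: equal mixture of N(-2, 1) and N(2, 1)
prior = 0.5 * np.exp(-0.5 * (x + 2) ** 2) + 0.5 * np.exp(-0.5 * (x - 2) ** 2)
prior /= prior.sum() * dx

y, r = 1.0, 4.0                          # observation and its error variance
lik = np.exp(-0.5 * (x - y) ** 2 / r)    # Gaussian likelihood p(y | x)

post = prior * lik                       # Bayes: posterior ∝ prior × likelihood
post /= post.sum() * dx
post_mean = (x * post).sum() * dx        # exact (gridded) posterior mean

# The Gaussian closure keeps only the prior mean and variance (Kalman update)
pm = (x * prior).sum() * dx
pv = ((x - pm) ** 2 * prior).sum() * dx
kalman_mean = pm + pv / (pv + r) * (y - pm)
```

For this prior the two estimates differ noticeably, which is exactly the kind of non-Gaussian effect the review surveys: the moment-closure update cannot see the bimodality, only the overall mean and spread.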
Assimilation of Ocean Colour Data Into a Biochemical Model of the North Atlantic. Part 1: Data Assimilation Experiments (2003)
Cited by 31 (2 self)
Abstract: An advanced multivariate sequential data assimilation method, the ensemble Kalman filter (EnKF), has been investigated with a three-dimensional biochemical model of the North Atlantic, utilizing real chlorophyll data from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS). The approach chosen here differs significantly from conventional parameter estimation techniques. We keep the parameters fixed and instead update the actual model state, allowing for unknown errors in the dynamical formulation. In the ensemble Kalman filter, estimates of the true dynamical error covariances are provided by an ensemble of model states.
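The state-update step of the stochastic EnKF referred to here can be sketched as follows: the forecast error covariance is estimated from the ensemble, and each member assimilates the observation plus a random perturbation drawn from the observation error distribution. Dimensions, operators, and values are illustrative stand-ins, not the biochemical model or SeaWiFS data of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 6, 20, 2                       # state dim, ensemble size, obs dim

Xf = rng.normal(size=(n, m))             # forecast ensemble of model states
H = rng.normal(size=(p, n))              # (linear) observation operator
R = 0.5 * np.eye(p)                      # observation error covariance
y = rng.normal(size=p)                   # observation vector

A = Xf - Xf.mean(axis=1, keepdims=True)
Pf = A @ A.T / (m - 1)                   # ensemble estimate of forecast error covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)

# Perturbed observations: each member assimilates y plus a draw from R
Y = y[:, None] + rng.multivariate_normal(np.zeros(p), R, size=m).T
Xa = Xf + K @ (Y - H @ Xf)               # analysed (updated) ensemble
```

Note that, as the abstract emphasizes, it is the model state columns of Xf that are corrected, while any model parameters stay fixed outside the update.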
Ensemble square-root filters (2003)
Cited by 29 (2 self)
Abstract: Ensemble data assimilation methods assimilate observations using state-space estimation methods and low-rank representations of forecast and analysis error covariances. A key element of such methods is the transformation of the forecast ensemble into an analysis ensemble with appropriate statistics. This transformation may be performed stochastically, by treating observations as random variables, or deterministically, by requiring that the updated analysis perturbations satisfy the Kalman filter analysis error covariance equation. Deterministic analysis ensemble updates are implementations of Kalman square-root filters. The non-uniqueness of the deterministic transformation used in square-root Kalman filters provides a framework for comparing three recently proposed ensemble data assimilation methods.
A Dual-Weighted Approach to Order Reduction in 4DVAR Data Assimilation (Monthly Weather Review, Vol. 136, 2008)
Cited by 28 (12 self)
Abstract: Strategies to achieve order reduction in four-dimensional variational data assimilation (4DVAR) search for an optimal low-rank state subspace for the analysis update. A common feature of the reduction methods proposed in atmospheric and oceanographic studies is that the identification of the basis functions relies on the model dynamics only, without properly accounting for the specific details of the data assimilation system (DAS). In this study, a general framework of the proper orthogonal decomposition (POD) method is considered, and a cost-effective approach is proposed to incorporate DAS information into the order-reduction procedure. The sensitivities of the cost functional in 4DVAR data assimilation with respect to the time-varying model state are obtained from a backward integration of the adjoint model. This information is further used to define appropriate weights and to implement a dual-weighted proper orthogonal decomposition (DWPOD) method for order reduction. The use of a weighted ensemble data mean and weighted snapshots obtained with the adjoint DAS is a novel element in reduced-order 4DVAR data assimilation. Numerical results are presented with a global shallow-water model based on the Lin-Rood flux-form semi-Lagrangian scheme. A simplified 4DVAR DAS is considered in a twin-experiment framework with initial conditions specified from the 40-yr ECMWF Re-Analysis (ERA-40) datasets. A comparative analysis with the standard ...
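The weighted-snapshot construction at the heart of the method can be sketched in a few lines: snapshots are centred around a weighted mean, scaled by their weights, and the reduced basis is taken from the leading left singular vectors. Here the weights are random stand-ins for the adjoint-derived sensitivities the paper describes; dimensions and data are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, r = 50, 12, 4                      # state dim, number of snapshots, reduced rank

S = rng.normal(size=(n, k))              # snapshot matrix (columns = model states in time)
w = rng.random(k)
w /= w.sum()                             # snapshot weights (stand-in for adjoint sensitivities)

xbar = S @ w                             # weighted ensemble mean of the snapshots
Aw = (S - xbar[:, None]) * np.sqrt(w)    # weighted, centred snapshot matrix

U, sv, _ = np.linalg.svd(Aw, full_matrices=False)
basis = U[:, :r]                         # rank-r reduced-order basis (orthonormal columns)
```

With uniform weights this reduces to standard POD; the dual-weighted variant simply lets the adjoint-based sensitivities decide which snapshots dominate the basis.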
Sequential Data Assimilation Techniques in Oceanography (2003)
Cited by 19 (3 self)
Abstract: In this article, we will focus on sequential DA methods, which constitute the second class. These methods use a probabilistic framework and give estimates of the whole system state sequentially by propagating information only forward in time. This avoids deriving an inverse or an adjoint model and therefore makes sequential methods easier to adapt to all models. Further, the probabilistic framework is more convenient for error estimation and further stochastic analysis, such as threshold characterization ...
A deterministic formulation of the ensemble Kalman filter: An alternative to ensemble square root filters (Tellus)
Cited by 18 (4 self)
Abstract: The use of perturbed observations in the traditional ensemble Kalman filter (EnKF) results in suboptimal filter behaviour, particularly for small ensembles. In this work, we propose a simple modification to the traditional EnKF that, without perturbed observations, matches the analysed error covariance given by the Kalman filter in cases when the correction is small. The proposed filter is based on the recognition that, in the case of small corrections to the forecast, the traditional EnKF without perturbed observations reduces the forecast error covariance by an amount that is nearly twice as large as that needed to match the Kalman filter. The analysis scheme works as follows: update the ensemble mean and the ensemble anomalies separately; update the mean using the standard analysis equation; update the anomalies with the same equation but half the Kalman gain. The proposed filter is shown to be a linear approximation to the ensemble square root filter (ESRF). Because of its deterministic character and its similarity to the traditional EnKF, we call it the 'deterministic EnKF', or DEnKF. A number of numerical experiments comparing the performance of the DEnKF with both the EnKF and an ESRF, using three small models, are conducted. We show that the DEnKF performs almost as well as the ESRF and is a significant improvement over the EnKF. Therefore, the DEnKF combines the numerical effectiveness, simplicity, and versatility of the EnKF with the performance of the ESRFs. Importantly, the DEnKF readily permits the use of traditional Schur product-based localization schemes.
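The analysis scheme quoted above (standard equation for the mean, half the Kalman gain for the anomalies) translates almost line-for-line into code. The fragment below is a minimal sketch of that scheme with assumed dimensions and random illustrative data, not the models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 4, 8, 2                        # state dim, ensemble size, obs dim

Xf = rng.normal(size=(n, m))             # forecast ensemble
H = rng.normal(size=(p, n))              # linear observation operator
R = np.eye(p)                            # observation error covariance
y = rng.normal(size=p)                   # observation vector

xf = Xf.mean(axis=1, keepdims=True)
A = Xf - xf                              # forecast anomalies
Pf = A @ A.T / (m - 1)
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)

xa = xf + K @ (y.reshape(-1, 1) - H @ xf)  # mean: standard analysis equation
Aa = A - 0.5 * K @ (H @ A)                 # anomalies: half the Kalman gain
Xa = xa + Aa                               # analysed ensemble
```

Because K is built from Pf exactly as in the standard EnKF, Schur product-based localization can be applied to Pf before forming K, which is the compatibility the abstract highlights.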
An Ensemble Smoother with Error Estimates (2001)
Cited by 18 (2 self)
Abstract: A smoother introduced earlier by van Leeuwen and Evensen is applied to a problem in which real observations are used in an area with strongly nonlinear dynamics. The derivation is new, but it resembles an earlier derivation by van Leeuwen and Evensen. Again, a Bayesian view is taken, in which the prior probability density of the model and the probability density of the observations are combined to form a posterior density. The mean and the covariance of this density give the variance-minimizing model evolution and its errors. The assumption is made that the prior probability density is Gaussian, leading to a linear update equation. Critical evaluation shows when this assumption is justified. This also sheds light on why Kalman filters, in which the same approximation is made, work for nonlinear models. By reference to the derivation, the impact of model and observational biases on the equations is discussed, and it is shown that Bayes's formulation can still be used. A practical ...
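The "linear update equation" that follows from the Gaussian-prior assumption is the familiar xa = xf + K (y - H xf) with Pa = (I - K H) P; the sketch below evaluates it for a small assumed system (all matrices and values are illustrative, not from the paper).

```python
import numpy as np

# Prior (forecast) mean and covariance for a two-variable state; values assumed
xf = np.array([1.0, 0.0])
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])

H = np.array([[1.0, 0.0]])               # observe the first state variable
R = np.array([[0.25]])                   # observation error covariance
y = np.array([2.0])                      # the observation

# Linear update implied by a Gaussian prior:
#   xa = xf + K (y - H xf),  Pa = (I - K H) P
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
xa = xf + K @ (y - H @ xf)
Pa = (np.eye(2) - K @ H) @ P
```

The unobserved second variable is still corrected through the off-diagonal prior covariance, which is exactly how the smoother spreads observational information across the state; when the true prior is far from Gaussian, this linear form is where the approximation error enters.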