Data assimilation in slow–fast systems using homogenized climate models
J. Atmos. Sci., 2012
Abstract

A deterministic multiscale toy model is studied in which a chaotic fast subsystem triggers rare transitions between slow regimes, akin to weather or climate regimes. Using homogenization techniques, a reduced stochastic parameterization model is derived for the slow dynamics. The reliability of this reduced climate model in reproducing the statistics of the slow dynamics of the full deterministic model for finite values of the timescale separation is numerically established. The statistics, however, are sensitive to uncertainties in the parameters of the stochastic model. It is investigated whether the stochastic climate model can be beneficial as a forecast model in an ensemble data assimilation setting, in particular in the realistic setting when observations are only available for the slow variables. The main result is that reduced stochastic models can indeed improve the analysis skill when used as forecast models instead of the perfect full deterministic model. The stochastic climate model is far superior at detecting transitions between regimes. The observation intervals for which skill improvement can be obtained are related to the characteristic timescales involved. The reason why stochastic climate models are capable of producing superior skill in an ensemble setting is the finite ensemble size; ensembles obtained from the perfect deterministic forecast model lack sufficient spread even for moderate ensemble sizes. Stochastic climate models provide a natural way to generate sufficient ensemble spread to detect transitions between regimes. This is corroborated with numerical simulations. The conclusion is that stochastic parameterizations are attractive for data assimilation despite their sensitivity to uncertainties in the parameters.
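The reduced model described in this abstract is, generically, a stochastic differential equation for the slow variables. As an illustration only (the double-well drift, noise level, and step size below are expository assumptions, not the paper's homogenized model), an Euler–Maruyama integration of such a reduced stochastic forecast model, exhibiting rare transitions between two metastable regimes, can be sketched as:

```python
import numpy as np

def euler_maruyama(drift, sigma, x0, dt, n_steps, rng):
    """Integrate dX = drift(X) dt + sigma dW with the Euler-Maruyama scheme."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        x[k + 1] = x[k] + drift(x[k]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

# Double-well drift x - x^3: two metastable regimes near x = -1 and x = +1,
# with noise-induced rare transitions between them.
traj = euler_maruyama(lambda x: x - x**3, 0.5, 1.0, 0.01, 5000,
                      np.random.default_rng(1))
```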
Controlling overestimation of error covariance in ensemble Kalman filters with sparse observations: A variance limiting Kalman filter
Monthly Weather Review 139, 2011
Abstract

The problem of an ensemble Kalman filter when only partial observations are available is considered. In particular, the situation is investigated where the observational space consists of variables that are directly observable with known observational error, and of variables of which only the climatic variance and mean are given. To limit the variance of the latter poorly resolved variables, a variance-limiting Kalman filter (VLKF) is derived in a variational setting. The VLKF is analyzed for a simple linear toy model and its range of optimal performance is determined. The VLKF is then explored in an ensemble transform setting for the Lorenz-96 system, and it is shown that incorporating the information of the variance of some unobservable variables can improve the skill and also increase the stability of the data assimilation procedure.
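The Lorenz-96 system used as a testbed above is a standard benchmark; a minimal sketch follows, where the dimension (40 variables), forcing (F = 8), and step size are conventional defaults assumed here rather than the paper's exact configuration:

```python
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    """Tendency of the Lorenz-96 model: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def step_rk4(x, dt=0.05, forcing=8.0):
    """Advance the state by one fourth-order Runge-Kutta step."""
    k1 = lorenz96_tendency(x, forcing)
    k2 = lorenz96_tendency(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96_tendency(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96_tendency(x + dt * k3, forcing)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

x = 8.0 * np.ones(40)
x[19] += 0.01          # small perturbation to trigger chaotic behaviour
for _ in range(1000):
    x = step_rk4(x)
```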
The role of additive and multiplicative noise in filtering complex dynamical systems
Abstract

Covariance inflation is an ad hoc treatment that is widely used in practical real-time data assimilation algorithms to mitigate covariance underestimation due to model errors, nonlinearity, and/or, in the context of ensemble filters, insufficient ensemble size. In this paper, we systematically derive an effective “statistical” inflation for filtering multiscale dynamical systems, from a moderate scale gap, ε = O(10⁻¹), up to the case of no scale gap, ε = O(1), in the presence of model errors through reduced dynamics from rigorous stochastic subgrid-scale parameterizations. We demonstrate that for linear problems, an effective covariance inflation is achieved by a systematically derived additive noise in the forecast model, producing superior filtering skill. For nonlinear problems, we study an analytically solvable stochastic test model, mimicking turbulent signals in regimes ranging from a turbulent energy transfer range to a dissipative range to a laminar regime. In this context, we show that multiplicative noise naturally arises in addition to additive noise in a reduced stochastic forecast model. Subsequently, we show that a “statistical” inflation factor that involves mean correction in addition to covariance inflation is necessary to achieve accurate filtering in the presence of intermittent instability in both the turbulent energy transfer range and the dissipative range.
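For contrast with the systematically derived noise discussed above, the standard ad hoc multiplicative covariance inflation can be sketched as follows (the factor 1.1 and the ensemble dimensions are arbitrary illustrative values):

```python
import numpy as np

def inflate_ensemble(ensemble, factor):
    """Multiplicative inflation: scale deviations from the ensemble mean,
    which multiplies the sample covariance by factor**2 and keeps the mean."""
    mean = ensemble.mean(axis=0)
    return mean + factor * (ensemble - mean)

rng = np.random.default_rng(0)
ens = rng.standard_normal((20, 5))      # 20 members, 5 state variables
inflated = inflate_ensemble(ens, 1.1)
```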
Linear Theory for Filtering Nonlinear Multiscale Systems with Model Error
2014
Abstract

In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller-scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher-order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application to filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observations of the slow variables alone. From the mathematical analysis, we learn that for a continuous-time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends to this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parameterization and the correct observation model. However, when the stochastic parameterization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results.
Marginalized Particle Filtering Framework for Tuning of Ensemble Filters
2010
Abstract
Marginalized particle filtering (MPF), also known as Rao-Blackwellized particle filtering, has recently been developed as a hybrid method combining analytical filters with particle filters. In this paper, we investigate the prospects of this approach in environmental modelling, where the key concerns are nonlinearity, high dimensionality, and computational cost. In our formulation, exact marginalization in the MPF is replaced by approximate marginalization, yielding a framework for the creation of new hybrid filters. In particular, we propose to use the MPF framework for online tuning of nuisance parameters of ensemble filters. A simplification of the MPF algorithm based on conditional independence is proposed for computational reasons, and its close relation to previously published methods is discussed. The strength of the framework is demonstrated on the joint estimation of the inflation factor, the measurement error variance, and the length-scale parameter of covariance localization. It is shown that accurate estimation can be achieved with a moderate number of particles. Moreover, this result was achieved with naively chosen proposal densities, leaving room for further improvements.
Ensemble Kalman filtering without the intrinsic need for inflation
Nonlinear Processes in Geophysics
Abstract
The main intrinsic source of error in the ensemble Kalman filter (EnKF) is sampling error. External sources of error, such as model error or deviations from Gaussianity, depend on the dynamical properties of the model. Sampling errors can lead to instability of the filter which, as a consequence, often requires inflation and localization. The goal of this article is to derive an ensemble Kalman filter which is less sensitive to sampling errors. A prior probability density function conditional on the forecast ensemble is derived using Bayesian principles. Even though this prior is built upon the assumption that the ensemble is Gaussian-distributed, it is different from the Gaussian probability density function defined by the empirical mean and the empirical error covariance matrix of the ensemble, which is implicitly used in traditional EnKFs. This new prior generates a new class of
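The prior "implicitly used in traditional EnKFs" that this abstract refers to, built from the empirical ensemble mean and covariance, can be made explicit in a minimal stochastic EnKF analysis step. The sketch below uses perturbed observations, a naive matrix inversion, and no localization or inflation; all names and dimensions are illustrative, not taken from the paper:

```python
import numpy as np

def enkf_analysis(ensemble, y_obs, H, obs_var, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    ensemble : (n_ens, n_state) forecast ensemble
    y_obs    : (n_obs,) observation vector
    H        : (n_obs, n_state) linear observation operator
    """
    n_ens = ensemble.shape[0]
    x_mean = ensemble.mean(axis=0)
    X = (ensemble - x_mean) / np.sqrt(n_ens - 1)    # normalized anomalies
    Pf = X.T @ X                                     # empirical forecast covariance
    R = obs_var * np.eye(H.shape[0])
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    y_pert = y_obs + rng.normal(0.0, np.sqrt(obs_var), size=(n_ens, H.shape[0]))
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, size=(50, 1))           # 50 members, 1 state variable
analysis = enkf_analysis(prior, np.array([5.0]), np.eye(1), 0.1, rng)
```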
Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations
Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant
Thesis presented to obtain the degree of Docteur de l'Université Paris-Est, specialty: Sciences et Techniques de l'Environnement (Environmental Sciences and Techniques)
Accounting for model error due to unresolved scales within ensemble
Abstract
We propose a method to account for model error due to unresolved scales in the context of the ensemble transform Kalman filter (ETKF). The approach extends to this class of algorithms the deterministic model error formulation recently explored for variational schemes and the extended Kalman filter. The model error statistics required in the analysis update are estimated using historical reanalysis increments and a suitable model error evolution law. Two different versions of the method are described: a time-constant treatment, in which the same model error statistical description is kept time-invariant, and a time-varying treatment, in which the assumed model error statistics are randomly sampled at each analysis step. We compare both methods with the standard way of dealing with model error through inflation and localization, and illustrate our results with numerical simulations on a low-order nonlinear system exhibiting chaotic dynamics. The results show that the filter skill is significantly improved through the proposed model error treatments, and that both methods require far less parameter tuning than the standard approach. Furthermore, the proposed approach is simple to implement within a preexisting ensemble-based scheme. The general implications for the use of the proposed approach in the framework of square-root filters such as the ETKF are also discussed. Key words: model error; data assimilation; ensemble Kalman filter; ensemble prediction; bias correction