Wavelet Deconvolution
 IEEE Transactions on Information Theory
, 2002
"... This paper studies the issue of optimal deconvolution density estimation using wavelets. The approach taken here can be considered as orthogonal series estimation in the more general context of the density estimation. We explore the asymptotic properties of estimators based on thresholding of estima ..."
Abstract

Cited by 65 (1 self)
This paper studies the issue of optimal deconvolution density estimation using wavelets. The approach taken here can be considered as orthogonal series estimation in the more general context of density estimation. We explore the asymptotic properties of estimators based on thresholding of estimated wavelet coefficients. Minimax rates of convergence under the integrated square loss are studied over Besov classes B^σ_{pq} of functions for both ordinary smooth and supersmooth convolution kernels. The minimax rates of convergence depend on the smoothness of the functions to be deconvolved and the decay rate of the characteristic function of the convolution kernel. It is shown that no linear deconvolution estimator can achieve the optimal rates of convergence in Besov spaces with p < 2 when the convolution kernel is either ordinary smooth or supersmooth. If the convolution kernel is ordinary smooth, then linear estimators can be improved by using thresholding wavelet deconvolution estimators, which are asymptotically minimax within logarithmic terms. Adaptive minimax properties of thresholding wavelet deconvolution estimators are also discussed. Keywords. Adaptive estimation, Besov spaces, Kullback-Leibler information, linear estimators, minimax estimation, thresholding, wavelet bases.
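The coefficient-thresholding step this abstract describes can be sketched in a few lines. The snippet below is an illustrative soft-thresholding rule with a universal-style threshold of order sqrt(log n / n); the coefficient values and the constant in the threshold are hypothetical, not the paper's exact estimator.

```python
import math

def soft_threshold(coeffs, lam):
    """Soft-threshold estimated wavelet coefficients: shrink each toward zero by lam."""
    return [math.copysign(max(abs(c) - lam, 0.0), c) for c in coeffs]

# Hypothetical estimated coefficients: a few large "signal" values amid small noise.
beta_hat = [2.1, -0.05, 0.8, 0.02, -0.03, -1.4]
n = 1000
lam = math.sqrt(2.0 * math.log(n) / n)  # universal-style threshold, for illustration
print(soft_threshold(beta_hat, lam))    # small coefficients are zeroed, large ones shrunk
```

Coefficients below the threshold are set to zero, which is what yields the near-minimax adaptivity discussed in the abstract.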
Bayesian Smoothing and Regression Splines for Measurement Error Problems
 Journal of the American Statistical Association
, 2001
"... In the presence of covariate measurement error, estimating a regression function nonparametrically is extremely dicult, the problem being related to deconvolution. Various frequentist approaches exist for this problem, but to date there has been no Bayesian treatment. In this paper we describe Bayes ..."
Abstract

Cited by 40 (8 self)
In the presence of covariate measurement error, estimating a regression function nonparametrically is extremely difficult, the problem being related to deconvolution. Various frequentist approaches exist for this problem, but to date there has been no Bayesian treatment. In this paper we describe Bayesian approaches to modeling a flexible regression function when the predictor variable is measured with error. The regression function is modeled with smoothing splines and regression P-splines. Two methods are described for exploration of the posterior. The first is called iterative conditional modes (ICM) and is only partially Bayesian. ICM uses a componentwise maximization routine to find the mode of the posterior. It also serves to create starting values for the second method, which is fully Bayesian and uses Markov chain Monte Carlo techniques to generate observations from the joint posterior distribution. Using the MCMC approach has the advantage that interval estimates that directly model and adjust for the measurement error are easily calculated. We provide simulations with several nonlinear regression functions and provide an illustrative example. Our simulations indicate that the frequentist mean squared error properties of the fully Bayesian method are better than those of ICM and also of previously proposed frequentist methods, at least in the examples we have studied. KEY WORDS: Bayesian methods; Efficiency; Errors in variables; Functional method; Generalized linear models; Kernel regression; Measurement error; Nonparametric regression; P-splines; Regression splines; SIMEX; Smoothing splines; Structural modeling. Short title. Nonparametric Regression with Measurement Error. Author Affiliations. Scott M. Berry (Email: scott@berryconsultants.com) is Statistical Scientist,...
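The ICM idea in this abstract, componentwise maximization to find the posterior mode, can be sketched on a toy problem. The quadratic log-posterior below is invented for illustration only; each step maximizes over one coordinate with the other held fixed, exactly the cycling structure ICM uses.

```python
def icm(n_iter=50):
    """Iterative conditional modes on a toy quadratic log-posterior:
    log p(t1, t2) = -(t1**2 + t2**2 + t1*t2 - 3*t1 - 4*t2) + const.
    Each sweep maximizes over one coordinate holding the other fixed."""
    t1, t2 = 0.0, 0.0                # starting values
    for _ in range(n_iter):
        t1 = (3.0 - t2) / 2.0        # argmax over t1 given t2
        t2 = (4.0 - t1) / 2.0        # argmax over t2 given t1
    return t1, t2

print(icm())  # converges to the joint mode (2/3, 5/3)
```

As the abstract notes, a run like this is also a cheap way to produce starting values for a full MCMC exploration of the same posterior.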
Nonparametric Regression In The Presence Of Measurement Error
 Biometrika
, 1999
"... This paper develops the two ideas of approximately consistent and regression spline estimation in the presence of measurement error. In ..."
Abstract

Cited by 31 (5 self)
This paper develops the two ideas of approximately consistent and regression spline estimation in the presence of measurement error. In ...
A Design-Adaptive Local Polynomial Estimator for the Errors-in-Variables Problem
"... Abstract: Local polynomial estimators are popular techniques for nonparametric regression estimation and have received great attention in the literature. Their simplest version, the local constant estimator, can be easily extended to the errorsinvariables context by exploiting its similarity with ..."
Abstract

Cited by 16 (1 self)
Abstract: Local polynomial estimators are popular techniques for nonparametric regression estimation and have received great attention in the literature. Their simplest version, the local constant estimator, can be easily extended to the errors-in-variables context by exploiting its similarity with the deconvolution kernel density estimator. The generalization of the higher-order versions of the estimator, however, is not straightforward and has remained an open problem for the last 15 years, since the publication of Fan and Truong (1993). We propose an innovative local polynomial estimator of any order in the errors-in-variables context, derive its design-adaptive asymptotic properties, and study its finite-sample performance on simulated examples. We not only provide a solution to a long-standing open problem, but also make methodological contributions to errors-in-variables regression, including local polynomial estimation of derivative functions.
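For readers unfamiliar with the baseline this abstract extends, here is a minimal local linear (degree-1 local polynomial) estimator in the error-free setting; the paper's contribution is the much harder errors-in-variables version, which this sketch does not attempt. All names and the Gaussian kernel choice are illustrative.

```python
import math

def local_linear(x, y, x0, h):
    """Local linear estimate m_hat(x0): weighted least squares fit of a line
    in a Gaussian-kernel neighbourhood of x0 (error-free design, for illustration)."""
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
    s0 = sum(w)
    s1 = sum(wi * (xi - x0) for wi, xi in zip(w, x))
    s2 = sum(wi * (xi - x0) ** 2 for wi, xi in zip(w, x))
    t0 = sum(wi * yi for wi, yi in zip(w, y))
    t1 = sum(wi * (xi - x0) * yi for wi, xi, yi in zip(w, x, y))
    det = s0 * s2 - s1 * s1
    return (s2 * t0 - s1 * t1) / det   # local intercept = fitted value at x0

# Design-adaptivity check: on exactly linear data the estimator reproduces the line.
x = [i / 10.0 for i in range(21)]
y = [2.0 + 3.0 * xi for xi in x]
print(local_linear(x, y, 1.0, 0.3))    # close to 2 + 3*1 = 5
```

The local constant version would return t0 / s0 instead; replacing the kernel weights by deconvolution kernel weights is the extension the abstract discusses.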
Nonlinear and Nonparametric Regression and Instrumental Variables
 Journal of the American Statistical Association
, 2004
"... We consider regression when the predictor is measured with error and an instrumental variable (IV) is available. The regression function can be modeled linearly, nonlinearly, or nonparametrically. Our major new result shows that the regression function and all parameters in the measurement error mod ..."
Abstract

Cited by 15 (6 self)
We consider regression when the predictor is measured with error and an instrumental variable (IV) is available. The regression function can be modeled linearly, nonlinearly, or nonparametrically. Our major new result shows that the regression function and all parameters in the measurement error model are identified under relatively weak conditions, much weaker than previously known to imply identifiability. In addition, we exploit a characterization of the IV estimator as a classical “correction for attenuation” method based on a particular estimate of the variance of the measurement error. This estimate of the measurement error variance allows us to construct functional nonparametric regression estimators making no assumptions about the distribution of the unobserved predictor, and structural estimators that use parametric assumptions about this distribution. The functional estimators use simulation extrapolation or deconvolution kernels, and the structural method uses Bayesian Markov chain Monte Carlo. The Bayesian estimator is found to significantly outperform the functional approach.
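The “correction for attenuation” role of the IV estimator is easy to see in the linear case. The simulation below is a hypothetical toy model (true slope 1, unit-variance measurement error, so the naive slope attenuates toward 0.5), not the paper's setup; the IV slope cov(Y,T)/cov(W,T) removes the attenuation.

```python
import random

def cov(a, b):
    """Sample covariance (divisor n, fine for illustration)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

random.seed(0)
n = 20000
x = [random.gauss(0, 1) for _ in range(n)]          # true predictor (unobserved)
w = [xi + random.gauss(0, 1) for xi in x]           # error-prone measurement
t = [xi + random.gauss(0, 1) for xi in x]           # instrumental variable
y = [1.0 * xi + random.gauss(0, 0.2) for xi in x]   # response, true slope 1

beta_naive = cov(y, w) / cov(w, w)   # attenuated: roughly 1 * 1/(1+1) = 0.5
beta_iv = cov(y, t) / cov(w, t)      # IV slope: consistent for the true slope 1
print(beta_naive, beta_iv)
```

The ratio beta_naive / beta_iv estimates the reliability ratio, which is one route to the measurement error variance estimate mentioned in the abstract.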
REGRESSION ON MANIFOLDS: ESTIMATION OF THE EXTERIOR DERIVATIVE
 SUBMITTED TO THE ANNALS OF STATISTICS
, 2010
"... Collinearity and nearcollinearity of predictors cause difficulties when doing regression. In these cases, variable selection becomes untenable because of mathematical issues concerning the existence and numerical stability of the regression coefficients, and interpretation of the coefficients is am ..."
Abstract

Cited by 14 (3 self)
Collinearity and near-collinearity of predictors cause difficulties when doing regression. In these cases, variable selection becomes untenable because of mathematical issues concerning the existence and numerical stability of the regression coefficients, and interpretation of the coefficients is ambiguous because gradients are not defined. Using a differential geometric interpretation, in which the regression coefficients are interpreted as estimates of the exterior derivative of a function, we develop a new method to do regression in the presence of collinearities. Our regularization scheme can improve estimation error, and it can be easily modified to include lasso-type regularization. These estimators also have simple extensions to the “large p, small n” context.
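The instability the abstract refers to can be demonstrated with plain ridge regularization on a near-collinear design; this is a generic ridge sketch with invented data, not the paper's exterior-derivative estimator, but it shows the numerical-stability problem regularization addresses.

```python
def ridge_2d(x1, x2, y, lam):
    """Ridge solution for two predictors (no intercept): (X'X + lam*I)^{-1} X'y,
    solved by Cramer's rule for the 2x2 normal equations."""
    a = sum(v * v for v in x1) + lam
    b = sum(u * v for u, v in zip(x1, x2))
    d = sum(v * v for v in x2) + lam
    c1 = sum(u * v for u, v in zip(x1, y))
    c2 = sum(u * v for u, v in zip(x2, y))
    det = a * d - b * b
    return ((d * c1 - b * c2) / det, (a * c2 - b * c1) / det)

# Nearly collinear design: x2 is almost an exact copy of x1.
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [1.0, 2.001, 2.999, 4.0]
y = [2.0, 4.0, 6.01, 8.0]          # roughly x1 + x2

print(ridge_2d(x1, x2, y, 0.0))    # unregularized: coefficients split wildly
print(ridge_2d(x1, x2, y, 0.1))    # ridge: shares the weight, roughly (1, 1)
```

Unregularized, the near-singular X'X lets tiny noise blow the two coefficients apart (only their sum is stable); a small ridge penalty restores a stable, interpretable split.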
Optimal Change-Point Estimation in Inverse Problems
 Scand. J. Statist
, 1997
"... . We develop a method of estimating changepoints of a function in the case of indirect noisy observations. As two paradigmatic problems we consider deconvolution and errorsinvariables regression. We estimate the scalar products of our indirectly observed function with appropriate test functions, ..."
Abstract

Cited by 13 (0 self)
We develop a method of estimating change-points of a function in the case of indirect noisy observations. As two paradigmatic problems we consider deconvolution and errors-in-variables regression. We estimate the scalar products of our indirectly observed function with appropriate test functions, which are shifted over the interval of interest. An estimator of the change-point is obtained as the extremal point of this quantity. We derive rates of convergence for this estimator. They depend on the degree of ill-posedness of the problem, which derives from the smoothness of the error density. Analyzing the Hellinger modulus of continuity of the problem, we show that these rates are minimax. 1991 Mathematics Subject Classification. Primary 62G05; secondary 62G20. Key words and phrases. Change-point estimation, inverse problems, indirect observations, deconvolution, errors-in-variables regression, optimal rates of convergence. 1. Introduction. Change-point estimation has often been s...
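The shifted-test-function idea can be sketched directly: slide an antisymmetric window across the data and take the extremal point of the inner products. This toy version works on direct observations of a step function; the paper's contribution is doing this through the indirect (deconvolution) observation scheme.

```python
def change_point(y, half):
    """Estimate a jump location by sliding an antisymmetric test function
    (-1 on the left half-window, +1 on the right) and taking the extremal point."""
    best, best_val = None, -1.0
    for k in range(half, len(y) - half):
        val = abs(sum(y[k:k + half]) - sum(y[k - half:k]))
        if val > best_val:
            best, best_val = k, val
    return best

y = [0.0] * 40 + [1.0] * 60      # step function jumping at index 40
print(change_point(y, 5))         # recovers the jump location, 40
```

The inner product peaks exactly when the window straddles the jump, which is why the extremal point is a natural estimator.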
Nonlinear Models of Measurement Errors
 JOURNAL OF ECONOMIC LITERATURE
, 2011
"... Measurement errors in economic data are pervasive and nontrivial in size. The presence of measurement errors causes biased and inconsistent parameter estimates and leads to erroneous conclusions to various degrees in economic analysis. While linear errorsinvariables models are usually handled with ..."
Abstract

Cited by 12 (0 self)
Measurement errors in economic data are pervasive and nontrivial in size. The presence of measurement errors causes biased and inconsistent parameter estimates and leads to erroneous conclusions to various degrees in economic analysis. While linear errors-in-variables models are usually handled with well-known instrumental variable methods, this article provides an overview of recent research papers that derive estimation methods yielding consistent estimates for nonlinear models with measurement errors. We review models with both classical and nonclassical measurement errors, and with misclassification of discrete variables. For each of the methods surveyed, we describe the key ideas for identification and estimation, and discuss its applications wherever currently available.
A ridge-parameter approach to deconvolution (long version). Available at www.ms.unimelb.edu.au/~halpstat/hmeirevlongversion.pdf
, 2006
"... Kernel methods for deconvolution have attractive features, and prevail in the literature. However, they have disadvantages, which include the fact that they are usually suitable only for cases where the error distribution is infinitely supported and its characteristic function does not ever vanish. ..."
Abstract

Cited by 10 (0 self)
Kernel methods for deconvolution have attractive features, and prevail in the literature. However, they have disadvantages, which include the fact that they are usually suitable only for cases where the error distribution is infinitely supported and its characteristic function never vanishes. Even in these settings, optimal convergence rates are achieved by kernel estimators only when the kernel is chosen to adapt to the unknown smoothness of the target distribution. In this paper we suggest alternative ridge methods, not involving kernels in any way. We show that ridge methods (a) do not require the assumption that the error-distribution characteristic function is nonvanishing; (b) adapt themselves remarkably well to the smoothness of the target density, with the result that the degree of smoothness does not need to be directly estimated; and (c) give optimal convergence rates in a broad range of settings. 1. Introduction. Density
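The key ridge move can be shown in one line: instead of dividing the empirical characteristic function by the error characteristic function (which fails where the latter vanishes), damp the division with a ridge term. The values below are invented toy Fourier-domain quantities; the paper's actual estimator involves a carefully chosen, frequency-dependent ridge.

```python
def ridge_deconvolve(psi_hat, phi_err, rho):
    """Fourier-domain ridge deconvolution: replace psi_hat / phi_err with
    psi_hat * conj(phi_err) / (|phi_err|^2 + rho), which stays finite even
    where the error characteristic function phi_err vanishes."""
    return [p * e.conjugate() / (abs(e) ** 2 + rho) for p, e in zip(psi_hat, phi_err)]

# Toy frequencies; the error characteristic function vanishes at the third one.
phi_err = [1.0, 0.5, 0.0, 0.5]
psi_hat = [1.0, 0.3, 0.2, 0.1]
print(ridge_deconvolve(psi_hat, phi_err, 0.01))  # no division-by-zero blow-up
```

With rho = 0 and nonvanishing phi_err this reduces to ordinary Fourier deconvolution; the ridge term is what removes the nonvanishing assumption highlighted in point (a).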