Results 1–10 of 17
Outlier Detection Using Nonconvex Penalized Regression, 2010
Cited by 31 (3 self)
Abstract:
This paper studies the outlier detection problem from the point of view of penalized regressions. Our regression model adds one mean shift parameter for each of the n data points. We then apply a regularization favoring a sparse vector of mean shift parameters. The usual L1 penalty yields a convex criterion, but we find that it fails to deliver a robust estimator. The L1 penalty corresponds to soft thresholding. We introduce a thresholding (denoted by Θ) based iterative procedure for outlier detection (ΘIPOD). A version based on hard thresholding correctly identifies outliers on some hard test problems. We find that ΘIPOD is much faster than iteratively reweighted least squares for large data because each iteration costs at most O(np) (and sometimes much less), avoiding an O(np²) least squares estimate. We describe the connection between ΘIPOD and M-estimators. Our proposed method has one tuning parameter with which to both identify outliers and estimate regression coefficients. A data-dependent choice can be made based on BIC. The tuned ΘIPOD shows outstanding performance in identifying outliers in various situations in comparison to other existing approaches. This methodology extends to high-dimensional modeling with p ≫ n, if both the coefficient vector and the outlier pattern are sparse.
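The alternating scheme the abstract describes can be sketched as follows. This is a minimal illustration of the hard-thresholding variant, not the authors' implementation: the mean-shift model y = Xβ + γ + ε is fit by alternating an OLS step for β with hard thresholding of the residuals for γ; the threshold `lam` stands in for the BIC-tuned parameter, and all names are ours.

```python
import numpy as np

def hard_threshold(z, lam):
    """Hard thresholding: keep entries whose magnitude exceeds lam, zero the rest."""
    return np.where(np.abs(z) > lam, z, 0.0)

def theta_ipod(X, y, lam, n_iter=100, tol=1e-8):
    """Sketch of a hard-thresholding ΘIPOD-style iteration for y = X b + gamma + noise.
    Alternates an OLS step for b with thresholding of the residuals for gamma.
    Nonzero entries of the returned gamma flag suspected outliers."""
    n, p = X.shape
    gamma = np.zeros(n)
    # Precompute the projection factor once; each later iteration is then cheap.
    XtX_inv_Xt = np.linalg.solve(X.T @ X, X.T)
    for _ in range(n_iter):
        beta = XtX_inv_Xt @ (y - gamma)
        gamma_new = hard_threshold(y - X @ beta, lam)
        if np.max(np.abs(gamma_new - gamma)) < tol:
            gamma = gamma_new
            break
        gamma = gamma_new
    return beta, gamma
```

On simulated data with a few large mean shifts, the nonzero pattern of `gamma` recovers the shifted observations while `beta` stays close to the clean-data fit.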
Anomalies in the analysis of calibrated data, J. Statist. Comput. Simul., 2009
Cited by 2 (2 self)
Abstract:
This study examines effects of calibration errors on model assumptions and data-analytic tools in direct calibration assays. These effects encompass induced dependencies, inflated variances, and heteroscedasticity among the calibrated measurements, whose distributions arise as mixtures. These anomalies adversely affect conventional inferences, including the inconsistency of sample means, the underestimation of measurement variance, and the distributions of sample means, sample variances, and Student's t as mixtures. Inferences in comparative experiments remain largely intact, although error mean squares continue to underestimate the measurement variances. These anomalies are masked in practice, as conventional diagnostics cannot discern irregularities induced through calibration. Case studies illustrate the principal issues.
Fisher Information Test of Normality, 1998
Cited by 1 (0 self)
Abstract:
An extremal property of normal distributions is that they have the smallest Fisher Information for location among all distributions with the same variance. A new test of normality proposed by Terrell (1995) utilizes the above property by finding the density of maximum likelihood constrained to have the expected Fisher Information under normality based on the sample variance. The test statistic is then constructed as a ratio of the resulting likelihood against that of normality. Since the asymptotic distribution of this test statistic is not available, the critical values for n = 3 to 200 have been obtained by simulation and smoothed using polynomials. An extensive power study shows that the test has superior power against distributions that are symmetric and leptokurtic (long-tailed). Another advantage of the test over existing ones is the direct depiction of any deviation from normality in the form of a density estimate. This is evident when the test is applied to several real data sets. Testing of normality in residuals is also investigated. Various approaches in dealing with residuals being possibly heteroscedastic and correlated suffer from a loss of power. The approach with the fewest undesirable features is to use the Ordinary Least ...
Contributions to the problem of goodness-of-fit, 1982
Abstract:
Niknian, Minoo, "Contributions to the problem of goodness-of-fit" (1982). Retrospective Theses and Dissertations. Paper 7469.
IRREGULARITIES IN X(Y) FROM Y(X) IN LINEAR CALIBRATION
Abstract:
Let X be an input measurement and Y the output reading of a calibrated instrument, with Y(X) as the calibration curve. Solving X(Y) projects an instrumental reading back onto the scale of measurements as an object of pivotal interest. Arrays of instrumental readings are projected in this manner in practice, yielding arrays of calibrated measurements, typically subject to errors of calibration. Effects of calibration errors on properties of calibrated measurements are examined here under linear calibration. Irregularities arise as induced dependencies, inflated variances, nonstandard distributions, inconsistent sample means, the underestimation of measurement variance, and other unintended consequences. On the other hand, conventional properties are seen to remain largely in place in the use of selected regression diagnostics, and in one-way comparative experiments using calibrated data.
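The X(Y)-from-Y(X) projection the abstract studies can be sketched in a few lines. The calibration setup below (standards, noise level, line parameters) is hypothetical and only illustrates the mechanism: every calibrated value is produced from the same fitted pair (a, b), which is what induces the dependencies among calibrated measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration stage: known standards x_std, noisy instrument readings y_std.
x_std = np.linspace(0.0, 10.0, 11)
a_true, b_true = 2.0, 1.5
y_std = a_true + b_true * x_std + 0.2 * rng.normal(size=x_std.size)

# Fit the calibration line Y(X) = a + b X by least squares (polyfit returns [slope, intercept]).
b_hat, a_hat = np.polyfit(x_std, y_std, 1)

def calibrate(y_new):
    """Project an instrument reading back to the measurement scale: X(Y) = (Y - a)/b.
    All calibrated values share the same fitted (a_hat, b_hat), the source of the
    induced dependence and inflated variance discussed in the abstract."""
    return (y_new - a_hat) / b_hat

# A noise-free reading taken at X = 4 maps back close to, but not exactly, 4.
x_hat = calibrate(a_true + b_true * 4.0)
```

Because `a_hat` and `b_hat` carry calibration error, even an error-free reading is recovered only approximately, and two readings calibrated with the same line are correlated through the shared fit.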
TESTING FOR NORMALITY IN THE LINEAR REGRESSION MODEL: AN EMPIRICAL LIKELIHOOD RATIO TEST, 2004
Abstract:
The empirical likelihood ratio (ELR) test for the problem of testing for normality in a linear regression model is derived in this paper. The sampling properties of the ELR test and four other commonly used tests are explored and analyzed using Monte Carlo simulation. The ELR test has good power properties against various alternative hypotheses. Keywords: Regression residual, empirical likelihood ratio, Monte Carlo simulation, normality
structural change, 2004
Abstract:
In this paper we derive an empirical likelihood type Wald (ELW) test for the problem of testing for structural change in a linear regression model when the variance of the error term is not known to be equal across regimes. The sampling properties of the ELW test are analyzed using Monte Carlo simulation. Comparisons of these properties of the ELW test and of three other commonly used tests (Jayatissa, Weerahandi, and Wald) are conducted. The finding is that the ELW test has very good power properties.
Bootstrapping regression models with BLUS residuals, 1999
Abstract:
To bootstrap a regression problem, pairs of response and explanatory variables or residuals can be resampled, according to whether we believe that the explanatory variables are random or fixed. In the latter case, different residuals have been proposed in the literature, including the ordinary residuals (Efron, 1979), standardized residuals (Bickel and Freedman, 1983) and studentized residuals (Weber, 1984). Freedman (1981) has shown that the bootstrap from ordinary residuals is asymptotically valid when the number of cases increases and the number of variables is fixed. Bickel and Freedman (1983) have shown the asymptotic validity for ordinary residuals when the number of variables as well as the number of cases increase provided that the ratio of the two converges to zero at an appropriate rate. In this paper, the authors introduce the use of Best Linear Unbiased Scaled (BLUS) residuals in bootstrapping regression models. The main advantage of the BLUS residuals, introduced in Theil (...
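The fixed-design resampling scheme this abstract builds on can be sketched as follows. Note this shows the ordinary-residual variant (Efron, 1979) that the paper contrasts with BLUS residuals, not the BLUS construction itself; the function name and defaults are ours.

```python
import numpy as np

def residual_bootstrap(X, y, n_boot=500, rng=None):
    """Fixed-design bootstrap with ordinary residuals: fit OLS once, then
    repeatedly resample residuals with replacement, rebuild a response vector,
    and refit, collecting the bootstrap coefficient draws."""
    rng = np.random.default_rng(rng)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    draws = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        # Resample residuals, attach them to the fitted values, and refit.
        y_star = X @ beta_hat + rng.choice(resid, size=resid.size, replace=True)
        draws[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)
    return beta_hat, draws
```

The spread of `draws` around `beta_hat` estimates the sampling variability of the coefficients; swapping in standardized, studentized, or BLUS residuals changes only the `resid` vector being resampled.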