Results 1–10 of 33
Breakdown and Groups
, 2002
Abstract

Cited by 24 (6 self)
The concept of breakdown point was... In this paper we argue that this success is intimately connected to the fact that the translation and affine groups act on the sample space and give rise to a definition of equivariance for statistical functionals. For such functionals a nontrivial upper bound for the breakdown point can be shown. In the absence of such a group structure a breakdown point of one is attainable, and this is perhaps the decisive reason why the concept of breakdown point in other situations has not proved as successful. Even if a natural group is present it is often not sufficiently large to allow a nontrivial upper bound for the breakdown point. One exception to this is the problem of the autocorrelation structure of time series, where we derive a nontrivial upper bound for the breakdown point using the group of realizable linear filters. The paper is formulated in an abstract manner to emphasize the role of the group and the resulting equivariance structure.
Multiscale Inference about a Density
, 2007
Abstract

Cited by 20 (5 self)
We introduce a multiscale test statistic based on local order statistics and spacings that provides simultaneous confidence statements for the existence and location of local increases and decreases of a density or a failure rate. The procedure provides guaranteed finite-sample significance levels, is easy to implement and possesses certain asymptotic optimality and adaptivity properties.
Approximating Data with Weighted Smoothing Splines
, 2009
Abstract

Cited by 9 (3 self)
Given a data set (ti, yi), i = 1, ..., n with the ti ∈ [0, 1], nonparametric regression is concerned with the problem of specifying a suitable function fn: [0, 1] → R such that the data can be reasonably approximated by the points (ti, fn(ti)), i = 1, ..., n. If a data set exhibits large variations in local behaviour, for example large peaks as in spectroscopy data, then the method must be able to adapt to the local changes in smoothness. Whilst many methods are able to accomplish this, they are less successful at adapting the derivatives. In this paper we show how the goal of local adaptivity of the function and its first and second derivatives can be attained in a simple manner using weighted smoothing splines. A residual-based concept of approximation is used which forces local adaptivity of the regression function, together with a global regularization which makes the function as smooth as possible subject to the approximation constraints.
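The role of the per-point weights can be seen with an off-the-shelf smoothing spline. The sketch below is not the authors' residual-based procedure; it only illustrates how weights in scipy's `UnivariateSpline` make the fit track a local peak. The toy signal, the weight values and the smoothing parameter are all invented for illustration.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
# toy signal with one sharp local peak, as in spectroscopy data
f = np.exp(-0.5 * ((t - 0.5) / 0.02) ** 2)
y = f + 0.05 * rng.standard_normal(t.size)

# uniform weights: the global smoothing parameter s tends to flatten the peak
smooth = UnivariateSpline(t, y, w=np.ones_like(t), s=1.0)

# upweight points near the peak: their residuals become expensive,
# so the spline is forced to follow the data locally
w = np.where(np.abs(t - 0.5) < 0.05, 10.0, 1.0)
adaptive = UnivariateSpline(t, y, w=w, s=1.0)

peak_err_uniform = float(abs(smooth(0.5) - 1.0))
peak_err_weighted = float(abs(adaptive(0.5) - 1.0))
```

The spline minimizes roughness subject to sum(w_i * (y_i - f(t_i))^2) <= s, so raising w_i locally tightens the fit there without changing the global smoothness budget.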
A Wavelet-Fisz Approach to Spectrum Estimation
, 2008
Abstract

Cited by 8 (4 self)
We suggest a new approach to wavelet threshold estimation of spectral densities of stationary time series. It is well known that choosing appropriate thresholds to smooth the periodogram is difficult because nonparametric spectral estimation suffers from problems similar to curve estimation with a highly heteroscedastic and non-Gaussian error structure. Possible solutions that have been proposed are plug-in estimation of the variance of the empirical wavelet coefficients or the log-transformation of the periodogram. In this paper we propose an alternative method to address the problem of heteroscedasticity and non-normality. We estimate thresholds for the empirical wavelet coefficients of the (tapered) periodogram as appropriate linear combinations of the periodogram values, similar to empirical scaling coefficients. Our solution permits the design of “asymptotically noise-free reconstruction thresholds”, paralleling classical wavelet theory for nonparametric regression with Gaussian white noise errors. Our simulation studies show promising results that clearly improve on the classical approaches mentioned above. In addition, we derive theoretical results on the near-optimal rate of convergence of the minimax mean-square risk for a class of spectral densities, including those of very low regularity.
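For context, the classical baseline the abstract contrasts against (log-transform the periodogram so the errors are roughly homoscedastic, then wavelet-threshold) can be sketched with a hand-rolled Haar transform. This is not the Wavelet-Fisz estimator itself; the AR(1) data and the threshold value are invented for illustration.

```python
import numpy as np

def haar_soft_denoise(x, thresh):
    """Multilevel Haar transform, soft-threshold the detail
    coefficients, then reconstruct. len(x) must be a power of two."""
    a = np.asarray(x, dtype=float)
    details = []
    while a.size > 1:
        details.append((a[0::2] - a[1::2]) / np.sqrt(2.0))
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    # soft thresholding shrinks detail coefficients toward zero
    details = [np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
               for d in details]
    for d in reversed(details):
        out = np.empty(2 * a.size)
        out[0::2] = (a + d) / np.sqrt(2.0)
        out[1::2] = (a - d) / np.sqrt(2.0)
        a = out
    return a

# toy AR(1) series and its log-periodogram at positive Fourier frequencies
rng = np.random.default_rng(1)
n = 512
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
per = np.abs(np.fft.rfft(x)[1:n // 2 + 1]) ** 2 / n   # 256 ordinates
log_spec = haar_soft_denoise(np.log(per), thresh=1.0)
```

With `thresh=0.0` the transform round-trips exactly, which is a convenient sanity check for the reconstruction step.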
Monotone spectral density estimation
, 2009
Abstract

Cited by 7 (0 self)
We propose two estimators of a unimodal or monotone spectral density that are based on the periodogram. These are the isotonic regression of the periodogram and the isotonic regression of the log-periodogram. We derive pointwise limit distribution results for the proposed estimators for short-memory linear processes and long-memory Gaussian processes, and also show that the estimators are rate optimal.
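The first of the two estimators amounts to a pool-adjacent-violators fit to the raw periodogram. A minimal numpy sketch, assuming an AR(1) process (whose spectral density is monotone decreasing on [0, π] when the coefficient is positive); the series length and coefficient are invented:

```python
import numpy as np

def pava_nonincreasing(y):
    """Pool-adjacent-violators: least-squares nonincreasing fit."""
    vals, wts, cnts = [], [], []
    for v in map(float, y):
        vals.append(v); wts.append(1.0); cnts.append(1)
        # merge the last two blocks while they violate monotonicity
        while len(vals) > 1 and vals[-2] < vals[-1]:
            w = wts[-2] + wts[-1]
            v2 = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / w
            c = cnts[-2] + cnts[-1]
            del vals[-2:], wts[-2:], cnts[-2:]
            vals.append(v2); wts.append(w); cnts.append(c)
    return np.repeat(vals, cnts)

rng = np.random.default_rng(2)
n = 256
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
per = np.abs(np.fft.rfft(x)[1:]) ** 2 / n   # periodogram at freq > 0
spec_hat = pava_nonincreasing(per)          # monotone estimate
```

The second estimator in the abstract would apply the same fit to `np.log(per)` instead.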
Bivariate density estimation using BV regularisation
, 2007
Abstract

Cited by 7 (0 self)
The problem of bivariate density estimation is studied with the aim of finding the density function with the smallest number of local extreme values that is adequate for the given data. Adequacy is defined via the Kuiper metric. The taut-string algorithm, which provides adequate approximations with a small number of local extrema, is generalised to the analysis of two- and higher-dimensional data using Delaunay triangulation and diffusion filtering. The results are based on an equivalence, in one dimension, between the taut-string algorithm and the method of solving the discrete total variation flow equation. The generalisation and some modifications are developed and their performance for density estimation is shown.
Improving Density Estimation by Incorporating Spatial Information
, 2009
Abstract

Cited by 6 (2 self)
Given discrete event data, we wish to produce a probability density that can model the relative probability of events occurring in a spatial region. Common methods of density estimation, such as kernel density estimation, do not incorporate geographical information. Using these methods could result in non-negligible portions of the support of the density lying in unrealistic geographic locations. For example, crime density estimation models that do not take geographic information into account may predict events in unlikely places such as oceans, mountains, etc. We propose a set of maximum penalized likelihood estimation methods based on total variation and H1 Sobolev norm regularizers, in conjunction with a priori high-resolution spatial data, to obtain more geographically accurate density estimates. We apply this method to a residential burglary data set of the San Fernando Valley using geographic features obtained from satellite images of the region and housing density information.
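The failure mode described, and the crudest possible fix, can be seen with a plain kernel density estimate: mask the estimate to the valid region and renormalize. This post-hoc masking is not the paper's penalized-likelihood method (which builds the spatial data into the TV/H1-regularized objective); the unit-square "land" region and the event cluster are invented for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# events clustered near the boundary of the valid region [0, 1]^2
pts = rng.normal(loc=0.9, scale=0.15, size=(2, 300))

kde = gaussian_kde(pts)
gx, gy = np.meshgrid(np.linspace(0, 2, 50), np.linspace(0, 2, 50))
grid = np.vstack([gx.ravel(), gy.ravel()])
dens = kde(grid).reshape(50, 50)

valid = (gx <= 1.0) & (gy <= 1.0)        # "land" mask: the unit square
leak = dens[~valid].sum() / dens.sum()   # share of mass in invalid area

masked = np.where(valid, dens, 0.0)
masked /= masked.sum()                   # renormalize over the grid
```

The `leak` value quantifies exactly the problem the abstract raises: a plain KDE places positive probability outside the geographically possible region.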
A comparison of automatic histogram constructions
, 2008
Abstract

Cited by 6 (2 self)
Even for a well-trained statistician the construction of a histogram for a given real-valued data set is a difficult problem. It is even more difficult to construct a fully automatic procedure which specifies the number and widths of the bins in a satisfactory manner for a wide range of data sets. In this paper we compare several histogram construction procedures by means of a simulation study. The study includes plug-in methods, cross-validation, penalized maximum likelihood and the taut-string procedure. Their performance on different test beds is measured by their ability to identify the peaks of an underlying density as well as by Hellinger distance.
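Several classical plug-in rules of the kind compared in such studies ship with numpy, and a few lines show how much they can disagree on the same sample (the bimodal test density is made up):

```python
import numpy as np

rng = np.random.default_rng(4)
# bimodal sample: the kind of density where bin-width rules diverge
data = np.concatenate([rng.normal(-2.0, 0.5, 500),
                       rng.normal(2.0, 1.0, 500)])

counts = {}
for rule in ("sturges", "scott", "fd"):
    edges = np.histogram_bin_edges(data, bins=rule)
    counts[rule] = len(edges) - 1   # number of bins chosen by the rule
```

Note these are only the plug-in entries of the comparison; cross-validation, penalized likelihood and the taut-string procedure are not built into numpy.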
Smooth functions and local extreme values
 Computational Statistics and Data Analysis
Abstract

Cited by 4 (2 self)
Given a sample of n observations y1, ..., yn at time points t1, ..., tn we consider the problem of specifying a function f̃ such that f̃ is smooth and fits the data in the sense that the residuals yi − f̃(ti) satisfy the multiresolution criterion ...
A new regression model: modal linear regression
, 2013
Abstract

Cited by 3 (0 self)
The mode of a distribution provides an important summary of data and is often estimated based on some nonparametric kernel density estimator. This article develops a new data analysis tool called modal linear regression in order to explore high-dimensional data. Modal linear regression models the conditional mode of a response Y given a set of predictors x as a linear function of x. Modal linear regression differs from standard linear regression in that standard linear regression models the conditional mean (as opposed to the mode) of Y as a linear function of x. We propose an Expectation-Maximization algorithm to estimate the regression coefficients of modal linear regression. We also provide asymptotic properties for the proposed estimator without the symmetry assumption on the error density. Our empirical studies with simulated data and real data demonstrate that the proposed modal regression gives shorter predictive intervals than mean linear regression, median linear regression, and MM-estimators.
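The EM iteration the abstract refers to can be sketched in a few lines with a Gaussian kernel: the E-step weights each observation by a kernel applied to its current residual, and the M-step is a weighted least-squares refit. The bandwidth `h`, its fixed value, and the toy data are all assumptions for illustration; a real implementation would select the bandwidth (e.g. by cross-validation) rather than fix it.

```python
import numpy as np

def modal_linear_regression(X, y, h=1.0, n_iter=50):
    """EM-style iteration for modal linear regression (sketch):
    weight residuals by a Gaussian kernel, then refit by WLS."""
    y = np.asarray(y, dtype=float)
    n = y.size
    Xd = np.column_stack([np.ones(n), X])          # add intercept
    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]   # start from OLS
    for _ in range(n_iter):
        r = y - Xd @ beta
        w = np.exp(-0.5 * (r / h) ** 2)            # E-step: kernel weights
        W = w / w.sum()
        # M-step: weighted least squares
        A = Xd.T @ (Xd * W[:, None])
        b = Xd.T @ (W * y)
        beta = np.linalg.solve(A, b)
    return beta

rng = np.random.default_rng(5)
x = rng.uniform(-1.0, 1.0, 200)
y = 2.0 + 3.0 * x + 0.1 * rng.standard_normal(200)
beta = modal_linear_regression(x, y, h=0.5)
```

With symmetric errors, as here, the conditional mode and mean coincide, so the fit should land near the OLS coefficients; the two estimators separate only when the error density is skewed.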