Results 1 – 8 of 8
Nonparametrically Consistent Depth-Based Classifiers
Abstract

Cited by 7 (0 self)
We introduce a class of depth-based classification procedures that are of a nearest-neighbor nature. Depth, after symmetrization, indeed provides the center-outward ordering that is necessary and sufficient to define nearest neighbors. The resulting classifiers are affine-invariant and inherit their nonparametric validity from nearest-neighbor classifiers. In particular, we prove that the proposed depth-based classifiers are consistent under very mild conditions. We investigate their finite-sample performance through simulations and show that they outperform affine-invariant nearest-neighbor classifiers obtained through an obvious standardization construction. We illustrate the practical value of our classifiers on two real data examples. Finally, we briefly discuss possible uses of our depth-based neighbors in other inference problems.
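The "obvious standardization construction" that the abstract uses as a baseline can be sketched as follows: whiten the training sample by the inverse square root of its sample covariance, map the query point the same way, and run ordinary Euclidean k-NN. This is an illustrative reading of that baseline, not the paper's depth-based method; all function and variable names below are ours.

```python
import numpy as np

def affine_invariant_knn(X_train, y_train, x_query, k=5):
    """Illustrative affine-invariant k-NN classifier via standardization:
    whiten the training data by covariance^{-1/2}, apply the same map to
    the query point, then take a majority vote among the k nearest
    neighbors in the whitened (Euclidean) coordinates."""
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T   # symmetric covariance^{-1/2}
    Z = (X_train - mu) @ W                      # whitened training sample
    z = (x_query - mu) @ W                      # whitened query point
    dist = np.linalg.norm(Z - z, axis=1)
    votes = y_train[np.argsort(dist)[:k]]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]            # majority vote
```

Because distances are computed after whitening, the predicted label is unchanged when every observation (and the query) is mapped through the same invertible affine transformation.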
An Affine Invariant k-Nearest Neighbor Regression Estimate
, 1201
Abstract

Cited by 4 (0 self)
We design a data-dependent metric in R^d and use it to define the k-nearest neighbors of a given point. Our metric is invariant under all affine transformations. We show that, with this metric, the standard k-nearest-neighbor regression estimate is asymptotically consistent under the usual conditions on k and minimal requirements on the input data. Index Terms — Nonparametric estimation, regression function estimation.
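One familiar example of an affine-invariant, data-dependent metric is the Mahalanobis distance built from the sample covariance. The sketch below uses it as an illustrative stand-in for the paper's construction (the paper's actual metric may differ) to form a k-NN regression estimate:

```python
import numpy as np

def mahalanobis_knn_regress(X, y, x_query, k=5):
    """Illustrative k-NN regression estimate under a data-dependent,
    affine-invariant metric: squared Mahalanobis distances from the
    query point, followed by averaging the responses of the k nearest
    observations. A sketch, not the paper's exact estimator."""
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - x_query
    d2 = np.einsum('ij,jk,ik->i', diff, S_inv, diff)  # squared distances
    return y[np.argsort(d2)[:k]].mean()               # mean of k nearest responses
```

Since Mahalanobis distances are unchanged under invertible affine maps of the data, the resulting estimate does not depend on the coordinate system in which the covariates are recorded.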
Robust Optimal Tests for Causality in Multivariate Time Series
Abstract
Here, we derive optimal rank-based tests for non-causality in the sense of Granger between two multivariate time series. Assuming that the global process admits a joint stationary vector autoregressive (VAR) representation with an elliptically symmetric innovation density, both the no-feedback and one-direction causality hypotheses are tested. Using the characterization of non-causality in the VAR context, the local asymptotic normality (LAN) theory described in Le Cam (1986) allows for constructing locally and asymptotically optimal tests for the null hypothesis of non-causality in one or both directions. These tests are based on multivariate residual ranks and signs (Hallin and Paindaveine, 2004a) and are shown to be asymptotically distribution-free under elliptically symmetric innovation densities and invariant with respect to some affine transformations. Local powers and asymptotic relative efficiencies are also derived. The level, power and robustness (to outliers) of the resulting tests are studied by simulation and compared to those of the Wald test. Finally, the new tests are applied to Canadian money and income data.
Multivariate Signed-Rank Tests in Vector Autoregressive Order Identification
Abstract
The classical theory of rank-based inference is essentially limited to univariate linear models with independent observations. The objective of this paper is to illustrate some recent extensions of this theory to time-series problems (serially dependent observations) in a multivariate setting (multivariate observations) under very mild distributional assumptions (mainly, elliptical symmetry; for some of the testing problems treated below, even second-order moments are not required). After a brief presentation of the invariance principles that underlie the concepts of ranks to be considered, we concentrate on two examples of practical relevance: (1) the multivariate Durbin–Watson problem (testing against autocorrelated noise in a linear model context) and (2) the problem of testing the order of a vector autoregressive model, testing VAR(p0) against VAR(p0 + 1) dependence. These two testing procedures are the building blocks of classical autoregressive order-identification methods. Based either on pseudo-Mahalanobis (Tyler) or on hyperplane-based (Oja and Paindaveine) signs and ranks, three classes of test statistics are considered for each problem: (1) statistics of the sign-test type, (2) Spearman statistics and (3) van der Waerden (normal score) statistics. Simulations confirm theoretical results about the power of the proposed rank-based methods and establish their good robustness properties.
Signed-Rank Tests for Location in the Symmetric Independent Component Model
Abstract
The so-called independent component (IC) model states that the observed p-vector X is generated via X = ΛZ + µ, where µ is a p-vector, Λ is a full-rank matrix, and the centered random vector Z has independent marginals. We consider the problem of testing the null hypothesis H0: µ = 0 on the basis of i.i.d. observations X1, ..., Xn generated by the symmetric version of the IC model above (for which all ICs have a distribution symmetric about the origin). In the spirit of Hallin & Paindaveine (2002a), we develop nonparametric (signed-rank) tests, which are valid without any moment assumption and are, for adequately chosen scores, locally and asymptotically optimal (in the Le Cam sense) at given densities. Our tests are measurable with respect to the marginal signed ranks computed in the collection of null residuals Λ̂⁻¹Xi, where Λ̂ is a suitable estimate of Λ. Provided that Λ̂ is affine-equivariant, the proposed tests, unlike the standard marginal signed-rank tests developed in Puri & Sen (1971) or any of their obvious generalizations, are affine-invariant. Local powers and asymptotic relative efficiencies (AREs) with respect to Hotelling's T² test are derived. Quite remarkably, when Gaussian scores are used, these AREs are always greater than or equal to one, with equality in the multinormal model only. Finite-sample efficiencies and robustness properties are investigated through a Monte Carlo study.
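The marginal signed ranks that the tests are built on can be computed in a few lines. The sketch below is only an illustration of that ingredient (given some estimate of Λ), not of the test statistics themselves; names are ours.

```python
import numpy as np

def marginal_signed_ranks(X, Lambda_hat):
    """Illustrative computation of marginal signed ranks of the null
    residuals Lambda_hat^{-1} X_i: in each coordinate, rank the residuals
    by absolute value (ranks 1..n) and re-attach the residual's sign.
    X has one observation per row; Lambda_hat is an estimate of the
    mixing matrix."""
    Z = X @ np.linalg.inv(Lambda_hat).T                        # residuals, row-wise
    ranks = np.argsort(np.argsort(np.abs(Z), axis=0), axis=0) + 1
    return np.sign(Z) * ranks
```

Linear signed-rank statistics are then obtained by applying score functions to these quantities; affine invariance of the resulting tests hinges, as the abstract notes, on the estimate of Λ being affine-equivariant.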
Optimal Sparse Principal Component Analysis in High Dimensional Elliptical Model
, 2013
Abstract
We propose a semiparametric sparse principal component analysis method named elliptical component analysis (ECA) for analyzing high-dimensional non-Gaussian data. In particular, we assume the data follow an elliptical distribution. The elliptical family contains many well-known multivariate distributions, such as the multivariate Gaussian, multivariate t, Cauchy, Kotz, and logistic distributions. It allows extra flexibility in modeling heavy-tailed distributions and captures tail dependence between variables. Such modeling flexibility makes it extremely useful for modeling financial, genomics, and bioimaging data, where the data typically present heavy tails and high tail dependence. Under a double asymptotic framework in which both the sample size n and the dimension d increase, we show that a multivariate rank-based ECA procedure attains the optimal rate of convergence in parameter estimation. This is the first optimality result established for sparse principal component analysis on high-dimensional elliptical data.
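A standard multivariate rank-based device in this elliptical-data literature is to estimate the correlation matrix from pairwise Kendall's tau via the transform sin(π·τ/2), which is consistent under ellipticity without moment assumptions. The sketch below shows that ingredient only; the exact estimator and sparsity scheme used by ECA may differ.

```python
import numpy as np

def kendall_tau(x, y):
    """Pairwise Kendall's tau between two samples (O(n^2) sketch)."""
    n = len(x)
    s = 0.0
    for i in range(n):
        s += np.sum(np.sign(x[i + 1:] - x[i]) * np.sign(y[i + 1:] - y[i]))
    return 2.0 * s / (n * (n - 1))

def tau_correlation(X):
    """Rank-based correlation estimate for elliptical data: apply the
    sin(pi/2 * tau) transform to each pairwise Kendall's tau. The
    leading eigenvectors of the result give ECA-style components."""
    d = X.shape[1]
    R = np.eye(d)
    for i in range(d):
        for j in range(i + 1, d):
            t = kendall_tau(X[:, i], X[:, j])
            R[i, j] = R[j, i] = np.sin(np.pi * t / 2)
    return R
```

Because it depends only on pairwise ranks, the estimate is robust to heavy tails, which is why such transforms are natural for the elliptical distributions discussed above.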