Results 1–10 of 50
Estimating the Support of a High-Dimensional Distribution
, 1999
"... Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S is bounded by some a priori specified between 0 and 1. We propo ..."
Abstract

Cited by 783 (29 self)
Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S is bounded by some a priori specified ν between 0 and 1. We propose a method to approach this problem by trying to estimate a function f which is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. The expansion coefficients are found by solving a quadratic programming problem, which we do by carrying out sequential optimization over pairs of input patterns. We also provide a preliminary theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabelled data ...
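scikit-learn's OneClassSVM follows this formulation and exposes the a priori bound as the parameter nu; a minimal sketch, with synthetic data and parameter values chosen purely for illustration:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))           # sample drawn from P
X_test = np.array([[0.0, 0.0], [4.0, 4.0]])   # a likely inlier and a likely outlier

# nu upper-bounds the fraction of training points left outside S
# (and lower-bounds the fraction of support vectors).
clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X_train)

print(clf.predict(X_test))            # +1 = inside S, -1 = outside
print(clf.decision_function(X_test))  # the learned f: positive on S
```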
Smooth Discrimination Analysis
 Ann. Statist
, 1998
"... Discriminant analysis for two data sets in IR d with probability densities f and g can be based on the estimation of the set G = fx : f(x) g(x)g. We consider applications where it is appropriate to assume that the region G has a smooth boundary. In particular, this assumption makes sense if di ..."
Abstract

Cited by 146 (3 self)
Discriminant analysis for two data sets in ℝ^d with probability densities f and g can be based on the estimation of the set G = {x : f(x) ≥ g(x)}. We consider applications where it is appropriate to assume that the region G has a smooth boundary. In particular, this assumption makes sense if discriminant analysis is used as a data analytic tool. We discuss optimal rates for estimation of G.

1991 AMS: primary 62G05, secondary 62G20. Keywords and phrases: discrimination analysis, minimax rates, Bayes risk. Short title: Smooth discrimination analysis. This research was supported by the Deutsche Forschungsgemeinschaft, Sonderforschungsbereich 373 "Quantifikation und Simulation ökonomischer Prozesse", Humboldt-Universität zu Berlin.

1 Introduction. Assume that one observes two independent samples X = (X_1, …, X_n) and Y = (Y_1, …, Y_m) of ℝ^d-valued i.i.d. observations with densities f or g, respectively. The densities f and g are unknown. An additional random variabl...
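For intuition, a plug-in estimate of G can be formed from kernel density estimates of f and g; a minimal sketch with Gaussian KDEs (an illustrative choice of estimator, not the paper's procedure):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
X = rng.normal(loc=0.0, size=(300, 2)).T   # sample with density f (kde wants d x n)
Y = rng.normal(loc=1.5, size=(300, 2)).T   # sample with density g

f_hat = gaussian_kde(X)
g_hat = gaussian_kde(Y)

def in_G_hat(points):
    """Plug-in estimate of G = {x : f(x) >= g(x)}; points are columns."""
    return f_hat(points) >= g_hat(points)

query = np.array([[0.0, 1.5],    # x-coordinates of two query points
                  [0.0, 1.5]])   # y-coordinates
print(in_G_hat(query))           # expect roughly [True, False]
```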
Consistency and convergence rates of one-class SVM and related algorithms
, 2006
"... We determine the asymptotic limit of the function computed by support vector machines (SVM) and related algorithms that minimize a regularized empirical convex loss function in the reproducing kernel Hilbert space of the Gaussian RBF kernel, in the situation where the number of examples tends to inf ..."
Abstract

Cited by 40 (3 self)
We determine the asymptotic limit of the function computed by support vector machines (SVM) and related algorithms that minimize a regularized empirical convex loss function in the reproducing kernel Hilbert space of the Gaussian RBF kernel, in the situation where the number of examples tends to infinity, the bandwidth of the Gaussian kernel tends to 0, and the regularization parameter is held fixed. Non-asymptotic convergence bounds to this limit in the L2 sense are provided, together with upper bounds on the classification error, which is shown to converge to the Bayes risk, thereby proving the Bayes-consistency of a variety of methods even though the regularization term does not vanish. These results are particularly relevant to the one-class SVM, for which the regularization cannot vanish by construction, and which is shown for the first time to be a consistent density level set estimator.
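A toy empirical check of this connection: train a one-class SVM with a small Gaussian bandwidth and compare its decision region with a kernel density level set at the matching quantile (sample size, bandwidth, quantile, and grid are illustrative assumptions):

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 1))

# One-class SVM with a fairly small Gaussian bandwidth (large gamma), fixed nu.
svm = OneClassSVM(kernel="rbf", gamma=20.0, nu=0.2).fit(X)

# Plug-in density level set at the empirical 0.2-quantile of density values.
kde = gaussian_kde(X.T)
t = np.quantile(kde(X.T), 0.2)

grid = np.linspace(-3, 3, 61).reshape(-1, 1)
svm_set = svm.predict(grid) == 1    # estimated support region
kde_set = kde(grid.T) >= t          # plug-in density level set
print(np.mean(svm_set == kde_set))  # fraction of grid points on which they agree
```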
Learning minimum volume sets
 J. Machine Learning Res
, 2006
"... Given a probability measure P and a reference measure µ, one is often interested in the minimum µmeasure set with Pmeasure at least α. Minimum volume sets of this type summarize the regions of greatest probability mass of P, and are useful for detecting anomalies and constructing confidence region ..."
Abstract

Cited by 39 (7 self)
Given a probability measure P and a reference measure µ, one is often interested in the minimum µ-measure set with P-measure at least α. Minimum volume sets of this type summarize the regions of greatest probability mass of P, and are useful for detecting anomalies and constructing confidence regions. This paper addresses the problem of estimating minimum volume sets based on independent samples distributed according to P. Other than these samples, no other information is available regarding P, but the reference measure µ is assumed to be known. We introduce rules for estimating minimum volume sets that parallel the empirical risk minimization and structural risk minimization principles in classification. As in classification, we show that the performances of our estimators are controlled by the rate of uniform convergence of empirical to true probabilities over the class from which the estimator is drawn. Thus we obtain finite sample size performance bounds in terms of VC dimension and related quantities. We also demonstrate strong universal consistency and an oracle inequality. Estimators based on histograms and dyadic partitions illustrate the proposed rules.
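Under a Lebesgue reference measure, the histogram rule mentioned in the last sentence amounts to keeping the highest-mass equal-volume cells until their empirical P-mass reaches α; a minimal sketch (binning and α are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 2))
alpha = 0.9

# Equal-volume histogram cells over the data's bounding box.
H, edges = np.histogramdd(X, bins=20)
p = H.flatten() / len(X)                 # empirical P-mass per cell

# Greedily keep the highest-mass cells until total mass >= alpha; with
# equal-volume cells this minimizes the Lebesgue volume of the estimate.
order = np.argsort(p)[::-1]
mass = np.cumsum(p[order])
k = np.searchsorted(mass, alpha) + 1     # number of cells needed
cell_vol = np.prod([e[1] - e[0] for e in edges])
print(f"cells used: {k}, estimated minimum volume: {k * cell_vol:.3f}")
```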
Bayesian Statistics in WWW
 Computing Science and Statistics
, 1989
"... ∗ Signatures are on file in the Graduate School. This dissertation presents two topics from opposite disciplines: one is from a parametric realm and the other is based on nonparametric methods. The first topic is a jackknife maximum likelihood approach to statistical model selection and the second o ..."
Abstract

Cited by 33 (1 self)
This dissertation presents two topics from opposite disciplines: one is from a parametric realm and the other is based on nonparametric methods. The first topic is a jackknife maximum likelihood approach to statistical model selection, and the second is a convex hull peeling depth approach to nonparametric massive multivariate data analysis. The second topic includes simulations and applications on massive astronomical data. First, we present a model selection criterion, minimizing the Kullback-Leibler distance by using the jackknife method. Various model selection methods have been developed to choose a model of minimum Kullback-Leibler distance to the true model, such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), minimum description length (MDL), and the bootstrap information criterion. Likewise, the jackknife method chooses a model of minimum Kullback-Leibler distance through bias reduction. This bias, which is inevitable in model ...
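The convex hull peeling depth used in the second topic can be computed by repeatedly stripping away hull layers; a minimal sketch with scipy (the function name and the handling of the final degenerate layer are our own simplifications):

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_peeling_depth(points):
    """Depth 1 = outermost convex hull layer; deeper layers get higher depth."""
    pts = np.asarray(points, dtype=float)
    idx = np.arange(len(pts))
    depth = np.zeros(len(pts), dtype=int)
    layer = 1
    while len(idx) > pts.shape[1]:         # a hull needs at least d+1 points
        hull = ConvexHull(pts[idx])
        on_hull = idx[hull.vertices]       # indices on the current layer
        depth[on_hull] = layer
        idx = np.setdiff1d(idx, on_hull)   # peel this layer off
        layer += 1
    depth[idx] = layer                     # remaining interior points
    return depth

rng = np.random.default_rng(4)
print(hull_peeling_depth(rng.normal(size=(30, 2))))
```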
Kernel estimation of density level sets
 J. Multivariate Anal
, 2006
"... Abstract. Let f be a multivariate density and fn be a kernel estimate of f drawn from the nsample X1, · · ·,Xn of i.i.d. random variables with density f. We compute the asymptotic rate of convergence towards 0 of the volume of the symmetric difference between the tlevel set {f ≥ t} and its plug ..."
Abstract

Cited by 21 (3 self)
Let f be a multivariate density and f_n be a kernel estimate of f drawn from the n-sample X_1, …, X_n of i.i.d. random variables with density f. We compute the asymptotic rate of convergence towards 0 of the volume of the symmetric difference between the t-level set {f ≥ t} and its plug-in estimator {f_n ≥ t}. As a corollary, we obtain the exact rate of convergence of a plug-in type estimate of the density level set corresponding to a fixed probability for the law induced by f.
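A small simulation of the quantity studied here, with a known true density so that the volume of the symmetric difference of the two t-level sets can be Monte Carlo estimated (density, t, and sample sizes are illustrative):

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(5)
n, t = 1000, 0.1
X = rng.normal(size=n)

f_n = gaussian_kde(X)   # kernel estimate of f
f = norm.pdf            # true density, known in this simulation

# Monte Carlo volume of the symmetric difference of the t-level sets,
# sampling uniformly on a box that contains both sets.
a, b = -5.0, 5.0
U = rng.uniform(a, b, size=20000)
in_true = f(U) >= t
in_plug = f_n(U) >= t
vol = (b - a) * np.mean(in_true != in_plug)
print(f"vol of symmetric difference ~ {vol:.4f}")
```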
Multiscale Inference about a Density
, 2007
"... We introduce a multiscale test statistic based on local order statistics and spacings that provides simultaneous confidence statements for the existence and location of local increases and decreases of a density or a failure rate. The procedure provides guaranteed finitesample significance levels, ..."
Abstract

Cited by 20 (5 self)
We introduce a multiscale test statistic based on local order statistics and spacings that provides simultaneous confidence statements for the existence and location of local increases and decreases of a density or a failure rate. The procedure provides guaranteed finite-sample significance levels, is easy to implement, and possesses certain asymptotic optimality and adaptivity properties.
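To convey why spacings carry this information, here is a deliberately simplified single-scale caricature (not the paper's multiscale statistic): where the density increases, later spacings within a window tend to be smaller than earlier ones:

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.sort(rng.normal(size=500))
s = np.diff(x)   # spacings; small spacings indicate high density

def local_increase_score(spacings):
    """Crude score: positive if the density appears to increase on the window."""
    half = len(spacings) // 2
    return np.mean(spacings[:half]) - np.mean(spacings[half:])

window = 50
scores = [local_increase_score(s[i:i + window])
          for i in range(0, len(s) - window, window)]
print(np.round(scores, 4))   # positive on the left tail, negative on the right
```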
A Limit Theorem for Solutions of Inequalities
 Scandinavian Journal of Statistics
, 1998
"... Let H(p) be the set fx 2 X: h(x) pg, where h is a realvalued lower semicontinuous function on a locally compact second countable metric space X. A limit theorem is proved for the empirical counterpart of H(p) obtained by replacing of h with its estimator. AMS Subject Classification (1991): 52A22, ..."
Abstract

Cited by 16 (0 self)
Let H(p) be the set {x ∈ X : h(x) ≤ p}, where h is a real-valued lower semicontinuous function on a locally compact second countable metric space X. A limit theorem is proved for the empirical counterpart of H(p) obtained by replacing h with its estimator. AMS Subject Classification (1991): 52A22, 60D05, 60F05, 62G99. Keywords & phrases: Aumann expectation, polar set, random set, Hausdorff metric, weak convergence.
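A toy illustration in the Hausdorff metric named in the keywords: a plug-in version of H(p) built from a noisy estimate of h (the function h and the noise model are hypothetical):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# h(x) = x^2, so H(p) = {x : h(x) <= p} = [-sqrt(p), sqrt(p)].
grid = np.linspace(-2, 2, 2001).reshape(-1, 1)
p = 1.0
h = grid[:, 0] ** 2

rng = np.random.default_rng(7)
h_n = h + rng.normal(scale=0.05, size=h.shape)   # noisy estimate of h

H_true = grid[h <= p]    # true solution set, discretized on the grid
H_emp = grid[h_n <= p]   # empirical counterpart
d = max(directed_hausdorff(H_true, H_emp)[0],
        directed_hausdorff(H_emp, H_true)[0])
print(f"Hausdorff distance ~ {d:.4f}")
```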
How to Divide a Territory? A New Simple Differential Formalism for Optimization of Set Functions
 International Journal of Intelligent Systems
"... ..."