Results 1–10 of 112
Exponential integrability and transportation cost related to logarithmic Sobolev inequalities
J. Funct. Anal.
, 1999
Abstract

Cited by 175 (9 self)
We study some problems on exponential integrability, concentration of measure, and transportation cost related to logarithmic Sobolev inequalities. On the real line, we then give a characterization of those probability measures which satisfy these inequalities.
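For orientation, the inequality in question has the following standard form (notation and normalization may differ from the paper's):

```latex
% \mu satisfies a logarithmic Sobolev inequality with constant C > 0 if,
% for all smooth f,
\mathrm{Ent}_\mu(f^2) := \int f^2 \log f^2 \, d\mu
  - \left(\int f^2 \, d\mu\right) \log \int f^2 \, d\mu
  \;\le\; 2C \int |\nabla f|^2 \, d\mu .
% Via the Herbst argument, this yields the exponential integrability and
% Gaussian concentration alluded to above: for every 1-Lipschitz F and r > 0,
\mu\!\left( F \ge \int F \, d\mu + r \right) \;\le\; e^{-r^2/(2C)} .
```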
A Bennett Concentration Inequality and Its Application to Suprema of Empirical Processes
, 2002
Abstract

Cited by 99 (6 self)
We introduce new concentration inequalities for functions on product spaces. They allow one to obtain a Bennett-type deviation bound for suprema of empirical processes indexed by upper-bounded functions. The result improves on Rio's version [6] of Talagrand's inequality [7] for equidistributed variables.
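For context, the classical Bennett inequality that the "Bennett-type" bound echoes reads as follows (one standard formulation; normalizations vary across references):

```latex
% X_1, ..., X_n independent, E X_i = 0, X_i <= c almost surely,
% v = \sum_i E X_i^2. Then for all t >= 0,
\mathbb{P}\Big( \sum_{i=1}^{n} X_i \ge t \Big)
  \;\le\; \exp\!\Big( -\frac{v}{c^2}\, h\!\Big(\frac{ct}{v}\Big) \Big),
\qquad h(u) = (1+u)\log(1+u) - u .
```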
Theory of classification: A survey of some recent advances
, 2005
Abstract

Cited by 93 (3 self)
The last few years have witnessed important new developments in the theory and practice of pattern classification. We intend to survey some of the main new ideas that have led to these recent results.
Concentration inequalities
 ADVANCED LECTURES IN MACHINE LEARNING
, 2004
Abstract

Cited by 89 (1 self)
Concentration inequalities deal with deviations of functions of independent random variables from their expectation. In the last decade, new tools have been introduced that make it possible to establish simple and powerful inequalities. These inequalities are at the heart of the mathematical analysis of various problems in machine learning and have made it possible to derive new, efficient algorithms. This text attempts to summarize some of the basic tools.
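The flavor of these bounds can be checked numerically. The sketch below (standard library only; function and variable names are illustrative) compares the empirical deviation frequency of the mean of n uniform [0, 1] variables against Hoeffding's bound 2·exp(−2nt²), one of the simplest inequalities of this kind:

```python
# Hoeffding's inequality for the mean of n independent [0, 1]-valued
# variables: P(|mean - 1/2| >= t) <= 2 exp(-2 n t^2).
import math
import random

def deviation_frequency(n, t, trials, seed=0):
    """Empirical frequency of |sample mean - 1/2| >= t over `trials` runs."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() for _ in range(n)) / n
        if abs(mean - 0.5) >= t:
            hits += 1
    return hits / trials

n, t = 200, 0.1
empirical = deviation_frequency(n, t, trials=2000)
hoeffding = 2 * math.exp(-2 * n * t * t)
assert empirical <= hoeffding  # observed frequency sits below the bound
```

The bound is conservative by design: here it is roughly 0.037, while the observed frequency is far smaller.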
Concentration inequalities using the entropy method
Abstract

Cited by 67 (3 self)
We investigate a new methodology, worked out by Ledoux and Massart, for proving concentration-of-measure inequalities. The method is based on certain modified logarithmic Sobolev inequalities. We provide some very simple and general ready-to-use inequalities. One of these inequalities may be considered as an exponential version of the Efron–Stein inequality. The main purpose of this paper is to point out the simplicity and the generality of the approach. We show how the new method can recover many of Talagrand's revolutionary inequalities and provide new applications in a variety of problems, including Rademacher averages, Rademacher chaos, the number of certain small subgraphs in a random graph, and the minimum of the empirical risk in some statistical estimation problems.
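For reference, the Efron–Stein inequality that the exponential version generalizes is the following variance bound (standard form):

```latex
% Z = f(X_1, ..., X_n) with X_1, ..., X_n independent, and Z_i' obtained
% from Z by replacing X_i with an independent copy X_i'. Then
\operatorname{Var}(Z) \;\le\; \frac{1}{2} \sum_{i=1}^{n}
  \mathbb{E}\big[ (Z - Z_i')^2 \big] .
```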
Uniform in bandwidth consistency of kerneltype function estimators
Ann. Statist.
, 2005
Abstract

Cited by 65 (6 self)
We introduce a general method to prove uniform in bandwidth consistency of kernel-type function estimators. Examples include the kernel density estimator, the Nadaraya–Watson regression estimator and the conditional empirical process. Our results may be useful to establish uniform consistency of data-driven bandwidth kernel-type function estimators. 1. Introduction and statements of main results. Let X, X1, X2, ... be i.i.d. R^d-valued random variables, d ≥ 1, and assume that the common distribution function of these variables has a Lebesgue density function, which we shall denote by f. A kernel K will be any measurable function which …
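As a concrete instance of the estimator class discussed here, the following is a minimal Gaussian-kernel density estimator, f̂_h(x) = (nh)⁻¹ Σᵢ K((x − Xᵢ)/h), evaluated at several bandwidths h (NumPy is assumed; the names `kde`, `grid` are illustrative):

```python
import numpy as np

def kde(x, sample, h):
    """Gaussian-kernel density estimate f_hat_h evaluated at the points x."""
    u = (np.asarray(x)[:, None] - sample[None, :]) / h
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)   # Gaussian kernel
    return K.sum(axis=1) / (len(sample) * h)

rng = np.random.default_rng(0)
sample = rng.standard_normal(2000)                    # true density: N(0, 1)
grid = np.linspace(-3, 3, 61)
true = np.exp(-0.5 * grid ** 2) / np.sqrt(2 * np.pi)
# "Uniform in bandwidth" consistency concerns good behavior simultaneously
# over a range of h, not only at a single data-dependent choice.
for h in (0.2, 0.4, 0.6):
    est = kde(grid, sample, h)
    assert np.max(np.abs(est - true)) < 0.1
```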
Finding Structure with Randomness: Stochastic Algorithms for Constructing Approximate Matrix Decompositions
, 2009
Abstract

Cited by 61 (4 self)
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. In particular, these techniques offer a route toward principal component analysis (PCA) for petascale data. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider …
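The two-stage scheme described in the abstract (a randomized range finder followed by a deterministic factorization of the reduced matrix) is short enough to write down. This is an illustrative NumPy sketch of the basic idea, not the paper's exact algorithm; the function name and the oversampling value are assumptions:

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Rank-k approximate SVD of A via a randomized range finder."""
    rng = np.random.default_rng(seed)
    # Stage A: sample the range of A with a Gaussian test matrix, so that
    # A is well approximated by Q @ Q.T @ A.
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)
    # Stage B: deterministic SVD of the small projected matrix.
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

# On an exactly rank-5 matrix the sampled subspace captures the full range,
# so the rank-5 approximation is accurate to near machine precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
U, s, Vt = randomized_svd(A, k=5)
err = np.linalg.norm(A - U @ (s[:, None] * Vt))
assert err < 1e-8 * np.linalg.norm(A)
```

For matrices with slowly decaying spectra, the paper's error analysis quantifies how oversampling (and power iterations) control the approximation error.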