Compressive Sensing and Structured Random Matrices
 RADON SERIES COMP. APPL. MATH XX, 1–95 © DE GRUYTER 20YY
Abstract

Cited by 162 (19 self)
These notes give a mathematical introduction to compressive sensing, focusing on recovery using ℓ1-minimization and structured random matrices. An emphasis is put on techniques for proving probabilistic estimates for condition numbers of structured random matrices. Estimates of this type are key to providing conditions that ensure exact or approximate recovery of sparse vectors using ℓ1-minimization.
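The sparse recovery described in this abstract can be illustrated with a minimal sketch (illustrative only, not the authors' code): iterative soft-thresholding (ISTA) applied to the LASSO surrogate of ℓ1-minimization, with a plain Gaussian measurement matrix standing in for the structured random matrices the notes analyze.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 30, 60, 3

# Random Gaussian measurement matrix (a stand-in for the structured
# random matrices analyzed in the notes) and an s-sparse target vector.
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_true[support] = rng.standard_normal(s)
y = A @ x_true

# ISTA: iterative soft-thresholding for the LASSO surrogate
#   min_x  (1/2) * ||A x - y||_2^2 + lam * ||x||_1
lam = 1e-3
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(5000):
    z = x - A.T @ (A @ x - y) / L      # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With m well above the order of s·log(n/s) measurements, the s-sparse vector is recovered up to the small bias introduced by the penalty parameter.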
Theory of classification: A survey of some recent advances
, 2005
Abstract

Cited by 93 (3 self)
The last few years have witnessed important new developments in the theory and practice of pattern classification. We intend to survey some of the main new ideas that have led to these recent results.
Concentration inequalities
 ADVANCED LECTURES IN MACHINE LEARNING
, 2004
Abstract

Cited by 89 (1 self)
Concentration inequalities deal with deviations of functions of independent random variables from their expectation. In the last decade new tools have been introduced making it possible to establish simple and powerful inequalities. These inequalities are at the heart of the mathematical analysis of various problems in machine learning and made it possible to derive new efficient algorithms. This text attempts to summarize some of the basic tools.
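As a toy illustration of the kind of statement these lectures study (a sketch added here, not taken from the text): Hoeffding's inequality bounds the deviation of an empirical mean of bounded independent variables from its expectation, and a quick Monte Carlo check confirms the bound holds.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n, t, trials = 100, 0.1, 20000

# X_1, ..., X_n i.i.d. Uniform[0, 1]; Hoeffding's inequality states
#   P(|mean(X) - 1/2| >= t) <= 2 * exp(-2 * n * t^2)
means = rng.random((trials, n)).mean(axis=1)
empirical = float(np.mean(np.abs(means - 0.5) >= t))
bound = 2 * math.exp(-2 * n * t ** 2)
```

The empirical deviation frequency sits far below the Hoeffding bound, as expected for this distribution.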
Interpolated inequalities between exponential and Gaussian, Orlicz hypercontractivity and isoperimetry
, 2004
Concentration of the adjacency matrix and of the Laplacian in random graphs with independent edges
, 2010
A tail inequality for suprema of unbounded empirical processes with applications to Markov chains
, 2008
Model selection by resampling penalization
, 2007
Abstract

Cited by 37 (18 self)
We present a new family of model selection algorithms based on the resampling heuristic. They can be used in several frameworks, do not require any knowledge of the unknown law of the data, and may be seen as a generalization of local Rademacher complexities and V-fold cross-validation. In the case example of least-squares regression on histograms, we prove oracle inequalities, and show that these algorithms are naturally adaptive to both the smoothness of the regression function and the variability of the noise level. Then, interpreting V-fold cross-validation in terms of penalization, we shed light on the question of choosing V. Finally, a simulation study illustrates the strength of resampling penalization algorithms against some classical ones, in particular with heteroscedastic data.
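The paper's running example can be sketched in a few lines (a minimal illustration, not the authors' resampling-penalization code): V-fold cross-validation used to select the bin count of a histogram (regressogram) estimator in least-squares regression.

```python
import numpy as np

rng = np.random.default_rng(2)
n, V = 400, 5
x = rng.random(n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)

def fit_predict(x_tr, y_tr, x_te, D):
    """Histogram (regressogram) estimator with D equal-width bins on [0, 1]."""
    tr_bins = np.minimum((x_tr * D).astype(int), D - 1)
    means = np.zeros(D)                      # empty bins predict 0
    for d in range(D):
        mask = tr_bins == d
        if mask.any():
            means[d] = y_tr[mask].mean()
    te_bins = np.minimum((x_te * D).astype(int), D - 1)
    return means[te_bins]

folds = np.arange(n) % V                     # deterministic fold assignment
candidates = [1, 2, 4, 8, 16, 64, 256]
cv_risk = {}
for D in candidates:
    errs = []
    for v in range(V):
        tr, te = folds != v, folds == v
        pred = fit_predict(x[tr], y[tr], x[te], D)
        errs.append(np.mean((y[te] - pred) ** 2))
    cv_risk[D] = float(np.mean(errs))

best_D = min(candidates, key=lambda D: cv_risk[D])
```

Cross-validation steers the choice away from the constant fit (D = 1), which cannot track the smooth regression function, toward an intermediate bin count balancing approximation and estimation error.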
RANKING AND EMPIRICAL MINIMIZATION OF U-STATISTICS
, 2008
Abstract

Cited by 34 (2 self)
The problem of ranking/ordering instances, instead of simply classifying them, has recently gained much attention in machine learning. In this paper we formulate the ranking problem in a rigorous statistical framework. The goal is to learn a ranking rule for deciding, among two instances, which one is “better,” with minimum ranking risk. Since the natural estimates of the risk are of the form of a U-statistic, results of the theory of U-processes are required for investigating the consistency of empirical risk minimizers. We establish, in particular, a tail inequality for degenerate U-processes, and apply it for showing that fast rates of convergence may be achieved under specific noise assumptions, just like in classification. Convex risk minimization methods are also studied.
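The "natural estimate of the risk" mentioned above can be made concrete (a sketch under the abstract's setup, not the paper's code): the empirical ranking risk of a scoring rule averages a pairwise misranking indicator over all pairs of differently labeled instances, which is exactly an order-two U-statistic.

```python
from itertools import combinations

def ranking_risk(scores, labels):
    """Empirical ranking risk: the fraction of pairs (i, j) with
    labels[i] != labels[j] that the scoring rule orders incorrectly
    (ties in score count as half an error). This is an order-two
    U-statistic of the sample."""
    pairs = [(i, j) for i, j in combinations(range(len(labels)), 2)
             if labels[i] != labels[j]]
    errors = 0.0
    for i, j in pairs:
        lo, hi = (i, j) if labels[i] < labels[j] else (j, i)
        if scores[lo] > scores[hi]:
            errors += 1.0          # pair ranked in the wrong order
        elif scores[lo] == scores[hi]:
            errors += 0.5          # tie: half an error by convention
    return errors / len(pairs)

# Three instances, one of the three label-discordant pairs is misranked.
risk = ranking_risk([0.2, 0.9, 0.5], [1, 2, 3])
```

Because each pair of observations contributes to the sum, the summands are not independent, which is why U-process (rather than ordinary empirical-process) tools are needed to analyze minimizers of this criterion.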
Exponential concentration for First Passage Percolation through modified Poincaré Inequalities
, 2006
Abstract

Cited by 33 (1 self)
We provide a new exponential concentration inequality for First Passage Percolation valid for a wide class of edge times distributions. This improves and extends a result by Benjamini, Kalai and Schramm [5] which gave a variance bound for Bernoulli edge times. Our approach is based on some functional inequalities extending the work of Rossignol [20], Falik and Samorodnitsky [9].
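The concentration phenomenon the paper quantifies can be seen in a small simulation (an illustration added here, with Exponential(1) edge times chosen for convenience; the paper treats a much wider class of distributions): first passage times across a finite grid fluctuate on a much smaller scale than their mean.

```python
import heapq
import random

def passage_time(n, rng):
    """First passage time from (0, 0) to (n-1, n-1) on the n x n grid
    with i.i.d. Exponential(1) edge times, computed by Dijkstra."""
    times = {}
    def edge(a, b):                     # sample each edge time once
        key = (a, b) if a < b else (b, a)
        if key not in times:
            times[key] = rng.expovariate(1.0)
        return times[key]
    dist = {(0, 0): 0.0}
    pq = [(0.0, (0, 0))]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == (n - 1, n - 1):
            return d
        if d > dist.get((i, j), float("inf")):
            continue                    # stale queue entry
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < n and 0 <= nj < n:
                nd = d + edge((i, j), (ni, nj))
                if nd < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)] = nd
                    heapq.heappush(pq, (nd, (ni, nj)))
    return dist[(n - 1, n - 1)]

rng = random.Random(3)
samples = [passage_time(12, rng) for _ in range(60)]
mean = sum(samples) / len(samples)
sd = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
```

The standard deviation of the passage time is a small fraction of its mean even on this modest grid, consistent with the sub-diffusive fluctuations that concentration inequalities of this type capture.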
Optimal concentration inequalities for dynamical systems
 Commun. Math. Phys
, 2012