Results 1–10 of 10
Optimal Prediction for Linear Regression With Infinitely Many Parameters
 Prépublication n° 541 du Laboratoire de Probabilités & Modèles Aléatoires, Université Paris VI, http://www.proba.jussieu.fr
, 1999
Abstract

Cited by 6 (1 self)
The problem of optimal prediction in the stochastic linear regression model with infinitely many parameters is considered. We suggest a prediction method that is asymptotically minimax over ellipsoids in $\ell_2$. The method is based on a regularized least squares estimator with weights of the Pinsker filter. We also consider the case of dynamic linear regression, which is important in the context of transfer function modeling.

1 Introduction

Consider the regression model
$$ y = \sum_{k=1}^{\infty} \theta_k z_k + \varepsilon \qquad (1) $$
where $\{z_k\}_{k=1,2,\ldots}$ is a sequence of possible explanatory variables, $y$ is the corresponding response, $\varepsilon$ is the error, and $\theta = (\theta_1, \theta_2, \ldots) \in \ell_2$ is an unknown regression sequence. Assume that $\{z_k\}$ and $\varepsilon$ are random variables, with $\mathrm{E}\,\varepsilon = 0$ and $\mathrm{E}\,\varepsilon^2 = \sigma^2$; the stochastic series in (1) and later are assumed to converge in the mean squared sense. Suppose we are given $n$ independent realizations of $y$ and $\{z_k\}$:
$$ \{\, y(t),\ z_1(t),\ z_2(t),\ \ldots;\ t = 1, \ldots, n \,\} \qquad (2) $$
…
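The regularized-least-squares-with-Pinsker-weights idea described in this abstract can be sketched as follows. This is an illustrative toy, not the paper's exact procedure: the truncation level `K`, the ellipsoid weights `a_k = k`, the tuning constant `kappa`, and the data-generating choices are all our assumptions.

```python
import numpy as np

# Toy sketch: shrink the coordinates of a least squares estimate with
# Pinsker-type weights (1 - kappa * a_k)_+ . All tuning choices here are
# illustrative, not taken from the paper.
rng = np.random.default_rng(0)
n, K = 200, 20
theta = 1.0 / np.arange(1, K + 1) ** 2            # true coefficients, in l2
Z = rng.normal(size=(n, K))                       # explanatory variables
y = Z @ theta + 0.5 * rng.normal(size=n)          # responses with noise

theta_ls, *_ = np.linalg.lstsq(Z, y, rcond=None)  # ordinary least squares
a = np.arange(1, K + 1).astype(float)             # ellipsoid weights a_k (assumed)
kappa = 0.02                                      # shrinkage level (tuning)
weights = np.clip(1.0 - kappa * a, 0.0, None)     # Pinsker filter (1 - kappa a_k)_+
theta_pinsker = weights * theta_ls                # shrunken estimate

# Compare prediction risk of the two estimators on fresh design points
Z_new = rng.normal(size=(1000, K))
risk_ls = np.mean((Z_new @ (theta_ls - theta)) ** 2)
risk_pk = np.mean((Z_new @ (theta_pinsker - theta)) ** 2)
```

Because the weights decay to zero for large `k`, the shrunken estimator trades a small bias on the leading coordinates for a large variance reduction on the tail coordinates.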
Asymptotically minimax estimation of a function with jumps. Bernoulli 4
, 1998
Abstract

Cited by 3 (0 self)
Asymptotically minimax nonparametric estimation of a regression function observed in white Gaussian noise over a bounded interval is considered, with respect to an $L_2$-loss function. The unknown function $f$ is assumed to be $m$ times differentiable except for an unknown, though finite, number of jumps, with piecewise $m$th derivative bounded in $L_2$-norm. An estimator is constructed, attaining the same optimal risk bound, known as Pinsker's constant, as in the case of smooth functions (without jumps). Key words: Jump-point estimation; Nonparametric regression; Optimal constant; Tapered orthogonal series estimator
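A plain tapered orthogonal series estimator, the device named in the key words, can be sketched as follows. This is only a generic illustration: the cosine basis, the taper shape, and the truncation level are our assumptions, and the sketch does not implement the paper's jump-adaptive construction.

```python
import numpy as np

# Generic tapered cosine-series regression estimate (illustrative choices
# throughout; this is not the paper's jump-adaptive estimator).
rng = np.random.default_rng(1)
n = 512
x = (np.arange(n) + 0.5) / n                      # equispaced design on (0, 1)
f = np.where(x < 0.5, np.sin(2 * np.pi * x),      # smooth function ...
             1.0 + np.sin(2 * np.pi * x))         # ... with one jump at 0.5
y = f + 0.3 * rng.normal(size=n)                  # noisy observations

J = 40                                            # coefficients retained
phi = np.array([np.sqrt(2) * np.cos(np.pi * j * x) if j else np.ones(n)
                for j in range(J)])               # orthonormal cosine basis
coef = phi @ y / n                                # empirical coefficients
taper = np.clip(1.0 - (np.arange(J) / J) ** 2, 0, None)  # smooth taper to zero
f_hat = (taper * coef) @ phi                      # tapered series estimate

mse = np.mean((f_hat - f) ** 2)
```

Tapering the coefficients (rather than truncating them abruptly) damps the high-frequency noise smoothly, which is the mechanism behind attaining sharp risk constants in this family of estimators.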
Adaptive estimation on anisotropic Hölder spaces Part II. Partially adaptive case
, 2006
Abstract
In this paper, we consider a particular case of adaptation. Let us recall that, in the first paper (the "fully adaptive" case), a large collection of anisotropic Hölder spaces is fixed and the goal is to construct an adaptive estimator with respect to the completely unknown smoothness parameter. Here the problem is quite different: an additional piece of information is known, the effective smoothness of the signal. We prove a minimax result demonstrating that knowledge of this type is useful, because the rate of convergence is better than that obtained without knowledge of the effective smoothness. Moreover, we link this problem to maxiset theory.
SMOOTHNESS PRIORS SOBOLEV SPACES SPLINE FUNCTIONS STATIONARY PROCESSES
Abstract
We give an account of the Pinsker bound describing the exact asymptotics of the minimax risk in a class of nonparametric smoothing problems. The parameter spaces are Sobolev classes or ellipsoids, and the loss is of squared $L_2$ type. The result from 1980 turned out to be a major step in the theory of nonparametric function estimation.
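For orientation, one standard statement of the Pinsker filter in the Gaussian sequence model, as it appears in textbook treatments of the sequence model (the notation here is ours, not necessarily the survey's), is:

```latex
% Gaussian sequence model: y_j = \theta_j + \varepsilon \xi_j, with \xi_j
% i.i.d. N(0,1), over the ellipsoid
% \Theta = \{\theta : \sum_j a_j^2 \theta_j^2 \le Q\}.
% The linear minimax (Pinsker) estimator is \hat\theta_j = \ell_j y_j with
\ell_j = \bigl(1 - \kappa a_j\bigr)_+ ,
\qquad \text{where } \kappa > 0 \text{ solves } \quad
\varepsilon^2 \sum_j a_j \bigl(1 - \kappa a_j\bigr)_+ = \kappa\, Q .
```

The exact asymptotic constant of the minimax risk (Pinsker's constant) is obtained by evaluating the risk of this linear filter and showing no other estimator can do asymptotically better over the ellipsoid.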
ASYMPTOTIC MINIMAX ESTIMATION IN NONPARAMETRIC AUTOREGRESSION
Abstract
We develop asymptotic theory for nonparametric estimators of the autoregression function. To deal with irregularities in the pattern of explanatory variables caused by their randomness, we propose a new estimator which is a modification of the Priestley–Chao kernel method. It is shown that this estimator has similar asymptotic properties to standard estimators of kernel type. We establish an asymptotic lower bound to the minimax risk in Sobolev classes and show that our modified Priestley–Chao estimator can get arbitrarily close to this efficiency bound. Key words: exact asymptotics, minimax risk, nonparametric autoregression, nonparametric estimation.
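A Priestley–Chao-style kernel estimate for a first-order autoregression can be sketched as follows. The spacing weights used here are one common way to adapt the fixed-design Priestley–Chao formula to random design points; they are our assumption and not necessarily the paper's modification.

```python
import numpy as np

def priestley_chao(x_design, y, x_eval, h):
    """Priestley-Chao kernel estimate with spacing weights (random design).

    Sorts the design points and weights each response by the spacing to its
    left neighbor, approximating the integral form of the estimator.
    """
    order = np.argsort(x_design)
    xs, ys = x_design[order], y[order]
    dx = np.diff(xs, prepend=xs[0])               # spacings; first weight is 0
    # Gaussian kernel K_h(u) = exp(-u^2 / 2h^2) / (h sqrt(2 pi))
    K = np.exp(-0.5 * ((x_eval[:, None] - xs[None, :]) / h) ** 2)
    K /= h * np.sqrt(2 * np.pi)
    return (K * (dx * ys)[None, :]).sum(axis=1)

# Simulated first-order nonparametric autoregression X_{t+1} = f(X_t) + noise
rng = np.random.default_rng(2)
f = lambda u: np.tanh(u)                          # illustrative autoregression fn
T = 2000
X = np.zeros(T)
for t in range(T - 1):
    X[t + 1] = f(X[t]) + 0.3 * rng.normal()

x_design, y = X[:-1], X[1:]                       # (X_t, X_{t+1}) pairs
grid = np.linspace(-1, 1, 21)
fhat = priestley_chao(x_design, y, grid, h=0.2)   # bandwidth is a tuning choice
```

Because the spacings shrink where the stationary density is high, this weighting compensates for the irregular pattern of explanatory variables noted in the abstract.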
Optimal prediction for linear regression with infinitely many parameters
, 1999
Abstract
The problem of optimal prediction in the stochastic linear regression model with infinitely many parameters is considered. We suggest a prediction method that asymptotically outperforms the ordinary least squares predictor. Moreover, if the random errors are Gaussian, the method is asymptotically minimax over ellipsoids in $\ell_2$. The method is based on a regularized least squares estimator with weights of the Pinsker filter. We also consider the case of dynamic linear regression, which is important in the context of transfer function modeling.
Graph Structured Normal Means Inference
, 2013
Abstract
This thesis addresses statistical estimation and testing of signals over a graph when measurements are noisy and high-dimensional. Graph structured patterns appear in applications as diverse as sensor networks, virology in human networks, congestion in internet routers, and advertising in social networks. We develop asymptotic guarantees of the performance of statistical estimators and tests by stating conditions for consistency in terms of properties of the graph (e.g. graph spectra). The goal of this thesis is to demonstrate theoretically that by exploiting the graph structure one can achieve statistical consistency in extremely noisy conditions. We begin with the study of a projection estimator called Laplacian eigenmaps, and find that eigenvalue concentration plays a central role in the ability to estimate graph structured patterns. We continue with the study of the edge lasso, a least squares procedure with a total variation penalty, and determine combinatorial conditions under which changepoints (edges across which the underlying signal changes) on the graph are recovered. We then shift focus to testing for anomalous activations in the graph, using scan statistic relaxations: the spectral scan statistic and the graph ellipsoid scan statistic. We also show how one can form a decomposition of the graph from a spanning tree, which leads to a test for activity in the graph. This in turn leads to the construction of a spanning tree wavelet basis, which can be used to localize activations on the graph.
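The projection estimator referred to as Laplacian eigenmaps can be sketched on a toy path graph: project the noisy vertex measurements onto the lowest-frequency eigenvectors of the graph Laplacian. The graph, the signal, and the number of retained eigenvectors are illustrative choices, not taken from the thesis.

```python
import numpy as np

# Laplacian-eigenmaps projection estimator on a path graph (toy example).
rng = np.random.default_rng(3)
n = 100
# Combinatorial Laplacian L = D - A of a path graph on n vertices
L = np.diag(np.r_[1, 2 * np.ones(n - 2), 1]) - np.eye(n, k=1) - np.eye(n, k=-1)
evals, evecs = np.linalg.eigh(L)                  # eigenvalues in ascending order

t = np.arange(n) / n
signal = np.sin(2 * np.pi * t)                    # smooth signal over the graph
y = signal + 0.5 * rng.normal(size=n)             # noisy vertex measurements

k = 10                                            # retained eigenvectors (tuning)
U = evecs[:, :k]                                  # low-frequency eigenbasis
estimate = U @ (U.T @ y)                          # projection onto span(U)

err_raw = np.mean((y - signal) ** 2)              # risk of the raw measurements
err_proj = np.mean((estimate - signal) ** 2)      # risk after projection
```

Retaining only `k` of `n` eigenvectors reduces the noise variance by roughly a factor of `k / n`, while a signal that is smooth with respect to the graph incurs little bias; the spectral gap and eigenvalue concentration mentioned in the abstract govern when this trade-off succeeds.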
Graph Structured Statistical Inference
, 2012
Abstract
This thesis addresses statistical estimation and testing of signals over a graph when measurements are noisy and high-dimensional. Graph structured patterns appear in applications as diverse as sensor networks, virology in human networks, congestion in internet routers, and advertising in social networks. We develop asymptotic guarantees of the performance of statistical estimators and tests by stating conditions for consistency in terms of properties of the graph (e.g. graph spectra). The goal of this thesis is to demonstrate theoretically that by exploiting the graph structure one can achieve statistical consistency in extremely noisy conditions. We begin with the study of a projection estimator called Laplacian eigenmaps, and find that eigenvalue concentration plays a central role in the ability to estimate graph structured patterns. We continue with the study of the edge lasso, a least squares procedure with a total variation penalty, and determine combinatorial conditions under which changepoints (edges across which the underlying signal changes) on the graph are recovered. We then shift focus to testing for anomalous activations in the graph, using a scan statistic relaxation and the construction of a spanning tree wavelet basis. Finally, we study the consistency of kernel density estimation for vertex-valued random variables with densities that are Lipschitz with respect to graph metrics.