Results 1–10 of 12,197

Sparse Kernel Regressors (2001)
"... Sparse kernel regressors have become popular by applying the support vector method to regression problems. Although this approach has been shown to exhibit excellent generalization properties in many experiments, it suffers from several drawbacks: the absence of probabilistic outputs, the restrictio ..."
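The snippet above describes sparse kernel regressors obtained by applying the support vector method to regression. A minimal sketch of what such a predictor looks like, assuming a Gaussian kernel and made-up dual coefficients (the sparsity of the kernel expansion, not a trained model, is the point):

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian (RBF) kernel k(x, z) = exp(-gamma * ||x - z||^2)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def svr_predict(x, X_train, alpha, b, gamma=1.0):
    """Kernel expansion f(x) = sum_i alpha_i * k(x_i, x) + b.
    Sparsity: training points with alpha_i == 0 drop out entirely,
    so only the support vectors need to be stored and evaluated."""
    sv = np.flatnonzero(alpha)  # indices of support vectors
    return sum(alpha[i] * rbf_kernel(X_train[i], x, gamma) for i in sv) + b

# Illustrative (made-up) coefficients: 5 training points, only 2 active.
X_train = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
alpha = np.array([0.0, 0.7, 0.0, -0.3, 0.0])  # sparse dual weights
b = 0.1
print(svr_predict(np.array([1.5]), X_train, alpha, b))
```

Only the two points with nonzero weight contribute to the prediction, which is why such regressors stay cheap at test time.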
Analyses on Kernel-Specific Generalization Ability for Kernel Regressors with Training Samples
"... Theoretical analyses on generalization error of a model space for kernel regressors with respect to training samples are given in this paper. In general, the distance between an unknown true function and a model space tends to be small with a larger set of training samples. However, it is n ..."
An Overview of the Special Regressor Method (2012)
Cited by 5 (3 self)
"... This chapter provides background for understanding and applying special regressor methods. This chapter is intended for inclusion in the "Handbook of Applied Nonparametric and Semiparametric Econometrics and Statistics," co-edited by Aman Ullah, Jeffrey Racine, and Liangjun Su, to be publ ..."
Recovering 3D Human Pose from Monocular Images
Cited by 261 (0 self)
"... We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labelling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descrip ..."
"... and Support Vector Machine (SVM) regression over both linear and kernel bases. The RVMs provide much sparser regressors without compromising performance, and kernel bases give a small but worthwhile improvement in performance. Loss of depth and limb labelling information often makes the recovery of 3D pose ..."
Conditional mean embeddings as regressors (ICML, 2012)
Cited by 22 (9 self)
"... We demonstrate an equivalence between reproducing kernel Hilbert space (RKHS) embeddings of conditional distributions and vector-valued regressors. This connection introduces a natural regularized loss function which the RKHS embeddings minimise, providing an intuitive understanding of the embedding ..."
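The equivalence this abstract describes means the empirical conditional mean embedding shares its weights with kernel ridge regression. A hedged sketch, assuming an RBF kernel and the standard estimator beta(x) = (K + n·lam·I)^{-1} k(x); applying those weights to the identity feature of Y recovers an estimate of E[Y | X = x]:

```python
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def cme_weights(X, x, lam=1e-2, gamma=1.0):
    """Weights beta(x) = (K + n*lam*I)^{-1} k(x): the conditional mean
    embedding estimate is sum_i beta_i(x) * phi(y_i), i.e. a weighted
    combination of the outputs' features -- exactly the solution of a
    regularized (vector-valued kernel ridge) regression problem."""
    n = len(X)
    K = rbf_gram(X, X, gamma)
    k = rbf_gram(X, x[None, :], gamma)[:, 0]
    return np.linalg.solve(K + n * lam * np.eye(n), k)

# Applying the embedding weights to phi(y) = y gives a kernel-ridge
# estimate of E[Y | X = x] on synthetic data.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (200, 1))
Y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
beta = cme_weights(X, np.array([1.0]))
print(beta @ Y)  # roughly sin(1.0), up to noise and regularization bias
```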
On Selecting Regressors To Maximize Their Significance (1998)
"... A common problem in applied regression analysis is to select the variables that enter a linear regression. Examples are selection among capital stock series constructed with different depreciation assumptions, or use of variables that depend on unknown parameters, such as Box-Cox transformations, linear splines with parametric knots, and exponential functions with parametric decay rates. It is often computationally convenient to estimate such models by least squares, with variables selected from possible candidates by enumeration, grid search, or Gauss-Newton iteration to maximize their conventional least squares significance level; we term this method Prescreened Least Squares (PLS). This note shows that PLS is equivalent to direct estimation by nonlinear least squares, and thus statistically consistent under mild regularity conditions. However, standard errors and test statistics provided by least squares are biased. When explanatory variables are smooth in the parameters that index ..."
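The prescreening procedure this abstract describes can be illustrated on synthetic data: grid-search the decay rate of an exponential regressor and keep the value that maximizes the conventional t-statistic of its coefficient. This is a sketch under assumed data, not the paper's application; since the squared t-statistic of a single added regressor is monotone in its sum-of-squares reduction, the grid winner coincides with the nonlinear-least-squares choice:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 100)
y = 2.0 * np.exp(-0.8 * x) + 0.05 * rng.standard_normal(100)  # true decay 0.8

def t_stat(theta):
    """OLS t-statistic of the coefficient on exp(-theta * x), intercept included."""
    Z = np.column_stack([np.ones_like(x), np.exp(-theta * x)])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    s2 = resid @ resid / (len(x) - 2)          # residual variance
    cov = s2 * np.linalg.inv(Z.T @ Z)          # OLS covariance estimate
    return beta[1] / np.sqrt(cov[1, 1])

# Prescreening: pick the decay rate maximizing conventional significance.
grid = np.linspace(0.1, 2.0, 96)
best = grid[np.argmax([abs(t_stat(th)) for th in grid])]
print(best)  # lands near the true decay rate 0.8
```

The paper's caveat applies here too: the least-squares standard errors reported at `best` ignore the search over `theta` and are therefore biased.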
Small perturbations of Gaussian regressors
"... We investigate the asymptotic behaviour as ε → 0 of the conditional expectation of X given the sum of a Gaussian stochastic process Y and an independent small noise εZ. ..."
The Kernel Recursive Least Squares Algorithm (IEEE Transactions on Signal Processing, 2003)
Cited by 141 (2 self)
"... We present a nonlinear kernel-based version of the Recursive Least Squares (RLS) algorithm. Our Kernel-RLS (KRLS) algorithm performs linear regression in the feature space induced by a Mercer kernel, and can therefore be used to recursively construct the minimum mean squared error regressor. Spars ..."
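The recursive construction mentioned in this abstract can be sketched without the paper's sparsification step: maintain the inverse of the regularized Gram matrix and grow it by one row and column per sample via block inversion. This is illustrative only; the full KRLS algorithm additionally prunes the dictionary via an approximate-linear-dependence test, which this sketch omits:

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """Gaussian kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

class KernelRLS:
    """Minimal kernel RLS sketch: regress in the RKHS of a Mercer kernel
    by updating the inverse of (K + lam*I) one sample at a time."""
    def __init__(self, lam=1e-3, gamma=0.5):
        self.lam, self.gamma = lam, gamma
        self.X, self.y = [], []
        self.Kinv = None  # inverse of the regularized Gram matrix

    def update(self, x, y):
        k = np.array([rbf(xi, x, self.gamma) for xi in self.X])
        d = rbf(x, x, self.gamma) + self.lam
        if self.Kinv is None:
            self.Kinv = np.array([[1.0 / d]])
        else:
            # Block inverse of [[K + lam*I, k], [k.T, d]] via the
            # Schur complement s = 1 / (d - k.T Kinv k).
            a = self.Kinv @ k
            s = 1.0 / (d - k @ a)
            top = self.Kinv + s * np.outer(a, a)
            self.Kinv = np.block([[top, -s * a[:, None]],
                                  [-s * a[None, :], np.array([[s]])]])
        self.X.append(x)
        self.y.append(y)

    def predict(self, x):
        """Kernel ridge prediction f(x) = k(x)^T (K + lam*I)^{-1} y."""
        k = np.array([rbf(xi, x, self.gamma) for xi in self.X])
        return k @ self.Kinv @ np.array(self.y)

model = KernelRLS()
for t in np.linspace(-3, 3, 50):
    model.update(np.array([t]), np.sin(t))
print(model.predict(np.array([0.5])))  # close to sin(0.5) ≈ 0.479
```

Each update costs O(n²) rather than the O(n³) of refitting from scratch, which is the point of the recursive formulation.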