Results 1–10 of 815
Locally weighted learning
 Artificial Intelligence Review, 1997
Cited by 600 (53 self)
This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control.
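As a rough sketch of the technique this survey covers (illustrative only, not code from the paper), locally weighted linear regression fits a separate weighted least-squares line for each query point:

```python
import numpy as np

def lwlr_predict(x0, X, y, tau=0.2):
    """Locally weighted linear regression at a single query point x0:
    weight each training point by a Gaussian kernel of its distance to
    x0, then fit and evaluate a weighted least-squares line."""
    w = np.exp(-((X - x0) ** 2) / (2 * tau ** 2))   # smoothing kernel weights
    A = np.stack([np.ones_like(X), X], axis=1)      # design matrix [1, x]
    beta = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
    return beta[0] + beta[1] * x0

X = np.linspace(0, 2 * np.pi, 50)
y = np.sin(X)                                       # noiseless toy target
pred = lwlr_predict(np.pi / 2, X, y)                # query near the peak of sin
```

The bandwidth `tau` plays the role of the smoothing parameter the survey discusses; shrinking it makes the fit more local.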
Flexible smoothing with B-splines and penalties
 Statistical Science, 1996
Cited by 404 (6 self)
B-splines are attractive for nonparametric modelling, but choosing the optimal number and positions of knots is a complex task. Equidistant knots can be used, but their small and discrete number allows only limited control over smoothness and fit. We propose to use a relatively large number of knots and a difference penalty on coefficients of adjacent B-splines. We show connections to the familiar spline penalty on the integral of the squared second derivative. A short overview of B-splines, their construction, and penalized likelihood is presented. We discuss properties of penalized B-splines and propose various criteria for the choice of an optimal penalty parameter. Nonparametric logistic regression, density estimation and scatterplot smoothing are used as examples. Some details of the computations are presented.
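A minimal sketch of the penalized-spline idea described above (Gaussian bumps stand in here for a proper B-spline basis, so this is illustrative rather than the authors' code): use deliberately many basis functions, then penalize second differences of adjacent coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.size)

knots = np.linspace(0, 1, 20)                    # "too many" knots on purpose
B = np.exp(-((x[:, None] - knots[None, :]) ** 2) / (2 * 0.05 ** 2))
D = np.diff(np.eye(knots.size), n=2, axis=0)     # second-difference penalty matrix

def pspline_fit(lam):
    """Solve the penalized normal equations (B'B + lam D'D) a = B'y."""
    return np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)

rough = lambda a: np.sum((D @ a) ** 2)           # roughness of the coefficients
```

Raising the penalty weight `lam` forces adjacent coefficients toward a straight trend, which is the single tuning knob the paper's criteria are meant to choose.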
Regularization networks and support vector machines
 Advances in Computational Mathematics, 2000
Cited by 365 (38 self)
Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples – in particular the regression problem of approximating a multivariate function from sparse data. Radial Basis Functions, for example, are a special case of both regularization and Support Vector Machines. We review both formulations in the context of Vapnik’s theory of statistical learning which provides a general foundation for the learning problem, combining functional analysis and statistics. The emphasis is on regression: classification is treated as a special case.
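A regularization network with a Gaussian radial basis function kernel can be sketched in a few lines (an illustrative toy with made-up data and a hand-picked kernel width, not the paper's formulation in full):

```python
import numpy as np

# Regularization-network sketch: f(x) = sum_i c_i K(x, x_i), with the
# coefficients solving the regularized linear system (K + lam I) c = y.
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 30))
y = np.cos(3 * X) + 0.05 * rng.normal(size=X.size)

K = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2 * 0.1 ** 2))  # Gaussian RBF kernel
lam = 1e-3                                    # regularization strength
c = np.linalg.solve(K + lam * np.eye(X.size), y)
fit = K @ c                                   # network output at the training points
```

Taking `lam` to zero recovers plain RBF interpolation, which is the sense in which radial basis functions are a special case of the regularization framework.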
When Networks Disagree: Ensemble Methods for Hybrid Neural Networks
1993
Cited by 349 (3 self)
This paper presents a general theoretical framework for ensemble methods of constructing significantly improved regression estimates. Given a population of regression estimators, we construct a hybrid estimator which is as good or better in the MSE sense than any estimator in the population. We argue that the ensemble method presented has several properties: 1) It efficiently uses all the networks of a population; none of the networks need be discarded. 2) It efficiently uses all the available data for training without overfitting. 3) It inherently performs regularization by smoothing in functional space, which helps to avoid overfitting. 4) It utilizes local minima to construct improved estimates, whereas other neural network algorithms are hindered by local minima. 5) It is ideally suited for parallel computation. 6) It leads to a very useful and natural measure of the number of distinct estimators in a population. 7) The optimal parameters of the ensemble estimator are given in clo...
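The core MSE claim is easy to check numerically. In this toy sketch (not the paper's experiments), noisy copies of a target stand in for networks trained from different initializations; averaging them can only lower, never raise, the mean squared error relative to the population average:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 3, 200))

# A "population" of imperfect regression estimators: the truth plus
# independent noise stands in for networks trained from different
# initial weights or data splits.
estimators = [truth + rng.normal(scale=0.3, size=truth.size) for _ in range(10)]

mse = lambda f: np.mean((f - truth) ** 2)
avg_individual_mse = np.mean([mse(f) for f in estimators])
ensemble = np.mean(estimators, axis=0)        # simple ensemble average
ensemble_mse = mse(ensemble)
```

The inequality holds for any population by convexity of squared error; the roughly tenfold improvement here reflects the independence of the toy errors.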
Varying-coefficient models
 Journal of the Royal Statistical Society, Series B (Methodological), 1993
Empirical properties of asset returns: stylized facts and statistical issues
 Quantitative Finance, 2001
Cited by 346 (4 self)
We present a set of stylized empirical facts emerging from the statistical analysis of price variations in various types of financial markets. We first discuss some general issues common to all statistical studies of financial time series. Various statistical properties of asset returns are then described: distributional properties, tail properties and extreme fluctuations, pathwise regularity, linear and nonlinear dependence of returns in time and across stocks. Our description emphasizes properties common to a wide variety of markets and instruments. We then show how these statistical properties invalidate many of the common statistical approaches used to study financial data sets and examine some of the statistical problems encountered in each case.
Competition and innovation: an inverted U relationship
2002
Cited by 184 (14 self)
This paper investigates the relationship between product market competition (PMC) and innovation. A growth model is developed in which competition may increase the incremental profit from innovating; on the other hand, competition may also reduce innovation incentives for laggards. There are four key predictions. First, the relationship between PMC and innovation is an inverted U-shape. Second, the equilibrium degree of technological ‘neck-and-neckness’ among firms should decrease with PMC. Third, the higher the average degree of ‘neck-and-neckness’ in an industry, the steeper the inverted-U relationship. Fourth, firms may innovate more if subject to higher debt pressure, especially at lower levels of PMC. We confront these predictions with data on UK firms’ patenting activity at the US patenting office. They are found to accord well with observed behavior.
The bootstrap
 In Handbook of Econometrics, 2001
Cited by 177 (2 self)
The bootstrap is a method for estimating the distribution of an estimator or test statistic by resampling one’s data. It amounts to treating the data as if they were the population for the purpose of evaluating the distribution of interest. Under mild regularity conditions, the bootstrap yields an approximation to the distribution of an estimator or test statistic that is at least as accurate as the
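The resampling recipe is a few lines. In this sketch (illustrative data, not from the chapter), the bootstrap standard error of the sample mean is compared against the textbook formula s/sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=100)   # any observed sample will do

# Bootstrap: treat the sample as the population, resample with
# replacement, and recompute the statistic on each resample.
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(2000)
])
se_boot = boot_means.std(ddof=1)                    # bootstrap standard error
se_formula = data.std(ddof=1) / np.sqrt(data.size)  # textbook SE, for comparison
```

For the sample mean the two estimates agree closely; the bootstrap earns its keep for statistics with no convenient analytic standard error.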
Kernel regression for image processing and reconstruction
 IEEE Transactions on Image Processing, 2007
Cited by 172 (53 self)
In this paper, we make contact with the field of nonparametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. Furthermore, we establish key relationships with some popular existing methods and show how several of these algorithms, including the recently popularized bilateral filter, are special cases of the proposed framework. The resulting algorithms and analyses are amply illustrated with practical examples.
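As a toy illustration of zeroth-order (Nadaraya-Watson) kernel regression for denoising, here a 1-D signal stands in for an image row; this is a sketch of the baseline idea, not the authors' data-adaptive implementation:

```python
import numpy as np

def nw_smooth(signal, h=3.0):
    """Nadaraya-Watson kernel regression on a 1-D signal: each output
    sample is a Gaussian-weighted average of its neighbours."""
    idx = np.arange(signal.size)
    w = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2 * h ** 2))
    return (w @ signal) / w.sum(axis=1)   # normalized weighted averages

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 2 * np.pi, 200))
noisy = clean + rng.normal(scale=0.3, size=clean.size)
denoised = nw_smooth(noisy)
```

Higher-order local fits and data-dependent kernels, as developed in the paper, reduce the blurring this fixed Gaussian kernel introduces at edges.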
Endogeneity in Semiparametric Binary Response Models
 Review of Economic Studies, 2004
Cited by 157 (8 self)
This paper develops and implements semiparametric methods for estimating binary response (binary choice) models with continuous endogenous regressors. It extends existing results on semiparametric estimation in single index binary response models to the case of endogenous regressors. It develops a control function approach to account for endogeneity in triangular and fully simultaneous binary response models. The proposed estimation method is applied to estimate the income effect in a labor market participation problem using a large micro data set from the British FES. The semiparametric estimator is found to perform well, detecting a significant attenuation bias. The proposed estimator is contrasted to the corresponding Probit and Linear Probability specifications.