
Applied Nonparametric Regression (1990)

by W Härdle

Results 1 - 10 of 831

Locally weighted learning

by Christopher G. Atkeson, Andrew W. Moore , Stefan Schaal - ARTIFICIAL INTELLIGENCE REVIEW , 1997
Abstract - Cited by 599 (51 self)
This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control.
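The local linear fit at the heart of the survey can be sketched in a few lines; this is a minimal 1-D illustration, not code from the paper, and the function name, Gaussian kernel choice, and bandwidth parameter are all assumptions:

```python
import numpy as np

def locally_weighted_predict(x, y, x0, h):
    """Locally weighted linear regression at a single query point x0.

    A Gaussian kernel of bandwidth h weights the training points, and a
    weighted least-squares line is fitted; the prediction is that line
    evaluated at x0. Names and defaults are illustrative.
    """
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)     # kernel weights
    X = np.column_stack([np.ones_like(x), x])  # local linear design
    A = X.T @ (w[:, None] * X)                 # weighted normal equations
    b = X.T @ (w * y)
    beta = np.linalg.solve(A, b)
    return beta[0] + beta[1] * x0
```

Because the model is refitted at every query point, each prediction adapts to the local shape of the data, which is what distinguishes this "lazy" approach from fitting one global model.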

Flexible smoothing with B-splines and penalties

by Paul H. C. Eilers, Brian D. Marx - STATISTICAL SCIENCE , 1996
Abstract - Cited by 405 (7 self)
B-splines are attractive for nonparametric modelling, but choosing the optimal number and positions of knots is a complex task. Equidistant knots can be used, but their small and discrete number allows only limited control over smoothness and fit. We propose to use a relatively large number of knots and a difference penalty on coefficients of adjacent B-splines. We show connections to the familiar spline penalty on the integral of the squared second derivative. A short overview of B-splines, their construction, and penalized likelihood is presented. We discuss properties of penalized B-splines and propose various criteria for the choice of an optimal penalty parameter. Nonparametric logistic regression, density estimation and scatterplot smoothing are used as examples. Some details of the computations are presented.
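The core Eilers–Marx idea, a rich equally spaced basis combined with a difference penalty on adjacent coefficients, can be sketched as follows. To keep the example self-contained, Gaussian bumps stand in for B-splines; the function name, basis choice, and defaults are assumptions, not from the paper:

```python
import numpy as np

def penalized_basis_smooth(x, y, n_basis=20, lam=1.0):
    """P-spline-style smoother: many equally spaced basis functions plus
    a second-order difference penalty on adjacent coefficients.

    Gaussian bumps replace B-splines here for self-containedness; the
    penalty structure is the point of the example.
    """
    knots = np.linspace(x.min(), x.max(), n_basis)
    width = knots[1] - knots[0]
    B = np.exp(-0.5 * ((x[:, None] - knots[None, :]) / width) ** 2)
    D = np.diff(np.eye(n_basis), n=2, axis=0)  # second-difference matrix
    # Penalized least squares: (B'B + lam * D'D) a = B'y
    a = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ a
```

Increasing `lam` forces adjacent coefficients toward a straight-line pattern, so the fit moves smoothly from wiggly interpolation to a nearly linear trend without ever choosing knot positions.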

Regularization networks and support vector machines

by Theodoros Evgeniou, Massimiliano Pontil, Tomaso Poggio - Advances in Computational Mathematics , 2000
Abstract - Cited by 366 (38 self)
Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples – in particular the regression problem of approximating a multivariate function from sparse data. Radial Basis Functions, for example, are a special case of both regularization and Support Vector Machines. We review both formulations in the context of Vapnik’s theory of statistical learning which provides a general foundation for the learning problem, combining functional analysis and statistics. The emphasis is on regression: classification is treated as a special case.
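A minimal regularization-network sketch in the spirit of the abstract: an RBF kernel expansion whose coefficients solve a Tikhonov-regularized linear system. This is an illustrative 1-D reduction, not the paper's general formulation, and the function name and default parameters are assumptions:

```python
import numpy as np

def rbf_regularization_network(x, y, lam=1e-2, gamma=20.0):
    """Fit f(q) = sum_i c_i K(q, x_i) with a Gaussian (RBF) kernel.

    The coefficients c solve the Tikhonov system (K + lam*I) c = y,
    trading data fit against smoothness of the estimate.
    """
    K = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)
    c = np.linalg.solve(K + lam * np.eye(len(x)), y)

    def predict(q):
        return np.exp(-gamma * (q[:, None] - x[None, :]) ** 2) @ c

    return predict
```

The regularization parameter `lam` plays the same role as the smoothing parameter in classical nonparametric regression: larger values shrink the coefficients and smooth the fitted function.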

Varying-coefficient models.

by T Hastie, R Tibshirani - Journal of the Royal Statistical Society B , 1993
Abstract - Cited by 351 (5 self)
Abstract not found

When Networks Disagree: Ensemble Methods for Hybrid Neural Networks

by Michael P. Perrone, Leon N. Cooper , 1993
Abstract - Cited by 349 (3 self)
This paper presents a general theoretical framework for ensemble methods of constructing significantly improved regression estimates. Given a population of regression estimators, we construct a hybrid estimator which is as good or better in the MSE sense than any estimator in the population. We argue that the ensemble method presented has several properties: 1) It efficiently uses all the networks of a population - none of the networks need be discarded. 2) It efficiently uses all the available data for training without over-fitting. 3) It inherently performs regularization by smoothing in functional space which helps to avoid over-fitting. 4) It utilizes local minima to construct improved estimates whereas other neural network algorithms are hindered by local minima. 5) It is ideally suited for parallel computation. 6) It leads to a very useful and natural measure of the number of distinct estimators in a population. 7) The optimal parameters of the ensemble estimator are given in clo...
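The simplest version of the hybrid estimator, averaging the population's predictions, can be sketched directly; the MSE guarantee for this basic ensemble follows from the convexity of squared error. The function name is illustrative:

```python
import numpy as np

def basic_ensemble(predictions):
    """Average the outputs of a population of regression estimators.

    By convexity of the squared error (Jensen's inequality), the MSE of
    the averaged predictor never exceeds the population's mean MSE, so
    no member of the population needs to be discarded.
    """
    return np.mean(np.asarray(predictions), axis=0)
```

The paper's generalized ensemble additionally weights members by their error correlations; the unweighted average above is only the baseline case.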

Citation Context

...ackknifing, bootstrapping and cross validation have proven useful for generating improved regression estimates through bias reduction (Efron, 1982; Miller, 1974; Stone, 1974; Gray and Schucany, 1972; Härdle, 1990; Wahba, 1990, for review). We show that these ideas can be fruitfully extended to neural networks by using the ensemble methods presented in this paper. The basic idea behind these resampling techniq...

Empirical properties of asset returns: stylized facts and statistical issues

by Rama Cont - Quantitative Finance , 2001
Abstract - Cited by 347 (4 self)
We present a set of stylized empirical facts emerging from the statistical analysis of price variations in various types of financial markets. We first discuss some general issues common to all statistical studies of financial time series. Various statistical properties of asset returns are then described: distributional properties, tail properties and extreme fluctuations, pathwise regularity, linear and nonlinear dependence of returns in time and across stocks. Our description emphasizes properties common to a wide variety of markets and instruments. We then show how these statistical properties invalidate many of the common statistical approaches used to study financial data sets and examine some of the statistical problems encountered in each case.

Competition and innovation: an inverted U relationship

by Philippe Aghion, Nicholas Bloom, Richard Blundell, Rachel Griffith, Peter Howitt , 2002
Abstract - Cited by 185 (14 self)
This paper investigates the relationship between product market competition (PMC) and innovation. A growth model is developed in which competition may increase the incremental profit from innovating; on the other hand, competition may also reduce innovation incentives for laggards. There are four key predictions. First, the relationship between product market competition (PMC) and innovation is an inverted U-shape. Second, the equilibrium degree of technological ‘neck-and-neckness’ among firms should decrease with PMC. Third, the higher the average degree of ‘neck-and-neckness’ in an industry, the steeper the inverted-U relationship. Fourth, firms may innovate more if subject to higher debt-pressure, especially at lower levels of PMC. We confront these predictions with data on UK firms’ patenting activity at the US patenting office. They are found to accord well with observed behavior.

The bootstrap

by Joel L. Horowitz - In Handbook of Econometrics , 2001
Abstract - Cited by 182 (2 self)
The bootstrap is a method for estimating the distribution of an estimator or test statistic by resampling one’s data. It amounts to treating the data as if they were the population for the purpose of evaluating the distribution of interest. Under mild regularity conditions, the bootstrap yields an approximation to the distribution of an estimator or test statistic that is at least as accurate as the ...
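The resampling scheme the abstract describes can be sketched for the common task of estimating a statistic's standard error; the function name, replicate count, and use of a seeded generator are illustrative choices, not from the chapter:

```python
import numpy as np

def bootstrap_std_error(data, statistic, n_boot=2000, seed=0):
    """Bootstrap standard error of `statistic`.

    Treat the sample as the population: resample it with replacement,
    recompute the statistic on each resample, and report the spread of
    the replicates as an estimate of the sampling variability.
    """
    rng = np.random.default_rng(seed)
    reps = np.array([statistic(rng.choice(data, size=len(data)))
                     for _ in range(n_boot)])
    return reps.std(ddof=1)
```

The same replicate distribution also yields confidence intervals (e.g. from its percentiles), which is where the accuracy claims relative to first-order asymptotics apply.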

Kernel regression for image processing and reconstruction

by Hiroyuki Takeda, Sina Farsiu, Peyman Milanfar - IEEE TRANSACTIONS ON IMAGE PROCESSING , 2007
Abstract - Cited by 172 (53 self)
In this paper, we make contact with the field of nonparametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. Furthermore, we establish key relationships with some popular existing methods and show how several of these algorithms, including the recently popularized bilateral filter, are special cases of the proposed framework. The resulting algorithms and analyses are amply illustrated with practical examples.
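The classic estimator this framework generalizes is Nadaraya–Watson kernel regression, a kernel-weighted local average. The 1-D sketch below shows only that base case (the paper works with higher-order, data-adaptive kernels on 2-D image grids); names and the Gaussian kernel choice are assumptions:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h):
    """Zeroth-order kernel regression (Nadaraya-Watson).

    Each prediction is a Gaussian-kernel-weighted average of the
    training responses, with bandwidth h controlling the amount of
    smoothing.
    """
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)
```

Replacing the fixed Gaussian kernel with one steered by the local image structure is, roughly, the step the paper takes to handle edges and textures.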

Endogeneity in Semiparametric Binary Response Models

by Richard Blundell, James L. Powell - Review of Economic Studies , 2004
Abstract - Cited by 157 (8 self)
This paper develops and implements semiparametric methods for estimating binary response (binary choice) models with continuous endogenous regressors. It extends existing results on semiparametric estimation in single index binary response models to the case of endogenous regressors. It develops a control function approach to account for endogeneity in triangular and fully simultaneous binary response models. The proposed estimation method is applied to estimate the income effect in a labor market participation problem using a large micro data set from the British FES. The semiparametric estimator is found to perform well, detecting a significant attenuation bias. The proposed estimator is contrasted to the corresponding Probit and Linear Probability specifications.

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University