Results 1–10 of 175,715
L2Boost and Infinitesimal Forward Stagewise Linear Regression
, 2007
"... We congratulate the authors (hereafter BH) for an interesting take on the boosting technology, and for developing a modular computational environment in R for exploring their models. Their use of low-degree-of-freedom smoothing splines as a base learner provides an interesting approach to adaptive ..."
Abstract
... and infinitesimal forward stagewise linear regression (Section 5.2.1 in BH). We then take the authors to task on their definition of degrees of freedom (Section 5.3 of BH). FORWARD STAGEWISE LINEAR REGRESSION: Motivated by a version of L2-boosting in Chapter ...
Least angle regression
 Ann. Statist.
"... The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select ..."
Abstract
Cited by 1308 (43 self)
... modification efficiently implements Forward Stagewise linear regression, another promising ...
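The forward stagewise procedure that this entry and several others in these results discuss can be sketched in a few lines: repeatedly find the predictor most correlated with the current residual and nudge its coefficient by a tiny amount in the sign of that correlation. The sketch below is an illustrative pure-Python version assuming standardized predictors; the function and variable names are not from any of the cited papers.

```python
# Hedged sketch of epsilon-forward-stagewise linear regression.
# Assumes predictors are (roughly) standardized; names are illustrative.

def forward_stagewise(X, y, eps=0.01, n_steps=200):
    """X: list of rows, y: list of responses. Returns the coefficient list."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    r = list(y)                          # current residual
    for _ in range(n_steps):
        # inner product of each predictor column with the residual
        corrs = [sum(X[i][j] * r[i] for i in range(n)) for j in range(p)]
        j = max(range(p), key=lambda k: abs(corrs[k]))
        delta = eps if corrs[j] > 0 else -eps    # tiny step toward the sign
        beta[j] += delta
        for i in range(n):               # update the residual in place
            r[i] -= delta * X[i][j]
    return beta
```

Taking `eps` to zero yields the "infinitesimal" version discussed above, whose coefficient paths the LARS paper relates to the lasso.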
Projection Pursuit Regression
 Journal of the American Statistical Association
, 1981
Abstract
Cited by 555 (6 self)
"... A new method for nonparametric multiple regression is presented. The procedure models the regression surface as a sum of general smooth functions of linear combinations of the predictor variables in an iterative manner. It is more general than standard stepwise and stagewise regression procedures ..."
Forward Stagewise Regression and the Monotone Lasso
 Electronic Journal of Statistics
, 2006
Abstract
Cited by 25 (2 self)
"... We consider the least angle regression and forward stagewise algorithms for solving penalized least squares regression problems. In Efron et al. (2004) it is proven that the least angle regression algorithm, with a small modification, solves the lasso (L1-constrained) regression problem. Here we ..."
Additive Logistic Regression: a Statistical View of Boosting
 Annals of Statistics
, 1998
Abstract
Cited by 1719 (25 self)
"... Boosting (Freund & Schapire 1996, Schapire & Singer 1998) is one of the most important recent developments in classification methodology. The performance of many classification algorithms can often be dramatically improved by sequentially applying them to reweighted versions of the input data, and taking a weighted majority vote of the sequence of classifiers thereby produced. We show that this seemingly mysterious phenomenon can be understood in terms of well-known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical results to boosting. Direct multiclass generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multiclass generalizations of boosting in most ..."
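The reweight-and-vote scheme this abstract describes is discrete AdaBoost. A minimal illustrative sketch, with a one-dimensional threshold stump as the base learner (all names here are hypothetical, not from the paper's own code):

```python
import math

# Hedged sketch of discrete AdaBoost: reweight misclassified points each
# round, then combine the stumps by a weighted majority vote.

def adaboost(x, y, n_rounds=5):
    """x: list of floats, y: labels in {-1, +1}. Returns a predict function."""
    n = len(x)
    w = [1.0 / n] * n                        # uniform initial weights
    ensemble = []                            # (alpha, threshold, sign) triples
    for _ in range(n_rounds):
        # exhaustively pick the weighted-error-minimizing stump
        best = None
        for thr in sorted(set(x)):
            for sign in (+1, -1):
                err = sum(w[i] for i in range(n)
                          if sign * (1 if x[i] > thr else -1) != y[i])
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)        # clamp away from 0 and 1
        alpha = 0.5 * math.log((1 - err) / err)      # vote weight of this stump
        ensemble.append((alpha, thr, sign))
        # reweight: misclassified points gain weight, then renormalize
        w = [w[i] * math.exp(-alpha * y[i] * sign * (1 if x[i] > thr else -1))
             for i in range(n)]
        total = sum(w)
        w = [wi / total for wi in w]

    def predict(t):
        score = sum(a * s * (1 if t > th else -1) for a, th, s in ensemble)
        return 1 if score > 0 else -1
    return predict
```

The paper's point is that the additive score accumulated inside `predict` approximates half the log-odds of the fitted logistic model, which is why the exponential reweighting works as well as it does.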
Stagewise Lasso
Abstract
"... Many statistical machine learning algorithms (in regression or classification) minimize either an empirical loss function as in AdaBoost, or a penalized empirical loss as in SVM. A single regularization tuning parameter controls the tradeoff between fidelity to the data and generalizability, or equivalently ..."
Predictive regressions
 Journal of Financial Economics
, 1999
Abstract
Cited by 452 (19 self)
"... When a rate of return is regressed on a lagged stochastic regressor, such as a dividend yield, the regression disturbance is correlated with the regressor's innovation. The OLS estimator's finite-sample properties, derived here, can depart substantially from the standard regression setting ..."
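The finite-sample departure this abstract describes is easy to see by simulation: when the disturbance of the return equation is negatively correlated with the innovation of a persistent regressor, the OLS slope is biased away from its true value. The Monte Carlo sketch below is illustrative only; the parameter values and function names are assumptions, not taken from the paper.

```python
import random
import statistics

# Hedged simulation of the predictive-regression setup: y_t regressed on a
# lagged AR(1) regressor x_{t-1}, with corr(u, v) < 0 between the return
# disturbance u and the regressor innovation v. Returns the average OLS bias.

def simulate_bias(true_beta=0.0, rho=0.95, corr=-0.95,
                  T=40, n_sims=2000, seed=0):
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_sims):
        x, y = [0.0], []
        for _ in range(T):
            v = rng.gauss(0, 1)                               # regressor innovation
            u = corr * v + (1 - corr ** 2) ** 0.5 * rng.gauss(0, 1)
            y.append(true_beta * x[-1] + u)                   # return on lagged x
            x.append(rho * x[-1] + v)                         # persistent regressor
        lagged = x[:-1]
        mx, my = statistics.fmean(lagged), statistics.fmean(y)
        sxy = sum((a - mx) * (b - my) for a, b in zip(lagged, y))
        sxx = sum((a - mx) ** 2 for a in lagged)
        estimates.append(sxy / sxx)                           # OLS slope
    return statistics.fmean(estimates) - true_beta            # average bias
```

With a highly persistent regressor and strongly negative correlation, the average estimated slope sits visibly above the true value of zero, which is the small-sample distortion the paper quantifies analytically.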
Greedy Function Approximation: A Gradient Boosting Machine
 Annals of Statistics
, 2000
Abstract
Cited by 951 (12 self)
"... Function approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest-descent minimization. A general gradient-descent "boosting" paradigm is developed for additive ..."
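For squared-error loss the gradient-boosting paradigm this abstract outlines reduces to repeatedly fitting the base learner to the current residuals (the negative gradient) and taking a shrunken step. A minimal illustrative sketch with one-dimensional inputs and regression stumps as the base learner; the names and the shrinkage value are assumptions, not the paper's:

```python
# Hedged sketch of gradient boosting for squared-error loss with stumps.

def fit_stump(x, r):
    """Best single-split regression stump on residuals r (1-D inputs x)."""
    best = None
    for s in sorted(set(x)):
        left = [r[i] for i in range(len(x)) if x[i] <= s]
        right = [r[i] for i in range(len(x)) if x[i] > s]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - lm) ** 2 for v in left)
               + sum((v - rm) ** 2 for v in right))
        if best is None or sse < best[0]:
            best = (sse, s, lm, rm)
    _, s, lm, rm = best
    return lambda t, s=s, lm=lm, rm=rm: lm if t <= s else rm

def gradient_boost(x, y, n_rounds=50, lr=0.1):
    """Stagewise additive expansion: each round fits the residual (the
    negative gradient of squared error) and takes a shrunken step."""
    f0 = sum(y) / len(y)                 # constant initial fit
    stumps, pred = [], [f0] * len(x)
    for _ in range(n_rounds):
        resid = [y[i] - pred[i] for i in range(len(x))]   # negative gradient
        h = fit_stump(x, resid)
        stumps.append(h)
        pred = [pred[i] + lr * h(x[i]) for i in range(len(x))]
    return lambda t: f0 + lr * sum(h(t) for h in stumps)
```

Swapping in a different loss only changes which pseudo-residuals are fit each round, which is the generality the paper's function-space view buys.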