Results 1–10 of 977,406
Functional Additive Regression
2011
Cited by 6 (0 self)
Abstract: We suggest a new method, called “Functional Additive Regression”, or FAR, for efficiently performing high-dimensional functional regression. FAR extends the usual linear regression model involving a functional predictor, X(t), and a scalar response, Y, in two key respects. First, FAR uses a penalize ...
Bayesian Additive Regression Trees
2005
Cited by 42 (3 self)
Abstract: We develop a Bayesian “sum-of-trees” model where each tree is constrained by a prior to be a weak learner. Fitting and inference are accomplished via an iterative backfitting MCMC algorithm. This model is motivated by ensemble methods in general, and boosting algorithms in particular. Like boosting, each weak learner (i.e., each weak tree) contributes a small amount to the overall model, and the training of a weak learner is conditional on the estimates for the other weak learners. The differences from boosting algorithms are just as striking as the similarities: BART is defined by a statistical model, a prior and a likelihood, while boosting is defined by an algorithm. MCMC is used both to fit the model and to quantify inferential uncertainty through the variation of the posterior draws. The BART modelling strategy can also be viewed in the context of Bayesian nonparametrics. The key idea is to use a model which is rich enough to respond to a variety of signal types, but constrained by the prior from overreacting to weak signals. The ensemble approach provides a rich base model form which can expand as needed via the MCMC mechanism. The priors are formulated so as to be interpretable, relatively easy to specify, and to provide results that are stable across a wide range of prior hyperparameter values. The MCMC algorithm, which exhibits fast burn-in and good mixing, can readily be used for model averaging and for uncertainty assessment.
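The core mechanic the abstract describes, each weak tree refit conditionally on the current estimates of all the other trees, can be sketched as a plain backfitting loop over shallow regression trees. This is an illustrative simplification: it omits the prior and the MCMC machinery that define BART proper, and the shrinkage factor, tree depth, and synthetic data are arbitrary choices for the sketch.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)

n_trees, sweeps = 20, 10
trees = [DecisionTreeRegressor(max_depth=2) for _ in range(n_trees)]
contrib = np.zeros((n_trees, len(y)))  # each tree's current fitted contribution

for _ in range(sweeps):
    for j, tree in enumerate(trees):
        # partial residual: subtract every contribution except tree j's
        r_j = y - (contrib.sum(axis=0) - contrib[j])
        tree.fit(X, r_j)
        # shrink the refit so each tree stays a weak learner
        contrib[j] = 0.3 * tree.predict(X)

fit = contrib.sum(axis=0)
print("training RMSE:", np.sqrt(np.mean((y - fit) ** 2)))
```

In BART, the analogous inner step is a Metropolis-Hastings draw of a tree from its conditional posterior rather than a deterministic refit, which is what yields posterior uncertainty rather than a single point estimate.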
Bayesian Additive Regression Kernels
2008
Cited by 1 (0 self)
Abstract: We propose a general Bayesian “sum of kernels” model, named Bayesian Additive Regression Kernels (BARK), for both regression and classification problems. The unknown mean function is represented as a weighted sum of kernel functions, which is constructed by a prior using symmetric α-stable (SαS) Lé ...
Additive Regression Models
Abstract: Additive regression models with regression splines. Department of Probability and Mathematical Statistics. Thesis supervisor: Doc. Petr Volf, CSc. Study programme: Mathematics. Field of study: Probability, Mathematical Statistics and Econometrics.
Additive Logistic Regression: a Statistical View of Boosting
Annals of Statistics, 1998
Cited by 1719 (25 self)
Abstract: Boosting (Freund & Schapire 1996, Schapire & Singer 1998) is one of the most important recent developments in classification methodology. The performance of many classification algorithms can often be dramatically improved by sequentially applying them to reweighted versions of the input data, and taking a weighted majority vote of the sequence of classifiers thereby produced. We show that this seemingly mysterious phenomenon can be understood in terms of well-known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can ...
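The reweight-and-vote loop the abstract refers to can be sketched as discrete AdaBoost with decision stumps. The stump learner, round count, and synthetic oblique-boundary data are illustrative choices, not anything specified by the paper:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.standard_normal((400, 2))
y = np.where(X[:, 0] + 0.7 * X[:, 1] > 0, 1, -1)  # labels in {-1, +1}

w = np.full(len(y), 1 / len(y))  # start from uniform observation weights
stumps, alphas = [], []
for _ in range(50):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = w[pred != y].sum()                        # weighted training error
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))  # this round's vote weight
    w *= np.exp(-alpha * y * pred)                  # upweight misclassified points
    w /= w.sum()
    stumps.append(stump)
    alphas.append(alpha)

# weighted majority vote of the sequence of classifiers
vote = np.sign(sum(a * s.predict(X) for a, s in zip(alphas, stumps)))
print("training accuracy:", (vote == y).mean())
```

The paper's statistical reading of this loop is that it performs stagewise fitting of an additive logistic model, with the exponential reweighting arising from an exponential loss criterion.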
Confidence band for additive regression model
Journal of Data Science, 2007
Cited by 2 (1 self)
Abstract: The additive model is widely recognized as an effective tool for dimension reduction. Existing methods for estimation of the additive regression function, including backfitting, marginal integration, projection and spline methods, do not provide any level of uniform confidence. In this paper a sim ...
Regression quantiles
Econometrica, 1978
Cited by 870 (19 self)
Multivariate adaptive regression splines
The Annals of Statistics, 1991
Cited by 679 (2 self)
Abstract: A new method is presented for flexible regression modeling of high-dimensional data. The model takes the form of an expansion in product spline basis functions, where the number of basis functions as well as the parameters associated with each one (product degree and knot locations) are automaticall ...
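The product spline basis the abstract mentions is built from hinge functions max(0, x − t) and their mirror images, with products of hinges capturing interactions. A tiny illustration of evaluating and fitting such a basis by least squares, with knot locations fixed by hand here, whereas MARS selects them (and the product degree) automatically:

```python
import numpy as np

def hinge(x, t, sign=1):
    """Truncated linear spline basis: max(0, sign * (x - t))."""
    return np.maximum(0.0, sign * (x - t))

rng = np.random.default_rng(2)
x1, x2 = rng.uniform(0, 1, (2, 200))
# noiseless target lying exactly in the span of the basis below
y = 2.0 * np.maximum(0, x1 - 0.4) * np.maximum(0, 0.7 - x2) + 1.0

# design matrix: intercept, two single-variable hinges, and their product
B = np.column_stack([
    np.ones_like(x1),
    hinge(x1, 0.4),
    hinge(x2, 0.7, sign=-1),
    hinge(x1, 0.4) * hinge(x2, 0.7, sign=-1),  # product basis function
])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
print(np.round(coef, 2))
```

Since the target was constructed from the last basis function plus an intercept, the recovered coefficients are (1, 0, 0, 2) up to numerical precision.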
Generalized Additive Models
1984
Cited by 2413 (46 self)
Abstract: Likelihood-based regression models, such as the normal linear regression model and the linear logistic model, assume a linear (or some other parametric) form for the covariate effects. We introduce the local scoring procedure, which replaces the linear form Σⱼ Xⱼβⱼ by a sum of smooth functions Σⱼ sⱼ(Xⱼ) ...
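The additive form Σⱼ sⱼ(Xⱼ) is typically estimated by backfitting: each smooth function is refit to the partial residuals left by the others. A minimal Gaussian-response sketch, where the crude running-mean smoother and the synthetic data stand in for the scatterplot smoothers of the paper (local scoring adds an outer IRLS loop for non-Gaussian likelihoods, which this sketch omits):

```python
import numpy as np

def running_mean_smoother(x, r, width=31):
    """Crude scatterplot smoother: local average of r over x-sorted points."""
    order = np.argsort(x)
    k = np.ones(width)
    num = np.convolve(r[order], k, mode="same")
    den = np.convolve(np.ones(len(r)), k, mode="same")  # edge-window sizes
    out = np.empty_like(r)
    out[order] = num / den
    return out

rng = np.random.default_rng(3)
n = 1000
X = rng.uniform(-2, 2, size=(n, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(n)

alpha = y.mean()
s = np.zeros((2, n))  # current estimates of s_1(X_1), s_2(X_2)
for _ in range(20):   # backfitting sweeps
    for j in range(2):
        partial = y - alpha - s[1 - j]  # remove the other component
        s[j] = running_mean_smoother(X[:, j], partial)
        s[j] -= s[j].mean()             # center for identifiability

fit = alpha + s.sum(axis=0)
print("training RMSE:", np.sqrt(np.mean((y - fit) ** 2)))
```

Centering each sⱼ after its update is what keeps the intercept identifiable; without it, constants can drift freely between the components.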
Projection Pursuit Regression
Journal of the American Statistical Association, 1981
Cited by 555 (6 self)
Abstract: A new method for nonparametric multiple regression is presented. The procedure models the regression surface as a sum of general smooth functions of linear combinations of the predictor variables in an iterative manner. It is more general than standard stepwise and stagewise regression procedures, ...
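A single term of the model the abstract describes is a ridge function g(aᵀx): a smooth function of one linear combination of the predictors. The sketch below illustrates fitting one such term by scoring random candidate directions with a running-mean smoother; this random search is only an illustrative stand-in for the numerical direction optimization used in the paper, and the data are synthetic:

```python
import numpy as np

def smooth(z, r, width=31):
    """Running-mean smoother of r against z."""
    order = np.argsort(z)
    k = np.ones(width)
    num = np.convolve(r[order], k, mode="same")
    den = np.convolve(np.ones(len(r)), k, mode="same")
    out = np.empty_like(r)
    out[order] = num / den
    return out

rng = np.random.default_rng(4)
X = rng.standard_normal((800, 3))
true_dir = np.array([0.8, 0.6, 0.0])          # unit vector by construction
y = np.tanh(X @ true_dir) + 0.05 * rng.standard_normal(800)

# score random unit directions; keep the one whose smoothed projection fits best
best_sse, best_dir = np.inf, None
for _ in range(500):
    a = rng.standard_normal(3)
    a /= np.linalg.norm(a)
    fit = smooth(X @ a, y)                    # g estimated along this projection
    sse = np.sum((y - fit) ** 2)
    if sse < best_sse:
        best_sse, best_dir = sse, a

print("best direction:", np.round(best_dir, 2))
```

The full procedure then subtracts the fitted term from the response and repeats on the residuals, building the sum of ridge functions stagewise.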