Results 1–10 of 10
Projection Pursuit Regression
Journal of the American Statistical Association, 1981
Abstract

Cited by 550 (6 self)
A new method for nonparametric multiple regression is presented. The procedure models the regression surface as a sum of general smooth functions of linear combinations of the predictor variables in an iterative manner. It is more general than standard stepwise and stagewise regression procedures, does not require the definition of a metric in the predictor space, and lends itself to graphical interpretation.
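Read as pseudocode, the abstract's iterative scheme fits one smooth ridge function at a time to the current residuals. A minimal numerical sketch follows; it is our own simplification, not the paper's algorithm: a random direction search and a low-degree polynomial stand in for the paper's direction optimization and scatterplot smoother, and all function names are ours.

```python
import numpy as np

def fit_ridge_term(X, r, degree=3, n_dirs=200, seed=0):
    """Fit one ridge term g(w.x) to the residuals r: try many random unit
    directions w, smooth the projections with a polynomial (an illustrative
    stand-in for the paper's smoother), and keep the best direction."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    best_w, best_g, best_mse = None, None, np.inf
    for w in dirs:
        z = X @ w
        g = np.poly1d(np.polyfit(z, r, degree))
        mse = np.mean((r - g(z)) ** 2)
        if mse < best_mse:
            best_w, best_g, best_mse = w, g, mse
    return best_w, best_g

def ppr_fit(X, y, n_terms=2):
    """Stagewise loop: each new term is fitted to the residual left by the
    sum of the previous terms, as in the abstract's iterative procedure."""
    terms, r = [], np.asarray(y, dtype=float).copy()
    for _ in range(n_terms):
        w, g = fit_ridge_term(X, r)
        terms.append((w, g))
        r = r - g(X @ w)
    return terms

def ppr_predict(terms, X):
    """Sum of the fitted ridge functions."""
    return sum(g(X @ w) for w, g in terms)
```

Note that no metric on the predictor space is needed: each term depends on the data only through one-dimensional projections, which is the property the abstract emphasizes.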
Piecewise-polynomial regression trees
Statistica Sinica, 1994
Abstract

Cited by 51 (8 self)
A nonparametric function estimation method called SUPPORT (“Smoothed and Unsmoothed Piecewise-Polynomial Regression Trees”) is described. The estimate is typically made up of several pieces, each piece being obtained by fitting a polynomial regression to the observations in a subregion of the data space. Partitioning is carried out recursively as in a tree-structured method. If the estimate is required to be smooth, the polynomial pieces may be glued together by means of weighted averaging. The smoothed estimate is thus obtained in three steps. In the first step, the regressor space is recursively partitioned until the data in each piece are adequately fitted by a polynomial of a fixed order. Partitioning is guided by analysis of the distributions of residuals and cross-validation estimates of prediction mean square error. In the second step, the data within a neighborhood of each partition are fitted by a polynomial. The final estimate of the regression function is obtained by averaging the polynomial pieces, using smooth weight functions each of which diminishes rapidly to zero outside its associated partition. Estimates of derivatives of the regression function may be ...
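The three steps above (recursive partitioning, local polynomial fits, smooth weighted averaging) can be sketched in one dimension as follows. This is our simplification, not the paper's algorithm: the split rule (median split when the residual MSE is too large) and the Gaussian weight functions are placeholders for the paper's residual-analysis and cross-validation machinery.

```python
import numpy as np

def fit_support_1d(x, y, degree=2, tol=0.05, min_n=20):
    """Recursively split [lo, hi] at the median until a degree-d polynomial
    fits each piece adequately; return a list of (interval, polynomial)."""
    def rec(lo, hi):
        mask = (x >= lo) & (x <= hi)
        p = np.poly1d(np.polyfit(x[mask], y[mask], degree))
        mse = np.mean((y[mask] - p(x[mask])) ** 2)
        if mse <= tol or mask.sum() < 2 * min_n:
            return [((lo, hi), p)]
        mid = float(np.median(x[mask]))
        return rec(lo, mid) + rec(mid, hi)
    return rec(float(x.min()), float(x.max()))

def smooth_predict(pieces, xq, bandwidth=0.1):
    """Blend the polynomial pieces by weighted averaging, with smooth
    weights that decay away from each piece's interval."""
    num = np.zeros_like(xq, dtype=float)
    den = np.zeros_like(xq, dtype=float)
    for (lo, hi), p in pieces:
        center = 0.5 * (lo + hi)
        scale = 0.5 * (hi - lo) + bandwidth
        w = np.exp(-((xq - center) / scale) ** 2)
        num += w * p(xq)
        den += w
    return num / den
```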
Tree-Structured Smooth Transition Regression Models Based on the CART Algorithm
Textos para Discussão 469, Pontifical Catholic University of Rio de Janeiro, 2003
Abstract

Cited by 9 (3 self)
The goal of this paper is to introduce a class of tree-structured models that combines aspects of regression trees and smooth transition regression models. The model is called the Smooth Transition Regression Tree (STR-Tree). The main idea relies on specifying a multiple-regime parametric model through a tree-growing procedure with smooth transitions among different regimes. Decisions about splits are entirely based on a sequence of Lagrange Multiplier (LM) tests of hypotheses.
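The core ingredient, replacing a hard CART split with a smooth transition between two regimes, can be illustrated with a two-regime node. This is a sketch under our own parameterization; the paper's LM-test-based split selection and estimation procedure are not shown.

```python
import numpy as np

def transition(x, gamma, c):
    """Logistic transition G(x; gamma, c): a smooth version of the hard
    CART split indicator 1{x > c}. As gamma grows, G approaches a step."""
    return 1.0 / (1.0 + np.exp(-gamma * (x - c)))

def str_tree_predict(x, beta_left, beta_right, gamma, c):
    """Two-regime STR-Tree node: predictions are a smooth mixture of two
    local linear models, weighted by the transition function."""
    g = transition(x, gamma, c)
    left = beta_left[0] + beta_left[1] * x
    right = beta_right[0] + beta_right[1] * x
    return (1.0 - g) * left + g * right
```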
Fitting functions to noisy data in high dimensions
In Computing Science and Statistics, 1988
Density estimation trees
In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’11, 2011
Abstract

Cited by 5 (1 self)
In this paper we develop density estimation trees (DETs), the natural analog of classification trees and regression trees, for the task of density estimation. We consider the estimation of a joint probability density function of a d-dimensional random vector X and define a piecewise constant estimator structured as a decision tree. The integrated squared error is minimized to learn the tree. We show that the method is nonparametric: under standard conditions of nonparametric density estimation, DETs are shown to be asymptotically consistent. In addition, being decision trees, DETs perform automatic feature selection. They empirically exhibit the interpretability, adaptability and feature selection properties of supervised decision trees while incurring a slight loss in accuracy over other nonparametric density estimators. Hence they might be able to avoid the curse of dimensionality if the true density is sparse in dimensions. We believe that density estimation trees provide a new tool for exploratory data analysis with unique capabilities.
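A greedy one-dimensional version of the idea, splitting an interval wherever it lowers the standard integrated-squared-error surrogate for a piecewise-constant density, might look as follows. This is our sketch; the paper works in d dimensions and adds pruning and cross-validation.

```python
import numpy as np

def leaf_error(n_leaf, n_total, volume):
    """Leaf contribution to the ISE surrogate for a piecewise-constant
    density estimate f_t = n_t / (n * V_t): the term -(n_t/n)^2 / V_t."""
    return -((n_leaf / n_total) ** 2) / volume

def grow_det_1d(x, lo, hi, n_total, min_leaf=10):
    """Grow a 1-D density estimation tree greedily; return the leaves as
    (interval start, interval end, constant density) triples."""
    x = np.sort(x)
    best = leaf_error(len(x), n_total, hi - lo)  # error with no split
    best_split = None
    for s in x[min_leaf:-min_leaf]:              # candidate split points
        nl = int(np.searchsorted(x, s))          # points strictly below s
        err = (leaf_error(nl, n_total, s - lo)
               + leaf_error(len(x) - nl, n_total, hi - s))
        if err < best:
            best, best_split = err, float(s)
    if best_split is None:                       # no improving split: leaf
        return [(lo, hi, len(x) / (n_total * (hi - lo)))]
    return (grow_det_1d(x[x < best_split], lo, best_split, n_total, min_leaf)
            + grow_det_1d(x[x >= best_split], best_split, hi, n_total, min_leaf))
```

By construction the leaf densities integrate to one over the root interval, since the leaf masses n_t/n sum to one.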
Random Hyperplane Search Trees
2009
Abstract

Cited by 1 (1 self)
A hyperplane search tree is a binary tree used to store a set S of n d-dimensional data points. In a random hyperplane search tree for S, the root represents a hyperplane defined by d data points drawn uniformly at random from S. The remaining data points are split by the hyperplane, and the definition is used recursively on each subset. We assume that the data are points in general position in R^d. We show that, uniformly over all such data sets S, the expected height of the hyperplane tree is no worse than that of the k-d tree or the ordinary one-dimensional random binary search tree, and that, for any fixed d ≥ 3, the expected height improves over that of the standard random binary search tree by an asymptotic factor strictly greater than one.
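The recursive construction can be sketched in two dimensions, where d = 2 and each hyperplane is the line through two randomly chosen data points. This illustrates the definition only, not the paper's height analysis.

```python
import numpy as np

def build_hyperplane_tree(points, rng, leaf_size=3):
    """Random hyperplane search tree in 2-D: pick two data points at
    random, split the remaining points by which side of the line through
    them they fall on, and recurse on each side."""
    pts = np.asarray(points, dtype=float)
    if len(pts) <= leaf_size:
        return {"leaf": pts}
    idx = rng.choice(len(pts), size=2, replace=False)
    a, b = pts[idx]
    normal = np.array([-(b - a)[1], (b - a)[0]])  # perpendicular to a->b
    rest = np.delete(pts, idx, axis=0)
    side = (rest - a) @ normal                    # signed side of the line
    return {
        "on": pts[idx],
        "left": build_hyperplane_tree(rest[side < 0], rng, leaf_size),
        "right": build_hyperplane_tree(rest[side >= 0], rng, leaf_size),
    }

def height(node):
    """Height of the tree in edges (0 for a leaf)."""
    if "leaf" in node:
        return 0
    return 1 + max(height(node["left"]), height(node["right"]))
```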
Cellular Tree Classifiers
2013
Abstract
The cellular tree classifier model addresses a fundamental problem in the design of classifiers for a parallel or distributed computing world: given a data set, is it sufficient to apply a majority rule for classification, or should one split the data into two or more parts and send each part to a potentially different computer (or cell) for further processing? At first sight it seems impossible to define a consistent classifier within this paradigm, as no cell knows the “original data size”, n. However, we show that this is not so by exhibiting two different consistent classifiers. The consistency is universal but is only shown for distributions with nonatomic marginals.
Additive Regression Models
Abstract
Additive regression models with regression splines. Department of Probability and Mathematical Statistics. Thesis supervisor: Doc. Petr Volf, CSc. Study programme: Mathematics. Field of study: Probability, Mathematical Statistics and Econometrics.
Temperature Wind
Abstract
) model (Lewis and Stevens, 1991; Lewis et al., 1994). The modelling is done by letting the predictor variables for the τ-th value in the time series {y_τ} be given by y_{τ−1} (= x_{τ,1}), y_{τ−2} (= x_{τ,2}), ..., y_{τ−p} (= x_{τ,p}). Note that if we combined these predictors to form a linear additive function we would just be modelling the time series as a usual AR(p) process. However, the ASTAR method involves modelling these lagged predictor variables using a MARS model. Thus the predictor variables can have both threshold terms, because of the form of the truncated linear spline basis functions, and interactions ...
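The lagged-predictor construction described above is independent of the downstream model: the same design matrix feeds a linear AR(p) fit, a MARS model, or ASTAR. A small helper that builds the pairs (x_τ, y_τ) for a given p might look like this (the function name is ours):

```python
import numpy as np

def lagged_design(y, p):
    """Build the lagged predictor matrix for a time series: row τ holds
    (y_{τ-1}, ..., y_{τ-p}); the matching targets are y_p, ..., y_{n-1}."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([y[p - k: len(y) - k] for k in range(1, p + 1)])
    return X, y[p:]
```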