Results 1–5 of 5
Functional adaptive model estimation
 J. Amer. Statist. Assoc.
, 2005
Abstract

Cited by 39 (8 self)
In this article we are interested in modeling the relationship between a scalar, Y, and a functional predictor, X(t). We introduce a highly flexible approach called "Functional Adaptive Model Estimation" (FAME) which extends generalized linear models (GLM), generalized additive models (GAM) and projection pursuit regression (PPR) to handle functional predictors. The FAME approach can model any of the standard exponential family of response distributions that are assumed for GLM or GAM while maintaining the flexibility of PPR. For example, standard linear or logistic regression with functional predictors, as well as far more complicated models, can easily be applied using this approach. A functional principal components decomposition of the predictor functions is used to aid visualization of the relationship between X(t) and Y. We also show how the FAME procedure can be extended to deal with multiple functional and standard finite-dimensional predictors, possibly with missing data. The FAME approach is illustrated on simulated data as well as on the prediction of arthritis based on bone shape. We end with a discussion of the relationships between standard regression approaches, their extensions to functional data and FAME.
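The first stage the abstract describes — summarizing each curve X(t) by its functional principal component scores and then fitting a GLM-type model on those scores — can be sketched in a few lines. The simulated basis, the grid size, retaining k = 2 components, and using the Gaussian/identity special case in place of FAME's adaptive ridge functions are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated functional predictors: n curves observed on a common grid of T points.
n, T = 200, 50
t = np.linspace(0, 1, T)
# Each curve is a random combination of two smooth basis functions
# (an assumption for illustration only; FAME handles far more general curves).
scores_true = rng.normal(size=(n, 2))
basis = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
X = scores_true @ basis + 0.1 * rng.normal(size=(n, T))

# Scalar response driven by the curves through the first basis coefficient.
y = 2.0 * scores_true[:, 0] + 0.1 * rng.normal(size=n)

# Step 1: functional PCA via the SVD of the centered, discretized curves.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                  # retain k principal components
Z = Xc @ Vt[:k].T                      # FPC scores: the finite-dim summary

# Step 2: a linear model on the scores (the Gaussian/identity-link
# special case of the GLM stage; FAME fits adaptive ridge functions here).
Z1 = np.column_stack([np.ones(n), Z])
beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
resid = y - Z1 @ beta
r2 = 1 - resid.var() / y.var()
print("R^2:", round(r2, 3))
```

Swapping the least-squares step for an iteratively reweighted logistic fit gives the functional logistic regression the abstract mentions.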
Reduced-rank Vector Generalized Linear Models
 Statistical Modelling
, 2000
Abstract

Cited by 22 (3 self)
this article we extend the reduced-rank idea to the VGLM/VGAM classes to obtain subclasses which we term RR-VGLMs and RR-VGAMs. The multinomial logit model (MLM; Nerlove and Press, 1973) for categorical data is used as the main example to bring out some of the characteristics of the RR-subclasses, and to investigate its use in regression and classification problems. Recently, Srivastava (1997) considered the problem of reduced-rank regression for classification or discrimination, but only for the Gaussian model. Hastie and Tibshirani (1996) also discuss the ideas of reduced-rank regression in discrimination problems, but in a larger framework involving mixture models. Gabriel (1998) and Aldrin (2000) are also recent works. One model where the reduced-rank regression idea has been applied to non-Gaussian errors is the MLM. This was proposed and referred to as the stereotype model by Anderson (1984). However, in that paper and in subsequent papers by others, the reduced-rank regression idea was not explicitly stated in the framework presented below. The aim of this paper is twofold. Firstly, we extend the reduced-rank concept to the VGLM and VGAM class. Secondly, we describe and motivate the reduced-rank idea applied to regression models for categorical data analysis, especially the MLM. We do this by elaborating on its connections to other statistical models such as neural networks, projection pursuit regression, linear discriminant analysis, canonical correspondence analysis and biplots. An outline of this paper is as follows. In the remainder of this section we briefly review VGLMs and VGAMs; further details can be found in Yee and Wild (1996). In Section 2 we propose reduced-rank regression for the VGLM class. In Section 3 we focus on the RR-MLM, and show how it relates ...
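The structural assumption behind RR-VGLMs — a coefficient matrix factored as a product of two thin matrices, B = A C, of rank r — is easiest to see in the Gaussian/identity-link case, where the classical reduced-rank least-squares solution is available in closed form. The dimensions and noise level below are illustrative assumptions; the paper's RR-VGLM framework covers the full exponential-family, vector-response setting:

```python
import numpy as np

rng = np.random.default_rng(1)

# A p x q coefficient matrix of rank r: B = A @ C, the rank constraint
# that defines reduced-rank regression (here in the Gaussian case only).
n, p, q, r = 500, 8, 5, 2
A_true = rng.normal(size=(p, r))
C_true = rng.normal(size=(r, q))
B_true = A_true @ C_true

X = rng.normal(size=(n, p))
Y = X @ B_true + 0.1 * rng.normal(size=(n, q))

# Classical reduced-rank estimate: take the full OLS solution, then
# project the fitted values onto their top-r principal directions.
B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
fitted = X @ B_ols
_, _, Vt = np.linalg.svd(fitted, full_matrices=False)
P = Vt[:r].T @ Vt[:r]                  # rank-r projection in response space
B_rr = B_ols @ P                       # reduced-rank coefficient estimate

rel_err = np.linalg.norm(B_rr - B_true) / np.linalg.norm(B_true)
print("rank of B_rr:", np.linalg.matrix_rank(B_rr))
print("relative error:", round(rel_err, 3))
```

For the multinomial logit (stereotype) model the same rank constraint is imposed inside an iteratively reweighted fit rather than via a single SVD.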
Constructive Feedforward Neural Networks for Regression Problems: A Survey
, 1995
Abstract

Cited by 21 (0 self)
In this paper, we review the procedures for constructing feedforward neural networks in regression problems. While standard backpropagation performs gradient descent only in the weight space of a network with fixed topology, constructive procedures start with a small network and then grow additional hidden units and weights until a satisfactory solution is found. The constructive procedures are categorized according to the resultant network architecture and the learning algorithm for the network weights. (Technical Report Series, Department of Computer Science, The Hong Kong University of Science & Technology.)
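The grow-until-satisfactory loop the abstract describes can be sketched in its simplest form: add one hidden unit at a time and refit the output layer after each addition, stopping once the training error is acceptable. As a simplifying assumption, each new unit's input weights are drawn at random here; the constructive algorithms the survey covers (e.g. cascade-correlation) also train those input weights:

```python
import numpy as np

rng = np.random.default_rng(2)

# A 1-D regression problem.
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=300)

mse_trace = []
H = np.ones((X.shape[0], 1))           # bias column; hidden outputs appended

for unit in range(30):
    # Grow one tanh hidden unit with randomly drawn input weights
    # (an illustrative shortcut; real schemes optimize these weights too).
    w, b = rng.normal(size=1), rng.normal()
    H = np.column_stack([H, np.tanh(X @ w + b)])
    # Refit the whole output layer by least squares after each growth step,
    # so the training error can never increase as the network grows.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    mse = np.mean((H @ beta - y) ** 2)
    mse_trace.append(mse)
    if mse < 0.01:                     # satisfactory fit reached: stop growing
        break

print("hidden units used:", H.shape[1] - 1)
print("final MSE:", round(mse_trace[-1], 4))
```

The trace of training errors is non-increasing by construction, which is the basic argument for why such procedures terminate.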
Use of Bias Term in Projection Pursuit Learning Improves Approximation and Convergence Properties
 IEEE Trans. Neural Networks
, 1996
Abstract

Cited by 8 (1 self)
In a regression problem, one is given a d-dimensional random vector X, the components of which are called predictor variables, and a random variable, Y, called the response. A regression surface describes a general relationship between the variables X and Y. One nonparametric regression technique that has been successfully applied to high-dimensional data is projection pursuit regression (PPR). In this method, the regression surface is approximated by a sum of empirically determined univariate functions of linear combinations of the predictors. Projection pursuit learning (PPL), proposed by Hwang et al., formulates PPR using a two-layer feedforward neural network. One of the main differences between PPR and PPL is that the smoothers in PPR are nonparametric, whereas those in PPL are based on Hermite functions of some predefined highest order R. While the convergence property of PPR is already known, that for PPL has not been thoroughly studied. In this paper, we demonstrate that PPL networks...
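The model form shared by PPR and PPL — a univariate ridge function of a learned linear projection, with PPL's smoother parametrized by Hermite functions up to order R — can be sketched for a single term. As a simplifying assumption, the projection direction is found by random search over the unit sphere rather than by the Gauss-Newton/backpropagation updates the actual algorithms use:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(3)

# Data from a single ridge function y = g(a'x) with g = sin.
n, d = 400, 5
a_true = np.array([3.0, 4.0, 0.0, 0.0, 0.0]) / 5.0
X = rng.normal(size=(n, d))
y = np.sin(X @ a_true) + 0.05 * rng.normal(size=n)

R = 5  # predefined highest Hermite order for the parametric smoother

def fit_smoother(z, y):
    # Least-squares fit of g(z) = sum_k c_k He_k(z), PPL's smoother form.
    B = hermevander(z, R)
    c, *_ = np.linalg.lstsq(B, y, rcond=None)
    return c, np.mean((B @ c - y) ** 2)

# Crude direction search: many random unit projections, keep the best.
# (PPR/PPL instead refine the direction iteratively.)
best = (None, None, np.inf)
for _ in range(2000):
    a = rng.normal(size=d)
    a /= np.linalg.norm(a)
    c, mse = fit_smoother(X @ a, y)
    if mse < best[2]:
        best = (a, c, mse)

a_hat, c_hat, mse_hat = best
print("alignment |a_hat . a_true|:", round(abs(a_hat @ a_true), 3))
print("training MSE:", round(mse_hat, 4))
```

Replacing the Hermite expansion in `fit_smoother` with a scatterplot smoother recovers one term of classical PPR; the fixed order R is exactly the parametric restriction whose convergence consequences the paper studies.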