Results 1 - 10 of 1,662,260
Determining the Number of Factors in Approximate Factor Models
, 2000
"... In this paper we develop some statistical theory for factor models of large dimensions. The focus is the determination of the number of factors, which is an unresolved issue in the rapidly growing literature on multifactor models. We propose a panel Cp criterion and show that the number of factors c ..."
Cited by 561 (30 self)
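As a rough illustration of the kind of criterion this abstract describes, the sketch below estimates the number of factors by principal components and an information criterion with a Bai-Ng-style penalty; the function name and the exact ICp1-style penalty used here are illustrative assumptions, not necessarily the paper's panel Cp formulation.

```python
import numpy as np

def estimate_num_factors(X, kmax):
    """Pick the number of factors for an N x T panel X by minimizing an
    information criterion (sketch; the paper's panel Cp penalty may differ)."""
    N, T = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)          # demeaned panel
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    best_k, best_ic = 0, np.inf
    for k in range(1, kmax + 1):
        # Common component implied by the first k principal components.
        common = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
        V_k = np.mean((Xc - common) ** 2)           # residual variance V(k)
        penalty = k * (N + T) / (N * T) * np.log(N * T / (N + T))
        ic = np.log(V_k) + penalty                  # ICp1-style criterion
        if ic < best_ic:
            best_k, best_ic = k, ic
    return best_k

# Toy usage: a 100 x 200 panel generated from 3 factors plus noise.
rng = np.random.default_rng(0)
F = rng.normal(size=(3, 200)); L = rng.normal(size=(100, 3))
X = L @ F + rng.normal(scale=0.5, size=(100, 200))
print(estimate_num_factors(X, kmax=8))   # typically selects 3
```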
Greedy Function Approximation: A Gradient Boosting Machine
 Annals of Statistics
, 2000
"... Function approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest{descent minimization. A general gradient{descent \boosting" paradigm is developed for additi ..."
Cited by 1000 (13 self)
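The stagewise view described here can be illustrated with squared-error loss, where the negative gradient in function space is simply the residual. The sketch below is an illustrative rendering of that recipe using scikit-learn trees as base learners; the function name and hyperparameters are assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, n_stages=100, learning_rate=0.1, max_depth=2):
    """Gradient boosting for squared-error loss (sketch).
    The negative gradient of 0.5*(y - F)^2 w.r.t. F is the residual,
    so each stage fits a small tree to the current residuals."""
    F = np.full(len(y), y.mean())            # initial constant fit
    trees = []
    for _ in range(n_stages):
        residuals = y - F                    # negative gradient at the current fit
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        F += learning_rate * tree.predict(X)
        trees.append(tree)
    def predict(Xnew):
        out = np.full(len(Xnew), y.mean())
        for t in trees:
            out += learning_rate * t.predict(Xnew)
        return out
    return predict

# Toy usage on a noisy sine curve.
rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=300)
model = gradient_boost(X, y)
print(np.mean((model(X) - y) ** 2))          # training MSE approaches the 0.04 noise floor
```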
Integrated architectures for learning, planning, and reacting based on approximating dynamic programming
 Proceedings of the Seventh International Conference on Machine Learning
, 1990
"... gutton~gte.com Dyna is an AI architecture that integrates learning, planning, and reactive execution. Learning methods are used in Dyna both for compiling planning results and for updating a model of the effects of the agent's actions on the world. Planning is incremental and can use the probab ..."
Cited by 563 (22 self)
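A minimal Dyna-Q-style sketch of the architecture this abstract outlines: direct reinforcement-learning updates from real experience, a learned model of action effects, and extra planning updates replayed from that model. The corridor environment and all hyperparameters are illustrative assumptions, not the paper's experiments.

```python
import random
from collections import defaultdict

def dyna_q(step_fn, n_actions, episodes=200, planning_steps=10,
           alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Dyna-Q sketch: learn Q-values from real experience, learn a
    deterministic model of the world, and replay planning updates from it."""
    Q = defaultdict(float)      # Q[(state, action)]
    model = {}                  # model[(state, action)] = (reward, next_state, done)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action from the current Q-values.
            a = (random.randrange(n_actions) if random.random() < eps
                 else max(range(n_actions), key=lambda a_: Q[(s, a_)]))
            r, s2, done = step_fn(s, a)                      # real experience
            target = r + gamma * (0.0 if done else max(Q[(s2, b)] for b in range(n_actions)))
            Q[(s, a)] += alpha * (target - Q[(s, a)])        # direct RL update
            model[(s, a)] = (r, s2, done)                    # model learning
            for _ in range(planning_steps):                  # planning from the model
                ps, pa = random.choice(list(model))
                pr, ps2, pdone = model[(ps, pa)]
                ptarget = pr + gamma * (0.0 if pdone else max(Q[(ps2, b)] for b in range(n_actions)))
                Q[(ps, pa)] += alpha * (ptarget - Q[(ps, pa)])
            s = s2
    return Q

# Toy usage: a 6-state corridor where action 1 moves right and state 5 is the goal.
def corridor(s, a):
    s2 = min(s + 1, 5) if a == 1 else max(s - 1, 0)
    return (1.0 if s2 == 5 else 0.0), s2, s2 == 5

Q = dyna_q(corridor, n_actions=2)
print([max(Q[(s, a)] for a in (0, 1)) for s in range(6)])  # values rise toward the goal
```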
Loopy belief propagation for approximate inference: An empirical study
 In Proceedings of Uncertainty in AI
, 1999
"... Abstract Recently, researchers have demonstrated that "loopy belief propagation" the use of Pearl's polytree algorithm in a Bayesian network with loops can perform well in the context of errorcorrecting codes. The most dramatic instance of this is the near Shannonlimit performanc ..."
Cited by 676 (15 self)
in a more general setting? We compare the marginals computed using loopy propagation to the exact ones in four Bayesian network architectures, including two real-world networks: ALARM and QMR. We find that the loopy beliefs often converge and when they do, they give a good approximation
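A minimal sketch of loopy belief propagation on a pairwise model: sum-product messages are passed around the (loopy) graph until they hopefully converge, and approximate marginals are read off from the incoming messages. The 3-node cycle and potentials below are toy assumptions; ALARM and QMR are far larger networks.

```python
import numpy as np

def loopy_bp(unary, pair, edges, n_iters=50):
    """Sum-product loopy BP for a pairwise model with binary variables (sketch).
    unary[i]: length-2 potential; pair[(i, j)]: 2x2 potential indexed [x_i, x_j];
    edges: undirected edge list. Returns approximate marginals per node."""
    msgs = {(i, j): np.ones(2) for (a, b) in edges for (i, j) in [(a, b), (b, a)]}
    for _ in range(n_iters):
        new = {}
        for (i, j) in msgs:
            # Product of node i's unary potential and messages from its other neighbours.
            incoming = unary[i].copy()
            for (k, l) in msgs:
                if l == i and k != j:
                    incoming *= msgs[(k, l)]
            psi = pair[(i, j)] if (i, j) in pair else pair[(j, i)].T
            m = psi.T @ incoming               # sum over x_i
            new[(i, j)] = m / m.sum()          # normalize for numerical stability
        msgs = new
    beliefs = {}
    for i in unary:
        b = unary[i].copy()
        for (k, l) in msgs:
            if l == i:
                b *= msgs[(k, l)]
        beliefs[i] = b / b.sum()
    return beliefs

# Toy usage: a 3-node cycle with attractive pairwise potentials.
unary = {0: np.array([0.7, 0.3]), 1: np.array([0.5, 0.5]), 2: np.array([0.4, 0.6])}
pair = {(0, 1): np.array([[2.0, 1.0], [1.0, 2.0]]),
        (1, 2): np.array([[2.0, 1.0], [1.0, 2.0]]),
        (0, 2): np.array([[2.0, 1.0], [1.0, 2.0]])}
print(loopy_bp(unary, pair, edges=[(0, 1), (1, 2), (0, 2)]))
```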
Maximum Likelihood Phylogenetic Estimation from DNA Sequences with Variable Rates over Sites: Approximate Methods
 J. Mol. Evol
, 1994
"... Two approximate methods are proposed for maximum likelihood phylogenetic estimation, which allow variable rates of substitution across nucleotide sites. Three data sets with quite different characteristics were analyzed to examine empirically the performance of these methods. The first, called ..."
Cited by 557 (29 self)
the "discrete gamma model," uses several categories of rates to approximate the gamma distribution, with equal probability for each category. The mean of each category is used to represent all the rates falling in the category. The performance of this method is found to be quite good
Active Learning with Statistical Models
, 1995
"... For manytypes of learners one can compute the statistically "optimal" way to select data. We review how these techniques have been used with feedforward neural networks [MacKay, 1992# Cohn, 1994]. We then showhow the same principles may be used to select data for two alternative, statist ..."
Cited by 679 (10 self)
statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate.
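The sketch below illustrates variance-based data selection for a kernel-smoother form of locally weighted regression: query the candidate input where the estimated prediction variance is largest. This is a simplified variant for illustration only, not the paper's exact expected-variance-minimization criterion, and all names and constants are assumptions.

```python
import numpy as np

def select_query(X_pool, X_train, bandwidth=0.3, noise_var=0.04):
    """Variance-based data selection for a Nadaraya-Watson smoother (sketch):
    estimate the variance of the prediction at each candidate input and
    return the candidate where it is largest."""
    def pred_var(x):
        w = np.exp(-0.5 * ((X_train - x) / bandwidth) ** 2)
        # Variance of the weighted-average estimate sum(w*y)/sum(w)
        # under independent output noise with variance noise_var.
        return noise_var * (w ** 2).sum() / (w.sum() ** 2 + 1e-12)
    variances = np.array([pred_var(x) for x in X_pool])
    return X_pool[np.argmax(variances)], variances

# Toy usage: training inputs concentrated on the left, so the query lands on the right.
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 0.4, size=30)
X_pool = np.linspace(0.0, 1.0, 101)
query, _ = select_query(X_pool, X_train)
print(query)   # near 1.0, the region with the least data
```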
Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval
 In Proceedings of SIGIR’94
, 1994
"... The 2–Poisson model for term frequencies is used to suggest ways of incorporating certain variables in probabilistic models for information retrieval. The variables concerned are withindocument term frequency, document length, and withinquery term frequency. Simple weighting functions are develope ..."
Cited by 460 (14 self)
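The best-known weighting function in the family that grew out of this line of work is BM25, which combines within-document term frequency, document-length normalization, and an idf-style weight. The sketch below uses conventional default constants (k1 = 1.2, b = 0.75), which are not taken from this paper.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, docs, k1=1.2, b=0.75):
    """BM25-style weighting (a later refinement in the same family as the
    simple weighting functions developed in this line of work)."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    tf = Counter(doc_terms)
    dl = len(doc_terms)
    score = 0.0
    for t in query_terms:
        df = sum(1 for d in docs if t in d)                  # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)    # smoothed idf
        denom = tf[t] + k1 * (1 - b + b * dl / avgdl)        # length-normalized tf
        score += idf * tf[t] * (k1 + 1) / denom if denom else 0.0
    return score

# Toy usage: rank two short documents for a one-word query.
docs = [["approximate", "retrieval", "model"], ["poisson", "model", "model"]]
for d in docs:
    print(bm25_score(["model"], d, docs))
```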
Modeling and Forecasting Realized Volatility
, 2002
"... this paper is built. First, although raw returns are clearly leptokurtic, returns standardized by realized volatilities are approximately Gaussian. Second, although the distributions of realized volatilities are clearly rightskewed, the distributions of the logarithms of realized volatilities are a ..."
Cited by 549 (50 self)
are approximately Gaussian. Third, the long-run dynamics of realized logarithmic volatilities are well approximated by a fractionally-integrated long-memory process. Motivated by the three ABDL empirical regularities, we proceed to estimate and evaluate a multivariate model for the logarithmic realized volatilities
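A sketch of the basic construction behind these regularities: realized variance as the sum of squared intraday returns, realized volatility as its square root, and the logarithm as the quantity whose distribution is close to Gaussian. The simulated data and parameter values below are illustrative assumptions.

```python
import numpy as np

def realized_volatility(intraday_returns):
    """Daily realized volatility: square root of the sum of squared
    high-frequency returns within each day (sketch).
    intraday_returns: array of shape (n_days, n_intervals_per_day)."""
    rv = np.sum(intraday_returns ** 2, axis=1)   # realized variance per day
    return np.sqrt(rv)

# Toy usage with simulated 5-minute returns (288 intervals per day, 500 days).
rng = np.random.default_rng(0)
true_daily_vol = np.exp(rng.normal(-4.5, 0.35, size=500))      # log-normal volatility
returns = rng.normal(scale=true_daily_vol[:, None] / np.sqrt(288), size=(500, 288))
log_rv = np.log(realized_volatility(returns))
print(log_rv.mean(), log_rv.std())   # roughly Gaussian, centered near the log of true volatility
```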
Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification
 Psychological Methods
, 1998
"... This study evaluated the sensitivity of maximum likelihood (ML), generalized least squares (GLS), and asymptotic distributionfree (ADF)based fit indices to model misspecification, under conditions that varied sample size and distribution. The effect of violating assumptions of asymptotic robustn ..."
Cited by 543 (0 self)
and the ML- and GLS-based gamma hat, McDonald's centrality index (1989; Mc), and root-mean-square error of approximation (RMSEA) were the most sensitive indices to models with misspecified factor loadings. With ML and GLS methods, we recommend the use of SRMR, supplemented by TLI, BL89, RNI, CFI, gamma
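As a concrete illustration of one index named here, the RMSEA can be computed from the model chi-square, its degrees of freedom, and the sample size. The sketch below uses the common N - 1 convention, which varies across software; the example numbers are made up.

```python
import math

def rmsea(chi_square, df, n):
    """Root-mean-square error of approximation (sketch).
    Uses the common formula sqrt(max(chi2 - df, 0) / (df * (n - 1)));
    some software divides by n rather than n - 1."""
    return math.sqrt(max(chi_square - df, 0.0) / (df * (n - 1)))

# Toy usage: chi-square of 85 on 40 degrees of freedom with 250 cases.
print(round(rmsea(85.0, 40, 250), 3))   # about 0.067
```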
Atmospheric Modeling, Data Assimilation and Predictability
, 2003
"... Numerical weather prediction (NWP) now provides major guidance in our daily weather forecast. The accuracy of NWP models has improved steadily since the first successful experiment made by Charney, Fj!rtoft and von Neuman (1950). During the past 50 years, a large number of technical papers and repor ..."
Cited by 626 (33 self)
of data assimilation and predictability. It incorporates all aspects of environmental computer modeling including an historical overview of NWP, equations of motion and their approximations, a modern description of the methods to determine the initial conditions using weather observations and a clear