Results 1 - 10 of 257,728
Bagging Predictors
Machine Learning, 1996
Abstract

Cited by 3574 (1 self)
Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making ...
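The aggregation rule described in this abstract can be sketched in a few lines. This is a minimal illustration, not Breiman's experimental setup: the base predictors and data are hypothetical stand-ins, and only the bootstrap-replicate sampling and the two aggregation rules (averaging for regression, plurality vote for classification) are shown.

```python
import random
from collections import Counter

def bootstrap_replicate(data, rng):
    """Sample len(data) points from the learning set with replacement."""
    return [rng.choice(data) for _ in data]

def bag_predict_regression(predictions):
    """Numerical outcome: average over the predictor versions."""
    return sum(predictions) / len(predictions)

def bag_predict_classification(predictions):
    """Class outcome: plurality vote over the predictor versions."""
    return Counter(predictions).most_common(1)[0][0]
```

In practice each "version" of the predictor would be trained on its own bootstrap replicate; here only the sampling and aggregation steps are shown.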
Temperature-aware microarchitecture
In Proceedings of the 30th Annual International Symposium on Computer Architecture, 2003
"... With power density and hence cooling costs rising exponentially, processor packaging can no longer be designed for the worst case, and there is an urgent need for runtime processor-level techniques that can regulate operating temperature when the package’s capacity is exceeded. Evaluating such techn ..."
Abstract

Cited by 469 (51 self)
... level also shows that power metrics are poor predictors of temperature, and that sensor imprecision has a substantial impact on the performance of DTM.
Parents' Income is a Poor Predictor of SAT Score
2014
Abstract
Parents' annual income lacks statistical significance as a predictor of state SAT scores when additional variables are well controlled. Spearman rank correlation coefficients reveal parents' income to be a weaker predictor of average SAT scores for each income bracket within each state than ...
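The Spearman rank correlation used in this abstract is the Pearson correlation computed on ranks rather than raw values, which makes it sensitive to monotone association rather than linearity. A minimal sketch, with tie handling by averaged ranks; the data in the tests are invented for illustration, not taken from the paper.

```python
def _ranks(xs):
    """Average 1-based ranks, assigning tied values the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Pearson correlation of the rank vectors of xs and ys."""
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because only ranks matter, any monotone transformation of either variable leaves the coefficient unchanged.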
Internal consistency reliability is a poor predictor of responsiveness
2005
Abstract
This is an Open Access article distributed under the terms of the Creative Commons Attribution License
Empirical exchange rate models of the Seventies: do they fit out of sample?
Journal of International Economics, 1983
"... This study compares the out-of-sample forecasting accuracy of various structural and time series exchange rate models. We find that a random walk model performs as well as any estimated model at one- to twelve-month horizons for the dollar/pound, dollar/mark, dollar/yen and trade-weighted dollar exch ..."
Abstract

Cited by 831 (12 self)
... exchange rates. The candidate structural models include the flexible-price (Frenkel-Bilson) and sticky-price (Dornbusch-Frankel) monetary models, and a sticky-price model which incorporates the current account (Hooper-Morton). The structural models perform poorly despite the fact that we base ...
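The comparison this abstract describes is easy to state in code: a random-walk model forecasts that next period's exchange rate equals the current rate, and competing forecasts are scored out of sample by root-mean-square error. A minimal sketch; the exchange-rate series below is an invented placeholder, not the paper's data.

```python
def rmse(forecasts, actuals):
    """Root-mean-square error of a forecast sequence against realized values."""
    errs = [(f - a) ** 2 for f, a in zip(forecasts, actuals)]
    return (sum(errs) / len(errs)) ** 0.5

def random_walk_forecasts(rates):
    """One-step-ahead random-walk forecast: the prediction for t+1 is the rate at t."""
    return rates[:-1]

# Hypothetical dollar/pound series, for illustration only.
rates = [1.50, 1.52, 1.49, 1.53, 1.51, 1.54]
rw_rmse = rmse(random_walk_forecasts(rates), rates[1:])
```

A structural model "beats" the random walk only if its out-of-sample RMSE falls below `rw_rmse` on the same evaluation window; the paper's finding is that the estimated models generally fail this test.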
Representing twentieth century space-time climate variability, part 1: development of a 1961-90 mean monthly terrestrial climatology
Journal of Climate, 1999
"... The construction of a 0.5° lat × 0.5° long surface climatology of global land areas, excluding Antarctica, is described. The climatology represents the period 1961–90 and comprises a suite of nine variables: precipitation, wet-day frequency, mean temperature, diurnal temperature range, vapor pressur ..."
Abstract

Cited by 551 (12 self)
... to the period 1961–90, describes an extended suite of surface climate variables, explicitly incorporates elevation as a predictor variable, and contains an evaluation of regional errors associated with this and other commonly used climatologies. The climatology is already being used by researchers in the areas ...
Understanding and using the Implicit Association Test: I. An improved scoring algorithm
Journal of Personality and Social Psychology, 2003
Abstract

Cited by 592 (92 self)
This review of 122 research reports (184 independent samples, 14,900 subjects) found average r = .274 for prediction of behavioral, judgment, and physiological measures by Implicit Association Test (IAT) measures. Parallel explicit (i.e., self-report) measures, available in 156 of these samples (13,068 subjects), also predicted effectively (average r = .361), but with much greater variability of effect size. Predictive validity of self-report was impaired for socially sensitive topics, for which impression management may distort self-report responses. For 32 samples with criterion measures involving Black–White interracial behavior, predictive validity of IAT measures significantly exceeded that of self-report measures. Both IAT and self-report measures displayed incremental validity, with each measure predicting criterion variance beyond that predicted by the other. The more highly IAT and self-report measures were intercorrelated, the greater was the predictive validity of each.
Regression Shrinkage and Selection Via the Lasso
Journal of the Royal Statistical Society, Series B, 1994
Abstract

Cited by 4055 (51 self)
We propose a new method for estimation in linear models. The "lasso" minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly zero and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described.
Keywords: regression, subset selection, shrinkage, quadratic programming.
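The abstract states the lasso in constrained form (residual sum of squares subject to a bound on the sum of absolute coefficients). A minimal sketch of the equivalent penalized form, solved by cyclic coordinate descent with soft-thresholding, shows the key property that some coefficients land exactly at zero. This is not Tibshirani's original quadratic-programming algorithm, and the sketch assumes each predictor column has unit squared norm.

```python
def soft_threshold(z, lam):
    """Shrink z toward zero by lam; values within [-lam, lam] become exactly 0."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for the penalized lasso.

    X: list of predictor columns, each assumed to have sum of squares 1
       (a simplifying assumption of this sketch); y: response vector.
    """
    p, n = len(X), len(y)
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual leaving out predictor j
            r = [y[i] - sum(beta[k] * X[k][i] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[j][i] * r[i] for i in range(n))
            beta[j] = soft_threshold(rho, lam)
    return beta
```

With orthonormal predictors the update is exact in one pass, and any coefficient whose correlation with the residual is below the penalty is set to exactly zero, which is the source of the interpretable sparse models the abstract describes.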
Exceptional Exporter Performance: Cause, Effect or Both?
Journal of International Economics, 1999
Abstract

Cited by 685 (19 self)
A growing body of empirical work has documented the superior performance characteristics of exporting plants and firms relative to non-exporters. Employment, shipments, wages, productivity and capital intensity are all higher at exporters at any given moment. This paper asks whether good firms become exporters or whether exporting improves firm performance. The evidence is quite clear on one point: good firms become exporters; both growth rates and levels of success measures are higher ex ante for exporters. The benefits of exporting for the firm are less clear. Employment growth and the probability of survival are both higher for exporters; however, productivity and wage growth are not superior, ...
Very simple classification rules perform well on most commonly used datasets
Machine Learning, 1993
Abstract

Cited by 542 (5 self)
The classification rules induced by machine learning systems are judged by two criteria: their classification accuracy on an independent test set (henceforth "accuracy"), and their complexity. The relationship between these two criteria is, of course, of keen interest to the machine learning community. There are in the literature some indications that very simple rules may achieve surprisingly high accuracy on many datasets. For example, Rendell occasionally remarks that many real-world datasets have "few peaks (often just one)" and so are "easy to learn" (Rendell & Seshu, 1990, p. 256). Similarly, Shavlik et al. (1991) report that, with certain qualifications, "the accuracy of the perceptron is hardly distinguishable from the more complicated learning algorithms" (p. 134). Further evidence is provided by studies of pruning methods (e.g. Buntine & Niblett, 1992; Clark & Niblett, 1989; Mingers, 1989), where accuracy is rarely seen to decrease as pruning becomes more severe (for example, see Table 1). This is so even when rules are pruned to the extreme, as happened with the "Errcomp" pruning method in Mingers (1989). This method produced the most accurate decision trees, and in four of the five domains studied these trees had only 2 or 3 leaves.
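A "very simple rule" in the spirit of this abstract can be sketched as a one-level classifier: for a single attribute, predict the majority class observed for each attribute value. This is an illustrative sketch only; attribute selection and the paper's evaluation datasets are omitted, and the test data are invented.

```python
from collections import Counter, defaultdict

def one_rule(values, labels):
    """Build a one-level rule: map each attribute value to its majority class."""
    by_value = defaultdict(list)
    for v, c in zip(values, labels):
        by_value[v].append(c)
    return {v: Counter(cs).most_common(1)[0][0] for v, cs in by_value.items()}

def accuracy(rule, values, labels, default=None):
    """Fraction of examples the rule classifies correctly on a test set."""
    hits = sum(rule.get(v, default) == c for v, c in zip(values, labels))
    return hits / len(labels)
```

Such a rule is the extreme case of pruning the abstract alludes to: a tree with one split and a handful of leaves, yet often surprisingly competitive in accuracy.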