Results 1 - 10 of 4,800
An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization - Machine Learning, 2000
"... Abstract. Bagging and boosting are methods that generate a diverse ensemble of classifiers by manipulating the training data given to a “base ” learning algorithm. Breiman has pointed out that they rely for their effectiveness on the instability of the base learning algorithm. An alternative approac ..."
Cited by 610 (6 self)
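
A minimal sketch of the bagging idea this abstract describes (not the paper's experimental setup): each tree is fit on a bootstrap resample of the training data and the ensemble predicts by majority vote. The dataset, ensemble size, and use of scikit-learn are arbitrary choices here.

    # Bagging sketch: one decision tree per bootstrap resample, majority vote.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=500, random_state=0)

    trees = []
    for _ in range(25):                        # odd ensemble size avoids vote ties
        idx = rng.integers(0, len(X), len(X))  # draw n indices with replacement
        trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

    votes = np.stack([t.predict(X) for t in trees])    # shape (n_trees, n_samples)
    majority = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote per sample
    print("bagged ensemble training accuracy:", (majority == y).mean())
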
Online Learning with Kernels, 2003
"... Kernel based algorithms such as support vector machines have achieved considerable success in various problems in the batch setting where all of the training data is available in advance. Support vector machines combine the so-called kernel trick with the large margin idea. There has been little u ..."
Cited by 2831 (123 self)
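
To make the online setting concrete, here is a kernel perceptron sketch: a simpler relative of this paper's gradient-based algorithms, shown only to illustrate processing one example at a time while storing mistakes as a kernel expansion. The RBF kernel and the toy data are arbitrary choices.

    # Kernel perceptron: a minimal online kernel algorithm (illustrative only).
    import numpy as np

    def rbf(a, b, gamma=1.0):
        return np.exp(-gamma * np.sum((a - b) ** 2))

    def kernel_perceptron(stream, kernel=rbf):
        """stream yields (x, y) pairs, y in {-1, +1}, one at a time."""
        support = []                                # mistakes kept as (x, y) pairs
        for x, y in stream:
            f = sum(yi * kernel(xi, x) for xi, yi in support)
            if y * f <= 0:                          # mistake: add to the expansion
                support.append((x, y))
        return support

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    y = np.array([-1] * 50 + [1] * 50)
    print(len(kernel_perceptron(zip(X, y))), "support vectors stored")
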
Learning from Little: Comparison of Classifiers Given Little Training - in 8th European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD), 2004
"... Many real-world machine learning tasks are faced with the problem of small training sets. Additionally, the class distribution of the training set often does not match the target distribution. In this paper we compare the performance of many learning models on a substantial benchmark of binary t ..."
Cited by 23 (2 self)
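
A sketch of the kind of experiment the abstract describes, assuming nothing about the paper's actual benchmark: several standard learners are fit on a deliberately tiny training split and scored on a large held-out set. The models, split sizes, and synthetic data are placeholder choices.

    # Compare classifiers given little training: 30 training points, 1970 held out.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=30, stratify=y, random_state=0)

    for model in (GaussianNB(), SVC(), DecisionTreeClassifier()):
        acc = model.fit(X_tr, y_tr).score(X_te, y_te)
        print(f"{type(model).__name__:<24} held-out accuracy = {acc:.3f}")
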
The Foundations of Cost-Sensitive Learning - In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, 2001
"... This paper revisits the problem of optimal learning and decision-making when different misclassification errors incur different penalties. We characterize precisely but intuitively when a cost matrix is reasonable, and we show how to avoid the mistake of defining a cost matrix that is economically i ..."
Cited by 402 (6 self)
"... that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods. Accordingly, the recommended way of applying one of these methods in a domain with differing misclassification costs is to learn a ..."
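
The decision rule at the core of this framework is to predict the class i that minimizes expected cost sum_j P(j|x) C(i,j), where C(i,j) is the cost of predicting i when the true class is j. A minimal sketch with an invented cost matrix:

    # Expected-cost decision rule; the cost matrix values below are made up.
    import numpy as np

    C = np.array([[0.0, 10.0],    # predict 0: free if true 0, cost 10 if true 1
                  [1.0,  0.0]])   # predict 1: cost 1 if true 0, free if true 1

    def min_cost_prediction(p, cost=C):
        """p[j] = estimated P(class j | x); returns the cost-minimizing label."""
        return int(np.argmin(cost @ p))   # row i of cost @ p = expected cost of i

    # Even at P(positive) = 0.15, predicting positive is cheaper here because
    # false negatives cost 10x more than false positives.
    print(min_cost_prediction(np.array([0.85, 0.15])))  # -> 1
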
Why Do Firms Train? Theory and Evidence - Quarterly Journal of Economics, 1998
"... This paper offers a theory of training whereby workers do not pay for general training they receive. The superior information of the current employer regarding its employees’ abilities relative to other firms creates ex post monopsony power, and encourages this employer to provide and pay for traini ..."
Cited by 280 (15 self)
"... for training, even if these skills are general. The model can lead to multiple equilibria. In one equilibrium quits are endogenously high, and as a result employers have limited monopsony power and provide little training, while in another equilibrium quits are low and training is high. Using microdata ..."
An Algorithm that Learns What's in a Name, 1999
"... In this paper, we present IdentiFinder^TM, a hidden Markov model that learns to recognize and classify names, dates, times, and numerical quantities. We have evaluated the model in English (based on data from the Sixth and Seventh Message Understanding Conferences [MUC-6, MUC-7] and broadcast news) ..."
Cited by 372 (7 self)
"... on performance, demonstrating that as little as 100,000 words of training data is adequate to get performance around 90% on newswire. Although we present our understanding of why this algorithm performs so well on this class of problems, we believe that significant improvement in performance may still ..."
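
To make "a hidden Markov model that learns to recognize names" concrete, below is a generic first-order Viterbi decode over name-class states. This is a toy stand-in, not IdentiFinder's actual model (whose state and emission structure are richer), and all probabilities are invented.

    # Viterbi decoding over name-class states in log space (toy numbers).
    import numpy as np

    states = ["PERSON", "ORG", "OTHER"]
    start = np.log([0.2, 0.2, 0.6])
    trans = np.log([[0.6, 0.1, 0.3],   # trans[i][j] = P(next = j | current = i)
                    [0.1, 0.6, 0.3],
                    [0.2, 0.2, 0.6]])

    def viterbi(emit):
        """emit[t][i] = log P(word_t | state i); returns the best state path."""
        T, N = emit.shape
        score, back = start + emit[0], np.zeros((T, N), dtype=int)
        for t in range(1, T):
            cand = score[:, None] + trans + emit[t]   # (previous, current)
            back[t], score = cand.argmax(axis=0), cand.max(axis=0)
        path = [int(score.argmax())]
        for t in range(T - 1, 0, -1):                 # follow backpointers
            path.append(int(back[t][path[-1]]))
        return [states[i] for i in reversed(path)]

    emit = np.log([[0.7, 0.1, 0.2], [0.2, 0.2, 0.6], [0.1, 0.7, 0.2]])
    print(viterbi(emit))   # -> ['OTHER', 'OTHER', 'ORG'] for these numbers
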
Statistical shallow semantic parsing despite little training data - Technical report available at http://www.isi.edu/~rahul, 2005
"... Natural language understanding is an essential module in any dialogue system. To obtain satisfactory performance levels, a dialogue system needs a semantic parser/natural language understanding ..."
Cited by 4 (1 self)
Applying the Rasch model: Fundamental measurement in the human sciences, 2001
"... I guess I just grew sick and tired of the same old request after almost every presentation I made at conferences involving developmental psychologists: “Trevor, could you just give me a simple ten minute explanation of what Rasch analysis is all about? ” After a dozen or so inquiries of this nature, ..."
Cited by 319 (4 self)
"... I thought there must be a shortcut. A couple of us Piagetians gathered and developed a little web site for developmentalists interested in Rasch analysis. When we tried to suggest introductory readings for our colleagues — many of whom had very little explicit training in statistics ..."
Neuronal spike trains and stochastic point processes. II. Simultaneous spike trains - Biophys. J., 1967
"... AsTRAcT The statistical analysis oftwo simultaneously observed trainsofneuronal spikes is described, using as a conceptual framework the theory of stochastic point processes. The first statistical question that arises is whether the observed trains are independent; statistical techniques for testing ..."
Cited by 190 (2 self)
"... for simultaneous spike trains are also discussed. For two-train comparisons of irregularly discharging nerve cells, moderate nonstationarities are shown to have little effect on the detection of interactions. Combining repetitive stimulation and simultaneous recording of spike trains from two (or more) neurons yields ..."
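
The independence question the abstract raises is commonly probed with a cross-correlogram: histogram the lags between spikes of the two trains, since a flat histogram is consistent with independence while a peak suggests interaction. A sketch on synthetic data (not the paper's exact procedure):

    # Cross-correlogram of two spike trains; train B partly echoes A at +5 ms.
    import numpy as np

    rng = np.random.default_rng(0)
    a = np.sort(rng.uniform(0, 100, 400))           # spike times of train A (s)
    b = np.sort(a[rng.random(400) < 0.3] + 0.005)   # train B: echo of ~30% of A

    window = 0.05                                   # look at lags within +/- 50 ms
    lags = (b[None, :] - a[:, None]).ravel()        # all pairwise spike-time lags
    lags = lags[np.abs(lags) <= window]
    counts, edges = np.histogram(lags, bins=50, range=(-window, window))
    print(f"peak near lag {edges[counts.argmax()] * 1e3:.0f} ms "
          f"({counts.max()} counts)")
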