Results 1 - 10 of 4,800

An experimental comparison of three methods for constructing ensembles of decision trees

by Thomas G. Dietterich - Bagging, boosting, and randomization. Machine Learning, 2000
"... Abstract. Bagging and boosting are methods that generate a diverse ensemble of classifiers by manipulating the training data given to a “base” learning algorithm. Breiman has pointed out that they rely for their effectiveness on the instability of the base learning algorithm. An alternative approac ..."
Abstract - Cited by 610 (6 self)
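The manipulation this abstract describes (giving each base learner a resampled copy of the training data, so that unstable learners produce diverse votes) can be sketched minimally. The snippet below is an illustrative bagging loop over decision stumps, not the experimental setup of the paper; the dataset, the stump learner, and the ensemble size are placeholder assumptions.

```python
import numpy as np

def fit_stump(X, y):
    # Exhaustively pick the (feature, threshold, polarity) split that
    # minimizes training error on the sample it is given.
    best, best_err = (0, 0.0, 1), np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - t) > 0, 1, -1)
                err = np.mean(pred != y)
                if err < best_err:
                    best_err, best = err, (j, t, pol)
    return best

def stump_predict(stump, X):
    j, t, pol = stump
    return np.where(pol * (X[:, j] - t) > 0, 1, -1)

def bagging(X, y, n_estimators=25, rng=np.random.default_rng(0)):
    # Each base learner sees a bootstrap resample (sampling with
    # replacement) of the original training set.
    n = len(X)
    stumps = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)
        stumps.append(fit_stump(X[idx], y[idx]))
    return stumps

def bagged_predict(stumps, X):
    # Unweighted majority vote over the ensemble.
    votes = sum(stump_predict(s, X) for s in stumps)
    return np.where(votes >= 0, 1, -1)
```

Boosting differs in that it reweights (rather than resamples uniformly) the training data after each round, concentrating on examples the current ensemble gets wrong.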

Online Learning with Kernels

by Jyrki Kivinen, Alexander J. Smola, Robert C. Williamson, 2003
"... Kernel based algorithms such as support vector machines have achieved considerable success in various problems in the batch setting where all of the training data is available in advance. Support vector machines combine the so-called kernel trick with the large margin idea. There has been little u ..."
Abstract - Cited by 2831 (123 self)
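The move from the batch to the online setting that this abstract discusses can be illustrated with the classic kernel perceptron, a simpler relative of the stochastic-gradient algorithms studied in the paper (this is not the paper's own algorithm). The RBF kernel and the mistake-driven update below are standard; the bandwidth is a placeholder assumption.

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    # Gaussian RBF kernel between two feature vectors.
    return np.exp(-gamma * np.sum((x - z) ** 2))

class KernelPerceptron:
    """Online kernel perceptron: the hypothesis is a kernel expansion
    over the examples on which the learner has erred so far, which is
    the kernel trick applied in the online setting."""

    def __init__(self, kernel=rbf):
        self.kernel = kernel
        self.support = []  # stored mistake examples
        self.alphas = []   # their labels act as expansion coefficients

    def predict(self, x):
        s = sum(a * self.kernel(sv, x)
                for a, sv in zip(self.alphas, self.support))
        return 1 if s >= 0 else -1

    def observe(self, x, y):
        # Online update: store the example only when the current
        # hypothesis misclassifies it.
        if self.predict(x) != y:
            self.support.append(x)
            self.alphas.append(y)
```

One practical issue the paper addresses that this sketch does not: the expansion above can grow without bound on a long stream, so online kernel methods typically decay or truncate old coefficients.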

Learning from Little: Comparison of Classifiers Given Little Training

by George Forman, Ira Cohen - in 8th European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD), 2004
"... Many real-world machine learning tasks are faced with the problem of small training sets. Additionally, the class distribution of the training set often does not match the target distribution. In this paper we compare the performance of many learning models on a substantial benchmark of binary t ..."
Abstract - Cited by 23 (2 self)

The Foundations of Cost-Sensitive Learning

by Charles Elkan - In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, 2001
"... This paper revisits the problem of optimal learning and decision-making when different misclassification errors incur different penalties. We characterize precisely but intuitively when a cost matrix is reasonable, and we show how to avoid the mistake of defining a cost matrix that is economically i ..."
Abstract - Cited by 402 (6 self)
that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods. Accordingly, the recommended way of applying one of these methods in a domain with differing misclassification costs is to learn a
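The "reasonable cost matrix" analysis in this paper yields a closed-form decision threshold: given a 2x2 cost matrix, predict positive whenever the estimated probability of the positive class exceeds a threshold determined by the costs. The helper below is a minimal sketch using the C(i, j) convention (cost of predicting class i when the true class is j); the example cost values are illustrative assumptions, not figures from the paper.

```python
def optimal_threshold(cost):
    """Decision threshold p* from a 2x2 cost matrix, where
    cost[i][j] is the cost of predicting class i when the true
    class is j (0 = negative, 1 = positive). Predict positive
    when P(positive | x) >= p*."""
    c00, c01 = cost[0][0], cost[0][1]  # true-negative, false-negative
    c10, c11 = cost[1][0], cost[1][1]  # false-positive, true-positive
    return (c10 - c00) / ((c10 - c00) + (c01 - c11))

# With zero cost for correct decisions, a false positive costing 1,
# and a false negative costing 9, the threshold drops from 0.5 to 0.1,
# so far less evidence is needed before predicting positive.
```

This is why rebalancing the training set is often unnecessary: one can train on the natural distribution and simply move the decision threshold.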

Why Do Firms Train? Theory and Evidence

by Daron Acemoglu, Jörn-Steffen Pischke - Quarterly Journal of Economics, 1998
"... This paper offers a theory of training whereby workers do not pay for general training they receive. The superior information of the current employer regarding its employees’ abilities relative to other firms creates ex post monopsony power, and encourages this employer to provide and pay for traini ..."
Abstract - Cited by 280 (15 self)
for training, even if these skills are general. The model can lead to multiple equilibria. In one equilibrium quits are endogenously high, and as a result employers have limited monopsony power and provide little training, while in another equilibrium quits are low and training is high. Using microdata

An Algorithm that Learns What's in a Name

by Daniel M. Bikel, Richard Schwartz, Ralph M. Weischedel, 1999
"... In this paper, we present IdentiFinder^TM, a hidden Markov model that learns to recognize and classify names, dates, times, and numerical quantities. We have evaluated the model in English (based on data from the Sixth and Seventh Message Understanding Conferences [MUC-6, MUC-7] and broadcast news) ..."
Abstract - Cited by 372 (7 self)
on performance, demonstrating that as little as 100,000 words of training data is adequate to get performance around 90% on newswire. Although we present our understanding of why this algorithm performs so well on this class of problems, we believe that significant improvement in performance may still

Statistical shallow semantic parsing despite little training data. Technical report available at http://www.isi.edu/~rahul

by Rahul Bhagat, 2005
"... Natural language understanding is an essential module in any dialogue system. To obtain satisfactory performance levels, a dialogue system needs a semantic parser/natural language understanding ..."
Abstract - Cited by 4 (1 self)

Applying the Rasch model: Fundamental measurement in the human sciences

by Trevor G. Bond, Christine M. Fox, 2001
"... I guess I just grew sick and tired of the same old request after almost every presentation I made at conferences involving developmental psychologists: “Trevor, could you just give me a simple ten minute explanation of what Rasch analysis is all about?” After a dozen or so inquiries of this nature, ..."
Abstract - Cited by 319 (4 self)
, I thought there must be a shortcut. A couple of us Piagetians gathered and developed a little web site for developmentalists interested in Rasch analysis. When we tried to suggest introductory readings for our colleagues — many of whom had very little explicit training in statistics

Neuronal spike trains and stochastic point processes. II. Simultaneous spike trains

by Donald H. Perkel, George L. Gerstein, George P. Moore - Biophys. J, 1967
"... Abstract. The statistical analysis of two simultaneously observed trains of neuronal spikes is described, using as a conceptual framework the theory of stochastic point processes. The first statistical question that arises is whether the observed trains are independent; statistical techniques for testing ..."
Abstract - Cited by 190 (2 self)
for simultaneous spike trains are also discussed. For two-train comparisons of irregularly discharging nerve cells, moderate nonstationarities are shown to have little effect on the detection of interactions. Combining repetitive stimulation and simultaneous recording of spike trains from two (or more) neurons yields

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University