Results 1–10 of 47
Separate-and-conquer rule learning
 Artificial Intelligence Review
, 1999
"... This paper is a survey of inductive rule learning algorithms that use a separateandconquer strategy. This strategy can be traced back to the AQ learning system and still enjoys popularity as can be seen from its frequent use in inductive logic programming systems. We will put this wide variety of ..."
Abstract

Cited by 168 (29 self)
This paper is a survey of inductive rule learning algorithms that use a separate-and-conquer strategy. This strategy can be traced back to the AQ learning system and still enjoys popularity, as can be seen from its frequent use in inductive logic programming systems. We will put this wide variety of algorithms into a single framework and analyze them along three different dimensions, namely their search, language and overfitting avoidance biases.
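The covering loop that the survey describes can be sketched in a few lines. The greedy single-threshold rule learner below (`learn_one_rule`) is a hypothetical stand-in for illustration only, not AQ or any specific system covered by the paper:

```python
def learn_one_rule(examples):
    """Greedily pick the single test x[i] >= threshold that maximizes
    (positives covered - negatives covered)."""
    best = None  # (score, feature index, threshold)
    for i in range(len(examples[0][0])):
        for x, _ in examples:
            thr = x[i]
            covered = [(xx, y) for xx, y in examples if xx[i] >= thr]
            score = sum(1 if y == 1 else -1 for _, y in covered)
            if best is None or score > best[0]:
                best = (score, i, thr)
    _, i, thr = best
    return lambda x, i=i, thr=thr: x[i] >= thr

def separate_and_conquer(examples):
    """'Conquer' one rule at a time, then 'separate' (remove) the
    examples it covers, until no positive examples remain."""
    rules, remaining = [], list(examples)
    while any(y == 1 for _, y in remaining):
        rule = learn_one_rule(remaining)
        covered = [(x, y) for x, y in remaining if rule(x)]
        if not any(y == 1 for _, y in covered):
            break  # no progress: data not separable in this rule language
        rules.append(rule)
        remaining = [(x, y) for x, y in remaining if not rule(x)]
    return rules
```

On a toy set such as `[([0.9], 1), ([0.8], 1), ([0.1], 0)]` the loop terminates with a single rule covering both positive examples.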
Tree induction vs. logistic regression: A learning-curve analysis
 CEDER WORKING PAPER #IS0102, STERN SCHOOL OF BUSINESS
, 2001
"... Tree induction and logistic regression are two standard, offtheshelf methods for building models for classi cation. We present a largescale experimental comparison of logistic regression and tree induction, assessing classification accuracy and the quality of rankings based on classmembership pr ..."
Abstract

Cited by 86 (16 self)
Tree induction and logistic regression are two standard, off-the-shelf methods for building models for classification. We present a large-scale experimental comparison of logistic regression and tree induction, assessing classification accuracy and the quality of rankings based on class-membership probabilities. We use a learning-curve analysis to examine the relationship of these measures to the size of the training set. The results of the study show several remarkable things. (1) Contrary to prior observations, logistic regression does not generally outperform tree induction. (2) More specifically, and not surprisingly, logistic regression is better for smaller training sets and tree induction for larger data sets. Importantly, this often holds for training sets drawn from the same domain (i.e., the learning curves cross), so conclusions about induction-algorithm superiority on a given domain must be based on an analysis of the learning curves. (3) Contrary to conventional wisdom, tree induction is effective at producing probability-based rankings, although apparently comparatively less so for a given training-set size than at making classifications. Finally, (4) the domains on which tree induction and logistic regression are ultimately preferable can be characterized surprisingly well by a simple measure of signal-to-noise ratio.
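The learning-curve methodology can be sketched with scikit-learn: train both learners on nested samples of increasing size and record accuracy on a fixed held-out set. The synthetic task, sample sizes, and default hyperparameters below are illustrative assumptions, not the paper's benchmark suite:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

def learning_curves(sizes, seed=0):
    """Accuracy of each learner at each training-set size, on a shared
    held-out test set, so the two curves can be compared for crossing."""
    X, y = make_classification(n_samples=4000, n_features=20,
                               random_state=seed)
    X_test, y_test = X[3000:], y[3000:]        # fixed held-out set
    curves = {"logistic": [], "tree": []}
    for n in sizes:
        X_tr, y_tr = X[:n], y[:n]              # nested training samples
        lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        dt = DecisionTreeClassifier(random_state=seed).fit(X_tr, y_tr)
        curves["logistic"].append(lr.score(X_test, y_test))
        curves["tree"].append(dt.score(X_test, y_test))
    return curves
```

Plotting both lists against `sizes` gives the learning curves whose crossing point the paper's analysis turns on.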
Logic Regression
 Journal of Computational and Graphical Statistics
, 2003
"... The odyssey cohort study consists of 8,394 participants who donated blood samples in 1974 and 1989 in Washington County, Maryland. The cohort has been followed until 2001, and environmental factors such as smoking and dietary intake are available. The goals of the study include finding associatio ..."
Abstract

Cited by 76 (13 self)
The odyssey cohort study consists of 8,394 participants who donated blood samples in 1974 and 1989 in Washington County, Maryland. The cohort has been followed until 2001, and environmental factors such as smoking and dietary intake are available. The goals of the study include finding associations between polymorphisms in candidate genes and disease (including cancer and cardiovascular disease). Particularly, gene-environment and gene-gene interactions associated with disease are of interest. Currently, SNP data from 51 sites are available for some 1600 subjects.
Structural Regression Trees
, 1996
"... In many realworld domains the task of machine learning algorithms is to learn a theory predicting numerical values. In particular several standard test domains used in Inductive Logic Programming (ILP) are concerned with predicting numerical values from examples and relational and mostly nondeterm ..."
Abstract

Cited by 67 (10 self)
In many real-world domains the task of machine learning algorithms is to learn a theory predicting numerical values. In particular, several standard test domains used in Inductive Logic Programming (ILP) are concerned with predicting numerical values from examples and relational, mostly nondeterminate background knowledge. However, so far only one ILP algorithm, a covering algorithm called FORS, can both predict numbers and cope with nondeterminate background knowledge. In this paper we present Structural Regression Trees (SRT), a new algorithm which can be applied to the above class of problems by integrating the statistical method of regression trees into ILP. SRT constructs a tree containing a literal (an atomic formula or its negation) or a conjunction of literals in each node, and assigns a numerical value to each leaf. SRT provides more comprehensible results than purely statistical methods, and can be applied to a class of problems most other ILP syste...
Noisy Time Series Prediction using a Recurrent Neural Network and Grammatical Inference
 Machine Learning
, 2001
"... Financial forecasting is an example of a signal processing problem which is challenging due to small sample sizes, high noise, nonstationarity, and nonlinearity. Neural networks have been very successful in a number of signal processing applications. We discuss fundamental limitations and inherent ..."
Abstract

Cited by 58 (0 self)
Financial forecasting is an example of a signal processing problem which is challenging due to small sample sizes, high noise, nonstationarity, and nonlinearity. Neural networks have been very successful in a number of signal processing applications. We discuss fundamental limitations and inherent difficulties when using neural networks for the processing of high-noise, small-sample-size signals. We introduce a new intelligent signal processing method which addresses the difficulties. The method proposed uses conversion into a symbolic representation with a self-organizing map, and grammatical inference with recurrent neural networks. We apply the method to the prediction of daily foreign exchange rates, addressing difficulties with nonstationarity, overfitting, and unequal a priori class probabilities, and we find significant predictability in comprehensive experiments covering 5 different foreign exchange rates. The method correctly predicts the direction of change for th...
Discovering Simple Rules in Complex Data: A Meta-Learning Algorithm and Some Surprising Musical Discoveries
 ARTIFICIAL INTELLIGENCE
, 2001
"... This article presents a new rule discovery algorithm named PLCG that can find simple, robust partial rule models (sets of classification rules) in complex data where it is difficult or impossible to find models that completely account for all the phenomena of interest. Technically speaking, ..."
Abstract

Cited by 38 (12 self)
This article presents a new rule discovery algorithm named PLCG that can find simple, robust partial rule models (sets of classification rules) in complex data where it is difficult or impossible to find models that completely account for all the phenomena of interest. Technically speaking,
Data Mining with Decision Trees and Decision Rules
 FUTURE GENERATION COMPUTER SYSTEMS
, 1997
"... This paper describes the use of decision tree and rule induction in data mining applications. Of methods for classification and regression that have been developed in the fields of pattern recognition, statistics, and machine learning, these areofparticular interest for data mining since they utiliz ..."
Abstract

Cited by 35 (3 self)
This paper describes the use of decision tree and rule induction in data mining applications. Of methods for classification and regression that have been developed in the fields of pattern recognition, statistics, and machine learning, these are of particular interest for data mining since they utilize symbolic and interpretable representations. Symbolic solutions can provide a high degree of insight into the decision boundaries that exist in the data, and the logic underlying them. This aspect makes these predictive mining techniques particularly attractive in commercial and industrial data mining applications. We present here a synopsis of some major state-of-the-art tree and rule mining methodologies, as well as some recent advances.
Functional Models for Regression Tree Leaves
, 1997
"... This paper presents a study about functional models for regression tree leaves. We evaluate experimentally several alternatives to the averages commonly used in regression trees. We have implemented a regression tree learner (HTL) that is able to use several alternative models in the tree leaves. We ..."
Abstract

Cited by 34 (3 self)
This paper presents a study about functional models for regression tree leaves. We evaluate experimentally several alternatives to the averages commonly used in regression trees. We have implemented a regression tree learner (HTL) that is able to use several alternative models in the tree leaves. We study the effect on accuracy and the computational cost of these alternatives. The experiments carried out on 11 data sets revealed that it is possible to significantly outperform the "naive" averages of regression trees. Among the four alternative models that we evaluated, kernel regressors were usually the best in terms of accuracy. Our study also indicates that by integrating regression trees with other regression approaches we are able to overcome the limitations of individual methods both in terms of accuracy as well as in computational efficiency.
1 INTRODUCTION
In this paper we present an empirical evaluation of alternative regression models for the leaves of decision trees that dea...
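The core idea, replacing a leaf's constant (mean) prediction with a local model, can be sketched on a one-split "stump". The Gaussian Nadaraya-Watson kernel below is an illustrative choice, not HTL's implementation:

```python
import math

def kernel_predict(leaf_xs, leaf_ys, x, bandwidth=1.0):
    """Nadaraya-Watson: a distance-weighted average of the leaf's targets."""
    weights = [math.exp(-((x - xi) / bandwidth) ** 2) for xi in leaf_xs]
    return sum(w * yi for w, yi in zip(weights, leaf_ys)) / sum(weights)

def stump_predict(xs, ys, split, x, leaf_model="mean"):
    """Route x to the left or right leaf of a single split, then apply
    the chosen leaf model (constant mean vs. kernel regressor)."""
    leaf = [(xi, yi) for xi, yi in zip(xs, ys)
            if (xi <= split) == (x <= split)]
    leaf_xs = [xi for xi, _ in leaf]
    leaf_ys = [yi for _, yi in leaf]
    if leaf_model == "mean":
        return sum(leaf_ys) / len(leaf_ys)
    return kernel_predict(leaf_xs, leaf_ys, x)
```

Swapping `leaf_model` between `"mean"` and `"kernel"` on held-out points is exactly the kind of comparison the study runs, at the scale of full trees and 11 data sets.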
Using AI and Machine Learning to Study Expressive Music Performance: project Survey and First Report
, 2001
"... This article presents a longterm interdisciplinary research project situated at the intersection of the scientific disciplines of Musicology and Artificial Intelligence. The goal is to develop AI, and in particular machine learning and data mining, methods to study the complex phenomenon of expres ..."
Abstract

Cited by 30 (13 self)
This article presents a long-term interdisciplinary research project situated at the intersection of the scientific disciplines of Musicology and Artificial Intelligence. The goal is to develop AI, and in particular machine learning and data mining, methods to study the complex phenomenon of expressive music performance. Formulating formal, quantitative models of expressive performance is one of the big open research problems in contemporary (empirical and cognitive) musicology. Our project develops a new direction in this field: we use inductive learning techniques to discover general and valid expression principles from (large amounts of) real performance data. The project is currently starting its third year and is planned to continue for at least four more years. In the
Generating Rule Sets from Model Trees
 in Proc. of the 12th Australian Joint Conf. on Artificial Intelligence
"... Abstract. Model trees—decision trees with linear models at the leaf nodes—have recently emerged as an accurate method for numeric prediction that produces understandable models. However, it is known that decision lists—ordered sets of IfThen rules—have the potential to be more compact and therefore ..."
Abstract

Cited by 25 (0 self)
Abstract. Model trees—decision trees with linear models at the leaf nodes—have recently emerged as an accurate method for numeric prediction that produces understandable models. However, it is known that decision lists—ordered sets of If-Then rules—have the potential to be more compact and therefore more understandable than their tree counterparts. We present an algorithm for inducing simple, accurate decision lists from model trees. Model trees are built repeatedly and the best rule is selected at each iteration. This method produces rule sets that are as accurate as but smaller than the model tree constructed from the entire dataset. Experimental results for various heuristics which attempt to find a compromise between rule accuracy and rule coverage are reported. We show that our method produces comparably accurate and smaller rule sets than the commercial state-of-the-art rule learning system Cubist.
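A hedged sketch of the iterative extraction scheme the abstract outlines: fit a small tree on the remaining data, keep only the best-covering leaf as a rule, discard the examples it covers, and repeat. scikit-learn's `DecisionTreeRegressor` stands in for a true model-tree learner, and representing each rule as (tree, leaf id, leaf mean) is an illustrative simplification:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def tree_to_rules(X, y, min_remaining=5, max_depth=3, seed=0):
    """Repeatedly fit a small regression tree, keep the leaf covering
    the most remaining examples as one rule, and drop its examples."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    rules = []
    while len(y) > min_remaining:
        tree = DecisionTreeRegressor(max_depth=max_depth,
                                     random_state=seed).fit(X, y)
        leaves = tree.apply(X)                    # leaf id per example
        ids, counts = np.unique(leaves, return_counts=True)
        best = ids[np.argmax(counts)]             # best-covering leaf
        covered = leaves == best
        rules.append((tree, best, float(y[covered].mean())))
        X, y = X[~covered], y[~covered]           # covering step: separate
    return rules
```

To apply a stored rule to a new point `x`, check `tree.apply([x]) == leaf_id` and, on a match, predict the stored leaf value; the ordered list of rules is the decision list.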