Supervised and Unsupervised Discretization of Continuous Features (1995)
Download Links
- [ai.stanford.edu]
- [robotics.stanford.edu]
- [www.dsi.unive.it]
Other Repositories/Bibliography
- DBLP
Venue: Machine Learning: Proceedings of the Twelfth International Conference
Citations: 540 (11 self)
Citations
1666 | Self-Organization and Associative Memory - Kohonen - 1984
Citation Context: ...itioning of a continuous feature in O(m(log m + k^2)) time, where k is the number of intervals and m is the number of instances. This method has yet to be tested experimentally. Vector Quantization (Kohonen 1989) is also related to the notion of discretization. This method attempts to partition an N-dimensional continuous space into a Voronoi Tessellation and then represent the set of points in each region b...
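The excerpt only sketches the idea, so here is a minimal illustration of quantization-induced Voronoi regions: points are assigned to the nearest of k codebook vectors, and each vector is moved to the mean of its region. This is a k-means-style update for illustration only, not Kohonen's actual algorithm; the function name `vector_quantize` and its parameters are hypothetical.

```python
import numpy as np

def vector_quantize(points, k, iters=20, seed=0):
    """Fit k codebook vectors to the data; each point then belongs to the
    Voronoi region of its nearest codebook vector. Returns (codebook,
    region index per point). k-means-style updates, for illustration."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    codebook = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        # Nearest-codebook assignment = Voronoi tessellation membership.
        dists = np.linalg.norm(points[:, None, :] - codebook[None, :, :], axis=2)
        regions = dists.argmin(axis=1)
        for j in range(k):
            members = points[regions == j]
            if len(members):                 # move vector to its region's mean
                codebook[j] = members.mean(axis=0)
    return codebook, regions
```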
1281 | A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection - Kohavi - 1995
Citation Context: ...ne continuous feature. For the datasets that had more than 3000 test instances, we ran a single train/test experiment and report the theoretical standard deviation estimated using the Binomial model (Kohavi 1995). For the remaining datasets, we ran five-fold cross-validation and report the standard deviation of the cross-validation. Table 2 describes the datasets with the last column showing the accuracy of ...
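For readers unfamiliar with the Binomial model mentioned above: a single train/test split treats each test prediction as an independent Bernoulli trial, so the standard deviation of the accuracy estimate has a closed form. A minimal sketch (the helper name `binomial_std` is ours, not from the paper):

```python
import math

def binomial_std(accuracy, n_test):
    """Theoretical std. dev. of an accuracy estimate under the Binomial
    model: n_test independent Bernoulli trials with success rate `accuracy`."""
    return math.sqrt(accuracy * (1.0 - accuracy) / n_test)

# e.g. 91.65% accuracy on 3000 test instances:
#   binomial_std(0.9165, 3000)  ->  ~0.0050, i.e. about half a percentage point
```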
1059 | C4.5: Programs for Machine Learning - Quinlan - 1993
Citation Context: ... transformations for a learning algorithm, and no careful study of how this discretization affects the learning process is performed (Weiss & Kulikowski 1991). In decision tree methods, such as C4.5 (Quinlan 1993), continuous values are discretized during the learning process. The advantages of discretizing during the learning process have not yet been shown. In this paper, we include such a comparison. Other...
854 | UCI repository of machine learning databases - Murphy, Aha - 1995
Citation Context: ...istinct observed values for each attribute. The heuristic was chosen based on examining S-plus's histogram binning algorithm (Spector 1994). We chose sixteen datasets from the U.C. Irvine repository (Murphy & Aha 1994) that each had at least one continuous feature. For the datasets that had more than 3000 test instances, we ran a single train/test experiment and report the theoretical standard deviation estimated ...
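The binning referred to here is unsupervised equal-width discretization. A minimal sketch, with the bin count k left as a parameter since the excerpt's exact heuristic (based on S-plus's histogram binning) is not spelled out; `equal_width_bins` is a hypothetical name:

```python
import numpy as np

def equal_width_bins(values, k):
    """Unsupervised equal-width binning: split [min, max] into k
    equal-width intervals and return each value's interval index (0..k-1)."""
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    if lo == hi:                               # constant feature: one bin
        return np.zeros(len(values), dtype=int)
    edges = np.linspace(lo, hi, k + 1)[1:-1]   # k-1 interior boundaries
    return np.searchsorted(edges, values, side="right")
```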
829 | Multi-interval discretization of continuous-valued attributes for classification learning - Fayyad, Irani - 1993
Citation Context: ...ization methods require some parameter, k, indicating the maximum number of intervals to produce in discretizing a feature. Static methods, such as binning, entropy-based partitioning (Catlett 1991b, Fayyad & Irani 1993, Pfahringer 1995), and the 1R algorithm (Holte 1993), perform one discretization pass of the data for each feature and determine the value of k for each feature independent of the other features. Dyn...
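As context for the entropy-based partitioning mentioned above, here is a hedged sketch of recursive minimum-entropy splitting: each cut point is chosen to minimize the class-weighted entropy of the two sides. Fayyad & Irani's actual method stops recursing via an MDL criterion; this sketch substitutes a simple cap on the number of intervals, and all function names are illustrative.

```python
import math
from collections import Counter

def entropy(labels):
    """Class entropy of a list of labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_cut(pairs):
    """pairs: (value, label) sorted by value. Return the cut point that
    minimizes the weighted entropy of the two sides, or None if no cut."""
    labels = [y for _, y in pairs]
    best = None
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue                           # only cut between distinct values
        left, right = labels[:i], labels[i:]
        w = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
        if best is None or w < best[0]:
            best = (w, (pairs[i - 1][0] + pairs[i][0]) / 2)
    return None if best is None else best[1]

def entropy_partition(pairs, max_intervals):
    """Recursive minimum-entropy partitioning; returns cut points.
    Fayyad & Irani stop via an MDL test -- a fixed interval cap is
    used here only to keep the sketch short."""
    pairs = sorted(pairs, key=lambda p: p[0])
    if max_intervals <= 1:
        return []
    cut = best_cut(pairs)
    if cut is None:
        return []
    left = [p for p in pairs if p[0] <= cut]
    right = [p for p in pairs if p[0] > cut]
    k_left = max_intervals // 2
    return (entropy_partition(left, k_left) + [cut]
            + entropy_partition(right, max_intervals - k_left))
```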
755 | Irrelevant Features and the Subset Selection Problem - John, Kohavi, et al. - 1994 |
547 | Very simple classification rules perform well on most commonly used datasets - Holte - 1993
Citation Context: ...imum number of intervals to produce in discretizing a feature. Static methods, such as binning, entropy-based partitioning (Catlett 1991b, Fayyad & Irani 1993, Pfahringer 1995), and the 1R algorithm (Holte 1993), perform one discretization pass of the data for each feature and determine the value of k for each feature independent of the other features. Dynamic methods conduct a search through the space of p...
439 | An analysis of Bayesian classifiers - Langley, Iba, et al. - 1992 |
305 | Learning from observation: conceptual clustering - Michalski, Stepp - 1983
Citation Context: ...non considering the fact that C4.5 is capable of locally discretizing features. 1 Introduction Many algorithms developed in the machine learning community focus on learning in nominal feature spaces (Michalski & Stepp 1983, Kohavi 1994). However, many real-world classification tasks exist that involve continuous features where such algorithms could not be applied unless the continuous features are first discretized. Conti...
217 | Boolean feature discovery in empirical learning - Pagallo, Haussler - 1990
Citation Context: ...tion method and did not significantly degrade on any dataset, although it did decrease slightly on some. The entropy-based discretization is a global method and does not suffer from data fragmentation (Pagallo & Haussler 1990). Since there is no significant...
[Excerpt runs into a results table: C4.5 accuracy ± std. dev. under each discretization method]
Dataset | Continuous | Bin-log ℓ | Entropy | 1RD | Ten Bins
1 anneal | 91.65 ± 1.60 | 90.32 ± 1.06 | 89.65 ± 1.00 | 87.20 ± 1.66 | 89.87 ± 1.30
2 australian | 85.36 ± 0.74 | 84.06 ± 0.97 | 85.65 ± 1.82 | 85...
204 | ChiMerge: Discretization of numeric attributes - Kerber - 1992
Citation Context: ...tting partition boundaries, it is likely that classification information will be lost by binning as a result of combining values that are strongly associated with different classes into the same bin (Kerber 1992). In some cases this could make effective classification much more difficult. A variation of equal frequency intervals (maximal marginal entropy) adjusts the boundaries to decrease entropy at each...
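Equal frequency intervals, the starting point for the maximal-marginal-entropy variation described above, place boundaries at quantiles so each bin receives roughly the same number of instances. A minimal sketch (the boundary-adjustment step that decreases entropy is omitted; `equal_frequency_edges` is a hypothetical name):

```python
import numpy as np

def equal_frequency_edges(values, k):
    """Boundaries at the 1/k, 2/k, ..., (k-1)/k quantiles, so each of the
    k bins holds roughly the same number of instances. The entropy-based
    boundary adjustment described above would run as a second pass."""
    qs = np.linspace(0, 1, k + 1)[1:-1]
    return np.quantile(np.asarray(values, dtype=float), qs)

# bin membership then follows from np.searchsorted(edges, values)
```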
193 | On changing continuous attributes into ordered discrete attributes - Catlett - 1991
Citation Context: ... In this paper, we include such a comparison. Other reasons for variable discretization, aside from the algorithmic requirements mentioned above, include increasing the speed of induction algorithms (Catlett 1991b) and viewing General Logic Diagrams (Michalski 1978) of the induced classifier. In this paper, we address the effects of discretization on learning accuracy by comparing a range of discretization me...
103 | Megainduction: Machine Learning on Very Large Databases - Catlett - 1991
Citation Context: ... In this paper, we include such a comparison. Other reasons for variable discretization, aside from the algorithmic requirements mentioned above, include increasing the speed of induction algorithms (Catlett 1991b) and viewing General Logic Diagrams (Michalski 1978) of the induced classifier. In this paper, we address the effects of discretization on learning accuracy by comparing a range of discretization me...
98 | MLC++: a machine learning library - Kohavi, John, et al. - 1994 |
77 | Induction of one-level decision trees - Iba, Langley - 1992
Citation Context: ...soever and is thus an unsupervised discretization method. 3.2 Holte's 1R Discretizer Holte (1993) describes a simple classifier that induces one-level decision trees, sometimes called decision stumps (Iba & Langley 1992). In order to properly deal with domains that contain continuous valued features, a simple supervised discretization method is given. This method, referred to here as 1RD (OneRule Discretizer), sorts...
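Since the excerpt is cut off at "sorts...", here is a hedged reconstruction of the flavor of 1RD: sort instances by feature value and grow each interval until it holds at least a minimum number of examples of its majority class, closing the interval only where the feature value changes. The min_count default of 6 follows Holte's SMALL parameter; other details (such as merging adjacent intervals with the same majority class) are simplified:

```python
from collections import Counter

def one_r_discretize(pairs, min_count=6):
    """Hedged sketch of 1RD (not Holte's exact procedure): grow each
    interval until it contains at least min_count examples of its
    majority class, cutting only between distinct feature values."""
    pairs = sorted(pairs, key=lambda p: p[0])   # sort by feature value
    cuts, counts = [], Counter()
    for i, (value, label) in enumerate(pairs):
        counts[label] += 1
        majority = counts.most_common(1)[0][1]  # size of current majority class
        nxt = pairs[i + 1][0] if i + 1 < len(pairs) else None
        if majority >= min_count and nxt is not None and nxt != value:
            cuts.append((value + nxt) / 2)      # boundary between the two values
            counts.clear()                      # start a fresh interval
    return cuts
```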
62 | Global Discretization of Continuous Attributes as Preprocessing for Machine Learning - Chmielewski, Grzymala-Busse - 1996
Citation Context: ...al vs. local, supervised vs. unsupervised, and static vs. dynamic. Local methods, as exemplified by C4.5, produce partitions that are applied to localized regions of the instance space. Global methods (Chmielewski & Grzymala-Busse 1994), such as binning, produce a mesh over the entire n-dimensional continuous instance space, where each feature is partitioned into regions independent of the other attributes. The mesh contains ∏_{i=1}^{n}...
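The truncated product above counts mesh regions: if feature i is split into k_i intervals, a global discretization induces ∏ k_i cells over the instance space, which is why global meshes grow quickly with dimensionality. A trivial illustration:

```python
from math import prod

# With k_i intervals on feature i, a global mesh has prod(k_i) regions.
def mesh_regions(intervals_per_feature):
    return prod(intervals_per_feature)

# e.g. five features with 4 intervals each:
#   mesh_regions([4, 4, 4, 4, 4])  ->  1024
```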
51 | Bottom-Up Induction of Oblivious Read-Once Decision Graphs - Kohavi - 1994
Citation Context: ... that C4.5 is capable of locally discretizing features. 1 Introduction Many algorithms developed in the machine learning community focus on learning in nominal feature spaces (Michalski & Stepp 1983, Kohavi 1994). However, many real-world classification tasks exist that involve continuous features where such algorithms could not be applied unless the continuous features are first discretized. Continuous vari...
48 | Compression-based discretization of continuous attributes - Pfahringer - 1995
Citation Context: ...re some parameter, k, indicating the maximum number of intervals to produce in discretizing a feature. Static methods, such as binning, entropy-based partitioning (Catlett 1991b, Fayyad & Irani 1993, Pfahringer 1995), and the 1R algorithm (Holte 1993), perform one discretization pass of the data for each feature and determine the value of k for each feature independent of the other features. Dynamic methods cond...
38 | Efficient agnostic PAC-learning with simple hypotheses - Maass - 1994 |
36 | An efficient algorithm for finding multi-way splits for decision trees - Fulton, Kasif, et al. - 1989 |
26 | A Planar Geometric Model for Representing Multidimensional Discrete Spaces and Multiple-valued Logic Functions - Michalski - 1978
Citation Context: ... reasons for variable discretization, aside from the algorithmic requirements mentioned above, include increasing the speed of induction algorithms (Catlett 1991b) and viewing General Logic Diagrams (Michalski 1978) of the induced classifier. In this paper, we address the effects of discretization on learning accuracy by comparing a range of discretization methods using C4.5 and a Naive Bayes classifier. The Na...
13 | Determination of Quantization Intervals in Rule Based Model for Dynamic Systems - Chan, Batur, et al. - 1991 |
4 | An efficient algorithm for finding multi-way splits for decision trees (unpublished paper) - Fulton, Kasif, et al. - 1994 |
2 | Information synthesis based on hierarchical entropy discretization - Chiu, Cheung - 1990 |