### Table 5: Solution Statistics

"... In PAGE 8: ... We will also examine the formulation with the violated 2-, 3-, 4- and 5-cycle breaking (CB) cuts at the root node. The solution statistics of these three cases are given in Table 5. ... ..."

### Table 7: Solution statistics of Examples 1, 2 and 3

"... In PAGE 12: ... Bound Gap [(UB-ZH)/ZH] 0.70 0.65 0.71 Computational Results The computational results for Examples 1, 2 and 3 are given in Table 7. An optimality tolerance of 0.... ..."

### Table 3. Tree size

"... In PAGE 10: ... Note that LMT also outperforms PLUS, even though the selection of the best result from the three modes for PLUS introduces an optimistic bias. To answer our third question, Table 3 gives the observed average tree sizes (measured in number of leaves) for LMT, C4.5 and PLUS.... ..."

### Table or tree size

2004

Cited by 25


### Table 3. Decision Tree Size

1996

"... In PAGE 12: ... The first experiment compares the performance of these algorithms to the full pruning algorithm. Table 2 shows the classification accuracy of the different algorithms while Table 3 shows the sizes of the final decision tree. The execution times of the three algorithms are nearly the same and have therefore not been shown.... ..."

Cited by 18

### Table 2: Initial tree sizes

1997

Cited by 8

### Table 4: Pruned-Tree Size

1995

"... In PAGE 4: ... A smaller decision tree is desirable since it provides more compact class descriptions, unless the smaller tree size leads to a loss in accuracy. Table 4 shows the sizes for each of the datasets. The results show that the MDL pruning algorithm achieves trees that are significantly smaller than the trees generated by the Pessimistic and C4.... ..."

Cited by 47
