### Table 3: Performance of the (exhaustive search) MDL and BE-MDL classifiers

"... In PAGE 21: ... The noise variance was again set to σ_v² = 10⁻⁴. Table 3 presents the classification performance for increasing N of the (exhaustive search) MDL and BE-MDL classifiers estimated over 100 Monte-Carlo runs. The correct column corresponds to the number of times (out of 100) the true model has been selected. ... In PAGE 21: ... It must also be noted that when a spurious component is erroneously detected, its associated estimated variance σ̂_i² is generally very small. Comparing the two sets of results of Table 3, it seems that the BE-MDL method slightly outperforms the exhaustive search MDL for small N. This can be explained as follows. ... In PAGE 22: ... Only the BE-MDL algorithm was tested this time. Comparison of the results of Table 3 with those of Table 6 reveals that there is apparently a loss in performance, but that the algorithm seems to work with non-Gaussian data. Note that in the case of non-Gaussian data, the penalized spectral matching alternative interpretation of the MDL criterion (28) can still be used, but the consistency proof of Theorem 6 is no longer valid. ... ..."

Cited by 2

### Table 5. MDL and Bayesian

2002

"... In PAGE 4: ... (7) from [3] [4]. The experimental results, as shown in Table 5, confirmed that the model selection using our Bayesian criterion resulted in better word recognition rates compared with that using the MDL criterion, especially in the case of small amounts of training data. Table 4. ... ..."

Cited by 4


### Table 14 Example generalization results for SA and MDL.

1998

Cited by 73

### Table 4.10: Example generalization results for SA and MDL.

1998

Cited by 10

### Table 2: The BE-MDL algorithm

"... In PAGE 13: ... full search MDL. The MDL criterion can be combined with a backward elimination (BE) scheme. In the BE search, the algorithm is started by first assuming that all components from the dictionary are present, and then one component at a time is removed in a greedy fashion. Formally, the BE search starts with Γ^(M) ≜ {1, …, M} and proceeds from k = M − 1 down to k = 0 by

Γ^(k) = Γ^(k+1) \ arg max_{i ∈ Γ^(k+1)} L_{Γ^(k+1)\{i}}(θ̂_{Γ^(k+1)\{i}}).  (29)

Once the series of nested index sets {Γ^(k)}_{k=0}^{M} has been computed, the BE-MDL solution is simply the minimizer

Γ̂_BE = arg min_{Γ ∈ {Γ^(k)}_{k=0}^{M}} MDL(Γ).  (30)

Table 2 summarizes the BE-MDL algorithm. It can be shown that the BE scheme preserves the optimality properties of the MDL solution. ... ..."

Cited by 2
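
The greedy elimination loop described in the excerpt above can be sketched as follows. This is a minimal illustration only: the excerpt does not define how models are refit, so `log_likelihood(subset)` (refitted log-likelihood on a component subset) and `mdl_cost(subset)` (the penalized MDL criterion) are hypothetical callables supplied by the user.

```python
def be_mdl(M, log_likelihood, mdl_cost):
    """Backward-elimination MDL sketch: starting from the full dictionary,
    greedily drop the component whose removal hurts the refitted
    log-likelihood the least (eq. 29), then return the member of the
    resulting nested family with minimal MDL cost (eq. 30)."""
    subset = frozenset(range(M))   # start with all M components present
    nested = [subset]              # the nested family of index sets
    while subset:
        # remove the component i whose deletion leaves the highest likelihood
        best_i = max(subset, key=lambda i: log_likelihood(subset - {i}))
        subset = subset - {best_i}
        nested.append(subset)
    return min(nested, key=mdl_cost)
```

Note that only M + 1 candidate subsets are scored by the MDL criterion, versus 2^M for the exhaustive search, which is the practical point of the BE scheme.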

### Table 4: Ad-hoc and proper MDL, size of the polynomial

"... In PAGE 3: ...ncreased. We also present the search space complexity, i.e. the number of generated equations. From Table 4 we notice that, generally, proper MDL generates simpler polynomials than ad-hoc MDL. The complexity of the induced polynomial is smaller in almost all cases, and in some cases much smaller. ... ..."

### TABLE 3 INDICATORS FOR CRITERION 3: Degree to which cumulative environmental impacts of salmon farming on an entire bay or other ecosystem are considered in siting decisions.

2003

### Table 2: Test results for the Finnish and English corpus. Method names are abbreviated: Recursive segmentation and MDL cost (Rec. MDL), Sequential segmentation and ML cost (Seq. ML), and Linguistica (Ling.). The total MDL cost measures the compression of the corpus. However, the cost is computed according to Equation (1), which favors the Recursive MDL method. The final number of morphs in the codebook (#morphs in codebook) is a measure of the size of the morph vocabulary. The relative codebook cost gives the share of the total MDL cost that goes into coding the codebook. The alignment distance is the total distance computed over the sequence of morph/morphemic label pairs in the test data. The unseen aligned pairs is the percentage of all aligned morph/label pairs in the test set that were never observed in the training set. This gives an indication of the generalization capacity of the method to new word forms.

2002

"... In PAGE 8: ... The test data contained 12 053 word types. Test results for the three methods and the two languages are shown in Table 2. We observe different tendencies for Finnish and English. ... ..."

Cited by 39
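
The caption above describes a two-part cost: bits for the morph codebook plus bits for the corpus coded against it. Equation (1) itself is not reproduced in the excerpt, so the following is only a generic illustration of such a split, with an assumed fixed 8 bits per codebook character and a Shannon code length for corpus tokens; it is not the paper's actual criterion.

```python
import math
from collections import Counter

def two_part_mdl_cost(segmented_corpus):
    """Generic two-part MDL cost for a morph segmentation:
    codebook cost (spell out each distinct morph once, at an assumed
    8 bits per character) plus corpus cost (-log2 p(morph) per token
    under the empirical morph distribution)."""
    counts = Counter(m for word in segmented_corpus for m in word)
    total_tokens = sum(counts.values())
    codebook_bits = sum(8 * len(morph) for morph in counts)
    corpus_bits = sum(c * -math.log2(c / total_tokens)
                      for c in counts.values())
    return codebook_bits + corpus_bits
```

Under a scheme like this, the "relative codebook cost" of the caption would be `codebook_bits` divided by the total: a large codebook of rare morphs pushes that share up, while heavy reuse of few morphs pushes it down.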
