### Table 5.1 - Connections Satisfied

"... The blocking metric, which will show improvements that are important (that is, that pass the trumping criterion introduced earlier) as various levels of optimization are introduced, is shown in the following figures. ..."

2006

### Table 1. Comparison of results for various approaches.

"... In PAGE 8: ... 4. Numerical Results Table 1 compares the balance and uniformity (t,s) of (n,2) de Bruijn sequences... In PAGE 9: ... In the case of Algorithm II, the characteristics of the sequences obtained by the optimal mappings with respect to both balance and uniformity criteria are shown. ------------------------- Table 1 goes here ------------------------- In Table 1, we observe that: 1. Although Algorithm I generates sequences with optimal uniformity (minimum s), the corresponding balance criterion t is rather large.... ..."

### Table 6. Comparison of Control Policies with Criterion II.

1997

"... In PAGE 15: ... Also used in this table is the second criterion, which obtains an initial state for each of the methods for which the best hedging point is that initial state. Finally, in Table 6 we use... In PAGE 22: ...nd KC for initial (x1, x2) = (2.64, 1.58) is (2.62, 1.60). Table 6 uses the third criterion so that the parameter values used for all different initial states are the ones that appear in Table 5 in the row with the initial state (0,0). In Table 7, we provide computational times for obtaining HC and TBC with respect to problems C1-C4.... In PAGE 24: ... Indeed, in cases with large positive surplus, the value of 1 for KC must be smaller than that for HC. Furthermore, in these cases with positive surplus, the cost differences in Table 6 must be larger than those in Table 5, since Table 6 uses hedging point parameters that are best for initial (x1, x2) = (0,0). These parameters are the same for HC and KC.... In PAGE 24: ... For example, this happens when the initial (x1, x2) = (0,50); see Table 5. As should be expected, the difference in cost for initial (x1, x2) = (0,50) in Table 6 is quite large compared to the corresponding difference in Table 5. Asymptotic Behavior of HC, TBC and OC Before concluding the paper in the next section, let us make some important remarks regarding asymptotic optimality.... ..."

Cited by 5

### Table 5: Ordinal Optimization applied to original criterion

### Table 6: Ordinal Optimization applied to surrogate criterion

### Table 6. Feature Importance in 3-Classes Using Entropy Criterion

2003

"... In PAGE 10: ... In most cases the results are similar to Multiple Linear Regressions or tree-based software that use statistical methods to measure feature importance. Table 6 shows the importance of the six features in the 3-classes case using the Entropy splitting criterion. Based on entropy, a statistical property called information gain measures how well a given feature separates the training examples in relation to their target classes.... ..."

Cited by 3
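The excerpt above refers to information gain: the reduction in label entropy obtained by splitting the training examples on a feature. A minimal sketch of that computation (function names are illustrative, not taken from the cited paper):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Entropy of the labels minus the weighted entropy of the label
    subsets produced by splitting on each distinct feature value."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain
```

A feature that separates the classes perfectly recovers the full label entropy as gain; an uninformative feature yields a gain near zero.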

### Table 1: Optimal Values for Fisher and Optimal Separation criterion

2005

"... In PAGE 4: ... The training data was used for checking the Fisher (4) as well as the Optimal Separation (12) criterion, where optimization of (4) is done under the CDA (5) as well as under the FSDA (6) constraint. Table 1 shows the good performance of the FSDA method for the Fisher criterion, and also the bad performance of the Optimal Separation Projection method in terms of the Fisher criterion but its good performance, especially for r = 2, in terms of the OS criterion. But this is expected, as the OSP method aims at minimizing this criterion.... ..."

### TABLE 2 Correlations Between Performance and Deviations From the Optimal Reward Criterion and the Optimal Accuracy Criterion

### Table 1: Clustering Criterion Functions.

2002

"... In PAGE 3: ... For those partitional clustering algorithms, the clustering problem can be stated as computing a clustering solution such that the value of a particular criterion function is optimized. In this paper we use six different clustering criterion functions that are defined in Table 1 and were recently compared and analyzed in a study presented in [45]. These functions optimize various aspects of intra-cluster similarity, inter-cluster dissimilarity, and their combinations, and represent some of the most widely-used criterion functions for document clustering.... ..."

Cited by 69
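The excerpt above concerns internal clustering criterion functions. One widely used example of the intra-cluster-similarity family it mentions is the criterion that sums, over all clusters, the norm of each cluster's composite (summed) document vector; for unit-length document vectors this rewards cohesive clusters. A minimal sketch (the specific criterion chosen here is an assumption for illustration, not necessarily one of the six in the cited table):

```python
import math

def composite_norm_criterion(clusters):
    """Sum over clusters of the Euclidean norm of the cluster's
    composite (element-wise summed) vector. Higher values indicate
    more internally similar clusters when documents are unit-length."""
    total = 0.0
    for docs in clusters:
        dim = len(docs[0])
        composite = [sum(d[i] for d in docs) for i in range(dim)]
        total += math.sqrt(sum(c * c for c in composite))
    return total
```

A partitional algorithm would then search over cluster assignments to maximize this value.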
