### Table 1: Results averaged over 1000 problems of each size. The entries "% legal" and ⟨CPU time⟩ refer to the MF Potts approach. The time is given in seconds using DEC Alpha 2000. Typical times for the BB method are around 600 seconds.

1998

"... In PAGE 8: ...efer to the MF Potts approach. The time is given in seconds using DEC Alpha 2000. Typical times for the BB method are around 600 seconds. Table 1 indicates an excellent performance of the MF Potts approach, with respect to giving rise to... ..."

Cited by 9

### Table 4.4 presents the total memory requirements of the two approaches for computing the Jacobian of the MINPACK-2 problems for the case of n = 160,000. In addition to the notation introduced in Table 3.2, we use M{J} to denote the memory requirements of the NONSPARSE approach, which includes the memory needed for the graph-coloring computation, and M{JSparse} to denote the memory requirements of the SPARSE approach. For each problem, the ratio M{J}/M{F} remains constant for all n. Hence, Table 4.4 is a sufficient ...

1996

Cited by 5

### Table 1: Results for small problems; 1000 problems of each size are probed. Only legal entries are used when calculating the averages ⟨BB⟩ and ⟨CPU-time⟩. ⟨CPU time⟩ refers to the MF Potts approach using a DEC Alpha 2000, and is given in seconds. Typical times for the BB method are around 600 seconds.

1998

"... In PAGE 9: ... As a measure of the complexity of a problem we use the entropy, S, defined as the logarithm of the total number of possible configurations, disregarding load constraints. Table 1 indicates an excellent performance of the MF Potts approach, with respect to giving rise to legal solutions with good quality, with a very modest... ..."

Cited by 9
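
The entropy measure quoted in the snippet above can be made concrete. As a sketch — assuming, hypothetically, N items that can each take any of K assignments, as in a Potts encoding (neither N nor K is specified in the snippet):

```latex
S = \ln \Omega, \qquad \Omega = K^{N} \quad\Rightarrow\quad S = N \ln K,
```

where \Omega is the total number of possible configurations when load constraints are disregarded.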

### Table 4.4 presents the total memory requirements of the two approaches for computing the Jacobian of the MINPACK-2 problems for the case of n = 160,000. In addition to the notation introduced in Table 3.2, we use M{J} to denote the memory requirements of the COMPRESSED approach, which includes the memory needed for the graph-coloring computation, and M{JSparse} to denote the memory requirements of the SPARSE approach. For each problem, the ratio M{J}/M{F} remains constant for all n. Hence, Table 4.4 is a sufficient summary of all the memory results. As mentioned in Section 3.1, the NONSPARSE mode of ADIFOR tends to augment memory requirements of the function computation linearly. In the case of the compressed Jacobian computations the augmentation factor is the chromatic number, p. However, since the graph-coloring algorithm also introduces additional memory requirements proportional to nnz(f′(x)), the M{J}/M{F} ratios in Table 4.4 are larger than the corresponding p values in Table 4.2, and we have ...
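
The compressed-Jacobian idea referenced above — one directional derivative per color rather than one per column — can be illustrated with a generic finite-difference sketch. This is not ADIFOR; the greedy coloring and the test function are illustrative assumptions, in the spirit of the Curtis–Powell–Reid scheme the snippet alludes to:

```python
import numpy as np

def greedy_coloring(sparsity):
    """Greedily color columns so that columns sharing a color are
    structurally orthogonal (no nonzero in a common row)."""
    n = sparsity.shape[1]
    colors = -np.ones(n, dtype=int)
    for j in range(n):
        rows = np.nonzero(sparsity[:, j])[0]
        forbidden = {colors[k] for k in range(n)
                     if colors[k] >= 0 and sparsity[rows, k].any()}
        c = 0
        while c in forbidden:
            c += 1
        colors[j] = c
    return colors

def compressed_jacobian(f, x, sparsity, h=1e-6):
    """Estimate the Jacobian of f at x with one finite-difference
    evaluation per color instead of one per column."""
    m, n = sparsity.shape
    colors = greedy_coloring(sparsity)
    p = colors.max() + 1            # number of colors ("chromatic number")
    f0 = f(x)
    J = np.zeros((m, n))
    for c in range(p):
        seed = (colors == c).astype(float)          # sum of same-color columns
        df = (f(x + h * seed) - f0) / h             # one directional difference
        for j in np.nonzero(colors == c)[0]:
            rows = np.nonzero(sparsity[:, j])[0]
            J[rows, j] = df[rows]                   # entries are uncontaminated
    return J, p
```

For a banded Jacobian the number of colors p stays small and independent of n, which is why the augmentation factor quoted above is the chromatic number rather than the dimension.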

### Table 2: Mean relative deviation from the optimal (or best known) solution and normalized CPU times for the mean field method (MF) and other approaches. The results for the genetic algorithm (GA) are taken from [2]. R-Gr and Alt-Gr are greedy heuristic algorithms from [5]. BB is a branch-and-bound algorithm and LH is a Lagrangian-based heuristic, with results taken from [12] and [3], respectively. (†) The CPU times reported for these problems were 0.0, so a comparison is not possible.
Columns: Problem | Rel. deviation (%) | (Normalized) CPU time

"... In PAGE 8: ... For 10 of the problems our method found the optimal solution. The MF results are typically within a few percent of the optimal solutions, as can be seen in Table 2. Large relative deviations from optimum are seen in the unicost CLR problems (see Appendix C, tables C1 and C4), which appear to be difficult for our approach.... In PAGE 9: ... It is however very competitive with respect to speed. Table 2 shows a more detailed comparison with the genetic algorithm. In order to compare CPU times for different computers we have used the Linpack benchmark [4] as a reference.... ..."

### Table 1 Input/output mf1 mf2 mf3 mf4 mf5

1998

"... In PAGE 3: ... The resulting models (with rules with weights less than 0.1 removed for the sake of clarity) are given in Tables 1 and 2, respectively. Table 1 Input/output mf1 mf2 mf3 mf4 mf5... ..."

Cited by 2

### Table 4. Number of MFs based on two methods: center-based method vs. Chi2-based method

"... In PAGE 6: ...2% were off by one membership class. MFs for the golf course problem based on the Chi2 approach are shown in Table 4, which also lists the number of membership functions for each attribute based on the center-based approach. The membership functions for rainfall are shown in Figure 3(b). A three-layer fuzzy neural network was created with 18 input nodes, 20 hidden nodes, and 5 output nodes to calculate the output membership degrees.... ..."
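
For context, a membership function (MF) of the kind counted in Table 4 maps an attribute value to a degree in [0, 1]. A minimal sketch with hypothetical triangular MFs and break-points — the paper's center-based and Chi2-derived MFs would place these differently:

```python
def triangular_mf(a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)   # rising edge
        return (c - x) / (c - b)       # falling edge
    return mu

# Hypothetical fuzzy partition of a rainfall attribute (mm):
low = triangular_mf(0, 25, 50)
medium = triangular_mf(25, 75, 125)
high = triangular_mf(100, 125, 150)
```

Each input node of a fuzzy neural network like the one described above would then feed the membership degree of one attribute in one such fuzzy set.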

### Table 2 Input/output mf1 mf2 mf3 mf4 mf5

1998

Cited by 2

### Table 3. Associations between GO terms belonging to cc and mf taking hierarchy into consideration

"... In PAGE 3: ... (Resnik 1995) has pointed out that the semantic similarity of terms as one traverses the hierarchical tree reduces by a factor of log(p(c)), where p(c) is the probability of finding a child for the term when seeking information. Table 3 thus represents a quantification of this semantic similarity, which can be used to extend the results presented here by using an approach similar to that advanced in (Azuahe and Bodenreider, 2004). ... ..."
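
The log(p(c)) factor mentioned in the snippet is the basis of Resnik-style information content: rarer (more specific) ontology terms carry more information. A minimal sketch, with hypothetical annotation probabilities (the actual GO term probabilities would come from an annotation corpus):

```python
import math

def information_content(p_c):
    """Resnik information content IC(c) = -log p(c), where p(c) is the
    probability of encountering term c (or one of its descendants)."""
    return -math.log(p_c)

# Hypothetical probabilities: a broad term vs. a more specific child term.
ic_broad = information_content(0.5)     # e.g. 'binding'
ic_specific = information_content(0.05) # e.g. 'protein binding'
```

The more specific term gets the larger IC, matching the intuition that similarity judged at deeper levels of the hierarchy is more informative.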