### Table 10: Comparison of various network inference methods (Y: Yes, N: No).

"... In PAGE 52: ... The same is observed in the case of the T-cell data, from the results reported in [22]. A comparison of all the presented methods, along with regime-SSM, has been presented in Table 10... In PAGE 54: ... We can thus incorporate condition specificity of gene expression dynamics for understanding gene influences. Comparison of the proposed approach with other current procedures like GGM or CoD reveals some strengths and would very well complement existing approaches (Table 10). We believe that this approach, in conjunction with sequence and transcription factor binding information, can give very valuable clues to understand the mechanisms of transcriptional regulation in higher eukaryotes.... ..."

### Table 3: Complexity of various procedures

1998

Cited by 110

"... In PAGE 23: ... Finally, number of tries highlights in a more evident way the easy-hard-easy pattern, as shown by Figure 3. This situation is rather typical: for few clauses (those on the left side) there is a good correspondence between the two parameters; for larger instances CPU time is affected by costs that depend on our implementation (as shown by Table 3, costs depend on the number of clauses m). [Figure 3: Number of tries versus CPU time in Evaluate (2QBF-5CNF, 20 vars per set, FCL2 model)] All the experiments reported here have been run using implementation 2 (see Table 2), which turned out to be the best in our comparison tests.... ..."


### Table 1. Performance features obtained by the ARE inference method (left) and just the (DFA) regular inference procedure (right) for the test languages.

in Learning of Context-Sensitive Language Acceptors Through Regular Inference and Constraint Induction

1996

"... In PAGE 10: ... i.e. the 48 positive examples not used for training and the whole 64 negative examples), and the correct classification rates on Tij were computed. The results of the experiment are summarized in Table 1. Five features are displayed for each test language: the former three are averages over the eight learning samples of the correct positive, negative, and total classification rates, respectively; the fourth one refers to the arithmetic mean of the positive and negative classification rates [10]; and the fifth one (identification rate) is the percentage of times an ARE describing the target language was inferred. The last row of Table 1 displays the... In PAGE 11: ... Performance features obtained by the ARE inference method (left) and just the (DFA) regular inference procedure (right) for the test languages. Table 1 shows the over-generalization carried out by the RGI step, which is indeed desirable to enable the discovery of context constraints afterwards, and the good classification results of the inferred AREs, with only L2, L4 and L8 below 90% correct classification rate. On the other hand, the results confirmed that ARE identification is hard.... ..."
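The five features described in the excerpt (average positive, negative, and total classification rates, their arithmetic mean, and the identification rate) can be sketched as follows; the function name and the sample data are illustrative, not taken from the paper.

```python
# Sketch of the five performance features described above, computed from
# hypothetical per-sample counts (all names and numbers are made up).

def performance_features(samples, identified_flags):
    """Each sample is (pos_correct, pos_total, neg_correct, neg_total)."""
    pos_rates = [pc / pt for pc, pt, _, _ in samples]
    neg_rates = [nc / nt for _, _, nc, nt in samples]
    tot_rates = [(pc + nc) / (pt + nt) for pc, pt, nc, nt in samples]

    n = len(samples)
    avg_pos = sum(pos_rates) / n
    avg_neg = sum(neg_rates) / n
    avg_tot = sum(tot_rates) / n
    # Fourth feature: arithmetic mean of the positive and negative rates.
    balanced = (avg_pos + avg_neg) / 2
    # Fifth feature: fraction of runs where the target ARE was inferred.
    ident = sum(identified_flags) / len(identified_flags)
    return avg_pos, avg_neg, avg_tot, balanced, ident

# Two hypothetical learning samples:
# (pos correct, pos total, neg correct, neg total)
feats = performance_features([(45, 48, 60, 64), (40, 48, 56, 64)],
                             [True, False])
```

The balanced (fourth) feature guards against a classifier that scores well overall simply because one class dominates the test set.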

### Table 1. Execution times and number of inferences for the examples Example Arithmetic Procedure Combined Procedure

1995

"... In PAGE 13: ... Results Table 1 gives run times and garbage collection times in seconds, and the number of primitive inferences, for applications of the linear arithmetic decision procedure in the HOL arith library and of the combined procedure described in this paper to the following examples: 1. m ≤ n ∧ ¬(m = n) ⟹ SUC m ≤ n 2.... ..."

Cited by 14
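The first example formula in the excerpt appears to be the lemma m ≤ n ∧ ¬(m = n) ⟹ SUC m ≤ n (the relation symbols are garbled in the extraction). A lemma of this shape is exactly the kind of goal a linear-arithmetic procedure discharges automatically; as a sketch (not the HOL library's own proof), it can be checked in Lean 4:

```lean
-- m ≤ n together with m ≠ n forces the strict inequality,
-- i.e. SUC m ≤ n; the `omega` linear-arithmetic tactic closes it.
example (m n : Nat) (h₁ : m ≤ n) (h₂ : m ≠ n) : Nat.succ m ≤ n := by
  omega
```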

### Table 2: The models of causal inference.

"... In PAGE 3: ... The information collected about factual and causal uncertainty was then used to parameterise various models of causal inference, in order to see if the inferences participants made could be predicted. Models of Causal Inference The models of causal inference investigated are listed in Table 2. The probabilistic model defines the normative method of inferring the probability of an effect given information about a related cause.... ..."
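The probabilistic model named in the excerpt can be sketched as the law of total probability: P(E) = P(E|C)·P(C) + P(E|¬C)·P(¬C). The function name and all numbers below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the normative "probabilistic model": the probability
# of an effect E given beliefs about a related cause C, via the law of
# total probability. All parameter values are made up for illustration.

def p_effect(p_cause, p_effect_given_cause, p_effect_given_not_cause):
    return (p_effect_given_cause * p_cause
            + p_effect_given_not_cause * (1.0 - p_cause))

# e.g. cause present with probability 0.7, strong causal link, weak base rate
print(p_effect(0.7, 0.9, 0.1))  # 0.9 * 0.7 + 0.1 * 0.3 = 0.66
```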

### Table 2. Computing times for various solution procedures.

1998

"... In PAGE 3: ... Here, we used MIPO, a branch-and-cut algorithm which uses lift-and-project cuts [1]. The computational results were even worse than with the basic branch-and-bound approach (see Table 2). This is not surprising since, in this case, the linear programming relaxation has a huge number of basic solutions with value 0.... In PAGE 4: ... Hence, even when we decreased the multipliers, we faced space problems. See Table 2. We note that there are other more sophisticated dynamic programming based procedures [14] which could be used here.... In PAGE 4: ... 2.5 Computational Experience Table 2 contains our computational experience with the different approaches given in Section 2. We choose 4 settings for the problem size: m × n = 3 × 20, 4 × 30, 5 × 40 and 6 × 50.... ..."

Cited by 16

### Table 1: Number of trials (out of 30) where RSP does better/worse than various methods. In particular, the last row (opt) shows the number of times that RSP does worse than the optimal solution.

"Defining Sim(i, j) = log(p_ij / (1 − p_ij)), we minimize E(x) to cluster the test set. We found that the various inference algorithms perform poorly on the MRF for large U, even when they converge (probably due to a large number of minima in the approximation). We are able to obtain lower-energy configurations by the recursive 2-way partitioning procedure in [5] used for graph cuts. (Graph cuts do not apply here as weights can be negative.) This procedure involves recursively running, for example, RSP on the MRF for E(x) with U = 2, and applying the Kernighan-Lin algorithm [10] for local refinements among current partitions. Each time RSP returns a configuration that partitions the data, we run RSP on each of the two partitions. We terminate the recursion when RSP assigns the same value to all variables, placing all remaining items in one cluster."
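The recursive 2-way partitioning scheme described above can be sketched as follows. Here `two_way_partition` stands in for one run of an inference method such as RSP on the U = 2 MRF (its implementation, and the Kernighan-Lin refinement step, are assumed and omitted); the toy sign-based partitioner below is purely illustrative.

```python
# Sketch of the recursive 2-way partitioning scheme: keep splitting until
# the partitioner assigns every remaining item the same label, at which
# point the remaining items form one finished cluster.

def recursive_cluster(items, two_way_partition):
    part_a, part_b = two_way_partition(items)
    if not part_a or not part_b:   # all items got the same label -> stop
        return [items]
    return (recursive_cluster(part_a, two_way_partition)
            + recursive_cluster(part_b, two_way_partition))

# Toy stand-in partitioner: split by sign; a sign-pure set stays together.
def split_by_sign(items):
    neg = [x for x in items if x < 0]
    pos = [x for x in items if x >= 0]
    if not neg or not pos:
        return items, []
    return neg, pos

clusters = recursive_cluster([-3, 5, -1, 2], split_by_sign)
print(clusters)  # [[-3, -1], [5, 2]]
```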

### Table 2. An example of context inference.

"... In PAGE 5: ... Dependency models have been proposed for specifying the relationship among various context parameters [12]. Table 2 provides an example of a context-dependent access-control policy requiring context inference. Here, an accumulated set of context parameters is used to infer the high-level context abstraction (or access pattern) hasAccessedFromMultipleLocationsInOneHour (MLOH).... ..."
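The kind of context inference named in the excerpt can be sketched as follows: the high-level abstraction hasAccessedFromMultipleLocationsInOneHour (MLOH) is derived from an accumulated set of low-level access events. The event format, the function name, and the one-hour sliding window are modelling assumptions here, not details from the paper.

```python
# Sketch of inferring the MLOH context abstraction from accumulated
# low-level context parameters: (unix_timestamp, location) access events.

def infers_mloh(events, now, window_secs=3600):
    """True if accesses came from more than one location in the window."""
    recent = {loc for ts, loc in events if now - ts <= window_secs}
    return len(recent) > 1

events = [(1000, "office"), (2500, "home"), (3000, "office")]
print(infers_mloh(events, now=3600))  # True: office and home within the hour
```

A policy engine would then treat MLOH as a single boolean context parameter when evaluating the access-control rule.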