### Table 1. zChaff on grid pebbling formulas. Note that problem size increases substantially as we move down the table. z denotes out of memory.

2003

"... In PAGE 10: ...Chaff. We analyzed the performance with random restarts turned off. For all other parameters, we used the default values of zChaff. Table 1 shows the performance on grid pebbling formulas. Results are reported for zChaff with no learning or specified branching sequence (DPLL), with specified branching sequence only, with clause learning only (original zChaff), and both.... ..."

Cited by 5

### Table 3: Increasing permutations generation problems. Clearly the addition of lazy arc consistency substantially improves LSDL when the problems involve a large amount of arc inconsistency (the first set of problems), for both LSDL(genet) and LSDL(imp). By reducing the search space as computation proceeds, we can reduce the computation.

2000

Cited by 5
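The pruning idea above rests on the standard arc-consistency "revise" operation, applied lazily as search proceeds. The sketch below is a generic, illustrative version of that step, not the paper's LSDL(genet) or LSDL(imp) implementation; all names are assumptions.

```python
# Generic arc-consistency "revise" step: drop values of x that have
# no supporting value in y's domain. Lazy arc consistency applies
# this incrementally during search rather than up front.

def revise(domain_x, domain_y, constraint):
    """Return the subset of domain_x whose values have at least one
    support in domain_y under the given binary constraint."""
    return {a for a in domain_x
            if any(constraint(a, b) for b in domain_y)}

# Example: enforce x < y with x in {1..5} and y in {1..3}.
pruned = revise(set(range(1, 6)), set(range(1, 4)), lambda a, b: a < b)
```

Each call shrinks a domain, so the remaining search space (the product of domain sizes) shrinks with it, which is the mechanism the snippet credits for the improvement.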

### Table 5.1: mp3d cc-NUMA simulation statistics from alite. The statistics confirm that the miss rate is substantially lower for mp3d-3, so the strategy of reducing sharing has been successful. The drop in performance is believed to be because of the problem size (i.e. 3000 molecules) used for these tests. Unfortunately, there was insufficient time to confirm this by running tests for a substantially greater number of molecules.

### Table 3 and Figure 4 show how the processing time and the message size vary with the initial group size. The horizontal axis in Figure 4 is in log scale. When extrapolated, this implies that the CKMSS scheme is scalable to large groups because the processing time per request increases almost linearly with the logarithm of the group size. The reduction of the problem from O(n) to O(log(n)) leads to substantial economy in commercial applications.

1983

"... In PAGE 8: ... Table 3. Message size versus initial group size. [Figure: server processing time per join, per leave, and per request (msec) versus initial group size, 32 to 16384, log scale] ... ..."

Cited by 1
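The O(n)-to-O(log n) claim can be illustrated with a toy model. The snippet does not describe CKMSS's internals, so the sketch below assumes a balanced binary key tree, as logical-key-hierarchy group-key schemes use; the function name and the binary-tree assumption are mine, not the paper's.

```python
import math

# Illustrative model only: if keys are organized in a balanced binary
# tree, a join or leave touches one root-to-leaf path, so the number
# of key updates per request grows as O(log2 n) rather than O(n).

def rekey_cost(group_size: int) -> int:
    """Key updates per membership change under a binary key tree."""
    return math.ceil(math.log2(group_size))
```

Under this model, doubling the group from 8192 to 16384 members adds only a single key update per request, which is the near-linear-in-log(n) behavior the extrapolation describes.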

### Table 4: Comparison of branching rules R1 to R5. Branching Rules. The success of branch & bound algorithms depends very much on the choice of the edge to branch on next. Numerous different branching rules of arbitrary complexity can be thought of, but it is quite impossible to predict their performance on an arbitrary cost function. We will report our experience with a few very simple rules in the following. Probably the first strategy which comes to mind is to select the edge ij with |x_ij| maximal. Separating or contracting the vertices i and j as suggested by the sign of x_ij will not change the problem substantially, but setting x_ij opposite to its current sign should lead to a sharp drop of the optimal solution in the corresponding subtree. If the bound also drops as fast, we can hope that this subtree will be cut off quickly. We will call this rule R1.
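Rule R1 as described reduces to a one-line selection over the current relaxation values. The sketch below is an illustrative reading of that rule; the data structure (a dict from edges to x_ij values) is an assumption, not the paper's code.

```python
# Branching rule R1: among the edge variables x_ij of the current
# relaxation, branch on the edge whose |x_ij| is maximal.

def select_branching_edge(x):
    """x maps edges (i, j) to their current values x_ij."""
    return max(x, key=lambda edge: abs(x[edge]))

# Example: (2, 3) is chosen because |x_23| = 0.9 is the largest magnitude.
lp_values = {(1, 2): 0.4, (2, 3): -0.9, (1, 3): 0.7}
edge = select_branching_edge(lp_values)
```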

### TABLE 5.3. Results on Disjunctive Scheduling Application. ... 1963 [29] and left open for 25 years before being solved in [6]. The algorithm in [6] is very involved, including relaxation techniques to preemptive scheduling. This problem requires about 90 hours of computation. The message behind this result is twofold: on the one hand, cc(FD) can express sophisticated pruning techniques and solve some problems considered hard in 1986; on the other hand, cc(FD) is still substantially slower than specialized algorithms. Better support for scheduling problems is certainly needed to bridge the gap between cc(FD) and specialized programs. The above results seem to indicate that cc(FD) is a step in closing the gap between declarative constraint languages and procedural languages. Very difficult problems are now in the scope of cc(FD), which comes close in efficiency to specialized algorithms written in procedural programs. However, there are classes of applications where the gap is still substantial and more work is needed to find the right abstractions and compilation techniques.

### Table 1 shows that, as expected, an infinite choice is the best when f is a quadratic function and the problem is unconstrained. On the other hand, a substantial increase in the number of conjugate gradient iterations is observed in Table 2 (except for problem TORSIONF) when bound constraints are imposed, while the number of major iterations decreases. At first glance, these results may be quite surprising, but they closely depend on the LANCELOT package itself. This package includes a branch, after the conjugate gradient procedure, that allows re-entry of this conjugate gradient procedure when the convergence criterion (based on the relative residual) has been satisfied, but the step computed is small relative to the trust region radius and the model's gradient norm. This is intended to save major iterations, when possible.

1995

Cited by 12
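The LANCELOT re-entry branch described in the snippet can be paraphrased as a simple predicate. This is a loose reading of the prose, not LANCELOT's actual code: the function name, tolerances, and the exact comparison are invented for illustration.

```python
# Illustrative paraphrase of the re-entry test described above: restart
# the conjugate-gradient loop when the relative-residual criterion has
# been met but the computed step is still small compared with both the
# trust region radius and the model's gradient norm. Thresholds are
# made-up placeholders, not LANCELOT's defaults.

def should_reenter_cg(residual, rhs_norm, step_norm, tr_radius, grad_norm,
                      rtol=1e-5, small=0.01):
    converged = residual <= rtol * rhs_norm
    step_is_small = step_norm <= small * min(tr_radius, grad_norm)
    return converged and step_is_small
```

Under this reading, extra inner (CG) iterations are traded for fewer major iterations, which matches the pattern reported in Table 2.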

### Table 2: 5-Value Assignments. ... are Cis, and the inputs are Ai and Bi. Index i = 0 represents the least significant bit of a signal, and i = 2 represents the most significant bit. These equations are obtained by replacing the values in Table 1 with the assignments in Table 2, then generating and minimizing the Boolean equations for Ci. Defining the cost of each assignment as the sum of the number of inputs for AND and OR operations, the first assignment in Table 2 results in a cost of 43 and the second one in a cost of 122. This example indicates that a careful look at the assignment problem in a compiled simulator can result in substantial savings. Assignment #1:

"... In PAGE 3: ... The values used in this table represent a low level (0), high level (1), upward transition (U), downward transition (D), and unknown (X).

Table 1: 5-Value Truth Tables

| AND | 0 | 1 | U | D | X |
|-----|---|---|---|---|---|
| 0   | 0 | 0 | 0 | 0 | 0 |
| 1   | 0 | 1 | U | D | X |
| U   | 0 | U | U | X | X |
| D   | 0 | D | X | D | X |
| X   | 0 | X | X | X | X |

| OR | 0 | 1 | U | D | X |
|----|---|---|---|---|---|
| 0  | 0 | 1 | U | D | X |
| 1  | 1 | 1 | 1 | 1 | 1 |
| U  | U | 1 | U | X | X |
| D  | D | 1 | X | D | X |
| X  | X | 1 | X | X | X |

| NOT | 0 | 1 | U | D | X |
|-----|---|---|---|---|---|
|     | 1 | 0 | D | U | X |

To illustrate the impact of the assignment on the equations that drive the simulator, the two assignments of Table 2 are analyzed here. The assignment #1 was done according to the heuristic presented in Section 3.... In PAGE 7: ... For the truth tables of Table 1, application of equation 1 results in the matrix of Table 3. At this point the reader might want to check how the assignments of Table 2 are in accordance with the weighted connections in the DAG represented in Table 3. According to equation 2, assignment #1 has a fitness of 101 and assignment #2 has a fitness of 85.... ..."
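The truth tables above can be exercised with a minimal table-driven evaluator. The dictionaries below transcribe the AND, OR, and NOT tables from the snippet; everything else (function names, the string encoding of values) is illustrative — a real compiled simulator would encode the chosen bit assignment from Table 2 instead of strings.

```python
# Table-driven evaluator for the 5-value logic above:
# 0 = low, 1 = high, U = rising, D = falling, X = unknown.

V = "01UDX"
AND = {r: dict(zip(V, row)) for r, row in zip(V, [
    "00000", "01UDX", "0UUXX", "0DXDX", "0XXXX"])}
OR = {r: dict(zip(V, row)) for r, row in zip(V, [
    "01UDX", "11111", "U1UXX", "D1XDX", "X1XXX"])}
NOT = dict(zip(V, "10DUX"))

def and5(a, b): return AND[a][b]
def or5(a, b):  return OR[a][b]
def not5(a):    return NOT[a]

# e.g. a rising edge ANDed with a falling edge yields unknown:
# and5("U", "D") == "X"
```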

### Table 1: Comparative summary of normalized prediction errors for rates of return on Industrial Production for the period January 1980 to January 1990 as presented in Moody et al. (1993). The four model types were trained on data from January 1950 to December 1979. The neural network models significantly outperform the trivial predictors and linear models. For each forecast horizon, the normalization factor is the variance of the target variable for the training period. Nonstationarity in the IP series makes the test errors for the trivial predictors larger than 1.0. In subsequent work, we have obtained substantially better results for the IP problem (Levin et al., 1994; Rehfuss, 1994; Utans et al., 1995; Moody et al., 1996; Wu and Moody, 1996).

1995

Cited by 3
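The normalization described above (squared test error divided by the training-period variance of the target) can be sketched directly; the function name is mine, and the toy numbers are not from the paper.

```python
from statistics import mean, pvariance

# Normalized prediction error as described above: mean squared error
# on the test set, divided by the variance of the target variable
# over the training period. A trivial constant-mean predictor scores
# near 1.0 on stationary data; nonstationarity pushes it above 1.0,
# as reported for the IP series.

def normalized_error(y_true, y_pred, y_train):
    mse = mean((t - p) ** 2 for t, p in zip(y_true, y_pred))
    return mse / pvariance(y_train)
```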