### Table 3: Number of linear threshold gates, N_LTG. The following terms are used in the tables:

1997

Cited by 2

### TABLE I. Threshold function used in the example to illustrate the approximate implementation of linear threshold scheduling.

1995

Cited by 24
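The entries above all concern linear threshold gates. As a minimal illustration (the function and names below are illustrative, not taken from the cited papers), a linear threshold gate outputs 1 exactly when the weighted sum of its Boolean inputs reaches a threshold:

```python
def ltg(weights, threshold, inputs):
    """Linear threshold gate: output 1 iff the weighted input sum meets the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# A 2-input AND gate realized as a linear threshold gate: w = (1, 1), threshold = 2.
print([ltg([1, 1], 2, [a, b]) for a in (0, 1) for b in (0, 1)])  # → [0, 0, 0, 1]
```

Choosing threshold 1 instead of 2 would turn the same gate into OR, which is why a single threshold gate can realize any linearly separable Boolean function.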

### Table 1: Predictions from the different neural mechanisms. FFN = divisive feed-forward, DFB = divisive feedback, LIN = linear threshold, SPK = spiking mechanism.

"... In PAGE 17: ...the postulated three-layer architecture, we would then need a set of model-specific predictions to test whether these neurons implement the MAX operation via a network structure similar to any of our models. Some of these model-specific predictions are summarized in Table 1, touching upon anatomical, neurophysiological, and computational aspects of the individual models that we have discussed in this paper. The active involvement of shunting inhibition or gating would suggest divisive interactions at the synaptic level, lending more plausibility to the cellular implementation of the divisive feedforward and feedback models.... ..."

### Table 8. Linear Threshold Gates for n-bit Serial Adder with Double Partition: DP DO method, DP CWL method, DP LWL method; partition parameters ⌈√n⌉, 4, and ⌈log n⌉.

### Table 4. Results with a truncation to zero. #Iterations: number of BP training iterations. %Conv.: percentage of converged runs. %Miscl.: percentage of misclassified test patterns. However, they illustrate that the use of a non-linear thresholding function limited to the positive domain does not deteriorate neural network training.

1996

"... In PAGE 3: ... A more detailed description of the benchmark problems and the simulation conditions can be found in [12]. The simulation results are outlined in Table 3 for the untruncated sigmoid and in Table 4 for the truncated sigmoid. All the results are averaged over a certain number of runs (#Runs in Table 2) with different random weight initializations.... ..."
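The "truncation to zero" compared above can be sketched as follows. The exact form used in the paper is not given in the snippet; this assumes a standard logistic sigmoid whose output below a cutoff is clamped to zero, so the activation stays in the positive domain:

```python
import math

def sigmoid(x):
    """Standard logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-x))

def truncated_sigmoid(x, cutoff=0.5):
    """Sigmoid restricted to the positive domain: outputs below the
    cutoff are truncated to zero (cutoff value is an assumption)."""
    y = sigmoid(x)
    return y if y >= cutoff else 0.0

print(round(truncated_sigmoid(2.0), 3), truncated_sigmoid(-2.0))  # → 0.881 0.0
```

The point made in Table 4 is that training with such a clipped non-linearity still converges; the truncation only changes which half of the sigmoid's range is usable.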

Cited by 2

### Table 2: Percentage α̂_sim of the simulated values exceeding the (1 − 2α)-quantile of the χ²₁-distribution, in the case of the linear threshold model under the null hypothesis of a linear model without breakpoint; β₀ = 0, β₁ = 1, σ_{Y|X} = 0.3, 1000 replications each.

"... In PAGE 11: ... In contrast to the results for the broken line model (1), here nearly no dependence of the results on the configuration of the independent variable is observable. Table 2 again gives an overview of the actual significance levels from tests... ..."
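The "linear threshold model" tested above is a segmented (broken-line) regression. A minimal sketch of its mean function, with hypothetical parameter names (the paper's notation is not recoverable from the snippet), is:

```python
def broken_line(x, b0, b1, b2, psi):
    """Segmented regression mean: the slope changes by b2 at the breakpoint psi.
    (Parameter names b0, b1, b2, psi are illustrative.)"""
    return b0 + b1 * x + b2 * max(x - psi, 0.0)

# Under the null hypothesis of the snippet (no breakpoint), b2 = 0
# and the mean reduces to the plain linear model b0 + b1 * x:
print(broken_line(0.3, 0.0, 1.0, 0.0, 0.5))  # → 0.3
```

The test in Table 2 then asks whether the fitted b2 improves the likelihood more than a χ²₁-distributed statistic would allow by chance.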

### Table 1: An Example Abstract

1995

"... In PAGE 9: ... One algorithm (we term the FFA algorithm) is based on "fixing" all attributes except one, the "free" attribute, and comparing across the values of the free attribute. Using the abstract in Table 1, we can fix loan-outcome (say to "approved") and allow purchase-item to vary. We then fix loan-outcome to "rejected" and repeat the process.... ..."
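The fix-all-but-one procedure described in the snippet can be sketched roughly as below; the record layout and helper name are assumptions, not the paper's code:

```python
from collections import defaultdict

def compare_free_attribute(records, free_attr, fixed):
    """Keep records matching the fixed attribute values, then tally
    occurrences across the values of the single free attribute."""
    counts = defaultdict(int)
    for rec in records:
        if all(rec.get(k) == v for k, v in fixed.items()):
            counts[rec[free_attr]] += 1
    return dict(counts)

# Hypothetical loan abstract in the spirit of Table 1:
loans = [
    {"loan_outcome": "approved", "purchase_item": "car"},
    {"loan_outcome": "approved", "purchase_item": "house"},
    {"loan_outcome": "rejected", "purchase_item": "car"},
]
# Fix loan_outcome to "approved" and let purchase_item vary:
print(compare_free_attribute(loans, "purchase_item", {"loan_outcome": "approved"}))
# → {'car': 1, 'house': 1}
```

Repeating the call with `{"loan_outcome": "rejected"}` gives the second comparison the snippet describes.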

Cited by 1

### Table 2: Algorithm for Producing Abstraction Hierarchies

1990

"... In PAGE 3: ...problem space. The hierarchy is ordered such that the highest level is the most abstract. The final hierarchy has the ordered monotonicity property [Knoblock, 1990a], which requires that the literals are partitioned in such a way that the achievement of a literal introduced at one level cannot change the truth value of a literal in a more abstract level. The algorithm for producing a hierarchy of abstraction spaces is shown in Table 2. The algorithm forms a directed graph, where the vertices of the graph represent one or more literals (called a literal class) and the edges represent constraints between literals.... ..."
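The graph construction in the snippet can be sketched as follows. This is a hedged reconstruction, not the paper's algorithm: literal classes are taken as the strongly connected components of the constraint graph, edges `(a, b)` are read as "a must be at least as abstract as b", and a topological order of the condensed graph gives the hierarchy levels, most abstract first.

```python
def abstraction_hierarchy(constraints):
    """constraints: iterable of (a, b) edges, read as 'a must be at least
    as abstract as b' (an assumption). Returns literal classes (strongly
    connected components) in topological order, most abstract first."""
    nodes = {n for edge in constraints for n in edge}
    fwd = {n: [] for n in nodes}
    rev = {n: [] for n in nodes}
    for a, b in constraints:
        fwd[a].append(b)
        rev[b].append(a)

    seen = set()

    def dfs(graph, start, out):
        # Iterative DFS appending nodes in post-order.
        stack = [(start, iter(graph[start]))]
        seen.add(start)
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, iter(graph[nxt])))
                    break
            else:
                stack.pop()
                out.append(node)

    # Kosaraju's algorithm: finish order on the forward graph,
    # then component discovery on the reversed graph.
    order = []
    for n in nodes:
        if n not in seen:
            dfs(fwd, n, order)
    seen = set()
    components = []
    for n in reversed(order):
        if n not in seen:
            comp = []
            dfs(rev, n, comp)
            components.append(sorted(comp))
    return components

# "A" constrains "B"; "B" and "C" constrain each other, so they form one class.
print(abstraction_hierarchy([("A", "B"), ("B", "C"), ("C", "B")]))  # → [['A'], ['B', 'C']]
```

Mutually constraining literals collapse into one literal class, which is what guarantees the ordered monotonicity property the snippet describes: nothing at a lower level can affect a strictly more abstract class.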

Cited by 6

### Table 2: Abstract Parsing Algorithm Input:

"... In PAGE 21: ... 5.3 Worst-Case Computational Complexity The abstract parsing algorithm in Table 2 has several sources of computational complexity. If the simplest possible search strategy is used (such as CKY), then the dominant source of complexity is the logic.... In PAGE 33: ... For example, it is possible to compute the reverse value of the NP in Figure 6 as soon as the forward value of the V is known, without having computed the forward value of the S, the D, or the N. It is possible to elaborate the abstract parsing algorithm in Table 2 so that it computes reverse values using Equation 32.... ..."
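The snippet names CKY as the simplest search strategy the abstract algorithm can instantiate. A minimal CKY recognizer for a grammar in Chomsky normal form (the grammar and sentence below are illustrative, not from the paper) looks like:

```python
def cky_recognize(words, lexicon, rules, start="S"):
    """CKY recognizer: chart[i][j] holds the nonterminals spanning words[i:j].
    lexicon: (nonterminal, word) pairs; rules: (parent, (left, right)) pairs."""
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = {nt for nt, word in lexicon if word == w}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):  # split point
                for parent, (left, right) in rules:
                    if left in chart[i][k] and right in chart[k][j]:
                        chart[i][j].add(parent)
    return start in chart[0][n]

lexicon = [("D", "the"), ("N", "dog"), ("V", "barks")]
rules = [("NP", ("D", "N")), ("S", ("NP", "V"))]
print(cky_recognize(["the", "dog", "barks"], lexicon, rules))  # → True
```

The three nested span/split loops give the O(n³) chart cost; per the snippet, in the abstract algorithm this search cost is dominated by the cost of the logic evaluated at each chart entry.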