### Table 1. Training patterns

### Table 3: Obtaining training patterns.

2002

"... In PAGE 12: ... Table 2. Instance coverage of Drumstore. Each Drumstore scenario is designed through the six terms as follows: Robot (R), Thing (T), Gripper (G), Object (O), Relation (Rel) and Reference (Ref). [per-term coverage counts for the EVentus terms T, A, C, V, and Ca are not recoverable from the extracted text] Table 3. Instance coverage of EVentus. ... ..."

### TABLE II RNN is trained with these patterns.

### Table 5: Training patterns for the gating function.

"... In PAGE 8: ... 2.2 Use of a Gating Input. The second non-separable function we tried is described by the conditional on the first of four inputs: if (x1 = +1) then OR(x2, x3) else OR(x3, x4). We tried this with the strictly layered network having two hidden units depicted in Figure 2. The 6 partially specified training patterns we used for this problem are listed in Table 5. With one exception (out of a total of 7 runs on this problem), in which the backpropagation algorithm became stuck at a local minimum, the network was able to learn the function, usually in fewer than 200 passes. ... ..."
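The gated function in the snippet is fully determined by its description, so it can be enumerated directly. A minimal sketch (the function name `gated_or` and the full truth-table enumeration are illustrative; the paper's Table 5 uses only 6 partially specified patterns, which are not listed in the snippet):

```python
from itertools import product

def gated_or(x1, x2, x3, x4):
    # The first input gates which OR is computed, per the snippet:
    # if (x1 = +1) then OR(x2, x3) else OR(x3, x4).
    # Inputs are bipolar: +1 = true, -1 = false; OR is +1 iff either operand is +1.
    a, b = (x2, x3) if x1 == +1 else (x3, x4)
    return +1 if (a == +1 or b == +1) else -1

# Enumerate the full truth table over the four bipolar inputs.
table = {xs: gated_or(*xs) for xs in product((-1, +1), repeat=4)}
```

Enumerating all 16 patterns makes it easy to check that the 6 partially specified training patterns are consistent with the target before training.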

### Table 3: Training pattern set in 4D

in A Binary-input Supervised Neural Unit that Forms Input Dependent Higher-order Synaptic Correlations

"... In PAGE 5: ... The neural unit generalized the remaining 7 patterns in consistency with the Boolean function f(X). Example 3: The neural unit is trained with the 10 patterns, shown in Table 3, that are generated using the Boolean function f(X) = (x1 ⊕ x2) ∧ (x3 ⊕ x4), where ⊕ represents the XOR operation. The neural unit comes up with the relation ψ(X) = (−0.95 + 0.06x1)(−0.94 + 0.82x2)(0.07 + 0.75x3)(0.04 − 0.77x4) + (−0.05 − 0.93x1)(−0.64 − 0.62x2)(−0.93 + 0.00x3)(0.89 − 0.01x4) ≈ −0.49x1 − 0.48x1x2 − 0.52x3x4 + 0.45x2x3x4 ≈ 0.5(−x1 − x1x2 − x3x4 + x2x3x4) = −0.5x1(1 + x2) − 0.5x3x4(1 − x2), which can be interpreted as: ψ(X) = −x3x4 if x2 = −1, and ψ(X) = −x1 if x2 = +1, and is represented by the Boolean function ψ(X) = (¬x1 ∧ x2) ∨ ((x3 ⊕ x4) ∧ ¬x2), which is a simpler relation than the one used in generating the pattern set. ... ..."
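The snippet's closed-form simplification and its Boolean reading can be cross-checked by brute force over the 16 bipolar inputs. A sketch assuming the reconstruction above (the names `psi` and `boolean_form` are illustrative, and the ψ symbol itself is a reconstruction; the extracted text dropped the original function symbol and operators):

```python
from itertools import product

def psi(x1, x2, x3, x4):
    # Simplified factored form from the snippet:
    # -0.5*x1*(1 + x2) - 0.5*x3*x4*(1 - x2)
    return -0.5 * x1 * (1 + x2) - 0.5 * x3 * x4 * (1 - x2)

def boolean_form(x1, x2, x3, x4):
    # (NOT x1 AND x2) OR ((x3 XOR x4) AND NOT x2), bipolar encoding (+1 = true).
    t = lambda v: v == +1
    val = (not t(x1) and t(x2)) or ((t(x3) != t(x4)) and not t(x2))
    return +1 if val else -1

# The sign of psi agrees with the Boolean interpretation on every bipolar input:
# when x2 = +1, psi reduces to -x1; when x2 = -1, it reduces to -x3*x4.
assert all(
    (+1 if psi(*xs) > 0 else -1) == boolean_form(*xs)
    for xs in product((-1, +1), repeat=4)
)
```

This kind of exhaustive check is cheap for four inputs and confirms the case split (x2 = +1 versus x2 = −1) claimed in the interpretation.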

### TABLE IV Features vs. training patterns.

2004

Cited by 3

### Table 1: Training patterns for the 4-input OR function.

### Table 1: Neural network training patterns.

### Table 2: Conflicting training patterns in the Cake World training set.

1998

"... In PAGE 13: ... ⟨0.5, 0.0, 3.0, 3.5, 10⟩ and ⟨0.5, 0.0, 3.0, 3.5, −10⟩, where the first four components are LEFT, FRONT, RIGHT, and REAR sensor values and the fifth is a desired motor speed. In fact, the set contains 22 such conflicts, listed in Table 2. On the other hand, ANFIS networks necessarily compute functions, so they cannot possibly duplicate a non-functional training set. ... ..."
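The conflict described in the snippet is easy to make concrete: two patterns with identical sensor inputs but opposite target speeds cannot both be reproduced by any network that computes a function. A minimal sketch (the variable names and the mean-fit illustration are assumptions, not the paper's method; it simply shows why a function-computing model must compromise):

```python
# Two conflicting patterns from the snippet: identical sensor inputs
# (LEFT, FRONT, RIGHT, REAR) but opposite desired motor speeds.
pattern_a = ((0.5, 0.0, 3.0, 3.5), +10)
pattern_b = ((0.5, 0.0, 3.0, 3.5), -10)

# A function maps each input to exactly one output, so under a squared-error
# criterion the best a function-computing model can do on a conflicting input
# is the mean of its targets.
targets_by_input = {}
for x, y in (pattern_a, pattern_b):
    targets_by_input.setdefault(x, []).append(y)
best_fit = {x: sum(ys) / len(ys) for x, ys in targets_by_input.items()}
# best_fit[(0.5, 0.0, 3.0, 3.5)] == 0.0
```

With 22 such conflicts in the training set, a nonzero residual error is unavoidable regardless of network capacity.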

### Table 1: Datasets used in experiments. train is the number of training patterns, test is the

1997

"... In PAGE 11: ... The real-world data sets were obtained from the machine learning data repository at the University of California at Irvine [Murphy & Aha, 1994]. Table 1 summarizes the characteristics of the data sets selected for our experiments. ... ..."

Cited by 6