### Table IV compares the results achieved by several neural network architectures [7]-[8] and our HCGA approach on the same data sets. HCGA2 performs better than most other approaches in most cases, demonstrating its strong generalization capability on classification problems. For instance, measured by classification error rate, the HCGA2 improvement over the second-best ANN on the Card3 problem reaches nearly 20%. Another advantage over most other ANNs is its ease of use, since

### Table 1: Shown is classification error for different values of lambda on the full 20 newsgroups classification task. The first column gives the regularization parameter used. The second column gives training error. The third column gives test error. The fourth column gives the LOOCV error bound. The LOOCV error bound greatly over-estimates generalization error for small values of lambda.

2003

"... In PAGE 3: ... We could have almost always done better (!) by simply choosing the largest lambda from the set of lambdas that gave the lowest error (often, many different lambdas yielded zero training error). Table 1 gives the results for the full 20-class classification task. The lowest test error (15.... ..."
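The selection heuristic quoted in this snippet can be sketched in a few lines: among all regularization values that tie on the lowest training error, prefer the largest lambda (the strongest regularizer). The `(lambda, error)` pairs below are purely illustrative, not figures from the paper.

```python
# Hypothetical sketch of the lambda-selection heuristic: when many lambdas
# achieve the same (often zero) training error, pick the largest of them.
# These illustrative results map lambda -> training error.
results = {0.001: 0.0, 0.01: 0.0, 0.1: 0.0, 1.0: 0.02, 10.0: 0.08}

def pick_lambda(results):
    best_err = min(results.values())
    # candidates that tie on the lowest training error
    candidates = [lam for lam, err in results.items() if err == best_err]
    # prefer the strongest regularization among the ties
    return max(candidates)

print(pick_lambda(results))  # -> 0.1
```

The intuition is the same one the snippet reports: among models indistinguishable on training error, heavier regularization tends to generalize better, whereas the LOOCV bound is too loose at small lambda to make this choice.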

### Table 3.3-5. General Circulation Model (GCM) Scenario Impacts on the Great Lakes (see notes para 3.3.4 & Chapter 1 concerning CCCma, HadCM2 and later HadCM3 model results)

### Table 3: Example of Inference inference, in spite of the fact that only little information, given in the set of example inferences, was available. These surprisingly good results illustrate the great generalization ability of the system, comparable to human intelligence, and confirm the correctness of the proposed heuristics for working with incomplete and irrelevant information, and for explanation. Because of its generality, this program can be applied in arbitrary areas where a medium-complexity expert system needs to be created in a relatively short time. We can state that the neural approach to expert system design studied here is a possible alternative to the classical rule-based methods. Finally, we are working on the third version of EXPSYS, which will be much more user-friendly than the previous one. It will also be easily adjustable to any other computer environment. The basic configuration will include interfaces for MS Windows and X Windows. We hope that the implementation under the Unix operating system will speed up the learning process and enable the management

### Table 3. Geologic chart showing selected formations and generalized hydrogeologic units in the Great and Little Miami River Basins, Ohio and Indiana [Shaded areas represent unconformities caused by periods of non-deposition or erosion; geologic names and descriptions from Larsen, Ohio Department of Natural Resources, written commun., 1998; Casey, 1992; 1996; Schumacher, 1993; Indiana Department of Natural Resources, 1988; Shaver and others, 1986; Gray and others, 1985; Shaver, 1985. Nomenclature may vary from that of the U.S. Geological Survey]

2000

### Table 11 General summary

"... In PAGE 21: ... Having made these observations, we compare the results obtained, examining the average efficiency, the coefficient of correlation and the order of the units when applying each of the methods considered. Table 10 presents the complete data on efficiency and Table 11 summarises the average efficiencies according to the various methods. The fact that the average efficiency is considerably lower when applying the parametric frontier models is as expected, due to the lesser flexibility of these approaches, which in turn results in a great number of efficient units and high indices of efficiency... ..."

### Table 3: Optimal schedule at H = 4 2.5 and 4. For values of H as small as 1.5, the approximate optimal sequence captures the non-monotonic feature of times between inspections. Since inspections far in the sequence are very sensitive to small deviations in the value of the first inspection, the estimation of their value is not very accurate. In terms of risk, however, this has little impact, since later inspections have a small probability of actually being performed. In the above example it appears that a horizon fixed at the 99th percentile of the failure distribution can provide a perfectly adequate inspection schedule and a reliable estimate of the risk. If confirmed more generally, this would be of great practical relevance when information about the extreme tails of the distribution is difficult to obtain, as in many of the medical applications considered in Part II.

### Table 4. Comparison of resistance values for workable tinning and non-tinning lands screened through 200 mesh.

"... In PAGE 9: ... Thus a 1 percent weight dilution with a non-tinning agent might contribute 5 to almost 10 percent of volume dilution. Table 4 compares some of the resistance values for typical tinning and non-tinning formulations. It can be seen that the dilution effects, although real, are not of great magnitude and are important only in critical formats.... ..."

### Table 2: Algorithm for checking the viable prefix. (7): The parse-stack grows concurrently. As already stated, these general algorithms could be greatly improved so as not to recompute things that do not need recomputing. At some places in the look-ahead oracle, if it is known that a call to the rule engine will not change its result (i.e. the LR(0) conflict will be solved within a certain length that will not modify the grammar), the checking of the look-ahead could be performed within the same grammar. Hence the call to the rule engine could be avoided, and the LR(0) automaton need not be modified either when moving forward or when returning backward.

1994

"... In PAGE 19: ....1.2 Viable Prefix Checking When the new grammar Gi+1 has been built, we must check that B is a dynamic viable prefix for Gi+1, recompute (if necessary) the transition function and the corresponding states over B, and build the new parse-stack. This work is performed by the function check-viable-prefix in Table 2, which is called with B as an argument. (1): When a transition from state q over symbol X leads to an empty state, this goal is (re)computed by advancing the LR(0) mark past X and by calling the closure function on the resulting subset.... In PAGE 49: ... This means that, for any input string, at most a unique parse is produced, even when the dynamic grammar is ambiguous. The dynamic parser and the rule engine are coupled via syntactic action and predicate calls, and the consistency of any grammar modification with the theory is checked (see Table 2). This experimental system has been used on a sample of significant examples which demonstrate that this method should be considered a viable alternative to the usual methods, as far as context-sensitive properties are concerned.... ..."

### Table 2.4: Summary of Computational Results Using Lower Bound Based Branching Rules. From the tables we make the following observations: it is in general too costly to perform 10 dual simplex pivots on all fractional variables. Strong branching can be highly effective on some problems, but its effectiveness is greatly affected by the ability to select a suitable subset of variables on which to perform a number of dual simplex pivots.
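The candidate-restriction idea described in this snippet can be illustrated with a small sketch (not the authors' code): rather than performing truncated dual simplex pivots on every fractional variable, score only the few most fractional ones. Here `bound_change` is a hypothetical stand-in for the limited-pivot bound estimates, and the LP solution and degradation values are made up for illustration.

```python
def select_branching_variable(x, k, bound_change):
    """Strong-branching-style selection restricted to the k most
    fractional variables. x maps variable index -> LP value;
    bound_change(j) returns the estimated (down, up) bound
    degradations for branching on j (a stand-in for the
    truncated dual simplex pivots described in the text)."""
    # variables fractional in the current LP relaxation
    fractional = {j: v for j, v in x.items() if abs(v - round(v)) > 1e-6}
    # restrict to the k variables whose fractional part is closest to 0.5,
    # keeping the per-node pivoting cost bounded
    ranked = sorted(fractional, key=lambda j: abs((x[j] % 1.0) - 0.5))
    candidates = ranked[:k]
    # score each candidate by the weaker of its two branch degradations
    # (the "min" rule; product rules are also common)
    def score(j):
        down, up = bound_change(j)
        return min(down, up)
    return max(candidates, key=score)

# toy usage with a made-up LP solution and degradation estimates
x = {0: 0.5, 1: 0.9, 2: 0.45, 3: 2.0}
estimates = {0: (1.0, 3.0), 1: (0.1, 0.2), 2: (2.0, 2.5)}
print(select_branching_variable(x, 2, lambda j: estimates[j]))  # -> 2
```

The point the table makes is exactly the trade-off this sketch exposes: the quality of the final choice depends heavily on whether the restricted candidate set (`ranked[:k]`) still contains a good branching variable.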

1999

Cited by 33