### Table 6.1: Comparison of accuracy and conciseness of the surfaces after each phase. Accuracy is measured by the residual sum of squares Edist; conciseness is measured by the number of kilobytes required to store the representation in compressed form.

1994
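The accuracy metric above can be sketched in a few lines, assuming E_dist is the sum of squared distances from each data point to its closest point on the reconstructed surface (the paper's exact projection method is not shown here; the plane "surface" below is purely illustrative):

```python
# Hypothetical sketch of the E_dist residual-sum-of-squares metric.
# project(p) is assumed to return the closest surface point to p.
def e_dist(points, project):
    """Sum of squared point-to-surface distances over all data points."""
    return sum(
        sum((a - b) ** 2 for a, b in zip(p, project(p)))
        for p in points
    )

# Toy example: the "surface" is the z = 0 plane, so projection drops z.
plane_proj = lambda p: (p[0], p[1], 0.0)
rss = e_dist([(0.0, 0.0, 1.0), (1.0, 2.0, -2.0)], plane_proj)  # 1 + 4 = 5
```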


### Table 2. Irredundant and Minimal Prime Cover Computation Results. These tables show that, in all cases, the prime covers generated by the algorithms presented here are much smaller than the numbers of prime implicants of the Boolean functions under treatment, and that in most cases they give users a very concise representation of the function. In all cases the CPU time needed to compute a minimal prime cover exceeds that needed to compute an irredundant prime cover, yet the irredundant prime covers are in most cases nearly as small as the minimal ones.

"... In PAGE 9: ...Experimental Results. Table 1 and Table 2 present the experimental results that have been obtained using the different prime cover computation algorithms presented in this paper. Column "#Vars" of Table 1 gives, for each of the treated examples, the number of variables of the function to be covered, and column "|f|" the size of the BDD that represents this function.... ..."
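The distinction the caption draws, between an irredundant cover (no member can be dropped) and a minimal cover (globally smallest), can be illustrated on a toy instance. The prime implicants, minterm sets, and the greedy removal order below are all hypothetical, not taken from the paper:

```python
from itertools import combinations

# Each prime implicant is represented by the set of minterms it covers;
# any valid cover must reach the whole on-set.
primes = {
    "p1": {0, 1},
    "p2": {1, 3},
    "p3": {0, 2},
    "p4": {2, 3},
}
on_set = {0, 1, 2, 3}

def is_cover(names):
    covered = set()
    for n in names:
        covered |= primes[n]
    return covered == on_set

def irredundant(names):
    # Drop implicants one at a time while the remainder still covers:
    # cheap, but the result depends on removal order.
    names = list(names)
    for n in sorted(names):
        rest = [m for m in names if m != n]
        if is_cover(rest):
            names = rest
    return set(names)

def minimal():
    # Exhaustive search: the smallest subset of primes covering the on-set.
    for k in range(1, len(primes) + 1):
        for subset in combinations(sorted(primes), k):
            if is_cover(subset):
                return set(subset)

irr = irredundant(primes)   # irredundant: no member is removable
best = minimal()            # minimal: globally smallest cover
```

On this toy function both covers have size 2, matching the caption's observation that irredundant covers are often nearly as small as minimal ones while being cheaper to compute.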

### Table 4 shows the average number of prototypes created by MSP for each data set. The first column indicates the number of output classes for the appropriate data set; this may be thought of as a lower bound on the number of prototypes required by MSP. The second column indicates the average number of prototypes actually created by MSP. The third column indicates the number of instances in the training set (an upper bound on the number of prototypes), and the fourth column is the ratio #Prototypes/#Instances. This ratio gives an indication of parsimony, indicating to what degree the training information could be assimilated into a concise representation. As may be seen from the table, a high degree of parsimony is achieved in all cases. For all data sets, MSP created no more than an average of seven prototypes per output class, which always resulted in at least an 87% reduction in information stored.
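The parsimony metric described above is a simple ratio; a minimal sketch, with illustrative counts that are not taken from the table:

```python
# Parsimony ratio = #Prototypes / #Instances, where #Classes is a lower
# bound and #Instances an upper bound on #Prototypes.
def parsimony(num_prototypes: int, num_instances: int) -> float:
    """Fraction of training instances retained as prototypes."""
    return num_prototypes / num_instances

# Hypothetical numbers: 7 prototypes per class, 2 classes,
# 110 training instances -> roughly an 87% reduction in stored data.
ratio = parsimony(num_prototypes=14, num_instances=110)
reduction = 1.0 - ratio
```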

"... In PAGE 8: ... Table 4. Number of prototypes created by MSP 4.... ..."

### Table 1. Time and space costs of polymorphic type analyses. The main contribution of this paper is the application of abstract compilation to polymorphic type inference, associating each type with an incarnation of the Prop domain. It is this view which has led us to a concise and efficient implementation of a type system. Of particular interest is the application of an open semantics and algebraic simplification to obtain time- and space-efficient analyses. Acknowledgments: We thank Roberto Giacobazzi, Phuong Lan Nguyen, T.K. Lakshman and Eyal Yardeni for useful discussions on types. Bart Demoen is partially sponsored by contract IT/4 of the Belgian D.W.T.C.

1994

"... In PAGE 10: ... As programs and/or the domain of types grow larger it becomes crucial to maintain a concise representation in the implementation. In fact the polymorphic type analysis for the balance program of Example 6 is already not reasonable following the naive approach (see Table 1 below). Instead, we follow an approach in which lub=3 and neq=2 predicates in the bodies of the (abstract) clauses are viewed as open predicates and hence not unfolded in the evaluation of the least fixed point.... In PAGE 13: ... 6 Discussion. The analyses described in this paper are implemented in Prolog and consist of 150 lines of code for the analyzer which is based on "standard" TP semantics and 400 lines for the analyzer which is based on an open semantics. Table 1 illustrates the advantage of considering an open semantics in analyses of this type. The first column of numbers indicates the time (in seconds running on a SPARC1 station) and the space (number of facts) costs for the analyzer based on the TP semantics.... ..."

Cited by 37

### Table 3: Conciseness by Target Complexity

2005

"... In PAGE 12: ... Like accuracy, conciseness too declined as the complexity of targets increased. Table 3 breaks down the comparative conciseness rates according to the complexity of the targets. 4.... ..."

### Table 9 descriptive statistics of concise

2006

"... In PAGE 13: ...Table 8 descriptive statistics of interpretability ..................................................67 Table 9 descriptive statistics of concise .... In PAGE 66: ... Although in this case it is not the mean of item 24, the item causing the low alpha, that differs from the other items, which does happen to be the case with another item in this scale. It is the high standard deviation and variance, as seen in Table 9, that lead one to believe that it is not standard for a product to contain an abstract. The items that deal with the irrelevance of text and superfluous pages show a better correlation than with the item on the abstract.... ..."