### Table 1 Comparison of inductive principles.

"... In PAGE 15: ...989]. However, application of MDL to other types of model, i.e. feedforward neural networks has not been successful, due to difficulty in developing optimal encoding of the network in a data-dependent fashion. We conclude this section by summarizing properties of various inductive principles (see Table1 ). All inductive principles use a (given) class of approximating functions.... In PAGE 15: ... Meaningful (empirical) comparisons could be certainly helpful, but are not readily available, mainly because each inductive approach comes with its own set of assumptions and terminology. In this respect, the comparison in Table1 may be helpful for developing future comparisons. Each inductive principle, when reasonably applied, usually yields an acceptable solution for practical applications.... ..."

### Table 3. Axioms for approximation operators and induction principle.

"... In PAGE 6: ...2 Approximation Induction Principle An in nite thread in BTAsvp is represented by a projective sequence consisting of its nite approximations. These nite approximations are de ned inductively by means of the approximation operators n( ) of depth n of threads with n 2 N whose axioms on nite threads are given as P0-P5 in Table3 . Note that axioms P4 and P5 makes use of the assumption that BA is nite.... In PAGE 6: ... A projective sequence is a sequence (pn)n2N such that n(pn+1) = pn for all n 2 N. The Approximation Induction Principle (AIP) in Table3 states that two threads are considered identical if their nite approximations at every depth are identical. We write A1 for the set of ( nite and in nite) threads, and n(p) for the projection at depth n of a thread p 2 A1.... ..."

### Tables 4, 5 and 6 list the typical µCRL axioms and rules for the interaction between data and processes. The axioms for summation are denoted by SUM, the axioms for the conditional by COND, and the rules for the booleans by BOOL. Besides the axioms and rules mentioned above, µCRL incorporates two other important proof principles. First, it supports a principle for induction, not only on data but also on data in processes. The second principle is RSP (Recursive Specification Principle), taken from [BW90] and extended to processes with data. Informally, it says that each guarded recursive specification has at most one solution.
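The intuition behind RSP can be sketched operationally: in a guarded specification every unfolding step produces at least one action, so the behaviour is pinned down level by level and cannot admit two different solutions. A toy illustration in our own encoding (a one-place buffer B = Σd r(d).s(d).B, unfolded into its unique trace for a given input stream):

```python
from itertools import islice

def buffer_traces(inputs):
    """Unfold the guarded specification B = r(d).s(d).B for a given input stream.

    Guardedness means every unfolding step emits an action before recursing,
    so the specification determines exactly one trace per input stream - the
    operational intuition behind RSP's 'at most one solution'."""
    for d in inputs:
        yield f"r({d})"        # receive datum d
        yield f"s({d})"        # send datum d, then recurse into B

trace = list(islice(buffer_traces(iter(range(1, 100))), 6))
print(trace)  # ['r(1)', 's(1)', 'r(2)', 's(2)', 'r(3)', 's(3)']
```

An unguarded equation such as X = X would, by contrast, constrain nothing: every process solves it, which is why RSP requires guardedness.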

1994

Cited by 14

### Table 1. Three Families of Accounts of Inductive Inference Each of the three families will be discussed in turn in the three sections to follow, and the entries in this table explicated. While most accounts of inductive inference fit into one of these three families, some span two families. Achinstein's (2001) theory of evidence, for example, draws on ideas from both hypothetical induction and probabilistic induction, in so far as it invokes both explanatory power and probabilistic notions. Demonstrative induction, listed here under inductive generalization, can also be thought of as an extension of hypothetical induction.

2003

"... In PAGE 3: ... As a result, it is possible to group virtually all accounts of induction into three families. This system is summarized in Table1 below. Each family is governed by a principle upon which every account in each family depends.... ..."

### Table 2: Building algorithm of the matrix S(n)k−1. ... many lost data back. We must note that the parallel projections method arrives at the same result, but its convergence rate is far slower. The principle illustrated in Figure 1 could be used for image compression: the reduction of the original image corresponds to the compression, and the induction to the inverse operation.
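The remark about convergence rates can be checked on a toy instance: recover a point in the intersection of two constraint sets (here, two lines through the origin) by projections. Alternating (sequential) projections contract faster per sweep than averaging the two projections (the parallel variant), although both reach the same limit. This is our own minimal example, not the matrix algorithm of the table:

```python
import math

def project(x, u):
    """Orthogonal projection of a 2-D point x onto the line spanned by unit vector u."""
    d = x[0] * u[0] + x[1] * u[1]
    return (d * u[0], d * u[1])

def iterations(step, x, tol=1e-10, limit=10000):
    """Iterate `step` until x is within tol of the intersection (the origin here)."""
    k = 0
    while math.hypot(*x) > tol and k < limit:
        x = step(x)
        k += 1
    return k

u1 = (1.0, 0.0)
theta = math.pi / 6                      # 30 degrees between the two lines
u2 = (math.cos(theta), math.sin(theta))

sequential = lambda x: project(project(x, u1), u2)                      # alternate
parallel = lambda x: tuple((a + b) / 2
                           for a, b in zip(project(x, u1), project(x, u2)))  # average

x0 = (1.0, 2.0)
n_seq, n_par = iterations(sequential, x0), iterations(parallel, x0)
print(n_seq < n_par)  # True: sequential projections need far fewer iterations
```

Per full sweep the sequential scheme contracts by cos²θ = 0.75, while the averaged scheme contracts only by (1 + cos θ)/2 ≈ 0.93, which is the gap the excerpt alludes to.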

1998

Cited by 3

### Table 1: Inductive rules for EMPA integrated interleaving semantics

1998

"... In PAGE 8: ..., and i(M) to denote the multiset obtained by projecting the tuples in multiset M on their i-th component. Thus, e.g., ( 1(PM2))( lt;a; gt;) in the fth part of Table1 denotes the multiplicity of tuples of PM2 whose rst component is lt;a; gt;.... In PAGE 9: ... 3(c) is exactly the result of the application to E of the rules in Table 1 equipped with the auxiliary functions mentioned above. The formal de nition of the integrated interleaving semantics for EMPA is based on the transition relation ???!, which is the least subset of G Act G satisfying the inference rule in the rst part of Table1 . This rule selects the potential moves that have the highest priority level (or are passive), and then merges together those having the same action type, the same priority level and the same derivative term.... In PAGE 9: ... The rst operation is carried out through functions Select : Mu n(PMove) ?! Mu n(PMove) and PL : Act ?! APLev, which are de ned in the third part of Table 1. The second operation is carried out through function Melt : Mu n(PMove) ?! P n(PMove) and partial function Min : (ARate ARate) ?! o ARate, which are de ned in the fourth part of Table1 . We recall that function Melt, whose introduction is motivated by the drawback cited in the example above, avoids burdening transitions with auxiliary labels as well as keeping track of the fact that some transitions may have multiplicity greater than one.... In PAGE 11: ...in the second part of Table1 according to the intuitive meaning of operators explained in Sect.... In PAGE 11: ... The normalization operates in such a way that applying Min to the rates of the synchronizations involving the active action gives as a result the rate of the active action itself, and that each synchronization is assigned the same execution probability. This normalization is carried out through partial function Norm : (AType ARate ARate Mu n(PMove) Mu n(PMove)) ?!o ARate and function Split : (ARate R I ]0;1]) ?! 
ARate, which are de ned in the fth part of Table1 . Note that Norm(a; ~ 1; ~ 2; PM 1; PM 2) is de ned if and only if min(~ ; ~ ) = , which is the condition on action rates we have required in Sect.... In PAGE 27: ... To solve the problem, we follow the proposal of [BBK96] by introducing a priority operator \ ( ) quot;: priority levels are taken to be potential, and they become e ective only within the scope of the priority operator. We thus consider the language L generated by the following syntax E ::= 0 j lt;a; ~ gt;:E j E=L j E[ apos;] j (E) j E + E j E kS E j A whose semantic rules are those in Table1 except that the rule in the rst part is replaced by ( lt;a; ~ gt;; E0) 2 Melt(PM (E)) E a;~ ???! E0 and the following rule for the priority operator is introduced in the second part... In PAGE 33: ...ollowing the guideline of Sect. 3.2, we de ne the transition relation ???! as the least subset of Mu n(V) ActMufin(V) Mu n(V) generated by the inference rule reported in the rst part of Table 2, which in turn is based on the multiset PM (Q) 2 Mu n(ActMufin(V) Mu n(V)) of potential moves of Q 2 Mu n(V) de ned by structural induction in the second part of Table 2. These rules are strictly related to those in Table1 for the integrated interleaving semantics of EMPA terms. The major di erences are listed below and are clari ed by the corresponding upcoming examples: 1.... In PAGE 34: ...6 Consider term E lt;a; ~ gt;:0k; lt;b; ~ gt;:0 whose decomposition is given bydec(E) = fj lt;a; ~ gt;:0 k; id; id k; lt;b; ~ gt;:0jg By applying the rules in Table 2, we get the two independent transitions fj lt;a; ~ gt;:0 k; id jg norm( lt;a;~ gt;; lt;a;~ gt;:0k; id;1) ????????????????????! fj 0k; id jg fj id k; lt;b; ~ gt;:0 jg norm( lt;b;~ gt;;id k; lt;b;~ gt;:0;1) ????????????????????! fj id k; 0 jg as expected. 
If we replaced the three rules for the parallel composition operator with a single rule similar to that in Table1 , then we would get instead the two alternative transitions dec(E) norm( lt;a;~ gt;; lt;a;~ gt;:0k; id;1) ????????????????????! fj 0k; id; id k; lt;b; ~ gt;:0jg dec(E) norm( lt;b;~ gt;;id k; lt;b;~ gt;:0;1) ????????????????????! fj lt;a; ~ gt;:0k; id; id k; 0 jg which are not consistent with the fact that the two subterms of E are independent, thereby resulting in a violation of the concurrency principle (see Sect. 7:4).... In PAGE 49: ... The tool driver, which is written in C [KR88] and uses Lex [Les75] and YACC [Joh75], includes routines for parsing EMPA speci cations and performing lexical, syntactic, and static semantic (closure, guardedness, niteness) checks on the speci cations. The integrated kernel, which is implemented in C, currently contains only the routines to generate the integrated interleaving semantic model of EMPA speci cations according to the rules of Table1 : this kernel will be extended by implementing a EMB checking algorithm. The functional kernel, which is written in C, is based on a version of CWB-NC [CS96] that was retargeted for EMPA using PAC-NC [CMS95].... ..."
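The select-then-merge step described in the excerpt can be sketched in a few lines. This is our own simplification of the Select/Melt idea (numeric rates stand for exponentially timed actions, None for passive ones; the real definitions operate on multisets and derive priority levels via PL):

```python
from collections import defaultdict

def select_and_melt(potential_moves):
    """Keep moves of the highest priority level (plus passive ones), then merge
    moves with the same action type, priority level and derivative term by
    summing their rates - the race policy for exponentially timed actions."""
    top = max(prio for _, prio, _, _ in potential_moves)
    kept = [m for m in potential_moves if m[1] == top or m[2] is None]
    merged = defaultdict(float)
    passive = set()
    for action, prio, rate, derivative in kept:
        if rate is None:
            passive.add((action, prio, derivative))
        else:
            merged[(action, prio, derivative)] += rate
    return dict(merged) | {key: None for key in passive}

# Potential moves of an imaginary term: two copies of <a, 2.0> with the same
# derivative (their rates race, so they are summed) and one passive <b, *>.
moves = [("a", 1, 2.0, "P1"), ("a", 1, 2.0, "P1"), ("b", 0, None, "P2")]
print(select_and_melt(moves))  # {('a', 1, 'P1'): 4.0, ('b', 0, 'P2'): None}
```

Summing the rates of identical moves is what lets the semantics drop multiplicity bookkeeping, which is exactly the drawback Melt is said to avoid.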

Cited by 25

### Table 2: Inductive rules for EMPA integrated location oriented net semantics

1998

"... In PAGE 33: ... 3.2, we de ne the transition relation ???! as the least subset of Mu n(V) ActMufin(V) Mu n(V) generated by the inference rule reported in the rst part of Table2 , which in turn is based on the multiset PM (Q) 2 Mu n(ActMufin(V) Mu n(V)) of potential moves of Q 2 Mu n(V) de ned by structural induction in the second part of Table 2. These rules are strictly related to those in Table 1 for the integrated interleaving semantics of EMPA terms.... In PAGE 33: ...ring rule, as well as di cult to implement, due to the distributed notion of state (see Ex. 7.7). 5. Rate normalization is carried out through function norm : (Act V0 N I +) ?! ActMufin(V) de ned in the third part of Table2 , where V0 is generated by the same syntax as V except that V + V is replaced by V + id and id + V . In order to determine the correct rate of transitions deriving from the synchronization of the same active action with several independent or alternative passive actions of the same type, function norm considers for each transition three parameters: the basic action, the basic place and the passive contribution.... In PAGE 33: ...ynchronization. Again, this is a consequence of the distributed notion of state. 6. Potential move merging is carried out through functions melt1 : Mu n(ActMufin(V) Mu n(V)) ?! P n(ActMufin(V) Mu n(V)) and melt2 : P n(ActMufin(V) Mu n(V)) ?! P n(ActMufin(V) Mu n(V)) de ned in the fourth part of Table2 . Function melt1 merges the potential moves having the same basic action, the same basic place and the same postset by summingtheir passive contributions (see Ex.... In PAGE 34: ...dec(E) = fj ( lt;a; gt;:0 k; id) + lt;c; gt;:0; (id k; lt;b; gt;:0) + lt;c; gt;:0jg By applying the rules in Table2 , we get the following transitions fj ( lt;a; gt;:0k; id) + lt;c; gt;:0jg norm( lt;a; gt;;( lt;a; gt;:0k; id)+id;1) ????????????????????! fj 0k; id jg fj (id k; lt;b; gt;:0) + lt;c; gt;:0jg norm( lt;b; gt;;(id k; lt;b; gt;:0)+id;1) ????????????????????! 
fj id k; 0 jg dec(E) norm( lt;c; gt;;id+ lt;c; gt;:0;1) ????????????????????! fj 0jg If dec(E) is the current marking then all the transitions above are enabled and ring the rst transition results in marking fj 0k; id; (id k; lt;b; gt;:0)+ lt;c; gt;:0 jg which cannot be the preset of any transition labeled with action type c, because the execution of either lt;a; gt; or lt;b; gt; prevents lt;c; gt; from being executed according to the intended meaning of E.... In PAGE 34: ...dec(E) = fj ( lt;a; gt;:0 k; id) + lt;c; gt;:0; (id k; lt;b; gt;:0) + lt;c; gt;:0jg By applying the rules in Table 2, we get the following transitions fj ( lt;a; gt;:0k; id) + lt;c; gt;:0jg norm( lt;a; gt;;( lt;a; gt;:0k; id)+id;1) ????????????????????! fj 0k; id jg fj (id k; lt;b; gt;:0) + lt;c; gt;:0jg norm( lt;b; gt;;(id k; lt;b; gt;:0)+id;1) ????????????????????! fj id k; 0 jg dec(E) norm( lt;c; gt;;id+ lt;c; gt;:0;1) ????????????????????! fj 0jg If dec(E) is the current marking then all the transitions above are enabled and ring the rst transition results in marking fj 0k; id; (id k; lt;b; gt;:0)+ lt;c; gt;:0 jg which cannot be the preset of any transition labeled with action type c, because the execution of either lt;a; gt; or lt;b; gt; prevents lt;c; gt; from being executed according to the intended meaning of E. This fact is detected by the rules in Table2 , i.e.... In PAGE 34: ...ccording to the intended meaning of E. This fact is detected by the rules in Table 2, i.e. they generate no transition labeled with action type c for the marking above, since the alternative id k; lt;b; gt;:0 of lt;c; gt;:0 is not complete. 
To understand the presence of Q3 in the rst two rules for the alternative composition operator, let us now slightly modify term E in the following way E0 ( lt;a; gt;: lt;b; gt;:0 kfbg lt;b; gt;:0) + lt;c; gt;:0 where dec(E0) = fj ( lt;a; gt;: lt;b; gt;:0kfbg id) + lt;c; gt;:0; (id kfbg lt;b; gt;:0) + lt;c; gt;:0jg By applying the rules in Table2 , we get the two transitions fj ( lt;a; gt;: lt;b; gt;:0kfbg id) + lt;c; gt;:0jg norm( lt;a; gt;;( lt;a; gt;: lt;b; gt;:0 kfbg id)+id;1) ????????????????????! fj lt;b; gt;:0kfbg id jg dec(E0) norm( lt;c; gt;;id+ lt;c; gt;:0;1) ????????????????????! fj 0jg If dec(E0) is the current marking then all the transitions above are enabled and ring the rst transition results in marking fj lt;b; gt;:0 kfbg id; (id kfbg lt;b; gt;:0)+ lt;c; gt;:0jg which is the preset of the following tran- sitionfj lt;b; gt;:0kfbg id; (id kfbg lt;b; gt;:0) + lt;c; gt;:0jg norm( lt;b; gt;;(id kfbg lt;b; gt;:0)+id;1) ????????????????????! fj 0kfbg id; id kfbg 0jg If Q3 were not taken into account, then the transition above would not be constructed. Example 7.... In PAGE 34: ... Example 7.6 Consider term E lt;a; ~ gt;:0k; lt;b; ~ gt;:0 whose decomposition is given bydec(E) = fj lt;a; ~ gt;:0 k; id; id k; lt;b; ~ gt;:0jg By applying the rules in Table2 , we get the two independent transitions fj lt;a; ~ gt;:0 k; id jg norm( lt;a;~ gt;; lt;a;~ gt;:0k; id;1) ????????????????????! fj 0k; id jg fj id k; lt;b; ~ gt;:0 jg norm( lt;b;~ gt;;id k; lt;b;~ gt;:0;1) ????????????????????! fj id k; 0 jg as expected. If we replaced the three rules for the parallel composition operator with a single rule similar to that in Table 1, then we would get instead the two alternative transitions dec(E) norm( lt;a;~ gt;; lt;a;~ gt;:0k; id;1) ????????????????????! fj 0k; id; id k; lt;b; ~ gt;:0jg dec(E) norm( lt;b;~ gt;;id k; lt;b;~ gt;:0;1) ????????????????????! 
fj lt;a; ~ gt;:0k; id; id k; 0 jg which are not consistent with the fact that the two subterms of E are independent, thereby resulting in a violation of the concurrency principle (see Sect.... In PAGE 34: ...iolation of the concurrency principle (see Sect. 7:4). Example 7.7 Consider termE ( lt;a; gt;:0 + lt;c; 11;1 gt;:0) kfcg( lt;b; gt;:0 + lt;c; gt;:0) whose decomposition comprises places V1 kfcg id and id kfcg V2 where V1 lt;a; gt;:0 + lt;c; 11;1 gt;:0 V2 lt;b; gt;:0 + lt;c; gt;:0 By applying the rules in Table2 , we get the three transitions... In PAGE 35: ... Example 7.8 Consider term E ( lt;a; gt;:0kfag lt;a; gt;:(0 + 0)) + ( lt;a; gt;:0kfag lt;a; gt;:0) whose decomposition comprises places (V1 kfag id) + (V1 kfag id), (V1 kfag id) + (id kfag V3), (id kfag V2) + (V1 kfag id) and (id kfag V2) + (id kfag V3) whereV1 lt;a; gt;:0 V2 lt;a; gt;:(0 + 0) V3 lt;a; gt;:0 By applying the rules in Table2 , we get the following two transitions dec(E) norm( lt;a; gt;;(V1 kfag id)+id;1) ????????????????????! fj 0kfag id; id kfag(0 + 0) jg dec(E) norm( lt;a; gt;;id+(V1 kfag id);1) ????????????????????! fj 0kfag id; id kfag 0 jg If dec(E) is the current marking then both transitions are enabled and the normalizing factor is 1 for both transitions, as expected. This example motivates the use of V0 instead of V for expressing the basic place: if V were used, then the two transitions above would have the same basic place (beside the same basic action), so they would be given the wrong normalizing factor 1=2 by function norm.... In PAGE 35: ... Example 7.9 Consider termE lt;a; gt;:0kfag(( lt;a; gt;:0 + lt;a; gt;:0) k; lt;a; gt;:0) whose decomposition comprises places V1 kfag id, id kfag(V2 k; id) and id kfag(id k; V3) where V1 lt;a; gt;:0 V2 lt;a; gt;:0 + lt;a; gt;:0 V3 lt;a; gt;:0 By applying the rules in Table2 , we get the following two transitions fj V1 kfag id; id kfag(V2 k; id) jg norm( lt;a; gt;;V1 kfag id;2) ????????????????????! 
fj 0kfag id; id kfag(0 k; id) jg fj V1 kfag id; id kfag(id k; V3) jg norm( lt;a; gt;;V1 kfag id;1) ????????????????????! fj 0kfag id; id kfag(id k; 0) jg where value 2 for the passive contribution of the rst transition is determined by function melt1. If dec(E) is the current marking then both transitions are enabled and the normalizing factor is 2=3 for the rst transition, and 1=3 for the second transition, as expected.... In PAGE 35: ... Example 7.10 Consider termE lt;a; gt;:0kfag( lt;a; gt;: lt;a; gt;:0 k; lt;a; gt;:0) whose decomposition comprises places V1 kfag id, id kfag(V2 k; id) and id kfag(id k; V3) where V1 lt;a; gt;:0 V2 lt;a; gt;: lt;a; gt;:0 V3 lt;a; gt;:0 By applying the rules in Table2 , we get the following three transitions fj V1 kfag id; id kfag(V2 k; id) jg norm( lt;a; gt;;V1 kfag id;1) ????????????????????! fj 0kfag id; id kfag(V3 k; id) jg fj V1 kfag id; id kfag(id k; V3) jg norm( lt;a; gt;;V1 kfag id;1) ????????????????????! fj 0kfag id; id kfag(id k; 0) jg fj V1 kfag id; id kfag(V3 k; id) jg norm( lt;a; gt;;V1 kfag id;1) ????????????????????! fj 0kfag id; id kfag(0 k; id) jg... In PAGE 36: ... Example 7.11 Consider termE ( lt;a; gt;:0 + lt;a; gt;:0) kfag( lt;a; gt;:0k; lt;a; gt;:0) whose decomposition comprises places V1 kfag id, id kfag(V2 k; id) and id kfag(id k; V2) where V1 lt;a; gt;:0 + lt;a; gt;:0 V2 lt;a; gt;:0 By applying the rules in Table2 , we get the following two transitions fj V1 kfag id; id kfag(V2 k; id) jg norm( lt;a;2 gt;;( lt;a; gt;:0+id)kfag id);1) ????????????????????! fj 0kfag id; id kfag(0 k; id) jg fj V1 kfag id; id kfag(id k; V2) jg norm( lt;a;2 gt;;( lt;a; gt;:0+id)kfag id);1) ????????????????????! fj 0kfag id; id kfag(id k; 0) jg each of which is obtained by applying function melt2 to two potential moves having as a basic place ( lt;a; gt;:0 + id) kfag id and (id + lt;a; gt;:0) kfag id, respectively. 
If dec(E) is the current marking then both transitions are enabled and the normalizing factor is 1=2 for both transitions, as expected.... ..."
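The normalizing factors in these examples follow one simple rule: when an active action synchronizes with several passive alternatives, each synchronization is weighted by its passive contribution divided by the total contribution, so the split rates sum back to the active rate. A sketch in our own code (not the paper's norm function, which also tracks basic actions and places):

```python
def normalized_rates(active_rate, passive_contributions):
    """Split an active rate over passive synchronization alternatives in
    proportion to their contributions; the split rates sum to the active rate."""
    total = sum(passive_contributions)
    return [active_rate * c / total for c in passive_contributions]

# As in Example 7.9: passive contributions 2 and 1 yield factors 2/3 and 1/3.
rates = normalized_rates(1.0, [2, 1])
print(rates)
```

With equal contributions the factors are uniform, matching the 1/2 and 1/2 outcome of Example 7.11.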

Cited by 25

### Table 2 describes the features that were selected for the subsequent induction step. The first three features correspond to the average luminance and colour differences in a region. The location of the region is expressed as the X and Y co-ordinates of the region centroid. Orientation is expressed as the sine and cosine of the angle of the principal axis. The next feature corresponds to the principal mode of the PCA (principal component analysis [12]) transformed region boundary description. The last two features arise from the use of a psychophysically plausible model of texture, based upon Gabor filters. In this case the features correspond to two high-frequency (128 and 256) isotropic Gabor filters. A full description of all features is presented in [11].
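The centroid and orientation features can be computed from second-order central moments of the region's pixel coordinates, which is equivalent to a 2-D PCA of the region. A minimal sketch in our own code (not the authors' implementation):

```python
import math

def principal_axis(pixels):
    """Centroid and principal-axis angle of a region given as (x, y) pixels.

    The angle comes from the second-order central moments, i.e. the leading
    eigenvector of the 2x2 coordinate covariance matrix (2-D PCA)."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pixels)
    mu02 = sum((y - cy) ** 2 for _, y in pixels)
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels)
    theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)   # standard moment formula
    return (cx, cy), math.sin(theta), math.cos(theta)

# A thin region along the 45-degree diagonal
region = [(i, i) for i in range(5)]
(cx, cy), s, c = principal_axis(region)
print(round(cx, 3), round(s, 3), round(c, 3))  # 2.0 0.707 0.707
```

Returning sine and cosine rather than the raw angle avoids the wrap-around discontinuity at ±π/2, which is presumably why the features are encoded that way.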

1999

"... In PAGE 3: ... Class No. Class # Train examples # Validation examples # Test examples 1 NotRoad 8381 1796 1797 2 Road 1157 248 249 TOTAL 13628 9538 2044 2046 Table2 : Selected features for each region that are considered for learning. 10 Selected FEATURES No.... ..."

Cited by 4

### Table 2: Effect of Induction-based Learning on BMC

"... In PAGE 5: ...1. Table2 shows the runtime for a few industrial instances. We can see that the induction-based learning can be very powerful, espe- cially for hard UNSAT cases.... ..."