### Table 3. SST-Rules Common to All Logics

1994

"... In PAGE 6: ...SST-Rules Table3 shows the rules which are common to all logics viz. those which add local or global assumptions and those which reduce , and formulae.... ..."

Cited by 52

### Table 8. Common nonparametric statistics

"... In PAGE 6: ... This study demonstrated that the use of non- parametric techniques is implicated whenever there is doubt regarding the fulf_illment of parametric assumptions, such as normality or sample size. Which non-parametric test should we use? The most common non-parametric tests can be found in Table8 . Please refer to the following statistical texts for the derivation and calculation of these statistics, as this is beyond the scope or intention of this paper: Nonparametric Statistics for the Behavioural Science (Siegel Sand Castellan NJ, 1988) (6), Applied Nonparametric Statistical Methods (Sprent P and Smeeton NC, 2001) (9), Nonparametric Statistical Inference (Gibbons JD, 1985) (8), Nonparametrics: Statistical Methods Based On Ranks (Lehmann EL, 1975) (18), Practical Nonparametric Statistics (Conover WJ, 1980) (19), Fundamentals of Nonparametric Statistics (Pierce A, 1970) (15), and Essentials of Research Methods in Health, Physical Education, Exercise Science and Recreation (Berg KE and Latin RW, 2003) (10).... ..."

### Table 6. Representation of all integers. Note that although the three square theorem is commonly ascribed to Legendre, his "proof" depended on an unsubstantiated assumption only later established by Dirichlet, and the first complete proof is due to Gauss. We finish by noting that in problems involving sums of two squares, methods more effective than the circle method can be brought into play (see especially Hooley (1981a,b) and Brüdern (1987)).
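The three-square theorem referenced in the caption states that a non-negative integer n is a sum of three squares exactly when n is not of the form 4^a(8b + 7). A small script (function names are ours) can cross-check this criterion against brute-force search:

```python
import math

def three_square_criterion(n):
    """Legendre's three-square theorem (first complete proof by Gauss):
    n >= 0 is a sum of three squares iff n is NOT of the form 4^a (8b + 7)."""
    while n > 0 and n % 4 == 0:
        n //= 4
    return n % 8 != 7

def sum_of_three_squares(n):
    """Brute-force witness search: return (a, b, c) with a^2 + b^2 + c^2 = n,
    or None if no representation exists."""
    r = math.isqrt(n)
    for a in range(r + 1):
        for b in range(a, r + 1):
            c2 = n - a * a - b * b
            if c2 < 0:
                break
            c = math.isqrt(c2)
            if c * c == c2:
                return (a, b, c)
    return None

# Cross-check the criterion against exhaustive search on a small range.
for n in range(300):
    assert (sum_of_three_squares(n) is not None) == three_square_criterion(n)
print(sum_of_three_squares(42))  # (1, 4, 5): 1 + 16 + 25 = 42
```

The first excluded values are 7, 15, 23, 28, ...: each leaves remainder 7 mod 8 after dividing out all factors of 4.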

### Table 7: Common Typing Rules

"... In PAGE 8: ... An example of such a structural rule, using the simple object types of Ob1 lt;:, is the follow- ing modi cation of the rule (Val Update) of Table 8: (Struct Val Update) For A [li : Bi i21:::n] E ` C lt;:A E ` a : C E;x : C ` b : Bj E ` a:lj ( amp;(x : C)b : C In our interpretation, structural assumptions on object types are re ected as structural assumptions on recursive types. Speci cally, structural rules for object types are vali- dated if we strengthen the target calculus with a structural rule for recursive types: (Struct Val Unfold) E ` C lt;: (X)BfXg E ` a : C E ` unfold(a) : BfCg The rule (Struct Val Unfold) can be seen as a consequence of assuming that any subtype of a recursive type arises through the re exivity rule ((Sub Re ) of Table7 ) or the subtyping rule for recursive types ((Sub Rec) of Table 9). For example, suppose that E ` C lt;: (X)BfXg because of (Sub Rec).... In PAGE 11: ... (This is a convenient departure from the original calculus of [AC95a]: the terms described here contain more type information.) The syntax is: Environments E ::= ; j E;x : A Types A;B ::= Top j [li : Bi i21:::n] Variables x;y Terms a;b ::= x j [li = amp;(xi : A)bi i21:::n] j a:l j a:l ( amp;(x : A)b j clone(a) j let x : A = a in b D The Obstr lt;: Calculus The calculus Obstr lt;: consists of the rules given in Table7 , the rules (Env X), (Type X), (Sub X) given in Table 9, and the rules of Table 11. It has the following syntax: Environments E ::= ; j E; x : A j E;X lt;:A Type Variables X;Y Types A;B ::= X j Top j Obj(X)[li i : Bi i21:::n] with i 2 f+;?;0g Variables x;y Terms a; b ::= x j obj(X =A)[li = amp;(xi:X)bi i21:::n] j a:l j a:l ( (Y lt;:A;y : Y ) amp;(x : Y )b... ..."

### Table 1 Material properties of swine brain tissue (Miller, 1999)

1957

"... In PAGE 4: ... The common assumption of brain tissue incompressibility results in the third invariant being equal to 1. The material constants, obtained by rather com- plicated, iterative procedure (Miller, 1999) are listed in Table1 . The proposed model is linear in material coe$- cients Cij0 (Miller, 1999).... ..."

Cited by 2

### Table 1: The common outline of problems (1) and (2).

"... In PAGE 3: ... The common schematic outline dltk abs of dltk 1 | proof of (1) | and of dltk 2 | proof of (2) | is the proof of (8x y: (q(x; y) 8z: (p(x; z) p(y; z))) ^ ^ q(a; b) ^ (p(a; a) _ p(b; b))) 9x: p(b; x) (3) Formula (3) is obtained by applying fdltk 1 to (1). The proof of (3) | which has been given in Natural Deduc- tion [Prawitz, 1965] | is shown in Table1 . Each line of Table 1 represents a proof step of dltk abs and has a la- bel to identify it, a formula, and a justi cation, which explains what inference rule has been applied to ob- tain the formula.... In PAGE 3: ... The proof of (3) | which has been given in Natural Deduc- tion [Prawitz, 1965] | is shown in Table 1. Each line of Table1 represents a proof step of dltk abs and has a la- bel to identify it, a formula, and a justi cation, which explains what inference rule has been applied to ob- tain the formula. The proof starts with assumption 1,... ..."

### Table 1: Selected common expansions of NP as Subject vs. Object

1997

"... In PAGE 2: ... But this context- free assumption is actually quite wrong. For example, Table1 shows how the probabilities of expanding an NP node (in the Penn Treebank) di er wildly between subject position and object position. Pronouns, proper names and de nite NPs appear more commonly in subject position while NPs containing post-head modi ers and bare nouns occur more commonly in object position (this re ects the fact that the subject normally expresses the sentence-internal topic [Manning, 1996]).... ..."

Cited by 26

### Table 8: Breakdown of the error on Rb obtained from the multivariate tagging for each year, and on the combined value. Common systematic errors are only given in the column of the combined analysis. [...] fit to the simulation. In order to be consistent in the average, these errors have been recomputed using the method described above. Finally, they were conservatively assumed to be fully correlated. With these assumptions the final result is Rb = 0.2194 ± 0.0032(stat.) ± 0.0022(syst.) − 0.0049 × (Rc − 0.172)/0.172

1996

### Table 8: Endo2_{m,n} showing regions of object-location likelihood computed for each gridpoint (m, n) by superimposing locality patterns from Endo1_{i,j} values. [...] and understanding (representation). It is based on the assumption that some deeper representational level or core structure might be identified as a common base for the different notions of meaning developed so far in theories of referential and situational semantics as well as some structural or stereotype semantics. For the purpose of testing semiotic processes, their situational complexity has to be reduced by abstracting away irrelevant constituents, hopefully without

### Table 3: Comparison of our and Calder's implementations of Ball/Wu/Larus heuristics. [...] branch misprediction rate. But the values of branch misprediction rates have a wide range and may be very low for some programs and high for others. The branch misprediction rate varies from 44% for the Espresso benchmark to 0.2% for the Alvinn benchmark. This happens because static heuristic-based branch predictions are based on the assumption that all programs have common behavioral characteristics. This assumption is used in developing the heuristic sets through the observation of the behavior of different programs and by conclusions based on intuition. All future programs are expected to follow similar behavior patterns. The predictability of branches also affects branch misprediction rates. In static branch prediction approaches, the predicted directions for branches are set before execution starts and cannot be adjusted to match branch behavior during run time. If a branch follows different paths with approximately equal frequencies, static prediction cannot do better than predict the more frequent direction. For example, if a branch is executed 30 times, and 17 times follows the taken path (and 13 times the not-taken path), the static branch misprediction rate is 13/30 = 0.43, or 43%, in case the taken path is predicted. The upper bound for the performance of static program-based approaches is the performance of the semi-static profile-based branch prediction. Comparing the branch misprediction rates for profile-based prediction and the branch misprediction rate for the Ball/Wu/Larus static heuristic-based approach (which up to now has been the best of the

"... In PAGE 5: ... The Store heuristic and Pointer heuristic point in di erent directions for di erent compilers. In Table3 , we present the branch misprediction rates obtained from applying the Wu and Larus heuristic set and using the Dempster-Shafer theorem in order to calculate the probability for every branch to be taken or not taken. We predict that a branch will be taken if the probability value of the branch to be taken is greater than or equal to 0.... ..."