### Table 1. Final (Posterior) Probability of the Null Hypothesis after Observing Various Bayes Factors, as a Function of the Prior Probability of the Null Hypothesis

1999

"... In PAGE 2: ... So it is with the Bayes factor: It modifies prior probabilities, and after seeing how much Bayes factors of certain sizes change various prior probabilities, we begin to understand what represents strong evidence, and weak evidence. Table 1 shows us how far various Bayes factors move prior probabilities, on the null hypothesis, of 90%, 50%, and 25%. These correspond, respectively, to high initial confidence in the null hypothesis, equivocal confidence, and moderate suspicion that the null hypothesis is not true.... ..."

Cited by 4
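The mechanics described in the snippet above are just Bayes' rule on the odds scale: the posterior odds of the null equal the Bayes factor times the prior odds. A minimal sketch (the function name and the illustrative Bayes factors are not from the source):

```python
def posterior_null(prior_null, bayes_factor):
    """Posterior P(H0 | data) from prior P(H0) and a Bayes factor
    BF = P(data | H0) / P(data | H1). Works on the odds scale:
    posterior odds = BF * prior odds."""
    prior_odds = prior_null / (1.0 - prior_null)
    posterior_odds = bayes_factor * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

# The three priors discussed in the snippet: high confidence in the null,
# equivocal confidence, and moderate suspicion the null is false.
for prior in (0.90, 0.50, 0.25):
    for bf in (1 / 5, 1 / 10, 1 / 100):  # illustrative Bayes factors against the null
        print(f"prior={prior:.2f}  BF={bf:.3g}  posterior={posterior_null(prior, bf):.3f}")
```

For example, a Bayes factor of 1/10 moves a 50% prior on the null to a posterior of 1/11 (about 9%), while the same evidence only moves a 90% prior down to about 47%, which is the sense in which the table calibrates "strong" versus "weak" evidence.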

### Table 4, the following conclusions are evident:

1988

Cited by 925

### TABLE contains the following relations:

1992

Cited by 19

### TABLE I are as follows:

1993

Cited by 13

### Table VI is shown as follows:

in Ranking function optimization for effective web search by genetic programming: An empirical study

2004

Cited by 10

### Table 4 as follows.

1995

"... In PAGE 4: ... Predicting Dynamics We are interested in the degree to which knowing #15 or #1B gives information about the attractor governing the dynamics. Table 4 gives the contingency tables relating the dependent variable, a, and the independent variables, #15 and #1B. (The tables are computed for the 256 rules, rather than for the 88 equivalence classes, to weight class multiplicity properly.)... In PAGE 5: ... Reconstructability of Elementary Cellular Automata 5 Table 4. Contingency tables: #15 or #1B vs.... ..."

Cited by 3
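The contingency tables the snippet describes are simple cross-tabulations of two categorical variables over all 256 rules. A minimal sketch with placeholder data (the parameter bins and attractor classes below are invented for illustration, not taken from the paper):

```python
from collections import Counter

# Hypothetical stand-in data: for each of the 256 elementary CA rules,
# a binned parameter value and an attractor class a.
param_bin = [rule % 3 for rule in range(256)]        # placeholder parameter bins
attractor = [(rule // 3) % 4 for rule in range(256)]  # placeholder attractor classes

# Contingency table: counts of (parameter bin, attractor class) pairs,
# computed over all 256 rules rather than the 88 equivalence classes.
table = Counter(zip(param_bin, attractor))
for (p, a), n in sorted(table.items()):
    print(f"param_bin={p}  attractor={a}  count={n}")
```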

### Table 1 are as follows.

1995

"... In PAGE 7: ... .91 1.00 Table 1: Summary of RMS retrieval errors for both neural net and regression methods, for three channel sets. TIGR error is error retrieving odd-numbered profiles.... In PAGE 9: ... .04K. Table 1 and Figures 2, 3, and 4 give a summary of RMS error for both neural nets and regression, for several channel sets. Training (or regression) is performed on even-numbered TIGR profiles.... In PAGE 13: ... If T̂b is Tb with added noise, then let T̂b′ = B₁ᵀ T̂b, let C be the least-squares solution to C T̂b′ = T′, and D = B₂ C B₁ᵀ. Table 1 and Figures 2, 3 and 4 summarize RMS testing error for the regression method, and compare regression results with neural nets. As with the neural nets, the eigenvector bases are determined from and the regression is performed on even-numbered TIGR profiles, while the error shown is for retrievals of the odd-numbered TIGR profiles.... ..."

Cited by 1
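The regression step in the snippet above amounts to fitting a linear operator by least squares and scoring it with an RMS error. A minimal sketch with synthetic data (the array shapes, noise level, and variable names are assumptions, not the paper's actual bases or TIGR profiles):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: rows are training samples, columns are reduced
# coefficients of brightness temperatures (predictors) and temperatures (targets).
Tb_coeffs = rng.normal(size=(100, 8))                  # noisy predictor coefficients
true_map = rng.normal(size=(8, 5))                     # unknown linear relation
T_coeffs = Tb_coeffs @ true_map + 0.01 * rng.normal(size=(100, 5))

# Least-squares solution C to  Tb_coeffs @ C ≈ T_coeffs  (orientation of the
# operator differs from the snippet's C T̂b′ = T′, but the fit is the same idea).
C, residuals, rank, sv = np.linalg.lstsq(Tb_coeffs, T_coeffs, rcond=None)

# RMS error of the fitted linear retrieval on the training set.
rms_error = np.sqrt(np.mean((Tb_coeffs @ C - T_coeffs) ** 2))
print(f"training RMS error: {rms_error:.4f}")
```

In the paper's setup the fit is done on even-numbered profiles and the reported error on the held-out odd-numbered ones; the sketch above only shows the fitting and scoring steps.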

### Table 4), are computed as follows:

"... In PAGE 9: ...9,10 This system was initially developed just as a public repository of data, but an increasing number of additional features have been added over time, urging a redesign of the system given the moving scope. The Molecular Genetics Polymorphisms affect probe intensity Table 4: Short oligonucleotide arrays: ANOVA table for probe set data Source of variation Sum of squares d.f.... ..."