### Table 4. Comparison of Bayesian active learning and Bayesian immediate learning on Profile 83.

2003

"... In PAGE 6: ...g., Table 4). This improvement is partly due to the profile (term and term weight) learning algorithm, which also benefits from the additional training data generated by the active learner.... ..."

Cited by 6
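
The snippet above describes an active learner whose queried labels feed back into the profile (term-weight) learner. A minimal sketch of the general idea, not the paper's algorithm: pick the unlabeled document whose predicted relevance is most uncertain and request its label. The scores and the 0.5 decision threshold are illustrative assumptions.

```python
# Hedged sketch of uncertainty-based active learning (not the paper's method).
# The pool of predicted relevance probabilities is a toy assumption.
def most_uncertain(scores):
    """Return the index of the score closest to the 0.5 decision boundary."""
    return min(range(len(scores)), key=lambda i: abs(scores[i] - 0.5))

pool_scores = [0.95, 0.48, 0.10, 0.62]   # predicted P(relevant) per document
query = most_uncertain(pool_scores)
print("request label for document", query)  # index 1, score 0.48
```

The label obtained for the queried document is then added to the training set, which is the "additional training data" the snippet credits for the improvement.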

### Table 3: Bayesian Robustness

1999

"... In PAGE 8: ...7. In Table 3, a design was created assuming a prior of Be(1,1), and then its properties are evaluated for a prior of Be(3,3). A second design was created reversing the roles of the priors.... In PAGE 8: ... One of the reasons for this is that the adaptive nature of 2-stage designs allows the second stage to incorporate information collected during the first. Pointwise operating characteristics of the two designs in Table 3 are shown in Figure 4. Despite the robustness of the designs, they also clearly differ.... ..."

Cited by 1
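
The snippet describes evaluating a design built under one Beta prior against a different Beta prior. A minimal sketch of that kind of check for a one-stage binomial design, using the exact beta-binomial predictive distribution; the design parameters (n = 20, cutoff = 12) are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: compare a one-stage binomial design's predictive probability
# of exceeding a cutoff under two Beta priors, Be(1,1) and Be(3,3).
from math import comb, lgamma, exp

def log_beta(a, b):
    """Log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_pmf(k, n, a, b):
    """P(X = k) when X | p ~ Binomial(n, p) and p ~ Be(a, b)."""
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

def predictive_tail(n, cutoff, a, b):
    """Predictive probability that at least `cutoff` successes occur."""
    return sum(beta_binomial_pmf(k, n, a, b) for k in range(cutoff, n + 1))

n, cutoff = 20, 12
for a, b in [(1, 1), (3, 3)]:
    print(f"Be({a},{b}): P(X >= {cutoff}) = {predictive_tail(n, cutoff, a, b):.4f}")
```

Running the same design through both priors, as the paper does for its two-stage designs, shows how sensitive the operating characteristics are to the prior choice.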

### Table 2. CMDB formulations.

"... In PAGE 8: ... Atlantic Research constructed a pilot plant and began development of two CMDB formulations in 1956. Table 2 shows what comprised the two formulations. Henderson remembers casting the propellant grains and doing static firing, but because his records do not include the dates, he guesses the work occurred in 1957.... ..."

Cited by 1

### Table 2: Comparison of formulations

2003

"... In PAGE 17: ... For the other formulations, we have a pure branch and bound algorithm. The results are given in Table 2. The columns gap, CPU and first give the duality gap at the root node (i.... ..."

Cited by 4

### Table 3.3 Posterior means (g/m²) of centered main effects for Bayesian G+G formulation with a separate Gaussian treatment prior for each factor, fitting and omitting two-factor interactions

1993

"... In PAGE 10: ... There is still no evidence of interactions but now all main effects attain significance at the 5% level, largely as a result of the very substantial reduction in standard errors. Results for the standard and spatial analyses, ignoring interactions, are given in Besag and Kempton (1986, Table 3) and are reproduced in Tables 3.... ..."

Cited by 10

### Table 3. Segmentation Step

Based on the estimates given by the ICE procedure, we can compute an unsupervised 3D Markovian segmentation of SPECT volumes. In this framework, the Markovian segmentation can be viewed as a statistical labeling problem according to a global Bayesian formulation in which the posterior distribution P(X = x | G = g) ∝ exp(−U(x, g)) has to be maximized [10]. The corresponding posterior energy is:

2000

Cited by 4
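
Maximizing a posterior of the form P(x | g) ∝ exp(−U(x, g)) is equivalent to minimizing the energy U. A minimal sketch of that idea using Iterated Conditional Modes (ICM) on a 1-D signal; this is a generic illustration, not the paper's 3D SPECT algorithm, and the labels, class means, and Potts smoothness weight are toy assumptions.

```python
# Hedged sketch: greedy local minimization of
#   U(x, g) = sum_i (g_i - mu_{x_i})^2 + beta * #{neighbor pairs with differing labels}
# by ICM, i.e. repeatedly setting each label to its locally optimal value.
def icm_segment(g, means, beta=1.0, sweeps=10):
    """Segment signal g into len(means) classes by energy minimization."""
    # Initialize each site with the label of the nearest class mean.
    x = [min(range(len(means)), key=lambda l: (v - means[l]) ** 2) for v in g]
    for _ in range(sweeps):
        changed = False
        for i in range(len(g)):
            def local_energy(label):
                e = (g[i] - means[label]) ** 2          # data term
                if i > 0 and x[i - 1] != label:          # left neighbor
                    e += beta
                if i < len(g) - 1 and x[i + 1] != label:  # right neighbor
                    e += beta
                return e
            best = min(range(len(means)), key=local_energy)
            if best != x[i]:
                x[i], changed = best, True
        if not changed:  # converged to a local minimum of U
            break
    return x

signal = [0.1, 0.0, 0.2, 0.9, 1.1, 1.0, 0.1, 0.05]
print(icm_segment(signal, means=[0.0, 1.0], beta=0.5))
```

ICM only finds a local minimum of U; the global MAP labeling generally requires stochastic methods such as simulated annealing.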

### Table 3: Bounds and formulations.

1996

"... In PAGE 24: ... While we have focused there on the physical meaning of these relations, we show in this section how they can be used to provide performance bounds for MQNETs by solving appropriate mathematical programming problems. We shall consider in what follows a linear cost function c(x) = Σ_{j∈N} c_j x_j, and denote by Z the minimum cost achievable under the appropriate class of policies (dynamic stable or static, nonidling and stable), Z = min{ Σ_{j∈N} c_j x_j : x ∈ X }. We have summarized in Table 3 several lower bounds and their corresponding mathematical programming formulations, obtained by selecting appropriate subsets of the constraints developed in previous sections.... ..."
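
The lower bounds in the snippet come from minimizing the same linear cost over a relaxation: keeping only a subset of the constraints enlarges the feasible set, so the minimum can only decrease. A minimal brute-force sketch of that principle; the cost vector, grid, and constraints are toy assumptions, not the MQNET constraints of the paper.

```python
# Hedged sketch: dropping constraints yields a lower bound on
#   Z = min{ sum_j c_j x_j : x in X }.
from itertools import product

def minimize(cost, constraints, grid):
    """Minimum of the linear cost over grid points satisfying all constraints."""
    feasible = [x for x in grid if all(con(x) for con in constraints)]
    return min(sum(cj * xj for cj, xj in zip(cost, x)) for x in feasible)

c = [3, 2]                                 # cost coefficients c_j
grid = list(product(range(5), repeat=2))   # x in {0,...,4}^2
full = [lambda x: x[0] + x[1] >= 4, lambda x: x[0] >= 1]
relaxed = full[:1]                         # keep only the first constraint

Z_full = minimize(c, full, grid)       # optimum over the full set X
Z_bound = minimize(c, relaxed, grid)   # lower bound from a constraint subset
print(Z_bound, "<=", Z_full)
```

Each row of the paper's Table 3 corresponds to a different choice of constraint subset, trading tightness of the bound against the cost of solving the resulting mathematical program.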

### Table 1 Characterization of the most common analytical learning techniques

2003

"... In PAGE 5: ... Understanding the advantages/disadvantages of applying a given machine learning technique to a given planning system may help to make sense of any research bias that becomes apparent in the survey tables. The primary types of analytical learning systems developed to date along with their relative strengths and weaknesses and an indication of their inductive biases are listed in Table 1. The major types of pure inductive learning systems are similarly described in Table 2.... In PAGE 7: ... We discuss this special case, in which planning and learning are inextricably intertwined, in the sidebar on this page. Analogical learning is only represented in Table 1 by a specialized and constrained form known as derivational analogy, and the closely related case-based reasoning formalism. More flexible and powerful forms of analogy can be envisioned (c.... ..."

Cited by 7