### Table 2. The Impact of Bayesian Learning on Negotiations

### Table 1: Parameter smoothing with Bayesian learning.

### Table 4. Comparison of Bayesian active learning and Bayesian immediate learning on Profile 83.

2003

"... In PAGE 6: ... (e.g., Table 4). This improvement is partly due to the profile (term and term weight) learning algorithm, which also benefits from the additional training data generated by the active learner.... ..."

Cited by 6


### Table 4. Results for Bayesian learning and decision tree learning on annotated data.

### Table 2. Misclassification rates of Bayesian learning and HONEST, and the percentage improvement of HONEST relative to Bayesian learning.

"... In PAGE 8: ...0% to 18.6%, as shown in Table 2. Using the HONEST networks, initialized with the Bayesian coefficients, it was possible to reduce the misclassification rates, as shown in Table 2 and illustrated in Figure 8, by a percentage reduction varying from 28.... In PAGE 8: ... The standard deviations of the learned exponents are shown in Table 4. Examining Table 2, there are several trends that can be observed from the... ..."

### Table 1. The Lazy Bayesian Rule learning algorithm

2000

"... In PAGE 8: ...1. An operational description of LBR. Table 1 outlines the Lbr algorithm. For a given training set and each test example, Lbr starts from a special Bayesian rule whose antecedent is true.... ..."

Cited by 27

### Table 1: The Lazy Bayesian Rule learning algorithm

1999

"... In PAGE 3: ...

    LocalNB = a NB classifier trained using Att on D
    Errors = errors of LocalNB estimated using N-CV on D
    Cond = true
    REPEAT
        TempErrorsbest = the number of cases in D + 1
        FOR each attribute A in Att whose value vA on Etest is not missing DO
            Dsubset = cases in D with A = vA
            TempNB = a NB classifier trained using Att - {A} on Dsubset
            TempErrors = errors of TempNB estimated using N-CV on Dsubset
                         + the portion of Errors in D - Dsubset
            IF ((TempErrors < TempErrorsbest) AND
                (TempErrors is significantly lower than Errors)) THEN
                TempNBbest = TempNB
                TempErrorsbest = TempErrors
                Abest = A
        IF (an Abest is found) THEN
            Cond = Cond ∧ (Abest = vAbest)
            LocalNB = TempNBbest
            D = Dsubset corresponding to Abest
            Att = Att - {Abest}
            Errors = errors of LocalNB estimated using N-CV on D
    UNTIL (no Abest is found)
    classify Etest using LocalNB
    RETURN the class

This local naive Bayesian classifier uses only those attributes that do not appear in the rule's antecedent. Table 1 outlines the Lbr algorithm. During the generation of a Bayesian rule, only attribute-value pairs that utilize attribute values of the test example are considered.... ..."

Cited by 17
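The Lbr procedure excerpted above can be sketched in Python. This is a minimal illustration, not the authors' implementation: all function names are hypothetical, the naive Bayes is a plain Laplace-smoothed categorical model, training-set error stands in for the N-fold cross-validation estimate, and the significance test is simplified to a strict inequality.

```python
import math
from collections import Counter, defaultdict

def train_nb(rows, labels, atts):
    """Laplace-smoothed categorical naive Bayes over the attributes `atts`."""
    model = {"priors": Counter(labels), "cond": defaultdict(Counter),
             "atts": list(atts), "n": len(labels)}
    for x, y in zip(rows, labels):
        for a in atts:
            model["cond"][(a, y)][x[a]] += 1
    return model

def predict_nb(model, x):
    best_y, best_score = None, float("-inf")
    for y, ny in model["priors"].items():
        score = math.log(ny / model["n"])
        for a in model["atts"]:
            cnt = model["cond"][(a, y)]
            score += math.log((cnt[x[a]] + 1) / (ny + len(cnt) + 1))
        if score > best_score:
            best_y, best_score = y, score
    return best_y

def nb_errors(model, rows, labels):
    return sum(predict_nb(model, x) != y for x, y in zip(rows, labels))

def lbr_classify(rows, labels, atts, test):
    """Grow a lazy Bayesian rule for `test`, then classify with the local NB."""
    atts, rows, labels = list(atts), list(rows), list(labels)
    local = train_nb(rows, labels, atts)
    errors = nb_errors(local, rows, labels)
    while True:
        best, best_errors = None, len(rows) + 1
        for a in atts:
            inside = [(x, y) for x, y in zip(rows, labels) if x[a] == test[a]]
            outside = [(x, y) for x, y in zip(rows, labels) if x[a] != test[a]]
            if not inside or not outside:
                continue
            sub_rows = [x for x, _ in inside]
            sub_labels = [y for _, y in inside]
            cand = train_nb(sub_rows, sub_labels, [b for b in atts if b != a])
            # Candidate's errors on the covered subset, plus the current local
            # NB's errors on the examples the new rule would no longer cover.
            cand_errors = (nb_errors(cand, sub_rows, sub_labels)
                           + nb_errors(local, [x for x, _ in outside],
                                       [y for _, y in outside]))
            if cand_errors < best_errors and cand_errors < errors:
                best, best_errors = a, cand_errors
                best_nb, best_rows, best_labels = cand, sub_rows, sub_labels
        if best is None:
            break  # no attribute-value pair reduces the estimated error
        local, rows, labels = best_nb, best_rows, best_labels
        atts.remove(best)
        errors = nb_errors(local, rows, labels)
    return predict_nb(local, test)
```

On an XOR-style dataset, where a single global naive Bayes is no better than chance, the lazy rule conditions on one of the test example's attribute values and the local model trained on the matching subset then separates the classes.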