### TABLE I UTTERANCE-LEVEL BAYESIAN ADAPTIVE INFERENCE PERFORMANCE

### TABLE III INCREMENTAL BAYESIAN ADAPTIVE INFERENCE PERFORMANCE ON THE COMPLETE DATA SET

### Table II shows the classification errors of (1) Bayesian-GT, (2) Bayesian-Sample, and (3) the new SDNLL. The classification errors are very similar across all three methods. Of course, our adaptive method SDNLL would not be able to reach the error rates of the impractical Bayesian-GT and the non-adaptive Bayesian-Sample if there were not enough samples per class.

2006

### TABLE I PSNR [dB] RESULTS OF THE PROPOSED SUBBAND ADAPTIVE ProbShrink AND TWO OTHER SUBBAND ADAPTIVE BAYESIAN METHODS FOR THE sym8 WAVELET.

### Table 2. The Performance of the Adaptive Bayesian Contextual Classification Procedure. The accuracy measured on the testing samples in each cycle is given, followed by the Kappa statistic in parentheses. Results are given for the average over the subclasses (Class) and after the subclasses have been grouped into the final classes (Group). The Resubstitution results are those for the training samples.

2002

Cited by 2

### Table 3: Bayesian Robustness

1999

"... In PAGE 8: ...7. In Table3 , a design was created assuming a prior of Be(1,1), and then its properties are evaluated for a prior of Be(3,3). A second design was created reversing the roles of the priors.... In PAGE 8: ... One of the reasons for this is that the adaptive nature of 2-stage designs allows the second stage to incorporate information collected during the first. Pointwise operating characteristics of the two designs in Table3 are shown in Figure 4. Despite the robustness of the designs, they also clearly differ.... ..."

Cited by 1
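The robustness check the snippet describes (create a two-stage design under one Beta prior, then evaluate its properties under another) can be sketched numerically. The stage sizes and cut point below are hypothetical stand-ins, since the original specification is not recoverable here; the prior enters only through the beta-binomial predictive distribution of first-stage successes.

```python
from math import comb, lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binom_pmf(s, n, a, b):
    """Prior-predictive (beta-binomial) probability of s successes in n trials
    when the success probability has a Be(a, b) prior."""
    return comb(n, s) * exp(log_beta(s + a, n - s + b) - log_beta(a, b))

def expected_sample_size(n1, n2, cut, a, b):
    """Two-stage design: stop after stage 1 if successes < cut (futility),
    otherwise take n2 further observations. Returns (P(continue), E[N])
    under a Be(a, b) prior on the success probability."""
    p_continue = sum(beta_binom_pmf(s, n1, a, b) for s in range(cut, n1 + 1))
    return p_continue, n1 + n2 * p_continue

# Design "created" under Be(1,1), then the same design evaluated under Be(3,3):
for a, b in [(1, 1), (3, 3)]:
    pc, en = expected_sample_size(n1=20, n2=20, cut=10, a=a, b=b)
    print(f"Be({a},{b}): P(continue)={pc:.3f}, E[N]={en:.2f}")
```

Because the design itself (n1, n2, cut) is fixed while the prior varies, the two runs show exactly the kind of prior-sensitivity comparison the table reports.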

### Table 1. User-adaptive systems that use ML for BNs

2003

"... In PAGE 2: ...ecome also applied in (user-adaptive) IR systems (e.g., [2]). 2 Learning Bayesian Networks for User Modeling Bayesian networks have become increasingly popular as one of the inference technique of choice for user-adaptive systems. Table1 lists some recent research of UM with BNs in a wide range of application scenarios that applies to some extent ML techniques. Note, that these systems (except our READY-system) use off-the-shelf learning methods that were not developed with the particular UM context in mind.... ..."

Cited by 2
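For fully observed discrete data, the off-the-shelf ML learning the snippet refers to reduces, for a BN node's conditional probability table, to relative-frequency counting. A minimal sketch with a hypothetical two-node user-modeling fragment (Expertise → NeedsHelp); the variable names and data are illustrative, not from the paper:

```python
from collections import Counter

def learn_cpt(observations):
    """Maximum-likelihood CPT estimation from fully observed data:
    observations is a list of (expertise, needs_help) pairs.
    Returns P(needs_help=True | expertise) as a dict keyed by expertise."""
    joint = Counter(observations)            # counts of (parent, child) pairs
    parent = Counter(e for e, _ in observations)  # counts of parent values
    return {e: joint[(e, True)] / parent[e] for e in parent}

data = [("novice", True), ("novice", True), ("novice", False),
        ("expert", False), ("expert", False), ("expert", True),
        ("expert", False)]
cpt = learn_cpt(data)
print(cpt)  # {'novice': 0.6666666666666666, 'expert': 0.25}
```

This counting estimator is what generic BN learning packages do under the hood; the UM-specific methods the snippet contrasts it with would, e.g., add priors or handle the sparse, per-user data typical of user modeling.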

### Table 1: Performance of adaptation methods on the NIST database.

2003

"... In PAGE 4: ... We compared the classical Bayesian Maximum a Posteriori (MAP) principle [17] with two other techniques, Maximum Likelihood Linear Regression (MLLR) [16] and eigenvoices [20] (inspired by eigenfaces [30]). Table1 shows that the simple MAP technique is still the best adaptation method for GMM-based speaker verification.... ..."

### Table 1: Performance of adaptation methods on the NIST database.

"... In PAGE 4: ... We compared the classical Bayesian Maximum a Posteriori (MAP) principle [17] with two other techniques, Maximum Likelihood Linear Regression (MLLR) [16] and eigenvoices [20] (inspired by eigenfaces [31]). Table1 shows that the simple MAP technique is still the best adaptation method for GMM-based speaker verification. One explanation for the poor results of MLLR and EigenVoices might be that both methods force the parameters of the client models to be in a smaller parameter space, defined by training clients (previously seen but not used during testing); this may be good for discriminating clients from everything else, but not necessarily good for discriminating clients from each other.... ..."