### Table 1. Performance of Bayesian Belief Network

2005

"... In PAGE 15: ... .3.1. Modeling IDS Using Bayesian Classifier. Furthermore, a Bayesian network classifier is constructed using the training data, and the classifier is then used on the test data set to classify each record as an attack or as normal. Table 1 depicts the performance of the Bayesian belief network using the original 41-variable data set and the reduced 17-variable data set. The training and testing times for each classifier decrease when the 17-variable data set is used. ... ..."

Cited by 2
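
The results above compare a Bayesian classifier on the full 41-variable intrusion-detection data against a reduced 17-variable subset. As a hedged illustration only: the sketch below uses a minimal Gaussian naive Bayes classifier (a Bayesian network with one class node and conditionally independent feature nodes) on synthetic "normal"/"attack" data; the data, the 17-feature subset (`keep`), and all function names are made up here and are not the papers' actual data set or model.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, d):
    """Synthetic stand-in data: class 0 ("normal") centered at 0, class 1 ("attack") at 1."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(loc=y[:, None].astype(float), scale=1.0, size=(n, d))
    return X, y

def fit_gnb(X, y):
    """Per-class feature means, variances, and class priors."""
    params = {}
    for c in (0, 1):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return params

def predict_gnb(params, X):
    """Pick the class with the highest log posterior under the Gaussian model."""
    scores = []
    for c in (0, 1):
        mu, var, prior = params[c]
        log_lik = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
        scores.append(log_lik.sum(axis=1) + np.log(prior))
    return np.argmax(np.stack(scores, axis=1), axis=1)

X, y = make_data(2000, 41)            # full 41-variable data set
keep = np.arange(17)                  # hypothetical 17-variable reduction
Xtr, ytr, Xte, yte = X[:1500], y[:1500], X[1500:], y[1500:]

acc_full = (predict_gnb(fit_gnb(Xtr, ytr), Xte) == yte).mean()
acc_red = (predict_gnb(fit_gnb(Xtr[:, keep], ytr), Xte[:, keep]) == yte).mean()
print(f"41 features: {acc_full:.3f}, 17 features: {acc_red:.3f}")
```

Training and prediction over 17 columns touch less data than over 41, which is the timing effect the snippets report; accuracy on this toy data stays high either way.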

### Table 1. Performance of Bayesian Belief Network

2004

"... In PAGE 4: ... Further, a Bayesian network classifier is constructed using the training data, and the classifier is then used on the test data set to classify each record as an attack or as normal. Table 1 depicts the performance of the Bayesian belief network using the original 41-variable data set and the reduced 17-variable data set. The training and testing times for each classifier decrease when the 17-variable data set is used. ... ..."

Cited by 2

### Table 2 Performance of Bayesian belief network Attack class 41 variables 17 variables

2004

"... In PAGE 10: ... 69 Input Feature Reduction Ensemble-Based Intrusion Detection System Bayesian Network Trees. Figure 1: Ensemble approach for IDS. Table 2 depicts the performance of the Bayesian belief network using the original 41-variable data set and the 17-variable reduced data set. The training and testing times for each classifier decrease when the 17-variable data set is used. ... ..."

### Table 1. Data and beliefs: an overview

2004

"... In PAGE 5: ... The basic distinction between data and beliefs yields a rich picture of epistemic dynamics (Fig. 1 and Table 1). From a computational viewpoint, this distinction opens the way for blended approaches to implementation [20]: data structures present remarkable similarities with Bayesian networks and neural networks, while belief sets are a well-known hallmark of AGM-style belief revision [13]. ... ..."

Cited by 4

### Table 2: Model Performance by Fold

"... In PAGE 4: ... A value of 0 indicates that the models performed similarly. Table 2 shows the paired results for each fold (columns: Fold, SVM, Bayesian Belief Network, d). Fold 1: 84. ... ..."

### Table 3: Results on Bayesian Network Repository.

"... In PAGE 6: ... We compare the algorithms using the mini-bucket based heuristics generators, namely s-AOMB, d-AOMB, s-BBMB and d-BBMB. Notice that s-BBMB is currently one of the best performing complete algorithms for this domain [Kask and Dechter, 2001]. Table 3 summarizes the results for experiments on 6 real-world belief networks from the Bayesian Network Repository3. The time limit was set to 600 seconds. ... ..."

### Table 4. Classification rates on Hammal-Caplier database (left: Bayesian classifier; right: HMM classifier). Columns: Bayesian, HMM

"... In PAGE 6: ... This process is repeated 21 times, considering a different test subject each time. The classification rate is the average over the 21 results (Table 4). ... In PAGE 7: ... Results of Bayesian Theory and HMM. Classification rates of the Bayesian classifier are lower than those of the belief theory classifier (Table 4, left). The best results are those of the neutral expression. ... ..."

### Table 1 illustrates how Bayesian approximation works. The example starts with an initial configuration with equal a priori beliefs for U = 5 (case a). Then, it shows how the beliefs have been adapted after a suspicion (case b). Since the real probability must fall into some probability interval of Table 1, we have that Σ_u Ck[pi].PB[u] = 1 is an invariant of Algorithm 4. Columns: u, Ck[pi].PF|B[u], Ck[pi].PB[u]

"... In PAGE 15: ... 0.8, 1.0] 0.36 (b) After a failure suspicion. Table 1: Adapting failure beliefs after a suspicion. 5 Simulation Results. In order to evaluate the performance of our adaptive algorithm we built a discrete-event simulation model and conducted several experiments with it. Our model simulates the behavior of processes and links in a distributed system, associating a crash probability with each process and a loss probability with each link. ... ..."
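
The invariant quoted in this result (the beliefs PB[u] sum to 1 over the candidate values u) can be kept across a belief adaptation by renormalizing after each Bayesian update. The sketch below is a hedged illustration only: it starts from equal a priori beliefs for U = 5 as in case (a), and the likelihoods used for the "suspicion" event are made-up numbers, not the paper's Algorithm 4.

```python
# Hypothetical sketch: adapt failure beliefs after a suspicion while keeping
# the invariant sum(PB[u]) == 1.
U = 5
PB = {u: 1.0 / U for u in range(U)}              # case (a): equal a priori beliefs
likelihood = {u: (u + 1) / U for u in range(U)}  # made-up P(suspicion | u)

# Bayesian update: PB[u] is proportional to P(suspicion | u) * PB[u], then renormalize.
unnorm = {u: likelihood[u] * PB[u] for u in PB}
total = sum(unnorm.values())
PB = {u: p / total for u, p in unnorm.items()}   # case (b): beliefs after a suspicion

print(PB)
```

Because of the final division by `total`, the updated beliefs again sum to 1, which is exactly what makes the invariant hold after every adaptation step.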

### Table 1. Bayesian Networks Repository (left); SPOT5 benchmarks (right).

2005

"... In PAGE 9: ... The result of this process is a tree of hypergraph separators, which is also a pseudo-tree of the original model since each separator corresponds to a subset of variables chained together. In Table 1 we computed the height of the pseudo-tree obtained with the hypergraph and minfill heuristics for 10 belief networks from the UAI Repository2 and 10 constraint networks derived from the SPOT5 benchmark [17]. For each pseudo-tree we also computed the induced width of the elimination order obtained from the depth-first traversal of the tree. ... ..."

Cited by 2
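
The last snippet computes the induced width of an elimination order. As a hedged illustration of that quantity only (the paper's hypergraph/minfill pseudo-tree construction is not reproduced): eliminating a variable connects all of its not-yet-eliminated neighbors, and the induced width is the largest such neighbor set seen along the order. The example graph and order below are made up.

```python
def induced_width(edges, order):
    """Induced width of elimination `order` on the graph given by `edges` (vertex pairs)."""
    adj = {v: set() for v in order}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    eliminated, width = set(), 0
    for v in order:
        nbrs = adj[v] - eliminated       # neighbors still in the graph
        width = max(width, len(nbrs))
        for a in nbrs:                   # connect them pairwise (fill edges)
            adj[a] |= nbrs - {a}
        eliminated.add(v)
    return width

# A 4-cycle has induced width 2 under any elimination order.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(induced_width(cycle, [0, 1, 2, 3]))
```

The induced width bounds the exponent of inference cost along that order, which is why the paper reports it alongside pseudo-tree height.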