### TABLE 4 Experimental and simulation values for the coherence length scale in a Markov random field description of the interface statistics.

### Table 1. Sample topics from the APT model with 200 topics on a corpus of about 500,000 words. The documents consist of titles and abstracts from papers written by NIPS reviewers. The column on the left is the total number of words in each topic, while the column on the right is a listing of the most probable words for each topic.

2007

"... In PAGE 6: ... To generate a document, an author chooses a persona p, distributed according to , and then selects topics from p. Examples of topics from a model trained with 200 topics are shown in Table 1. The model is able to identify and separate common methodological words ("performance, data, results" and "data, model, algorithm") while also identifying clusters of words related to specific machine learning algorithms: there are topics for hidden Markov models, support vector machines, information bottleneck and conditional random fields.... ..."
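The generative process described in the excerpt (draw a persona, then draw topics from that persona, then draw words from each topic) can be sketched as follows. This is a toy illustration only: the persona names, topic distributions, and vocabularies below are invented for the example and are not taken from the APT model itself.

```python
import random

# Toy sketch of a persona -> topic -> word generative process, loosely
# following the excerpt's description. All names and probabilities here
# are hypothetical, not the paper's estimated parameters.

personas = {
    "theorist": {"bounds": 0.6, "proofs": 0.4},         # persona -> topic probs
    "experimentalist": {"data": 0.7, "results": 0.3},
}
topics = {
    "bounds": ["risk", "bound", "complexity"],          # topic -> word list
    "proofs": ["lemma", "theorem", "convergence"],
    "data": ["data", "model", "algorithm"],
    "results": ["performance", "data", "results"],
}

def generate_document(persona_probs, n_words, rng):
    """Draw a persona, then for each word draw a topic, then a word."""
    persona = rng.choices(list(personas), weights=persona_probs)[0]
    topic_dist = personas[persona]
    words = []
    for _ in range(n_words):
        topic = rng.choices(list(topic_dist), weights=topic_dist.values())[0]
        words.append(rng.choice(topics[topic]))
    return persona, words

rng = random.Random(0)
persona, doc = generate_document([0.5, 0.5], 10, rng)
```

In the full model the persona and topic distributions would be learned from the corpus rather than fixed by hand; the sketch only shows the sampling order the excerpt describes.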

Cited by 1

### Table 2b. Hidden Markov recurrence probabilities and event matrices: War Crises

"... In PAGE 18: ... This is presumably due to the fact that various combinations of recurrence probabilities and observed symbol probabilities can produce almost identical likelihoods for the training sequences. Results Discriminating BCOW War and Nonwar Crises The HMMs estimated from the nonwar and war BCOW crises (translated into WEIS codes) are reported in Table 2 and Figure 3; Table 2 also reports the events in the transition vectors that have relatively high probabilities. The matrices are quite plausible, as are the differences between them; both models generated large recurrence probabilities on all six states.... In PAGE 20: ... The war model classifies somewhat more accurately than the nonwar model, but both models do quite well and the cases that are incorrectly classified are concentrated in a set of plausible exceptions rather than distributed randomly. Table 2a. Hidden Markov recurrence probabilities and event matrices: Nonwar Crises A B C D E Abs recurrence probability 0.... ..."

### Table 2. Performance of the hierarchical Markov model

2003

"... In PAGE 5: ... traces (i.e., row 1 and 2 of Table 2) provide a reference value for performance evaluation of the hMM. It is obvious that the hMM incurs approximately 60% (Table 2 column I - row 3 and 4) overhead for the inter-arrival rate and, therefore, renders unsatisfactory performance. The burst-length random variable usually takes on small values since most of the bits are not corrupted during transmission and, hence, result in small (bit-error) bursts.... In PAGE 5: ... Therefore, it is important to quantify the hMM burst-length performance with respect to the source-based traces. It is obvious that for the burst-length random variable, the ENK distance between the hMM- and source-based traces (Table 2 column B - row 3 and 4) is much larger than the ENK between two source-based traces (Table 2 column B - row 1 and 2). We conclude that although the hMM performs adequately in characterizing hybrid (i.... In PAGE 7: ... Table 6 enumerates the performance of the HMM. Comparing the I column of Table 2 (row 3 and 4) with Table 6 shows that the HMM yields clear improvement in the inter-arrival-rate performance, for instance, 40.33% as opposed to 58.72% for the hMM. However, the ENK for the burst-length random variable in the HMM case (Table 6 column B) is orders of magnitude greater than the respective ENK for the hMM traces (Table 2 column B - row 3 and 4). Hence we conclude that, while the HMM improves the modeling of good bursts (when compared to the hMM), the hidden Markov model cannot approximate the bad bursts adequately.... ..."

Cited by 11

### Table 2: Log-likelihood of the experiments on the hidden Markov simulation data

"... In PAGE 13: ... Table 2 shows the mean and the standard deviation of log-likelihood of the test set using the naive method, the log-likelihood of the gated experts and the hidden Markov experts. It indicates that the likelihoods of the gated experts and the hidden Markov experts are significantly better than the naive model.... ..."
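The quantity compared in Table 2, the log-likelihood of a test sequence under a hidden Markov model, is standardly computed with the forward algorithm. A minimal sketch, using toy two-state parameters rather than anything estimated in the paper:

```python
import math

def hmm_log_likelihood(obs, start, trans, emit):
    """log P(obs) under a discrete HMM, via the scaled forward algorithm.

    obs   : list of observed symbol indices
    start : initial state probabilities
    trans : trans[i][j] = P(state j at t+1 | state i at t)
    emit  : emit[i][k]  = P(symbol k | state i)
    """
    n_states = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n_states)]
    log_like = 0.0
    for t in range(len(obs)):
        if t > 0:  # forward recursion: propagate through the transition matrix
            alpha = [
                sum(alpha[sp] * trans[sp][s] for sp in range(n_states))
                * emit[s][obs[t]]
                for s in range(n_states)
            ]
        scale = sum(alpha)              # rescale each step to avoid underflow;
        log_like += math.log(scale)     # the log-scales sum to log P(obs)
        alpha = [a / scale for a in alpha]
    return log_like

# Toy two-state model over a binary alphabet
start = [0.6, 0.4]
trans = [[0.7, 0.3], [0.2, 0.8]]
emit = [[0.9, 0.1], [0.3, 0.7]]
ll = hmm_log_likelihood([0, 1, 1, 0], start, trans, emit)
```

Averaging this quantity over held-out sequences gives the kind of test-set figure the excerpt compares across the naive model, the gated experts, and the hidden Markov experts.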

### Table 7 : Results obtained with Discrete Hidden Markov Model

Cited by 1