### Table 1: Number of sampling locations for various feature types. MRF: Markov Random Field model, IG: intensity gradient. Gabor and IG are extracted only from femur images.

2004

"... In PAGE 3: ... Markov Random Field (MRF) texture model extracts features from moderate-sized sampling regions. In the current implementation, the number of sampling locations is set as shown in Table 1. Figure 2 illustrates an example of adaptive sampling at the femoral neck. ..."

Cited by 4

### Table 1: Coefficients {h_{k,l}} of the Markov random field wood model [14].

1997

"... In PAGE 21: ... The 64x64 pixels were divided into groups g1 and g2: g1 contains the pixels in the upper left and lower right of the image, and g2 contains the pixels in the diagonal band running through the center of the image. The prior model for g1 is the wood model of Table 1; the prior model for g2 uses the same coefficients in Table 1, but with the table rotated by 90 degrees. The cross correlation between groups g1 and g2 is zero. ..."

Cited by 34
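The excerpt above builds a second prior by reusing the Table 1 coefficients with the table rotated by 90 degrees. As a minimal illustrative sketch (the actual wood-model coefficients from [14] are not reproduced here; the values below are placeholders), rotating a 2-D coefficient table in pure Python:

```python
def rotate90(table):
    """Rotate a 2-D coefficient table 90 degrees counter-clockwise.

    `table` is a list of rows. zip(*table) transposes the table;
    reversing the transposed rows completes the rotation.
    """
    return [list(row) for row in zip(*table)][::-1]

# Hypothetical placeholder coefficients (NOT the values from [14]).
h = [[0.1, 0.2],
     [0.3, 0.4]]
print(rotate90(h))  # [[0.2, 0.4], [0.1, 0.3]]
```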

### Table 1: Comparison of Markov Chains and Random Fields on four different collections of polyphonic music. For Markov Chains we show the N-gram model that gave best performance on the testing set. For Random Fields we specify the number of iterations of the induction algorithm. For every collection, Random Fields result in lower testing perplexity and higher area under the ROC curve.

"... In PAGE 9: ... We observe that Random Fields noticeably outperform Markov Chains. The lower portion of Table 1 summarizes the quantitative difference between the ROC curves on the four datasets. We use area under the ROC curve as a single-number measure of relative performance. ..."
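The snippet uses area under the ROC curve as a single-number performance measure. A minimal sketch of computing AUC from scores and binary labels via the rank (Mann-Whitney) formulation, which is equivalent to integrating the ROC curve; the example data are made up:

```python
def auc(scores, labels):
    """AUC as the fraction of (positive, negative) pairs in which the
    positive item receives the higher score, counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up scores and labels for illustration only.
print(auc([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0]))  # 0.75
```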

### Table 1: Parallel implementation, where each value represents the step number at which the Markov Random Field is updated.

1994

Cited by 3
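The caption above describes a table of step numbers assigning each site of the Markov Random Field to a parallel update step. The cited paper's scheme is not reproduced in the excerpt; a common illustrative choice is checkerboard coding, in which 4-connected neighbours never share a step and each colour class can be updated in parallel:

```python
def update_steps(rows, cols):
    """Assign each lattice site an update step via checkerboard colouring.

    Sites with the same step number are mutually non-adjacent under
    4-connectivity, so they can be updated simultaneously.
    This is an illustrative scheme, not the table from the cited paper.
    """
    return [[(r + c) % 2 + 1 for c in range(cols)] for r in range(rows)]

for row in update_steps(3, 4):
    print(row)
```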

### Table 1 Confusion matrix for the classification result in Figure 4 (c) obtained with the Markov random fields method.

### Table 3 Confusion matrix for the classification result in Figure 5 (c) obtained with the Markov random fields method.

### Table 2. Performance of the hierarchical Markov model

2003

"... In PAGE 5: ... The source-based traces (i.e., row 1 and 2 of Table 2) provide a reference value for performance evaluation of the hMM. It is obvious that the hMM incurs approximately 60% (Table 2 column I - row 3 and 4) overhead for the inter-arrival-rate and, therefore, is rendering unsatisfactory performance. The burst-length random variable usually takes on small values since most of the bits are not corrupted during transmission and, hence, result in small (bit error) bursts.... In PAGE 5: ... Therefore, it is important to quantify the hMM burst-length performance with respect to the source-based traces. It is obvious that for the burst-length random variable, the ENK distance between the hMM- and source-based traces (Table 2 column B - row 3 and 4) is much larger as opposed to the ENK between two source-based traces (Table 2 column B - row 1 and 2). We conclude that although the hMM performs adequately in characterizing hybrid (i.... In PAGE 7: ... Table 6 enumerates the performance of the HMM. Comparing the I column of Table 2 (row 3 and 4) with Table 6 outlines that the HMM shows clear improvement in the inter-arrival-rate performance, for instance, 40.33% as opposed to 58.72% for the hMM. However, the ENK for the burst-length random variable in the HMM case (Table 6 column B) is orders of magnitude greater than the respective ENK for the hMM traces (Table 2 column B - row 3 and 4). Hence we conclude that, while the HMM improves the modeling of good bursts (when compared to the hMM), the hidden Markov model cannot approximate the bad bursts adequately.... ..."

Cited by 11
