### Table 9.2 Hypothesis space search: PFOIL vs. GORB (columns: Step 1 through Step 6)

2007

### Table 13: Average percentage of standard cost for the binary search experiment.

1995

"... In PAGE 25: ...393 than when searching in binary bias space. Table13 shows that this hypothesis was not con- firmed. It appears to be better to search in binary bias space, rather than real bias space.... ..."

Cited by 105

### Table 1 The search algorithm.

"... In PAGE 2: ... If this is the case, the hypothesis is to be pruned as well. Table1 shows our search algorithm with candidate generation adapted from the Apriori algorithm. The algorithm conducts a complete search of the hypoth- esis space.... ..."

### Table 4. Results for the HMM topology with contextual silences (HMM-CS) and the 4 noise models (N- HMMs) incorporated in the search space: Percentage of Substitutions (Sub), Insertions (Ins) and Deletions (Del), Letter Accuracy (LA), Name Accuracy (NA). Name Recognition Rate (NRR) and Processing Time (PT), consumed by the whole hypothesis stage (considering the 1,000 dictionary), are also shown.

"... In PAGE 18: ...he Letter Accuracy 6.8 points (from 68.7% to 75.5%) obtaining a 88.6% Name Recognition Rate for the hypothesis stage. Table4 shows the results obtained without removing from the HMM training set the files from speakers contained in the development and final testing sets. When removing the files, the Letter Accuracy and the Name Recognition Rate have decreased a little.... ..."

### Table 2. Search Space Properties

2004

"... In PAGE 9: ... Among the benchmarks there are four kernels: advect3d, lud, mm, vpenta and two full ap- plications: swim and mgrid. Table2 lists the applied transformations and the dimensions of the search space for each application. The total number of points... ..."

Cited by 8

### Table 1. Dimension of search spaces.

"... In PAGE 3: ... We used the min- imized lexicon and language model WFSTs. Table1 shows the dimension of the search spaces. We observe that the network ob- tained with tail-sharing algorithm has only 5% more states and 3% more edges than the minimum, and we also observe a huge reduc- tion on the size of the network relative to the previous version.... ..."

### Table 1: Growth of search spaces.

2001

"... In PAGE 4: ... Larger values for nR are generally not possible as even nR = 40 already took up to a few seconds on average for the com- putations from OPT (on a 400 MHz Pentium-II processor). That OPT is increasingly expensive to compute can be seen when observing that the average size of the space of covers SO(n) searched by OPT for a CDC with n steps is recursively defined as (corresponding to the operation of the algorithm) (5) A comparison of SO(n) with ST(n), the size of the space of tight covers, for some example values of n is given in Table1 . This is intended to give an illustration of how much is saved by OPT when compared to a total enumeration of tight covers.... ..."

Cited by 3