Results 11 - 20 of 56,134
Table 1: Ordering of top clauses for different texts computed by coherence and cohesion methods and human
1998
"... In PAGE 4: ... For each text t, we determine the top n t salient clauses ranked bythehuman, the top n t computed by coher- ence, and the top n t computed byeach of the cohesion methods. The top clauses retrieved in these cases are shown in Table1 . As can be seen, on the mars text, the spreading picks up two of the four 20% salient clauses, predicting correctly their relative ordering, as does the coherence method.... ..."
Cited by 10
Table 1 Text Retrieval Times
1999
"... In PAGE 16: ... Two les were used: one containing a single byte of text, and the other the entire King James apos; Bible (5073934 bytes). The results of these experiments are given in Table1 . It can be seen that the overhead of using the proxylet infrastructure seems to be around three seconds when using a single byte le.... In PAGE 17: ...f Technology, Sydney. We transfered a whole bible in this way a few times. We in fact found a wide variance in our measurements, which we believe are re ective of transient changes in network congestion across the global Internet. The measurement shown in the nal line of Table1 is about the average of our measurements. We can see that the retrieval time seems to be substantially improved by the use of compression using the DPS infrastructure.... ..."
Cited by 32
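The retrieval-time entry above attributes roughly three seconds to proxylet overhead and a large improvement to compression via the DPS infrastructure. The Python sketch below only illustrates that kind of before/after timing comparison; the 1 MB/s link speed, the fixed 3-second overhead, and the `timed_transfer` helper are assumptions for illustration, not figures or code from the paper.

```python
import gzip
import time

def timed_transfer(payload: bytes, compress: bool = False) -> float:
    """Return simulated elapsed seconds for retrieving `payload`.

    Stands in for the proxylet-based retrieval described above: when
    `compress` is True the payload is gzip-compressed before the
    simulated network hop, mimicking the DPS compression path.
    """
    start = time.perf_counter()
    data = gzip.compress(payload) if compress else payload
    # Assumed link speed of ~1 MB/s plus a fixed ~3 s proxy overhead,
    # roughly matching the overhead reported in the snippet.
    simulated = len(data) / 1_000_000 + 3.0
    return (time.perf_counter() - start) + simulated

bible_sized = b"x" * 5_073_934  # same size as the King James Bible file cited
print(f"uncompressed: {timed_transfer(bible_sized):.1f} s")
print(f"compressed:   {timed_transfer(bible_sized, compress=True):.1f} s")
```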
Table 9: Estimation of Capital Adequacy after the Lump Sum Write-Off (trillion yen, %)
"... In PAGE 13: ... 6) Stock prices at the end of March 1995 are used to derive latent gains of listed securities. The projected reductions in capital adequacy ratios after the banks make lump sum write- offs of bad loans are presented in the last two columns in Table9 . The simulated worsening in risk assets/own capital ratios are so substantial under this scenario that none of the three types of major bank groups (in aggregates) could satisfy the BIS capital adequacy criteria after the lump sum write- off.... ..."
Table 2. Results for different retrieval methods (AP: average precision, WER: word error rate, TER: term error rate).
2002
"... In PAGE 7: ... The results did not significantly change depending on whether or not we used lower-ranked transcriptions as queries. Table2 shows the non-interpolated average precision values and word error rate in speech recognition, for different retrieval methods. As with existing ex- periments for speech recognition, word error rate (WER) is the ratio between the number of word errors (i.... In PAGE 7: ...o query terms (i.e., keywords used for retrieval), which we shall call term error rate (TER). In Table2 , the first line denotes results of the text-to-text retrieval, which were relatively high compared with existing results reported in the NTCIR work- shops [11, 12]. The remaining lines denote results of speech-driven text retrieval combined with the NTCIR-based language model (lines 2-5) and the newspaper-based model (lines 6-9), respectively.... In PAGE 8: ... Figures 3 and 4 show recall-precision curves of different retrieval methods, for the NTCIR- 1 and 2 collections, respectively. In these figures, the relative superiority for precision values due to different language models in Table2 was also observable, regardless of the recall. However, the effectiveness of the on-line adaptation remains an open question and needs to be explored.... ..."
Cited by 6
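The WER/TER definitions quoted above lend themselves to a short sketch. The code below is a generic word-level edit-distance implementation, assuming TER is simply WER computed over query terms only, as the snippet suggests; it is not the authors' code, and the tokenization is naive whitespace splitting.

```python
def edit_distance(ref: list[str], hyp: list[str]) -> int:
    """Word-level Levenshtein distance (substitutions + insertions + deletions)."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(hyp) + 1)]
         for i in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                        # deletion
                          d[i][j - 1] + 1,                        # insertion
                          d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]))
    return d[len(ref)][len(hyp)]

def word_error_rate(ref: str, hyp: str) -> float:
    """WER: word errors in the hypothesis relative to the reference length."""
    r, h = ref.split(), hyp.split()
    return edit_distance(r, h) / len(r)

def term_error_rate(ref: str, hyp: str, query_terms: set[str]) -> float:
    """Assumed TER: WER restricted to words that are query terms."""
    r = [w for w in ref.split() if w in query_terms]
    h = [w for w in hyp.split() if w in query_terms]
    return edit_distance(r, h) / max(len(r), 1)
```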
Table 2: retrieval with PRF for Text and OCR collections.
2002
"... In PAGE 4: ...Table 2: retrieval with PRF for Text and OCR collections. Table2 retrieval shows retrieval performance for each collection after the application of PRF with 5 and 20 expansion terms. For text retrieval there is an improvement in both average precision and the number of relevant documents retrieved when either 5 or 20 terms are added.... In PAGE 4: ... Figure 2 shows a much greater variability in results between the two media than observed previously in Figure 1. In Figure 2 the results are quite different for most queries, in some cases OCR text retrieval is much better than for the original text, overall the results for electronic text are superior on more occasions producing the overall averages shown in Table2 . For this small number of expansion terms presumably on some occasions some better expansion terms are selected from the OCR baseline run than the electronic text run, leading to better results for some queries in the PRF run.... In PAGE 5: ...3 Effect of Expansion Term Selection Results for the baseline runs shown in Table 1 show that there is little different in average retrieval performance prior to the application of feedback. Results in Table2 show that there is a large difference in behaviour between electronic and OCR text following the application of PRF. In this next experiment we investigated the effect of the expansion term selection in retrieval behaviour.... ..."
Cited by 1
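Pseudo-relevance feedback (PRF) as described in the entry above, with 5 or 20 expansion terms, can be sketched as follows. This is a generic illustration, not the paper's implementation: real systems typically rank candidate terms with a term-selection weight rather than raw frequency, and `expand_query` and its parameters are invented names.

```python
from collections import Counter

def expand_query(query: list[str], ranked_docs: list[list[str]],
                 feedback_docs: int = 10, expansion_terms: int = 5) -> list[str]:
    """Pseudo-relevance feedback: add the most frequent terms from the
    top-ranked documents of a baseline run to the original query.

    `ranked_docs` is the baseline ranking, each document a list of tokens.
    """
    counts = Counter()
    for doc in ranked_docs[:feedback_docs]:
        counts.update(doc)
    for term in query:              # do not re-add original query terms
        counts.pop(term, None)
    new_terms = [t for t, _ in counts.most_common(expansion_terms)]
    return query + new_terms
```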
Table 3.1: Effects of using document structure on the named page finding task
Method   Content-based retrieval   Anchor + special fields   Combined
MRR      0.690                     0.530                     0.717
For the topic distillation task, bold-font fields (<B>) and keywords in <meta> are also useful to retrieval. They help by giving those fields a term weight different from the rest of the full text.
2002
Cited by 4
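The values in the entry above (0.690, 0.530, 0.717) are mean reciprocal ranks over the named-page queries. A minimal sketch of the metric, on toy data rather than the paper's runs:

```python
def mean_reciprocal_rank(first_relevant_ranks: list[int | None]) -> float:
    """MRR over a set of queries.

    Each element is the 1-based rank of the first relevant (named) page in
    that query's result list, or None if it was not retrieved at all.
    """
    total = sum(1.0 / r for r in first_relevant_ranks if r)
    return total / len(first_relevant_ranks)

# Toy example, not the paper's data: the named page was found at ranks 1
# and 2 for two queries, and missed for a third.
print(mean_reciprocal_rank([1, 2, None]))  # -> 0.5
```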
Tables 6.3a and 6.3b show, respectively, the results of running the 100 TREC queries and the 1,000 WT10g queries for text-based federated search in a hierarchical P2P network using different methods. Both cooperative and uncooperative environments were studied. The single-collection baseline, which returned the 50 top-ranked documents for each query by retrieval using the single
2004
Table 1: Bigram and Short-Word Segmentation Retrieval Results Averaged over 28 Queries
1997
"... In PAGE 3: ... The process is like having a dynamic thesaurus bringing in synonymous or related terms to enrich the raw query. As an example of a retrieval, we have shown in Table1 comparing the TREC-5 Chinese experiment using bigram representation with our method of text segmentation in the PIRCS system. The table is a standard for the TREC evaluation.... ..."
Cited by 6
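The entry above compares bigram indexing of Chinese text against short-word segmentation in PIRCS. A character-bigram representation can be sketched in a few lines; the example string and function name are illustrative only, not taken from the paper.

```python
def bigram_segment(text: str) -> list[str]:
    """Index a Chinese string as overlapping character bigrams, the
    baseline representation compared against short-word segmentation."""
    return [text[i:i + 2] for i in range(len(text) - 1)]

print(bigram_segment("上海浦东开发"))
# -> ['上海', '海浦', '浦东', '东开', '开发']
```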