Results 1 - 10 of 1,489
Table 2. Non-interpolated average precision values for different text-to-text retrieval methods targeting the 10GB and 100GB collections.
"... In PAGE 5: ... As a result, for each of the above four relevance assessment types, we investigated non-interpolated av- erage precision values of four different methods, as shown in Table 2. By looking at Table2 , there was no significant difference among the four methods in performance. However, by comparing two indexing methods, the use of both words and bi-words generally improved the performance of that obtained with only words, ir- respective of the collection size, topic field used, and relevance assessment type.... In PAGE 5: ... In addition, we used both bi-words and words for indexing, because experimen- tal results in Section 4.1 showed that the use of bi- words for indexing improved the performance of that obtained with only words (see Table2 for details). In cases of speech-driven text retrieval methods, queries dictated by the ten speakers were used inde- pendently, and the final result was obtained by aver- aging results for all the speakers.... ..."
Cited by 1
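The metric named in this entry, non-interpolated average precision, averages precision at each rank where a relevant document is retrieved, normalized by the total number of relevant documents for the topic. A minimal Python sketch of the computation (the function name and example data are illustrative, not drawn from the cited paper):

def average_precision(ranked_docs, relevant):
    """Non-interpolated average precision: sum precision@k over
    the ranks k at which a relevant document appears, then divide
    by the total number of relevant documents."""
    if not relevant:
        return 0.0
    hits = 0
    precision_sum = 0.0
    for k, doc in enumerate(ranked_docs, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / k  # precision at rank k
    return precision_sum / len(relevant)

# Illustrative run: relevant docs d1 and d3 retrieved at ranks 1 and 3.
print(average_precision(["d1", "d2", "d3"], {"d1", "d3"}))  # (1/1 + 2/3) / 2 = 0.833...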
Table 5. Summary of run result submissions. Columns: Task, RunID, QMethod, TopicPart, LinkInfo.
"... In PAGE 12: ... The run results of the text retrieval module were included in the pool, but the speech-driven retrieval results were not. Summaries of the run result submissions of each par- ticipating group can be found in Table5 , and the de- tails can be found in papers of the participating groups in this proceedings. 5.... ..."
Table 2: IR features
"... In PAGE 3: ... 3 IR signal processing 3.1 Features Table2 shows the nine IR features used in this study. The distance independent features, 1, 3, 4, 5 and 7, have proven to perform especially well for classification purpose.... ..."
Table 2. IR ranking
"... In PAGE 6: ...2.1 Overall Precision Table2 shows the result for IR. %X denotes the accumu- lated percentage of identifications for the query set X.... ..."
Table 3. IR summary
Table 1. IRS classes for graphs
1999
"... In PAGE 23: ...9. Summary In Table1 , only shortest paths IRS are considered, and n denotes the number of nodes of the graph. 4.... ..."
Cited by 60
Table 1: Comparison of IR and non-IR preprocessing
"... In PAGE 5: ...lar to that of e.g. Molla and Hess (Aliod and Hess, 1999), who first partition the index space into separate documents, and use the IR component of queries as a filter. Table1 shows the difference in processing times on two queries for two different datasets. Times for pro- cessing each query on each dataset are labelled Old for Highlight with no IR and no preprocessed files, New for Highlight with preprocessed files and NewIR for Highlight with both an IR stage and preprocessed files.... ..."