Results 1 - 10 of 1,489

Table 2. Non-interpolated average precision values for different text-to-text retrieval methods targeting the 10GB and 100GB collections.

in Evaluating Speech-Driven IR in the NTCIR-3 Web Retrieval Task
by Atsushi Fujii, Katunobu Itou
"... In PAGE 5: ... As a result, for each of the above four relevance assessment types, we investigated non-interpolated average precision values of four different methods, as shown in Table 2. By looking at Table 2, there was no significant difference among the four methods in performance. However, by comparing two indexing methods, the use of both words and bi-words generally improved the performance of that obtained with only words, irrespective of the collection size, topic field used, and relevance assessment type.... In PAGE 5: ... In addition, we used both bi-words and words for indexing, because experimental results in Section 4.1 showed that the use of bi-words for indexing improved the performance of that obtained with only words (see Table 2 for details). In cases of speech-driven text retrieval methods, queries dictated by the ten speakers were used independently, and the final result was obtained by averaging results for all the speakers.... ..."
Cited by 1
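The non-interpolated average precision reported in the excerpt above is a standard IR metric: the mean of precision-at-k over the ranks k at which relevant documents are retrieved, divided by the total number of relevant documents. A minimal sketch (with hypothetical document IDs; this is not the authors' evaluation code):

```python
def average_precision(ranked, relevant):
    """Non-interpolated average precision for a single query.

    ranked:   list of document IDs in retrieval order.
    relevant: set of relevant document IDs for the query.
    Returns the mean of precision@k over ranks k where a relevant
    document appears, normalized by |relevant| (0.0 if none).
    """
    hits = 0
    precisions = []
    for k, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

# Relevant docs d1 and d3 retrieved at ranks 1 and 3:
# AP = (1/1 + 2/3) / 2 = 5/6
print(average_precision(["d1", "d2", "d3"], {"d1", "d3"}))
```

Averaging this value over a topic set gives the mean average precision figures typically tabulated in NTCIR-style evaluations.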

Table 5. Summary of run result submissions. Columns (repeated across the table's two halves): Task, RunID, QMethod, TopicPart, LinkInfo

in Overview of the Web Retrieval Task at the Third NTCIR Workshop
by unknown authors
"... In PAGE 12: ... The run results of the text retrieval module were included in the pool, but the speech-driven retrieval results were not. Summaries of the run result submissions of each participating group can be found in Table 5, and the details can be found in papers of the participating groups in this proceedings. 5.... ..."

Table 2: IR features

in Ground Target Classification Using Combined Radar and IR with Simulated Data
by Andris Lauberts, Mikael Karlsson, Fredrik Näsström, Rahman Aljasmi
"... In PAGE 3: ... 3 IR signal processing 3.1 Features Table 2 shows the nine IR features used in this study. The distance independent features, 1, 3, 4, 5 and 7, have proven to perform especially well for classification purpose.... ..."

Table 2. IR ranking

in Fast Approximate Matching of Programs for Protecting Libre/Open Source Software by Using Spatial Indexes
by Arnoldo José Müller Molina, Takeshi Shinohara
"... In PAGE 6: ...2.1 Overall Precision Table 2 shows the result for IR. %X denotes the accumulated percentage of identifications for the query set X.... ..."

Table 1: Construction of IRS

in On the Spectra of Certain Classes of Room Frames
by Jeffrey H. Dinitz, Douglas R. Stinson, L. Zhu 1994
Cited by 3

Table 3. IR summary

in Fast Approximate Matching of Programs for Protecting Libre/Open Source Software by Using Spatial Indexes
by Arnoldo José Müller Molina, Takeshi Shinohara

Table 1. OPUS IR

in Pruning Derivative Partial Rules During Impact Rule Discovery
by Shiying Huang, Geoffrey I. Webb

Table 1 IRS classes Graphs

in A survey on interval routing
by Cyril Gavoille 1999
"... In PAGE 23: ...9. Summary In Table 1, only shortest paths IRS are considered, and n denotes the number of nodes of the graph. 4.... ..."
Cited by 60

Table 1: Comparison of IR and non-IR preprocessing

in From Information Retrieval to Information Extraction
by David Milward, James Thomas
"... In PAGE 5: ...lar to that of e.g. Molla and Hess (Aliod and Hess, 1999), who first partition the index space into separate documents, and use the IR component of queries as a filter. Table 1 shows the difference in processing times on two queries for two different datasets. Times for processing each query on each dataset are labelled Old for Highlight with no IR and no preprocessed files, New for Highlight with preprocessed files and NewIR for Highlight with both an IR stage and preprocessed files.... ..."


Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University