Results 1 - 10 of 20,055

Table 11: Results of the official runs for long queries.

in University of Glasgow at TREC2004: Experiments in Web, Robust and Terabyte tracks with Terrier
by Vassilis Plachouras, Ben He, Iadh Ounis 2004
Cited by 11

Table 6 Statistical tests of the results for the long queries

by Kimmo Kettunen 2005

Table 7: Overall average non-interpolated precision for the long query.

in Interactive Retrieval using IRIS: TREC-6 Experiments
by Robert G. Sumner, Jr., Kiduk Yang, Roger Akers, W. M. Shaw 1998
"... In PAGE 20: ... We varied the number of iterations and the window size (the value for X) in our tests. Table7 has the results for the long query (title, description, and narrative) and for the adaptive linear and the probabilistic model. Firstly, the adaptive linear model performed much better than the probabilistic model, probably because nonrelevant documents are generally given lower ranks by the probabilistic model as opposed to the adaptive linear model.... ..."
Cited by 14
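
The "adaptive linear model" named in this excerpt is a relevance-feedback retrieval model in which a linear query vector is updated iteratively from judged documents. As a rough illustration only, here is a minimal perceptron-style sketch in Python; the update rule, the learning rate, and the handling of the window size X are assumptions made for the sketch, not the IRIS formulation.

```python
import numpy as np

def adaptive_linear_feedback(query, docs, relevant_ids,
                             iterations=3, window=10, eta=0.1):
    """Hypothetical perceptron-style sketch of an adaptive linear model.

    query: 1-D term-weight vector; docs: matrix of document vectors;
    relevant_ids: set of indices of documents judged relevant.
    Each iteration rescores all documents by inner product, then nudges
    the query toward relevant documents and away from nonrelevant ones
    within the top `window` ranks (the "value for X" in the excerpt).
    """
    q = query.astype(float).copy()
    for _ in range(iterations):
        scores = docs @ q                    # inner-product retrieval
        top = np.argsort(-scores)[:window]   # only the top-X window
        for i in top:
            step = eta if int(i) in relevant_ids else -eta
            q += step * docs[i]
    return q
```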

Table 6. Response Times for Short, Medium, and Long Queries (seconds)

in Evaluating the Performance of Distributed Architectures for Information Retrieval using a Variety of Workloads
by Brendon Cahoon, Kathryn S. McKinley, Zhihong Lu 2000
"... In PAGE 21: ... The maximum number of results sent to the connection server from the Inquery servers at any single point in time is jInquery serversj jthreadsj. The rst two rows of Table6 show the response times for query, summary, and document commands for short queries when the system contains 8 and 128 Inquery servers. Using short queries, the architecture achieves the best response times using 8 Inquery servers.... In PAGE 22: ... Response Times for Short, Medium, and Long Queries (seconds) to degrade after 4 commands/seconds. Table6 also illustrates the summary and document response times grow at a similar rate as the query commands. This trend occurs in each of the experiments we perform.... In PAGE 22: ...9 seconds which is larger than the best time for short queries by a factor of 86. However, the third row of Table6 shows that the system achieves a response time of 11 seconds or less with a command rate less than 2 commands/second. The reason for the poor performance when clients issue commands quickly is that the Inquery servers are unable to process commands fast enough.... ..."
Cited by 32
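
Two points in this excerpt reduce to simple arithmetic: the bound on in-flight results, rendered above as |Inquery servers| × |threads|, and the saturation argument that response times degrade once clients issue commands faster than the servers can absorb them. A minimal sketch with hypothetical numbers (the per-server thread count is an assumption, not taken from the paper):

```python
def max_inflight_results(n_servers: int, threads_per_server: int) -> int:
    # Upper bound on results held at the connection server at any one
    # time: |Inquery servers| x |threads|, as stated in the excerpt.
    return n_servers * threads_per_server

def saturated(command_rate: float, service_rate: float) -> bool:
    # The excerpt's explanation of the poor tail behaviour: once the
    # command rate exceeds what the servers can process, queues build
    # and response times grow without bound.
    return command_rate > service_rate

for n in (8, 128):  # the two server counts discussed in the excerpt
    print(n, max_inflight_results(n, threads_per_server=4))  # 4 is hypothetical
```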

Table 3: Performance comparison in post-submission experiments with long queries

in Revisiting Again Document Length Hypotheses: TREC-2004 Genomics Track Experiments at Patolis
by Sumio Fujita 2004
"... In PAGE 4: ... This causes slightly poorer performance in test collection based evaluation where usually relevance assessments tend to prefer longer documents. Table3 shows the performance comparison combining pseudo-relevance feedback and reference database feedback as well as different retrieval models TF*IDF/KL-Dir on the basis of the pllsgen4a2 setting. The pseudo relevance feedback procedure contributes to 4.... ..."
Cited by 16
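
The run described here layers pseudo-relevance feedback on top of TF*IDF and KL-divergence/Dirichlet (KL-Dir) retrieval models. As a generic illustration of the pseudo-relevance feedback step only, here is a textbook Rocchio-style expansion sketch; the function name and every parameter value are assumptions, not the paper's procedure:

```python
from collections import Counter

def pseudo_relevance_feedback(query_terms, ranked_docs,
                              k=10, n_exp=5, alpha=1.0, beta=0.5):
    """Generic Rocchio-style pseudo-relevance feedback sketch.

    Treats the top-k documents of an initial ranking as relevant,
    averages their term counts, and adds the n_exp heaviest terms
    back into the query with weight beta.
    """
    centroid = Counter()
    for doc in ranked_docs[:k]:        # doc: list of term strings
        for term in doc:
            centroid[term] += 1.0 / k
    expanded = Counter({t: alpha for t in query_terms})
    for term, weight in centroid.most_common(n_exp):
        expanded[term] += beta * weight
    return expanded
```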

Table 3: Performance comparison (Long query, l2 parameter set)

in Reflections on "Aboutness": TREC-9 Evaluation Experiments
by Sumio Fujita 2000
"... In PAGE 5: ...6 phrasal terms in average ( maximum 239 single word terms and 218 phrasal terms, minimum 25 single word terms and 5 phrasal terms ). Table3 shows the results. Supplemental phrasal runs are consistently better than single word term runs both in average precision and R-precision.... ..."
Cited by 3

Table 6. Evaluation results for long queries on the three collections.

in Term frequency normalisation tuning for BM25 and DFR model
by Ben He, Iadh Ounis 2005
Cited by 2
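
For context on what "term frequency normalisation tuning" means here: in standard BM25 the raw term frequency is deflated for longer-than-average documents through the b parameter (and saturated by k1), and tuning work of this kind searches for collection-specific values of those constants. A minimal sketch of that standard component; the default values below are the usual textbook choices, not the tuned values from this paper:

```python
def bm25_tf_component(tf: float, dl: float, avdl: float,
                      k1: float = 1.2, b: float = 0.75) -> float:
    # Standard BM25 term-frequency normalisation: b in [0, 1] controls
    # how strongly tf is deflated when document length dl exceeds the
    # collection average avdl; k1 caps the benefit of repeated terms.
    return tf * (k1 + 1.0) / (tf + k1 * (1.0 - b + b * dl / avdl))
```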

Table 4: Comparison of Methods a and b for long query

in Extracting Key Semantic Terms from Chinese Speech Query for Web Searches
by Gang Wang, Tat-Seng Chua, Yong-Cheng Wang 2003
Cited by 1