Results 1 - 10 of 20,055
Table 11: Results of the official runs for long queries.
2004
Cited by 11
Table 7: Overall average non-interpolated precision for the long query.
1998
"... In PAGE 20: ... We varied the number of iterations and the window size (the value for X) in our tests. Table 7 has the results for the long query (title, description, and narrative) and for the adaptive linear and the probabilistic model. Firstly, the adaptive linear model performed much better than the probabilistic model, probably because nonrelevant documents are generally given lower ranks by the probabilistic model as opposed to the adaptive linear model.... ..."
Cited by 14
Table 6. Response Times for Short, Medium, and Long Queries (seconds)
2000
"... In PAGE 21: ... The maximum number of results sent to the connection server from the Inquery servers at any single point in time is |Inquery servers| × |threads|. The first two rows of Table 6 show the response times for query, summary, and document commands for short queries when the system contains 8 and 128 Inquery servers. Using short queries, the architecture achieves the best response times using 8 Inquery servers.... In PAGE 22: ... Response Times for Short, Medium, and Long Queries (seconds) to degrade after 4 commands/second. Table 6 also illustrates that the summary and document response times grow at a similar rate as the query commands. This trend occurs in each of the experiments we perform.... In PAGE 22: ...9 seconds, which is larger than the best time for short queries by a factor of 86. However, the third row of Table 6 shows that the system achieves a response time of 11 seconds or less with a command rate less than 2 commands/second. The reason for the poor performance when clients issue commands quickly is that the Inquery servers are unable to process commands fast enough.... ..."
Cited by 32
Table 3: Performance comparison in post-submission experiments with long queries
2004
"... In PAGE 4: ... This causes slightly poorer performance in test-collection-based evaluation, where relevance assessments usually tend to prefer longer documents. Table 3 shows the performance comparison combining pseudo-relevance feedback and reference database feedback, as well as different retrieval models TF*IDF/KL-Dir, on the basis of the pllsgen4a2 setting. The pseudo-relevance feedback procedure contributes to 4.... ..."
Cited by 16
Table 3: Performance comparison (long query, l2 parameter set)
2000
"... In PAGE 5: ...6 phrasal terms on average (maximum 239 single word terms and 218 phrasal terms, minimum 25 single word terms and 5 phrasal terms). Table 3 shows the results. Supplemental phrasal runs are consistently better than single word term runs in both average precision and R-precision.... ..."
Cited by 3
Table 6. Evaluation results for long queries on the three collections.
2005
Cited by 2
Table 4: Comparison of Methods a and b for long query Average
2003
Cited by 1