Results 11 - 20 of 13,325

Table 1: Example queries and results.

in Anchor Point Indexing in Web Document Retrieval
by Ben Kao, Joseph Lee, Chi-yuen Ng, David Cheung 2000
"... In PAGE 3: ... Users do not usually have the patience to toil through more than the rst thirty hits returned bya search engine. Table1 illustrates this problem byshowing the results obtained from querying four popular search engines with some sample queries. In eachrow of the table, we show a searchgoal(Goal), the keyword-based query submitted (Query), the search engine that processed the query (Search Engine), the number of pages returned (# of hits), the number of relevant pages that satis ed our goal in the rst thirtyhits... In PAGE 4: ...ovember day in some year. (The numbers 19 and 1997 are logically ignored by the engines.) One would nd that all these pages share the same pre x in their URL apos;s, and that they belong to a single logical cluster of a big hyper-document. The last column of Table1 refers to the number of clusters that the rst thirty hits can be grouped into. From the table we see that the answer sets are huge.... In PAGE 4: ... Of course, one would argue that the screening would stop as soon as one good recommendation is found. Still, as suggested by Table1 , the rst relevant page maynot be found until a couple dozens pages have been examined, many more if one is unlucky. Also, the rst relevant page may not be the best page that can be found in the answer set.... In PAGE 4: ... More screening is required if one would like to compare relevant hits looking for a better match. Besides illustrating the large answer set problem, Table1 also gives us a hintonhowto avoid overwhelming the users with the numerous recommendations. The last column of the table suggests that the large number of pages can be grouped into a small number of logical clusters.... In PAGE 5: ... The implication is that users cannot a ord to examine only the rst few, or any small subset, of the answer set. Table1 illustrates this problem by showing the number of pages among the rst 30 hits that are relevant to a search goal. We see that, for some queries, the numbers are less than honorable.... ..."
Cited by 1
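
The excerpt above observes that many hits share a common URL prefix and really belong to one logical cluster of a larger hyper-document. As a rough illustration of that observation (a hypothetical helper, not the paper's anchor-point algorithm), one can group a result list by host and leading path segments:

```python
from itertools import groupby
from urllib.parse import urlsplit

def cluster_by_prefix(urls, depth=1):
    """Group URLs that share a host and their first `depth` path segments."""
    def prefix(url):
        parts = urlsplit(url)
        segments = [s for s in parts.path.split("/") if s][:depth]
        return (parts.netloc, tuple(segments))
    return {k: list(g) for k, g in groupby(sorted(urls, key=prefix), key=prefix)}

# Hits like the weather-archive example collapse into two logical clusters:
hits = [
    "http://example.edu/weather/1997/nov/19.html",
    "http://example.edu/weather/1997/nov/20.html",
    "http://other.org/archive/news.html",
]
for prefix_key, members in cluster_by_prefix(hits).items():
    print(prefix_key, len(members))
```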

Table 2: Average F1, 10-precision, and R-precision values for all the Q0 and Q1 queries, in groups G1 and G2 together.

in The Effectiveness of Automatically Structured Queries in Digital Libraries
by Marcos André Gonçalves, Edward A. Fox, Aaron Krowne, Pável Calado, Alberto H. F. Laender, Altigran S. Silva, Berthier Ribeiro-Neto
"... In PAGE 7: ...equal to Q0 in 83.4% of the searches. We can conclude that, without the need of user intervention (except for entering the query keywords), the system is able to automatically find a structured query at the top of the ranked list that outperforms a simple keyword-based search in the majority of cases. In fact, as shown in Table 2, the average F1, 10-precision, and R-precision values for Q0 were 28.9, 31.... ..."

Table 1 Recall/Precision/F score for Two Search Techniques

in Retrieval Effectiveness of an Ontology-Based Model for Information Selection
by Latifur Khan, Dennis McLeod, Eduard Hovy 2004
"... In PAGE 38: ... In Figures 5, 6, and 7 for each query the first and second bars represent the recall/precision/F score for ontology and keyword-based search technique respectively. Corresponding numerical values are reported in Table1 . Although the vector space model is ranked- based and our ontology-based model is a Boolean retrieval model, in the former case we report precision for maximum recall in order to make a fair comparison.... ..."
Cited by 19

Table 1. Test queries.

in unknown title
by unknown authors 2004
"... In PAGE 7: ... Starting from such a relatively small annotation base we deployed a search test involving 10 people in the evaluation phase. Each evaluator has been given a set of queries ( Table1 ) to be performed and has been asked to judge results relevance with respect to queries, assigning, to retrieved documents, values that range from 0 (non- relevant) to 100 (fully relevant). The search interface was a simple PHP front-end for the DOSE architecture allowing keyword based searches.... ..."
Cited by 4

Table 4: The difference in single-keyword read query response time between the lock-based and timestamp-based implementations of the Batch approach when an insert of a document of different size proceeds simultaneously.

in Efficient Real-Time Index Updates in Text Retrieval Systems
by Tzi-cker Chiueh, Lan Huang 1998
Cited by 16
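
The caption contrasts read latency under a lock-based versus a timestamp-based implementation while an insert runs concurrently. The toy sketch below only illustrates the general trade-off (hypothetical class names, with versioned snapshots standing in for the timestamp mechanism; this is not the paper's Batch-approach code): with a single lock, a long insert stalls every read, while with published snapshots, readers keep using the last version and never block.

```python
import threading

class LockBasedIndex:
    """Readers share one mutex with the writer, so a large insert stalls reads."""
    def __init__(self):
        self._postings = {}               # term -> list of document ids
        self._lock = threading.Lock()

    def insert(self, doc_id, terms):
        with self._lock:                  # readers wait for the whole insert
            for term in terms:
                self._postings.setdefault(term, []).append(doc_id)

    def search(self, term):
        with self._lock:
            return list(self._postings.get(term, []))

class SnapshotIndex:
    """Writer publishes a fresh immutable snapshot; readers take no lock."""
    def __init__(self):
        self._snapshot = {}               # the currently published version

    def insert(self, doc_id, terms):
        new = {t: ids[:] for t, ids in self._snapshot.items()}  # copy-on-write
        for term in terms:
            new.setdefault(term, []).append(doc_id)
        self._snapshot = new              # atomic reference swap in CPython

    def search(self, term):
        return list(self._snapshot.get(term, []))  # reads one consistent version
```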

Table 1 shows the precision (% of relevant documents) in the top 10 results.

in Date Accepted Acknowledgements
by Devanand Rajoo Ravindran
"... In PAGE 51: ... Table1 . Average Precisions of 1-word, 2-word and 3-word queries * K = Pure Keyword Based K+P = Keyword + Pruning K+C = Keyword + Conceptual Retrieval K+C+P = Keyword + Conceptual Retrieval with Pruning We can see that, compared to keyword search alone, we get an improvement of 82.... In PAGE 55: ... The most marked increase in precision is obtained in the final case, where keyword retrieval is used along with conceptual retrieval and pruning. Furthermore, from Table1 , we see that the maximum precision among all experiments performed is obtained for single word queries, when pruned at Level 2 (71%). The precision results previously described in section 5.... ..."

Table 1 provides a summary of the test bed of fifteen collections used for the experiments.

in Improving text collection selection with coverage and overlap statistics. PC-recommended poster, WWW 2005. Full version available at http://rakaposhi.eas.asu.edu/thomas-www05-long.pdf
by Thomas Hernandez 2005
"... In PAGE 7: ... Table1 : Complete test bed of collections with the number of documents they contain. The nine synthetic collections were considered as being complete bibliographies and a keyword-based search engine was built on top of each.... ..."
Cited by 19

Table 1. Types of queries and matching methods in image-based search (column headers: Query; Matched with; Matching algorithm)

in New Use for the Pen: Outline-Based Image Queries
by Lambert Schomaker, Louis Vuurpijl, Edward De Leau, et al. 1999
"... In PAGE 1: ... On the world-wide web (WWW), various experi- mental approaches are already available[9], from which some lessons can be drawn. Textual methods which are based on keyword queries ( Table1 , A) to nd images are potentially very powerful. However, a textual annotation of images produced by a single content provider, al- though already very costly by itself, usually does not cover a su cient number of views or perspec- tives on the same pictorial material.... ..."
Cited by 1